Waypoint Group will be at VoC-Fusion, billed as “The World’s Largest Voice-of-Customer Event.” The conference promises to be extremely useful for anyone running a customer feedback or loyalty program, not to mention the invaluable networking opportunities it will offer.
We’re also very pleased that Waypoint Group was asked to create and lead the “Insights to Action” workshop as part of the VoC Certification, which will also be held at this event. The Certification will prove invaluable through a series of five well-designed and highly engaging courses, where you will learn best practices in loyalty program design and VoC program implementation. Our portion of the Certification will cover best practices in data-gathering techniques, analysis (including financial linkage, key-driver, and critical statistical methods), and the best ways to turn insights into action.
We hope to see you there!
I had the opportunity to fly Delta Air Lines recently. I’d never been on that airline before (really), as I’ve been stuck in a stupid “loyalty” program elsewhere. Imagine my surprise when I found pleasant service-with-a-smile and genuinely helpful staff! I was in the unfortunate position of having to check luggage this time around. You know what happened next: yes, they lost my luggage. But that’s NOT the interesting part…
Delta’s baggage-service staff were AMAZING. I’d guess those folks have a difficult job, dealing with upset people all day long. They were friendly, thorough, genuinely concerned, and very knowledgeable. They alone could’ve made me a Delta “Promoter,” BUT THAT’S not the interesting part…
The baggage-service staff knew why my bag was lost: I had to change airplanes at LAX, a huge, complex airport. Lucky for me, I had only a 35-minute layover. Unfortunately, 35 minutes isn’t enough time to transfer luggage on a busy day. The baggage-service folks see this problem frequently.
Companies spend millions (billions?) on service recovery. Why not invest similar amounts in addressing the root cause? At minimum, why not warn people at ticketing that a short LAX layover might cause baggage problems (never mind the checked-baggage fee)? Or why not turn those spammy emails about “my upcoming trip” into a genuine cross-sell? For example, make me aware of this potential problem, suggest some simple work-arounds, and offer me “baggage insurance” or FedEx delivery. Intuit provides a potential example: TurboTax offers an “audit protection” service at filing time (seems to me that the $30 could save anyone lots of time in that unhappy event).
I’ve written about this before. I’m no airline expert, but with a little cross-functional collaboration and creative thinking, I’d think Delta’s Marketing organization could actually be aligned with service delivery. At least I’ve now learned never to check any bags on a short layover through LAX.
Earlier this week Temkin Group, a customer experience research firm, released a very interesting report titled “Customer Experience Expectations and Plans for 2012.” The research was conducted in November and December of 2011, with results from 210 respondents at companies with $500 million or more in annual revenues. The report focuses on these companies’ customer experience results and future plans, and there were a few very interesting nuggets in there that Temkin Group has kindly permitted me to share here.
1. Companies’ customer experience efforts underperform relative to their plans. Since this was an update of a study also conducted in 2010, we can see a significant negative gap between what companies planned to achieve in 2011 and what they reported actually achieving a year later.
2. Most companies seem to be focused on measuring, not improving. The majority of respondents rate themselves as excellent or good in the area of customer insight & analytics, but the area rated lowest is “employee communications and engagement.” Driving improvement in the customer experience REQUIRES that employees, especially those on the “front line” directly involved in critical customer touchpoints, be bought-in and engaged in the effort. By the way, it’s not surprising that respondents here also report that their performance in actually running a “VoC Program” fell year-over-year.
I can’t help but reference Stephen Covey, who famously tells us to “begin with the end in mind.” There’s no point in churning out analysis and reports without a clear set of business objectives, success measurements, and a roadmap. Look for models from other companies that have done this successfully (here are one or two to get started). The take-away in my mind is simple: don’t hide behind data. Get out and talk to people, and use your data to tell a powerful business story.
Blog 3 in a 3-Part Series on Analysis of Bias-Filled Data
Visiting a city for three days does not give one enough information to make claims about its country’s weather. It is just as dangerous to draw conclusions from customer experience feedback without treating the bias that may lie within. In the first post of this series, I discussed different types of bias, particularly the importance of self-selection bias in customer experience data. In the second post, I offered tips to pre-treat your survey to increase response propensity and identify underlying bias. Today, I will share techniques to adjust your data for this bias in order to minimize its effect on your survey results.
Most of the adjustment techniques common in customer experience surveys center on pinpointing which groups are under-represented in the data and assigning weights to those groups to adjust for their lack of response. The weight is the ratio of the representative count of a subgroup (taken from a census or known population parameter) to the actual count. Say you have 100 respondents in a customer survey, but 75 of them are women and 25 are men. If you wanted to use this data to generalize to the larger population (or make predictions about future customers), you could multiply all the data for men and for women by the following weights:
Weight (men) = Representative Count (men) / Actual count (men) = 50 / 25 = 2
Weight (women) = Representative Count (women) / Actual count (women) = 50 / 75 = 0.67
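In code, this computation is straightforward. Here is a minimal sketch in Python with pandas, using the hypothetical 100-respondent example above; the column names and toy scores are illustrative, not from a real survey:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy respondent file matching the example: 75 women, 25 men,
# each with a hypothetical 0-10 rating in 'score'
survey = pd.DataFrame({
    "gender": ["F"] * 75 + ["M"] * 25,
    "score": rng.integers(0, 11, size=100),
})

# Representative counts from a known population parameter (50/50 here)
population = pd.Series({"F": 50, "M": 50})
actual = survey["gender"].value_counts()

# weight = representative count / actual count  ->  M: 2.0, F: ~0.67
survey["weight"] = survey["gender"].map(population / actual)

# Any survey metric can then be reported as a weighted mean
weighted_mean = (survey["score"] * survey["weight"]).sum() / survey["weight"].sum()
print(round(weighted_mean, 2))
```

The same pattern extends to any single grouping variable (region, product line, customer tier) as long as you know the representative counts.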
While this is a common method (and certainly useful), it has a number of limitations, chief among them the inability to weight for multiple variables simultaneously. Logistic regression is a useful technique in this regard, as it can evaluate the relative importance of a large number of independent variables to survey response (the dependent variable). Several other techniques use this kind of regression modeling to correct a predictive model for self-selection bias, including sample selection modeling and Heckman correction modeling. The idea in both of these approaches is to create two models: one predicting survey response (the response model) and one predicting some key outcome (the outcome model). The response model’s regression coefficients are used to correct the outcome model for selection bias. These tools have been established and validated in both academic journals and industry practice.
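To make the two-model idea concrete, here is a minimal sketch of a Heckman-style two-step correction in Python with statsmodels. Everything here is synthetic and purely illustrative: the contact base, column names, and coefficients are invented, and a real application would use your own contact file and, ideally, a selection variable that plausibly drives response but not the outcome:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 2000

# Synthetic contact base: covariates are known for every contact,
# but the outcome ('ease') is observed only for respondents
df = pd.DataFrame({
    "tenure_years": rng.uniform(0, 10, n),
    "tickets": rng.poisson(2, n),
})
latent = 0.3 * df["tenure_years"] - 0.2 * df["tickets"] + rng.normal(0, 1, n)
df["responded"] = (latent > 0.5).astype(int)
df["ease"] = np.where(df["responded"] == 1,
                      7 + 0.2 * df["tenure_years"] + rng.normal(0, 1, n),
                      np.nan)

# Step 1 - response model: probit of response on the full contact base.
# 'tickets' serves as the exclusion restriction: it influences whether
# someone responds but is left out of the outcome model.
Z = sm.add_constant(df[["tenure_years", "tickets"]])
probit = sm.Probit(df["responded"], Z).fit(disp=0)

# Inverse Mills ratio from the fitted linear index (fittedvalues = Z @ gamma)
xb = np.asarray(probit.fittedvalues)
df["mills"] = norm.pdf(xb) / norm.cdf(xb)

# Step 2 - outcome model: OLS on respondents only, with the Mills ratio
# included as a regressor to absorb the selection bias
resp = df[df["responded"] == 1]
X = sm.add_constant(resp[["tenure_years", "mills"]])
outcome = sm.OLS(resp["ease"], X).fit()
print(outcome.params)  # 'tenure_years' effect, corrected for self-selection
```

A significant coefficient on the Mills ratio term is itself a signal that self-selection was distorting the uncorrected outcome model.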
We took this approach out for a spin with a dataset from a recent client and found some interesting trends. A response-propensity (RP) score was calculated for each contact (respondent and non-respondent) in our contact base, based on the logistic regression coefficients from the response model. Three segments of contacts were created: those below, at, and above the median RP. The survey data from respondents in each segment were analyzed for differences, and while our results are still preliminary, we see definite distinctions for certain questions. The plot above shows that the High-RP group (the contacts defined statistically to be likeliest to respond) actually gives a lower rating for Ease of Doing Business than the Low-RP group (the contacts defined to be least likely to respond). Without an adjustment like those described above, our overall Ease rating will be pulled downward because the Low-RP group is so under-represented. Your mileage may vary, of course; the simplest way to avoid this problem is to raise the response rate in the least-likely group.
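For readers who want to try the RP-segmentation step themselves, here is a minimal sketch of how it might be computed in Python with statsmodels. The data, column names, and three-way banding are synthetic stand-ins, not our client dataset or actual analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000

# Synthetic contact base: every invited contact, respondent or not
contacts = pd.DataFrame({
    "tenure_years": rng.uniform(0, 10, n),
    "tickets": rng.poisson(2, n),
})
p = 1 / (1 + np.exp(-(0.4 * contacts["tenure_years"]
                      - 0.3 * contacts["tickets"] - 1.0)))
contacts["responded"] = rng.binomial(1, p)
contacts["ease"] = np.where(contacts["responded"] == 1,
                            8 - 0.3 * contacts["tenure_years"]
                            + rng.normal(0, 1, n),
                            np.nan)

# Response model: logistic regression of response on the full contact base
X = sm.add_constant(contacts[["tenure_years", "tickets"]])
logit = sm.Logit(contacts["responded"], X).fit(disp=0)
contacts["rp"] = logit.predict(X)  # response-propensity score per contact

# Band contacts into Low / Mid / High RP segments around the median
contacts["rp_band"] = pd.qcut(contacts["rp"], 3,
                              labels=["Low-RP", "Mid-RP", "High-RP"])

# Compare respondents' ratings across RP bands
respondents = contacts[contacts["responded"] == 1]
print(respondents.groupby("rp_band", observed=True)["ease"]
      .agg(["mean", "count"]))
```

If the band means differ meaningfully, that is your cue that the unadjusted overall average is being pulled toward whichever band is over-represented among respondents.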
What do you think? Do you use any other techniques for adjusting for self-selection and other biases?