I caught a neat little article that – while written specifically for the restaurant industry – is applicable to any business. In fact, it ties nicely to a whitepaper we wrote a while back titled, “When the Waiter Brings Bad Food: Measuring the contribution of the Service / Support Organization in an environment of intervening external influences.”
I wanted to call out this part of the article:
“ ‘Own’ the dining experience – No, waiters are not the ones cooking the food, and they are in a limited position to actually transform the culinary element of the dining experience. But as the face of the restaurant’s service effort, they are the ones linked to the dining experience, and they are the ones the customer immediately holds accountable for the quality of the meal.
“Waiters need to assure they indeed accept that accountability. A pivotal element of delivering a quality customer experience is “thinking like the customers” and understanding what it will take to satisfy them. Food is important, and if waiters do not sympathize with diners who received a bad plate and share in the ecstasy of those who enjoyed their meals, they will be unable to customize their service offerings to the reality of the situation.
“Waiters need to own customers as their own customers, and they need to serve as customer advocates when communicating with the rest of the staff. They might not be making the food, but they are the ones entrusted with keeping the customer happy, and they must do what it takes to get the entire restaurant committed to that quality experience. If the chef drops the ball on a meal, the waiter should not feel immune because he did not make it but should instead demand satisfaction because his customer has been let down.
“Waiters must look at their roles not as middlemen between the culinary artists and patrons but as facilitators between a customer’s desire for a great experience and the restaurant’s success in creating that aura of excellence.”
While this is all accurate (and I’m confident that any service organization already feels ownership for delivering an excellent customer experience), the fact remains that service organizations can only do so much, yet they often take the heat for poor customer feedback (as noted in the above-mentioned whitepaper). Service and Support organizations are often blamed for failing to adequately address customer problems, handle floods of incoming incidents, or drive cross-sell opportunities. Before blaming “Customer Service,” the business should understand why customers require reactive service in the first place – the genuine root causes, not the symptoms. Instead of relying on service centers to be the “catch-all” for customer issues, I think we’d all agree that the business needs to place more emphasis on isolating the reasons customers need reactive service and dealing with those situations proactively. Managing to reactive service levels is a business decision, and the dollars spent there can be allocated either to dealing with issues or to proactively creating promoters. Which has the better ROI?
The full article is here: 5 Ways to Be a Better Waiter, Deliver a Better Restaurant Experience.
Last week we conducted a completely NON-scientific poll in which we asked people involved in “customer feedback” programs to tell us how they refer to their effort. The question was posed to readers of our blog and through LinkedIn groups that cover “customer feedback” in some way.
It’s interesting to see that most people took the time to click through and look at the 1-question poll but then didn’t respond. It’s also interesting to see from related research that while most people use Net Promoter, they generally don’t refer to their program that way. We won’t infer anything from this simple poll, although perhaps our industry can start rallying behind the most-cited “Customer Experience” moniker… thoughts?
Are you interested in leveraging customer feedback to help your organization improve customer loyalty?
As a “customer” do you want companies to improve the experience you have with them?
There are so many different names for our body of work. While the members of this “loyalty” profession understand the nuanced differences in the words we use, I suspect the various labels that refer to *largely* the same thing only help to perpetuate misunderstandings in our end-audience.
Please take a moment to respond to this poll. You’ll see results when you respond, and we’ll also provide full results next week.
Earlier this week Temkin Group, a customer experience research firm, released a very interesting report titled, “Customer Experience Expectations and Plans for 2012.” The research was conducted in November and December of 2011, with results from 210 respondents at companies with $500 million or more in annual revenue. The report focuses on these companies’ customer experience results and future plans, and there were a few very interesting nuggets in there that Temkin Group has kindly permitted me to share here.
1. Companies’ customer experience efforts underperform relative to their plans. Since this was an update of a study also conducted in 2010, we can see a significant negative gap between what companies planned to achieve in 2011 and what they reported actually achieving a year later.
2. Most companies seem to be focused on measuring, not improving. The majority of respondents rate themselves as excellent or good in the area of customer insight & analytics, but the area rated lowest is “Employee communications and engagement.” Driving improvement in the customer experience REQUIRES that employees – especially those on the “front line” who are directly involved in critical customer touchpoints – be bought in and engaged in the effort. By the way, and not surprisingly, the respondents here also report that their performance in actually running a “VoC Program” fell year-over-year.
I can’t help but reference Stephen Covey, who famously tells us to “begin with the end in mind.” There’s no point in churning out analysis and reports without a clear set of business objectives, success measurements, and a roadmap. Look for models from other companies that have done this successfully (here are one or two to get started). The takeaway in my mind is simple: don’t hide behind data – get out and talk to people, and use your data to tell a powerful business story.
Blog 3 in 3 Part Series on Analysis of Bias-Filled Data
Visiting a city for three days does not give one enough information to make claims about its country’s weather. It is just as dangerous to draw conclusions from customer experience feedback without treating the bias that may lie within. In the first post of this series, I discussed different types of bias, and particularly the importance of self-selection bias in customer experience data. In the second post, I offered tips to pre-treat your survey to increase response propensity and identify underlying bias. Today, I will share techniques to adjust your data for this bias in order to minimize its effect on your survey results.
Most of the adjustment techniques common in customer experience surveys center on pinpointing which groups are under-represented in the data and assigning weights to those groups to adjust for their lack of response. The weight is the ratio of the representative count of a subgroup (taken from a census or known population parameter) to its actual count. Say you have 100 respondents in a customer survey, 75 of them women and 25 of them men, while your overall customer population is split 50/50. If you wanted to use this data to generalize to the larger population (or make predictions about future customers), you could multiply all the data for men and for women by the following weights:
Weight (men) = Representative Count (men) / Actual count (men) = 50 / 25 = 2
Weight (women) = Representative Count (women) / Actual count (women) = 50 / 75 = 0.67
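To make the mechanics concrete, here is a minimal sketch of that weighting step in Python. The data, column names, and scores are all hypothetical, but the weight calculation and weighted average follow exactly the formulas above.

```python
# Hypothetical sketch of the weighting step above (data and names are illustrative).
import pandas as pd

# 100 respondents: 75 women, 25 men, each with a 0-10 loyalty-style score.
responses = pd.DataFrame({
    "gender": ["F"] * 75 + ["M"] * 25,
    "score":  [9] * 75 + [5] * 25,   # toy values for illustration
})

# Known (representative) population counts vs. what we actually observed.
representative = {"F": 50, "M": 50}
actual = responses["gender"].value_counts()

# Weight = representative count / actual count, per subgroup.
responses["weight"] = responses["gender"].map(lambda g: representative[g] / actual[g])

unweighted_mean = responses["score"].mean()
weighted_mean = (responses["score"] * responses["weight"]).sum() / responses["weight"].sum()
print(f"Unweighted: {unweighted_mean:.2f}, Weighted: {weighted_mean:.2f}")
# The weighted mean pulls the result back toward the 50/50 population mix.
```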
While this is a common method (and certainly useful), it has a number of limitations, chief among them the inability to weight for multiple variables simultaneously. Logistic regression is a useful technique in this regard as it can evaluate the relative importance of a large number of independent variables to survey response (the dependent variable). Several other techniques utilize logistic regression in order to correct a predictive model for self-selection bias, including sample selection modeling and Heckman correction modeling. The idea in both of these approaches is to create two models: one predicting survey response (the response model) and one predicting some key outcome (the outcome model). The response model’s regression coefficients are used to correct the outcome model for selection bias. These tools have been established and validated in both academic journals and industry practice.
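For readers who want to see the shape of that two-model idea, here is a rough sketch. It is not a full Heckman correction; instead it uses the response model’s predicted probabilities as inverse-propensity weights, a simpler way of letting respondents who “look like” non-respondents count for more. All file and column names (contacts.csv, responded, tenure_years, region, ease_rating) are hypothetical.

```python
# Illustrative sketch only: a logistic response-propensity model used to
# re-weight an outcome estimate (inverse-propensity weighting), a simpler
# cousin of the sample-selection / Heckman approaches mentioned above.
import pandas as pd
from sklearn.linear_model import LogisticRegression

contacts = pd.read_csv("contacts.csv")          # all invitees, respondents or not
X = pd.get_dummies(contacts[["tenure_years", "region"]], drop_first=True)
y = contacts["responded"]                       # 1 = answered the survey, 0 = did not

# Response model: probability that each contact answers the survey.
response_model = LogisticRegression(max_iter=1000).fit(X, y)
contacts["rp"] = response_model.predict_proba(X)[:, 1]

# Outcome estimate corrected with inverse-propensity weights.
resp = contacts[contacts["responded"] == 1].copy()
resp["ipw"] = 1.0 / resp["rp"]
adjusted_ease = (resp["ease_rating"] * resp["ipw"]).sum() / resp["ipw"].sum()
naive_ease = resp["ease_rating"].mean()
print(f"Naive: {naive_ease:.2f}  IPW-adjusted: {adjusted_ease:.2f}")
```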
We took this approach out for a spin with a dataset from a recent client and found some interesting trends. A response propensity (RP) score was calculated for each contact (respondent and non-respondent) in our contact base, based on the logistic regression coefficients from the response model. Three segments of contacts were created: those below, at, and above the median RP. The survey data from respondents in each segment were analyzed for differences, and while our results are still preliminary, we see definite distinctions for certain questions. The plot above shows that the High-RP group (the contacts defined statistically as likeliest to respond) actually gives a lower rating for Ease of Doing Business than the Low-RP group (the contacts defined as least likely to respond). Without an adjustment like those described above, our overall Ease rating will be pulled downwards by the fact that the Low-RP group is so under-represented. Your mileage may vary, of course – the simplest way of avoiding this problem is to raise the response rate in the least-likely group.
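Continuing the hypothetical sketch above, the segmentation itself is straightforward; this version splits at the median RP into two groups rather than three, simply to illustrate the comparison.

```python
# Split contacts at the median RP and compare ratings across segments
# (continues the illustrative variables defined in the earlier sketch).
median_rp = contacts["rp"].median()
resp["rp_segment"] = resp["rp"].apply(lambda p: "High-RP" if p >= median_rp else "Low-RP")
print(resp.groupby("rp_segment")["ease_rating"].agg(["mean", "count"]))
# A gap between the segments warns that the unweighted average is being
# dragged toward whichever group responds most readily.
```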
What do you think? Do you use any other techniques for adjusting for self-selection and other biases?
Blog 2 in 3 Part Series on Analysis of Bias-Filled Data
Though most people associate the ability to predict the future with their neighborhood fortune-teller, customer experience practitioners are often in the business of forecasting customer behavior. Different flavors of regression models exist that do a great job at this, using current customers’ survey responses to gain insight into how later customers will act.
Unfortunately, self-selection bias (a form of systematic bias outlined in the first blog in this series) violates one of the classical assumptions of regression modeling – that your sample is representative of the population in question. So how does one tackle this issue before it enters the data? And when it’s there, how can the practitioner identify its presence before starting any data analysis?
The lessons learned here all center around the concept of response propensity (RP), which is a customer’s likelihood of responding to the survey. This can be based on, among other things, cultural/geographical factors and communication hindrances (whether this customer is likely to be responsive to email or inundated by their inbox, for example).
Much like pre-treating stains on laundry before tossing it in the washer, pre-treating your survey design to account for differences in RP can result in a cleaner dataset. Though RP is usually calculated after a survey has been administered, drawing insight from past surveys can tell you how this is distributed within your survey’s population. Have you found that Decision Makers are less likely than End Users to respond to surveys? Perhaps this group needs targeted reminders sent to them with language emphasizing the importance their responses hold. Or maybe past projects have shown that one region has a particularly low response rate which contributes to its members’ tiny RPs. Offering incentives personalized to this demographic could yield the response rates you need to use your responses to predict future ones.
Regardless of the steps you take to pre-treat your survey design, you must identify the extent to which this bias exists in your data. The most common way is to compare response rates for the different subgroups within every variable liable to influence RP. If any subgroup’s response rate is statistically significantly different from another’s, then you will need to correct for this bias before performing any predictive analytics. This method is not foolproof: it assumes that all factors that impact RP are recorded for the full population of survey recipients (non-respondents and respondents). Thus, tracking any potentially relevant variables for the entire customer base can help identify self-selection bias and how exactly it impacts your data. You’ll then be ready to attack the self-selection problem head on (how? I’ll explain in the next entry) and use your data as a crystal ball for future customer behavior.
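As a sketch of that comparison (with made-up counts), a simple chi-square test on a respondents-versus-non-respondents table tells you whether two subgroups’ response rates differ by more than chance:

```python
# Hypothetical example: do End Users and Decision Makers respond at different rates?
from scipy.stats import chi2_contingency

#           responded, did not respond
counts = [
    [120, 880],   # End Users       -> 12% response rate (illustrative)
    [ 35, 465],   # Decision Makers ->  7% response rate (illustrative)
]
chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, p = {p_value:.4f}")
# A small p-value (e.g. < 0.05) suggests the groups genuinely respond at
# different rates, so some bias correction is needed before modeling.
```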
Which tools or techniques do you use to pre-treat for self-selection bias? Do you see different response rates for different groups or customer segments?
Blog 1 in 3 Part Series on Analysis of Bias-Filled Data
So you’ve designed the perfect customer feedback questionnaire, sent it out to your entire customer base, and the responses are flying in. You might be getting excited about analyzing the incoming data, but not so fast! In any kind of survey endeavor, especially in customer experience feedback, the analyst must be conscious of the bias present in the data collected. Before discussing techniques to identify and then correct for bias in the data (the second and third parts of this blog series, respectively), I’ll outline the different types of bias that are present in our field.
Two types of bias are a part of every data-based experiment: random bias and systematic bias. Random bias is always present when measuring customer experience or any other behavioral process; people will respond differently based on unpredictable processes such as life events, mood or even the weather! Because random biases can be assumed to fluctuate within the sample, they should not slant your data in any one way.
Systematic bias, on the other hand, skews survey results in a particular direction away from true population values. For example, if you only sent questionnaires to clients from a specific region or ethnicity, you can be sure that their answers will vary in a certain way from those of all customers. This systematic bias that is introduced in how a sample is constructed is called sampling bias.
Of course, few market researchers will intentionally omit certain groups from their survey invitations. But because we are rarely able to use probability-based sampling and instead collect all survey responses that come in (often called convenience sampling), the sample that emerges is far from representative of the overall population. This will always invite another form of sampling bias: self-selection bias, which occurs because the people who elect to respond differ meaningfully from those who do not. Respondents tend to have more favorable opinions of the company than non-respondents, and there is still debate on whether certain cultures or ethnicities are more likely to participate in surveys. Regardless, we must understand that when we analyze customer survey data, we are studying the most engaged group of customers and that, unless we utilize techniques to adjust for this bias, we may only generalize our findings to this smaller group.
Waypoint’s focus on non-respondents is what differentiates our methodology from the rest of market research. Rather than ignoring this group and solely analyzing respondents’ data, we know that much insight can be found in identifying which customer traits are most significant in predicting survey response. These factors shape the group that the company most needs to energize and engage – the next step is to follow up with typical non-respondents to see what went wrong in their experiences.
What do you think? Are you aware of different biases in your customer experience data and how do you react to them?
Great article here in VentureBeat, Why the Internet was wrong about Ron Paul. We’ve written many times in the past about how response bias — only looking at survey results from people that respond to your survey — skews customer feedback results (most recently here: Net Promoter & Statistics: When Accuracy Goes Haywire, and 5 Ways to Proceed).
“Paul dominates positive tweets in an atmosphere that is incredibly negative,” said David Rothschild, a Yahoo researcher focusing on event prediction and individual behavior.
“But,” he continued, “tweets originate from an unrepresentative segment of the electorate who can ‘vote’ many, many times… These are not representative samples of the relevant electorate.”
Ever wonder why your company’s financial performance may not be as strong as the marketing hype around your “customer satisfaction” would lead you to conclude? Pay attention to who ISN’T responding: there’s gold in understanding who’s engaged with you… and who isn’t.
As a practitioner in the field of Customer Insights / Customer Experience / Net Promoter / Voice-of-the-Customer (what are we supposed to call this field, anyway?!?), I am frequently asked, “How many responses do we need to be statistically significant?”
Statisticians often use a “margin of error” calculation. Depending on your population size this often suggests ~300 responses per analysis segment. But we can answer the question of “how many do we need” in different ways, with pros and cons for each. Here are my findings, based on my 22 years of real-world experience in this area (and this is certainly a larger topic that I think would be better served as a series of discussions!):
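For reference, here is a small sketch of the margin-of-error arithmetic behind that rule of thumb. It assumes a simple random, representative sample (exactly the assumption challenged in the cons below), a proportion-style metric, and a worst-case p of 0.5; the function name and defaults are my own.

```python
# Back-of-the-envelope sample size for a given margin of error.
import math

def required_n(margin=0.05, confidence_z=1.96, p=0.5, population=None):
    """Sample size for a proportion at +/- `margin`, worst case p = 0.5."""
    n = (confidence_z ** 2) * p * (1 - p) / margin ** 2
    if population:
        # Finite-population correction for smaller customer bases / segments.
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

print(required_n(margin=0.05))                    # ~385 responses at +/- 5%
print(required_n(margin=0.05, population=2000))   # ~323 when the segment holds 2,000 customers
```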
Pros: Confidence intervals are generally familiar to and accepted by anyone who sees market research data in the media. People seem to appreciate the idea that “we can be 95% certain that the score is X% +/- Y%.” You can report it and move on.
Cons: Confidence measures assume that you have a representative and random sample. Much like the world of Economics, where textbooks start off with “Assuming a rational world…”, we know from experience that most customer feedback programs are not based on random samples that represent the total population. Why?
- People are people, not instruments. We have emotions and biases that can’t always be known.
- Who is responding? That is, who is “opting in” to provide feedback? In our experience, scores generally skew positively. That is, happy customers respond more than unhappy customers, who are otherwise likely to be “checked out” or see no reason to participate.
- Whom are you inviting to provide feedback? Many programs suffer from bias and unintentionally select “happy” customers. Face it – where you have good customer contact data, you will tend to also have stronger customer relationships. And, especially if you compensate your employees based on customer feedback scores, the program is certainly going to seek out happy customers to provide feedback. Just use your car-dealer experience as a blatant example.
- What is the right confidence level, anyway? We often see statements like, “At 95% confidence…” That threshold is generally accepted in the research world, where we might be making life-or-death decisions. But would you rather base your decision on evidence or just a hunch? Would 50% confidence be better than 0%?
- Pay attention to your sampling strategy – whom are you inviting to provide feedback? – and also examine who actually responded. Make sure both represent your business in ALL the segments you intend to act upon. Are you seeking out and acquiring feedback from those who matter most? (And how do you know? We’ll have to address that in a separate post…)
- Recognize that some customers simply are more important to your business than others. Especially in business-to-business (B2B) situations with complex buying cycles, make sure you are talking to the people that matter most.
- Pay attention to everyone. While this might seem contradictory to item #2 immediately above, no business wants negative word-of-mouth that destroys growth and profitability. A sample size of 1 can be telling, especially if you leverage that 1 person to understand root cause (that looks like yet another potential topic for a future post…).
- Leverage your strengths. We often tend to focus on the negative. Now that you’ve identified your promoters, engage them! Whom do they know? What are the cross-sell opportunities? What can those customers tell you about your competition?
- Context is everything; scores can be meaningless. Whatever you use – Net Promoter, customer effort, customer satisfaction, etc. – you will always need relevant metrics for comparison in order to understand what actions to take. Example: if you step on the scale this week and weigh 170 lbs (~77 kg), and the week before you weighed 168 lbs (~76 kg), is that a good thing or a bad thing? To answer that I’d need to know more – percent body fat / BMI, your goals (‘thinness’? muscle?), and how you compare to your “peers” (defined by your goals). Scores don’t say much on their own. Similarly, in the “Customer Feedback” world you need to understand your sample and make sure you are comparing apples to apples.
As one of my mentors always says, there are a lot of edges to this work. One short blog post isn’t going to close this out. The bottom line for me is that if your primary goal is to present data, then use confidence measures. On the other hand, if you want to drive profitable growth, then consider doing more. After all, between this ‘word-of-mouth’ age of the Internet and the need to keep our existing customers coming back for more, don’t you ideally want 100% of your customers to be with you (and not against you)?
In our research- and (often) statistics-heavy industry it’s easy to go heads-down and just focus on the work. But Net Promoter and customer insight work can be fun – take a look at this from our friends in the Netherlands! Let’s keep an eye on it to see how this goes:
By the way – while you are over there on that site, take a look at their very well-written and well-supported article (including a great response from the NPS “Godfather,” Fred Reichheld) on cultural bias: Net Promoter: Is there a “Dutch Effect”?