Waypoint Group will be at VoC-Fusion, billed as “The World’s Largest Voice-of-Customer Event.” The conference promises to be extremely useful for anyone running a customer feedback or loyalty program, to say nothing of the invaluable networking opportunities.
We’re also very pleased that Waypoint Group was asked to create and lead the “Insights to Action” workshop as part of VoC Certification, which will also be held at this event. The Certification consists of five well-designed and highly engaging courses, where you will learn best practices in loyalty program design and VoC program implementation. Our portion of the Certification will cover best practices in data-gathering techniques, analysis (including financial linkage, key drivers, and critical statistical methods), and the most effective ways to turn insights into action.
We hope to see you there!
Great article here in VentureBeat, Why the Internet was wrong about Ron Paul. We’ve written many times in the past about how response bias — only looking at survey results from people who respond to your survey — skews customer feedback results (most recently here: Net Promoter & Statistics: When Accuracy Goes Haywire, and 5 Ways to Proceed).
“Paul dominates positive tweets in an atmosphere that is incredibly negative,” said David Rothschild, a Yahoo researcher focusing on event prediction and individual behavior.
“But,” he continued, “tweets originate from an unrepresentative segment of the electorate who can ‘vote’ many, many times… These are not representative samples of the relevant electorate.”
Ever wonder why your company’s financial performance may not be as strong as the marketing hype around your “customer satisfaction” would lead you to conclude? Pay attention to who ISN’T responding: there’s gold in understanding who’s engaged with you… and who isn’t.
We’ve previously written about how HiPPOs – the HIghest Paid Person’s Opinion – can damage ROI. A recent McKinsey study titled “A Rising Role for IT” may have inadvertently shined a light on this through a footnote that I think is worth calling out:
“…respondents say their companies are shifting decision making to incorporate more data and analytics in almost all corporate functions, with the highest share (60 percent) citing marketing and sales as where this is likely to occur. 3”
“3 In our November 2011 survey “What marketers say about working online: McKinsey Global survey results,” we asked respondents (all of whom represent the sales and marketing functions of their companies) which types of data their marketing departments’ typical decisions rely on, and only 14 percent say their data is limited and that decision making relies on management expertise and experience”
We also see the included graphic, which shows that the greatest percentage of respondents feel that the biggest barrier to adoption is “Company culture prioritizes experience over data.”
True, small sample sizes and perhaps a lack of “trustworthy data” in this study (which we’ve also written about) might cloud the results, but at least for this group of respondents it seems ego may be a barrier. What am I missing – am I reading this too fast, or do we see an opportunity to manage our corporate culture and defer to our customers to optimize decision making?
As a practitioner in the field of Customer Insights / Customer Experience / Net Promoter / Voice-of-the-Customer (what are we supposed to call this field, anyway?!?), I am frequently asked, “How many responses do we need to be statistically significant?”
Statisticians often use a “margin of error” calculation. Depending on your population size, this often suggests ~300 responses per analysis segment. But we can answer the question of “how many do we need” in different ways, with pros and cons for each. Here are my findings, based on my 22 years of real-world experience in this area (and this is certainly a larger topic that would be better served as a series of discussions!):
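For the curious, that ~300 figure falls out of the standard margin-of-error formula for a proportion. Here’s a minimal sketch in Python — the 95% confidence level, ±5% margin, and segment sizes are illustrative assumptions, not figures from any particular program:

```python
import math

def sample_size(margin_of_error, confidence_z=1.96, p=0.5, population=None):
    """Minimum responses needed to estimate a proportion within +/- margin_of_error.

    Uses the standard formula n = z^2 * p(1-p) / e^2, where z=1.96 corresponds
    to 95% confidence and p=0.5 is the conservative (worst-case) assumption.
    A finite-population correction is applied when `population` is given.
    """
    n = (confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2
    if population is not None:
        # Finite-population correction: smaller populations need fewer responses
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

print(sample_size(0.05))                   # very large population -> 385
print(sample_size(0.05, population=1000))  # a 1,000-customer segment -> 278
```

The finite-population correction is why the rule of thumb “depends on your population size”: for a segment of a few thousand customers the requirement drifts toward ~300, while for very large populations it settles near 385.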
Pros: Confidence intervals are generally familiar and accepted by anyone who sees market research data in the media. People seem to appreciate the idea that “we can be 95% certain that the score is X% +/- Y%.” You can report it and move on.
Cons: Confidence measures assume that you have a representative and random population. Much like in the world of Economics, where textbooks start off, “Assuming a rational world…” we know from experience that most customer feedback programs are not based on random samples that represent the total population. Why?
- People are people, not instruments. We have emotions and biases that can’t always be known.
- Who is responding? That is, who is “opting in” to provide feedback? In our experience, scores generally skew positively. That is, happy customers respond more than unhappy customers, who are otherwise likely to be “checked out” or see no reason to participate.
- Whom are you inviting to provide feedback? Many programs suffer from bias and unintentionally select “happy” customers. Face it – where you have good customer contact data, you will tend to also have stronger customer relationships. And, especially if you compensate your employees based on customer feedback scores, the program is certainly going to seek out happy customers to provide feedback. Just use your car-dealer experience as a blatant example.
- What is the right confidence level, anyway? We often see statements like, “At 95% confidence…” That ruler is generally accepted in the research world, where we might be making life-or-death decisions. But in business, would you rather base your decision on evidence or just a hunch? Isn’t 50% confidence better than 0%?
So what should you do instead? Here are my recommendations:

1. Pay attention to your sampling strategy – whom are you inviting to provide feedback? – and also examine who responded. Make sure both represent your business in ALL segments that you intend to act upon. Are you seeking out and acquiring feedback from those who matter most? (And how do you know…? We’ll have to address that in a separate post…)
2. Recognize that some customers simply are more important to your business than others. Especially in business-to-business (B2B) situations with complex buying cycles, make sure you are talking to the people who matter most.
3. Pay attention to everyone. While this might seem contradictory to item #2 immediately above, no business wants negative word-of-mouth that destroys growth and profitability. A sample size of 1 can be telling, especially if you leverage that 1 person to understand root cause (that looks like yet another potential topic for a future post…).
4. Leverage your strengths. We often tend to focus on the negative. Now that you’ve identified your promoters, engage them! Whom do they know? What are the cross-sell opportunities? What can those customers tell you about your competition?
Context is everything; scores can be meaningless on their own. Whatever you use — net promoter, customer effort, customer satisfaction, etc. — you will always need relevant metrics for comparison in order to understand what actions to take. Example: If you step on the scale this week and weigh 170 pounds (~77 kg), and the week before you weighed 168 pounds (~76 kg), is that a good thing or a bad thing? To answer that I’d need to know more – percent body fat / BMI, goals (‘thinness’? muscle?), and how you compare to your “peers” (defined by your goals). Similarly, in the customer feedback world you need to understand your sample and make sure you are comparing apples to apples.
As one of my mentors always says, there are a lot of edges to this work, and one short blog post isn’t going to close this out. The bottom line for me: if your primary goal is to present data, then use confidence measures. On the other hand, if you want to drive profitable growth, then consider doing more. After all, between this word-of-mouth age of the Internet and the need to keep our existing customers coming back for more, don’t you ideally want 100% of your customers to be with you (and not against you)?