January 2016

Revealing Information From Customer Surveys (#retail, #statistics, #business)

I currently manage a retail enterprise whose customers receive surveys from our product manufacturer in addition to the surveys we solicit from our customers for our own business management purposes. These surveys offer a revealing look into the mind and motivations of both our manufacturer partners and our customers.

First, our manufacturer. Our business is roughly divided into two key operating areas: sales and service. The sales survey sent by the manufacturer has 6 numerical question categories, most of which are broken down into alphabetical sub-questions; in total, the manufacturer is actually soliciting input from a sales customer on 23 (!) different questions, most rated on a 1-10 scale and some as a binary “Yes/No”. The customer is of course invited to provide color commentary on these questions as they like, as well as on the survey overall. Similarly, the service survey has 8 numerical question categories, subdivided alphabetically so that the end result is 25 separate questions with 1-10 or “Yes/No” ratings.

The survey questions range across topics such as the timeliness and convenience of the business’s service, the friendliness and knowledgeability of staff, the subjective perception of the value given or fairness of charges, the perceived honesty of the process and people involved, etc., as well as the overall level of satisfaction and the willingness to recommend to others. Using a specific weighting formula (in which some questions, considered ancillary in nature, actually receive 0% weight, while others are weighted relatively heavily), the manufacturer arrives at a composite score on a 100-point scale (reported to one decimal place): the business’s overall “Customer Experience” index. The bottom 2% of survey scores are thrown out at the end of each month, and the manufacturer then provides bonus funds to the business if the composite score is above an arbitrary hurdle.
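For the mechanically minded, the scoring process described above can be sketched in a few lines of Python. Everything specific here (the question names, the weights, the sample answers) is my own illustrative guess; the manufacturer’s actual weights are not disclosed.

```python
# Sketch of the composite scoring as described: a weighted average of
# per-question scores (some weights are zero), with the bottom 2% of a
# month's surveys discarded before comparing to the hurdle.
# All question names and weights below are hypothetical.

def composite_score(responses, weights):
    """Weighted average of 1-10 answers, rescaled to a 100-point index."""
    total_weight = sum(weights.values())
    raw = sum(weights[q] * responses[q] for q in weights) / total_weight
    return round(raw * 10, 1)  # 1-10 scale -> 100-point scale, one decimal

def monthly_index(surveys, weights, drop_fraction=0.02):
    """Throw out the bottom 2% of survey scores, then average the rest."""
    scores = sorted(composite_score(s, weights) for s in surveys)
    kept = scores[int(len(scores) * drop_fraction):]
    return round(sum(kept) / len(kept), 1)

weights = {"overall_satisfaction": 3, "honesty": 2, "timeliness": 1,
           "friendliness": 1, "waiting_area": 0}  # a 0%-weight "ancillary" item
survey = {"overall_satisfaction": 9, "honesty": 10, "timeliness": 8,
          "friendliness": 10, "waiting_area": 2}
print(composite_score(survey, weights))  # 92.9
```

Note that the customer’s rating of the zero-weighted “ancillary” question could be a 2 or a 10 without moving the composite at all, which is part of the absurdity: the customer is asked to deliberate over answers that are discarded by construction.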

The national average for all related businesses, in both sales and service, ended up four tenths of one percent above the hurdle for the year ended December 31st, 2015, and the hurdle is being moved up this year to one tenth of one percent above that!

The first interesting thing I would note about all of this is the concept of false precision. The multiplicity of dimensions on which the business can be rated and the fractionality of the composite scoring system suggest an extremely precise, professionally calculated measuring tool, which itself suggests a customer experience that is almost scientifically specific in nature, which, in the end, we would hope reflects a consumer demographic that is nuanced, discerning and tasteful in character.

All three of these things are false. The measuring tool’s complexity is its own undoing in that customers rarely seem to understand what they’re rating or why (more on that below), and the surveys are sent out only to the fraction of total customers who provide an e-mail address at time of purchase, of which a still-smaller fraction actually bother to respond. Instead of measuring incremental behavior per thousand, for example, which might accurately capture meaningful changes in trend, the tool measures “fractions of a person’s experience” per tens in a given month in a given business: significantly meaningless specificity. The customer experience process is not as specific as the survey would suggest; many of the items being surveyed are accidents of history, essentially not controllable by the business without undue capital investment. And finally, most of the customers are crude rubes who leave the business either gushing about how great it was or pounding their keyboards in rage behind a Yelp review page, trying to convince everyone that the business should be burnt to the ground and its employees mutilated in the public square in retribution for some minor slight or hiccup. There isn’t a middle ground, and as far as the manufacturer’s scoring criteria are concerned, the middle ground isn’t valuable real estate anyway. As you will learn in a moment, there are entire categories of customers who don’t know or don’t care about many of the sub-questions on the survey, which means the tool captures little more than their ignorance or angst.

The surveying system, in its conceptualization, its construction and its monetary reward scheme, betrays a highly bureaucratic mind completely detached from both business reality and customer capability. The bureaucratic mind sees the world as a series of levers to be pulled, with no easy answers, simple solutions or “good enough” approaches. The bureaucratic mind seeks to measure everything, regardless of how valuable the measurement is. The bureaucratic mind ignores the variability in the quality and capability of human response (the customer) and tries to slice and dice a bunch of statistical averages rather than being merely curious about something resolute like “Were you completely satisfied? Why or why not?”

The fact that the survey system is tied to a monetary reward means there is a strong incentive for the business to find ways to game the system (coaching customers, even if “illegal,” and inputting fake emails or removing them entirely when a bad survey is likely), especially as the manufacturer moves the hurdle ever closer to 100. Setting the bar as high as it is (95) betrays both a kind of cluelessness about how easily slight mishaps in the customer experience can bomb the score below it, and an undue ambition that a “truly great brand” would accept nothing less than perfect scores. “If we just keep moving our standards up, our customers are bound to think more of us!” Meanwhile, setting a monetary reward above a hurdle turns the survey system into the equivalent of a binary “Were you/weren’t you satisfied overall?” despite the 20+ questions, because anything less than the hurdle is essentially a penalty. And without a statistically significant sample size, the manufacturer’s agents have no real place advising the business’s management team on responses to perceived trends in the data.
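To see how unforgiving a 95 hurdle is with a small response pool, consider a made-up month of ten returned surveys (the numbers are mine, purely for illustration), nine of them perfect and one from a keyboard-pounder:

```python
# A hypothetical month: nine perfect surveys and one enraged outlier.
# With so few responses, a single bad survey decides the bonus.
scores = [100.0] * 9 + [45.0]
monthly_average = sum(scores) / len(scores)
print(monthly_average)  # 94.5, below a 95 hurdle: no bonus this month
```

One customer out of ten, however unrepresentative, swings the business from bonus to no bonus, which is exactly the sample-size problem described above.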

So, what about the customers?

There is great confusion on the part of the customer about whom he is responding to and what the consequence of his response will be. Many customers can’t differentiate between the manufacturer’s brand and the business’s brand, and a common lament upon receiving the second of the pair of surveys is “I already filled out your survey!” Few customers who had a positive experience understand how important it is (for the economics of the business) that they register their complete satisfaction by completing the survey. And fewer still who had a negative experience understand that by completely bombing the survey they’re increasing the likelihood that their survey gets thrown out and therefore has no impact on the business whatsoever. These disgruntled customers also don’t understand that their individual complaints are read not by the manufacturer, which is concerned only with the statistical averages, but by the business they dealt with; the complaints are often filled with specific pleas to right some wrong or to put the business out of commission.

The way customers respond to the survey questions is also revealing.

Some customers reveal what angry, destructively vengeful people they are. They will rate the entire experience poorly (for example, rating a 0 for honesty of personnel) because one aspect of it wasn’t to their satisfaction (for example, the product wasn’t received in the condition expected, or they paid more than they would’ve liked, etc.) Or they will rate negatively and cite as their reason a small slight or problem they could’ve easily brought to the attention of the business and had resolved with little cost or inconvenience. This suggests a personality obsessed with power and control that is easily touched off and uses the “tattle” opportunity as a kind of political leverage to punish the perceived wrong-doer.

Other customers will rate the experience a 7 or 8 with comments about never rating 9 or 10 because “nobody is perfect.” These customers seek to use the survey to make grandiloquent philosophical statements about the state of metaphysical reality and can think of no better place to register their beliefs than on a business survey. Their comments are edifying, perhaps, but again completely useless from the point of view of the manufacturer and the business being held financially hostage.

Some customers are incompetent. They will rate the questions all 10s and then rate the final “overall satisfaction” question a 5. When contacted, they’ll express surprise or confusion and say that they “gave you a great survey,” not realizing that the final 5 drops the overall score down to a 90% and thus a failing grade; they can rarely explain why their “overall” score was inconsistent with the rest of the data they relayed about the specific parts of their experience. Others will leave negative color commentary and express unresolved problems but rate the sections of the survey highly. Still others will write very positive comments, including a willingness to recommend to others, but then provide mediocre scores, especially on the willingness-to-recommend question.
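The arithmetic of that lone 5 only works if the “overall satisfaction” question is heavily weighted. Assuming, purely as my own guess (the manufacturer’s real weights aren’t published), that it carries about 20% of the composite weight:

```python
# Hypothetical weighting: "overall satisfaction" carries 20% of the weight.
# A lone 5 on that question (i.e., 50 on the 100-point scale) drags an
# otherwise all-10s survey from 100 down to 90.
overall_weight = 0.20               # assumed, not the manufacturer's figure
other_weight = 1 - overall_weight   # everything else, rated perfect (100)
score = other_weight * 100 + overall_weight * 50
print(score)  # 90.0
```

A single half-hearted answer on one heavily weighted question, from a customer who thinks he “gave you a great survey,” is enough to sink the whole thing below the hurdle.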

Then you have the “deep thinkers.” They will get extremely granular on every question, providing a specific rationalization for each score given. Sometimes, when questions ask for similar information about a part of the experience, they will take the time to repeat themselves at length using slightly different words. One gets the impression of a person who takes themselves and everything they do much, much too seriously. Undoubtedly hemming, hawing and head-scratching were the prelude to the pages-long survey submission.

Everybody shows a bit of themselves and their values with a survey, both the survey maker and the survey taker. The particular survey world I inhabit leaves a lot to be desired in terms of making the survey a useful, honest tool for managing my business. At the very least, however, it provides a good chuckle now and then in reading an inane response or contemplating the unknowable mysteries of the workings of the manufacturer agent’s mind that thought a 20-some item questionnaire would provide invaluable insight into the customer experience. Ignoring the signal that profitability sends in a competitive market, I guess it’s still better than some I’ve heard about wherein the manufacturer’s scoring system revolves around customer responses to the prompt, “Can you imagine a world without [the manufacturer’s product]?”

That’s a real epistemological misfire right there!