Would You Give Me 10 out of 10?

After a recent intercontinental flight, my luggage didn’t turn up on the carousel. Not a great feeling! I was eventually reunited with my bag – more about that in a future post. The airline sent me a survey about the flight and offered a small incentive to complete it. I felt I had something to say, so I clicked the button to answer the ‘short’ survey. It went on page after page, asking about the booking process, how I obtained my boarding card, whether my check-in experience was acceptable, the on-board entertainment, the meals and so on. After the first few pages I gave up. And I’m sure I’m not the only one to give up part way through. Why do companies go over the top when asking for feedback? What do they do with all the data?

I’ve come across a number of examples where data from surveys is not really used. At one company, whenever someone resigned, they were asked to complete an exit survey online. We were concerned about staff retention, so I asked HR if I could see the results from the survey, wondering if it might be a useful source of information. They said they had no summary because no-one had ever analysed the data. No-one had ever analysed the data? It is disrespectful of people’s time – and misleading – to ask them to complete a survey and then ignore their responses. What on earth were they running the survey for? This is an extreme version of a real danger with surveys: running them without knowing how you plan to use the data. If you don’t know before you run the survey, don’t run it!

Of course, there are also cases where you know the survey data itself is misleading. I heard a story of someone who worked as a bank teller and was asked to make sure every customer completed a paper survey – at least 10 completed surveys every day. These were then all forwarded to head office to be entered into a system and analysed. The problem was that the customers did not want to complete the surveys – they were all too busy. So what did the bank tellers do? They got their friends and family to complete the surveys so that they met their 10-a-day target. I wonder how many hours were spent analysing the data from those surveys, reporting on the results, making decisions and implementing changes. When running a survey, be mindful of how you gather the data – the wrong incentives can lead to very misleading results.

Another way incentives can skew your data is by tying financial consequences to the results. At Uber (in 2015 at least) you need an average driver score of 4.6 out of 5 to continue as a driver. So if a passenger gives you 4 out of 5 (which they might think of as a perfectly reasonable score), you need another two passengers to give you 5 out of 5 to make up for it: (4 + 5 + 5) / 3 ≈ 4.67. And if a passenger gives you a 3, you need another four passengers to give you a 5 to get back to a 4.6 average: (3 + 5 + 5 + 5 + 5) / 5 = 4.6. What behaviour does that drive? Some good, for sure – trying to improve the passenger experience. But could there also be drivers who make it clear to the passenger how their livelihood depends on getting a top mark of 5, as is apparently common in car dealerships? This data set is surely skewed.
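To make that arithmetic concrete, here’s a minimal sketch in Python (my own illustration – the 4.6 threshold is the figure quoted above, and the helper function is hypothetical, not anything from Uber) that computes how many 5-star ratings it takes to pull an average back up to the threshold after one lower score:

```python
# Minimal illustration (hypothetical helper, not Uber's code): how many
# 5-star ratings does it take to bring the average back up to the
# 4.6 threshold after receiving a single lower score?

def fives_needed(low_score, threshold=4.6, top=5):
    """Minimum number of top-score ratings so that the average of one
    low_score plus n top scores reaches the threshold."""
    n = 0
    while (low_score + top * n) / (n + 1) < threshold:
        n += 1
    return n

print(fives_needed(4))  # 2 -- a single 4 takes two 5s to offset
print(fives_needed(3))  # 4 -- a single 3 takes four 5s to offset
```

The required average sits so close to the maximum that a single middling score has an outsized effect – which is exactly the pressure that tempts a driver to lobby passengers for 5s.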

It’s easy to come up with questions and set up a survey. But it’s much more difficult to do it well. Here’s a great article on the “10 big mistakes people make when running customer surveys”, along with practical suggestions on how to analyse your survey data using Excel.

Talking of surveys, please make sure you ‘like’ this post!


Text: © 2017 Dorricott MPI Ltd. All rights reserved.