Which customers do I survey? How do I survey them? When do I survey? These are some of the customer satisfaction programme questions I am often asked. Over the coming weeks I will therefore publish a series of blogs giving my views on some of them. This first blog looks at the key elements of building a customer satisfaction programme and lessons learned from working with different companies.
NPS or CSI or NSS: Which is the best methodology?
There are many different methodologies in the market, such as Net Promoter Score (NPS) and Customer Satisfaction Index (CSI). One thing they all have in common is that none of them are “magic wands” and none of them will solve your problems – they only measure the temperature of your business! Recently a company that used a Net Satisfaction Score (“how satisfied are you with X?”) as its main question started to discuss adopting the NPS methodology (“how likely are you to recommend X?”), assuming this would mean better insight into its problems, better understanding of the programme across the business and better utilisation of the customer satisfaction data.
In reality they would simply swap one question for another and, at the end of the day, identify the same problems, have the same level of insight and face the same issues with turning insight into action. However, the benefit of choosing one of the established methodologies, and one of the recognised agencies to implement it, is that you can establish credibility more quickly and build the necessary trust in the method. Using an established methodology often means spending less time securing buy-in to the overall method and more time getting the operational elements right (sample selection, communication initiatives, closed-loop processes etc.).
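To illustrate how close the two metrics sit mechanically, here is a minimal sketch (with made-up responses, not real survey data) computing both an NPS and a simple mean-based satisfaction score from the same 0–10 answers:

```python
# Minimal sketch with hypothetical responses - not real survey data.
# NPS: % promoters (scores 9-10) minus % detractors (scores 0-6).

def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def mean_score(scores):
    # A plain mean of the same answers, for comparison.
    return sum(scores) / len(scores)

responses = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10, 4, 8]
print(round(nps(responses), 1))        # → 16.7
print(round(mean_score(responses), 1))  # → 7.7
```

Both numbers summarise the same answers; switching between them changes the headline figure, not the underlying insight.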
Which question is the best to use as our key metric?
There are a few elements to consider when choosing one way of assessing customer satisfaction over another. Assessments that rely on the recommendation question as the key metric (like NPS) will be influenced by the customer’s ability to recommend (even though the question only asks about their intention to recommend), as some customers won’t distinguish between being able to recommend and intending to do so. So if your customers work in the public sector (where they often are not allowed to recommend), they are likely to factor that into their answers, and we often find a discrepancy between overall satisfaction and intention to recommend. The same goes for sole proprietors/SMEs, or even consumers, who would like to recommend but are not sure to whom – the product or service might be unique to their needs but irrelevant to their peers, family and friends. An easy way of assessing this is to include three or four variations of the key metric question plus a clarifying verbatim question. Afterwards, analyse whether there are significant differences in the results and look for an explanation in the comments – before deciding which should be the key metric.
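The comparison itself is straightforward arithmetic. As a sketch, assuming you have collected paired answers to two question variants from the same respondents (all numbers below are invented for illustration):

```python
# Hypothetical paired answers from the same respondents (0-10 scale).
overall_sat = [9, 8, 9, 7, 8, 9, 8]  # "How satisfied are you with X?"
recommend   = [6, 5, 7, 4, 6, 5, 6]  # "How likely are you to recommend X?"

mean_sat = sum(overall_sat) / len(overall_sat)
mean_rec = sum(recommend) / len(recommend)
gap = mean_sat - mean_rec

print(f"satisfaction={mean_sat:.2f} recommend={mean_rec:.2f} gap={gap:.2f}")
if gap > 1.0:  # threshold chosen arbitrarily for this illustration
    print("Large gap - read the verbatims (e.g. 'not allowed to recommend')")
```

In practice you would test the difference statistically rather than eyeball a threshold, but the point stands: let the data and the verbatim comments choose the key metric.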
Some companies use a combination of different metrics, often weighted according to different business or satisfaction drivers. The benefit is that the overall benchmark metric is tailored to the business and incorporates an assessment of several drivers – the disadvantage is that it is harder for people in the business to understand and relate to. If you are embarking on the satisfaction journey, I would recommend keeping it simple; as your programme matures, you can look at developing or adding more complex metrics to support the main one.
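The mechanics of such a composite are simple. A minimal sketch, assuming hypothetical drivers and weights (none of these names or numbers come from a real programme):

```python
# Hypothetical drivers and weights - purely illustrative.
weights = {"product": 0.40, "support": 0.35, "price": 0.25}      # sum to 1.0
driver_scores = {"product": 8.2, "support": 7.5, "price": 6.8}   # mean, 0-10

# Weighted composite index: sum of (weight x driver mean).
composite = sum(weights[d] * driver_scores[d] for d in weights)
print(round(composite, 2))
```

The single composite number is tidy, but every reader now has to keep three weights in mind to interpret it – which is exactly the complexity cost described above.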
Customer satisfaction capabilities: Build in-house or go to an agency?
If you are about to embark on the journey and don’t have people in the company with prior satisfaction programme experience, I would recommend involving an agency to set up a robust programme for you and help you through the initial hurdles. Getting it right from the beginning is essential. I once took over a programme for a country where the initial setup had gone wrong – fundamental errors such as account managers selecting which customers to survey, causing both deliberate gaming (“I choose the customers I know are satisfied”) and inadvertent gaming (“I choose the clients I happen to know”). Several years after this process had been changed, the results were still questioned by senior management because of that initial error. So if you get it wrong in the beginning, it can take a long time to create trust in the programme and its results.
How do we ensure business ownership of our satisfaction results?
The danger in using an agency is losing ownership of the survey and its outcome. Lack of ownership is easy to spot: in many companies that use an agency, the survey is referred to by the vendor’s name rather than the company’s. If the agency is too visible in the branding, or too prominent in running the survey in front of the business, you risk the business not taking ownership of the results and not driving changes. So if you are using an agency, make sure you utilise their experience – but when it comes to branding the survey, it is your company’s survey.
An easy trick is to remove all agency logos from the results presentation and ensure it is presented by an internal person. When choosing a standard methodology and/or solution, make sure you are able to adapt it to your business and its future needs for insight. Many companies benefit from established methodologies that come with a standard survey, measurement scale, closed-loop processes etc. These are good when you embark on the journey, but as your programme grows over time, so does your need to tailor it. So make sure that the programme size, survey content, number of surveys, results platforms etc. can easily be changed to meet changing business needs.
The next blog will address questions around what type of survey to develop, how often to survey, developing key benchmark metrics, etc.