2. Clarissa Valiquette
VP, Research & Insight @Pearson
Clarissa leads research and insights for Pearson's new Global Marketing function in London, where she is setting up a Centre of Excellence to foster an insight- and data-driven culture and to ensure progress is measured consistently across a highly matrixed organization in 70 countries.
Prior to Pearson, she built and led the Customer Experience Insights function at Rogers Communications, one of Canada's largest telecom and media companies. As the Voice of the Customer, she was a thought leader enabling a fundamental culture shift in the organization.
Thanks for joining this session on customer metrics for success.
Let’s start with why you should measure customer success at all. Simple: You can’t afford not to!
Your competitors are already doing it, or starting to.
Customers have a much wider reach now with social media. Before, an unhappy customer would tell 9 or 10 people; now they can tell hundreds or thousands at the same time.
And it's not that hard. It's also highly likely that people in your organization are already doing it in some way, just not consistently or with a proper structure in place, which wastes time and effort.
Besides the fact that understanding your customers and actively measuring your impact is now expected, it also benefits your business.
With consistent metrics you can more easily align everyone to the same goal and focus the entire organization on the customer.
The right kind of measurement gives you a view to continuously improve your end-to-end customer experience, not just customer service, and that leads to better company performance in the long run. You will spend less to acquire new customers, your existing customers will contact you less (so they cost less to serve), and they are more likely to buy more from you in the future.
In the end, if you want to remain competitive you need to deliver a great customer experience and you can only do that if you know what drives it for your unique business. You move what you measure.
So let's get down to some details. Here is some practical advice from my personal experience at both Pearson and Rogers: two very different companies in very different sectors and at different stages of maturity.
First off, what should you even measure?
Honestly, it doesn’t matter if you choose to focus on customer satisfaction, NPS, likelihood to repurchase, or even preference for dinner tonight!
What matters is that you know what drives that measure so you can fix the underlying issues.
It's also really important to pick a metric that everyone can rally around. When we started the CX journey at Rogers, six years ago now, we started with a custom index of three questions that directly aligned with our customer experience strategy (which itself was built using a bottom-up insights approach on what we knew was important to customers). We did that because we had many teams measuring some form of customer satisfaction, but all using different methodologies, question wording, time frames and so on. When we agreed to bring all measurement together under the same framework, all the groups were on board… as long as we chose their way of doing things… so in the end we chose none of them and went with an index we could all agree was important. A few years later we moved to NPS for simplicity.
At Pearson right now, we are on a dual path of improving our brand perception and customer experience, but we're really just starting that journey. From a customer experience perspective we will use NPS, again for simplicity and because it's so well known, even among folks who don't have any research experience. That makes it a much easier metric to sell into your organization if you're just starting. There is also a ton of publicly available research out there on why NPS works, how to operationalize it and how to identify business opportunities with it, which, again, makes it a very easy metric to sell to your leadership team and across the company.
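If you've never computed it yourself, the mechanics of NPS are simple: the percentage of promoters (9-10 on the 0-10 "how likely are you to recommend us?" question) minus the percentage of detractors (0-6). A minimal Python sketch, with made-up responses:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Illustrative responses on the standard 0-10 scale.
responses = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]
print(f"NPS: {nps(responses):+.0f}")  # 5 promoters, 2 detractors, 10 total -> +30
```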
Once you know what to measure, you need to determine the how.
Any measurement is better than none because at least it provides the customer lens alongside your regular metrics and forces the business to think beyond sales figures and ‘right now’.
That said, ideally you set up a continuous tracking framework. This means you are collecting customer feedback all the time and, more importantly, are analysing it all the time.
This allows you to take a long-term view looking ahead, not in a rearview mirror that isn't actionable. It lets you react to issues quickly, both internally and externally with your customers, and assess any external factors that might be driving your business, like seasonality.
Pearson, for example, is an education company that provides textbooks and digital learning and teaching services, among many other things, which means our business is highly cyclical. Back-to-school time has a much bigger impact on our brand recognition because students and parents are out buying new learning materials for the school year.
At Rogers, we had a very detailed matrix of what we measured at each point of a customer's journey across the different experiences (not just sales or support channels). So as we saw dips or rises in our customer experience scores, we could more easily tie them back to a specific channel, product, day of the week, experience type, and so on, to help isolate the problem and then focus on root cause analysis.
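To give a flavour of that kind of slicing, here is a small pandas sketch. The column names and numbers are hypothetical, not our actual Rogers setup; the point is that once every response carries its channel, date and so on, isolating a dip is a simple group-by:

```python
import pandas as pd

# Hypothetical feedback extract: one row per survey response.
feedback = pd.DataFrame({
    "date":    pd.to_datetime(["2023-01-02", "2023-01-02", "2023-01-03",
                               "2023-01-03", "2023-01-04"]),
    "channel": ["call_centre", "retail", "call_centre", "web", "retail"],
    "score":   [9, 7, 3, 10, 8],
})

def nps(scores):
    """% promoters (9-10) minus % detractors (0-6)."""
    return 100 * ((scores >= 9).sum() - (scores <= 6).sum()) / len(scores)

# Slice the metric by channel and by weekday to see where a dip is coming from.
by_channel = feedback.groupby("channel")["score"].apply(nps)
by_weekday = feedback.groupby(feedback["date"].dt.day_name())["score"].apply(nps)
print(by_channel, by_weekday, sep="\n\n")
```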
Online is an easy, non-intrusive way to collect customer feedback.
What you need for a successful measurement program that drives action are three things:
The right people
A consistent and short survey that doesn’t cause customer irritation
Real-time reporting and action planning
By the right people I mean a mix of program managers, data analysts, insights specialists and product or marketing managers who are invested in driving the customer's agenda. Team size will vary depending on your need. At Rogers, there were six of us on the insights side alone, plus an entire army of customer experience folks who would take what we found and turn it into action. At Pearson, there will likely be three insights folks once we're up and running, who will work with the various business units to drive action. That is also closer to how it started at Rogers when we began our journey there.
Regardless of the team size, you need skills that will help you pull out the insights you're not looking for, so you need curiosity, and then motivated people who will take the data and attack root-cause issues.
The survey itself should be short: no more than 5 minutes, and ideally no more than 10 questions all told. Make sure you put a proper contact strategy in place so customers don't get bombarded with surveys every time they contact you, perhaps making sure you survey the same customer no more than once every six months.
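A lockout rule like that can be very simple in practice. Here is a hedged sketch; the six-month window, the field names and the do-not-contact handling are illustrative assumptions, not a prescription:

```python
from datetime import date, timedelta

LOCKOUT = timedelta(days=182)  # roughly six months between survey invitations

def eligible(customer_id, last_surveyed, do_not_contact, today):
    """Return True if this customer may be invited to a survey today."""
    if customer_id in do_not_contact:
        return False
    if last_surveyed is not None and today - last_surveyed < LOCKOUT:
        return False
    return True

# Illustrative usage.
dnc = {"cust-042"}
print(eligible("cust-001", date(2023, 1, 10), dnc, date(2023, 4, 1)))  # False: inside lockout
print(eligible("cust-001", date(2022, 6, 1), dnc, date(2023, 4, 1)))   # True: window has passed
print(eligible("cust-042", None, dnc, date(2023, 4, 1)))               # False: do-not-contact list
```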
Also ensure you have a good mix of closed-ended and open-ended questions, leaving you with both structured and unstructured data.
Structured data is of course much easier to analyse: you can quickly say that X% of your customers experienced a technical problem today, but it only confirms or sizes things you already know.
Unstructured data is really important because, for one, it lets you hear the customer speak in their own words (a very useful exercise for product and marketing people, who can use a lot of jargon that regular people don't understand), but it also lets you listen for the things you're not tracking: new or emerging themes, issues or kudos. As you see patterns emerge from this unstructured data, you can choose to add a closed-ended question to the survey to size how big an issue really is, or to track it more easily if it has become a priority for you to fix.
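Even a basic keyword tagger is enough to start sizing emerging themes; a mature program would use proper text analytics, but the idea looks like this (the themes and keywords are invented for illustration):

```python
from collections import Counter

# Hypothetical theme dictionary: keyword fragments mapped to a theme label.
THEMES = {
    "billing":    ["bill", "charge", "invoice", "price"],
    "tech_issue": ["crash", "error", "slow", "outage"],
    "staff":      ["rude", "helpful", "friendly", "agent"],
}

def tag_themes(comment):
    """Return the set of themes whose keywords appear in the comment."""
    text = comment.lower()
    return {theme for theme, words in THEMES.items() if any(w in text for w in words)}

comments = [
    "The agent was really helpful but my bill is still wrong",
    "App keeps crashing since the update",
    "Great price, no complaints",
]

counts = Counter(theme for c in comments for theme in tag_themes(c))
print(counts)  # Counter({'billing': 2, 'staff': 1, 'tech_issue': 1})
```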
And then the reporting. In an ideal world everyone has access to the data in real time and everyone is empowered to act on items within their control. This requires an online dashboard and training at all the various levels (i.e. what do you want them to do with the data). At a minimum, though, make sure you report on results weekly, with some indication as to what is driving any shift. You can then bring this data into a more robust 'customer experience engine room' where you discuss customer issues, but also other metrics that impact your business. You can use the same forum to prioritize what you work on based on how big an issue it is, how much it will cost or how long it will take to fix, and so on.
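The weekly rollup itself can be lightweight. A minimal pandas sketch with invented data, computing NPS by week along with the week-over-week shift that should trigger the "what's driving this?" conversation:

```python
import pandas as pd

# Hypothetical daily responses.
df = pd.DataFrame({
    "date":  pd.to_datetime(["2023-03-06", "2023-03-08", "2023-03-13",
                             "2023-03-15", "2023-03-16"]),
    "score": [9, 8, 4, 6, 9],
})

def nps(scores):
    return 100 * ((scores >= 9).sum() - (scores <= 6).sum()) / len(scores)

# NPS per week (weeks ending Sunday) plus the week-over-week change.
weekly = df.set_index("date").resample("W-SUN")["score"].apply(nps)
report = pd.DataFrame({"nps": weekly, "wow_change": weekly.diff()})
print(report)  # a large wow_change is the cue to dig into the drivers
```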
This engine room should have good representation from the business, with at least director-level partners: from insights to marketing to product, to IT and engineering. Anyone who touches your customer directly or indirectly should be represented so you can make decisions together, and faster.
Incentives are a great way to encourage your customers to provide feedback and give them a sense that you value their time. Generally a chance to win one or a few bigger prizes is a better incentive than giving everyone a small token of appreciation. You could opt for a chance to win one of five $100 gift cards vs. giving everyone a pound off their next bill.
Incentives also raise response rates and attract a wider mix of customers: not just the usual very negative or very positive voices. With incentives you're also encouraging the neutral or somewhat positive folks to tell you about their experience, which is arguably a more representative view of your customers.
The trick is to not change your incentive approach in the middle of your study. Either have it from the beginning or never add it.
Personally, I learned that the hard way. In my previous life we had incentives at the start of our VoC (Voice of the Customer) program. A few years in, we switched vendors and chose not to include incentives so we could roll out the new vendor's program faster. Everything in the program except the incentives stayed exactly the same, and yet our scores plummeted for 4-6 weeks before coming up again slightly and then stabilizing. Now, we did have other known issues impacting our customer experience at the time, which contributed, but how much was driven simply by the incentives we couldn't tell. Response rates also fell by almost half.
One year later we had a big push to get more surveys completed for each rep, but we were already sending out all the available sample we had after applying our do-not-contact lists, six-month lockout rules and other scrubbing, so we decided to reinstate incentives. Learning from last time, we chose to test it in just one channel first and were then able to properly isolate how much of an impact it would have on response rates and the scores themselves. In both cases, the numbers went up: 50% in response rates and about 2 points in NPS.
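If you run a pilot like that, it is worth checking that the lift is bigger than noise. One simple approach, sketched below with entirely hypothetical figures (not the actual Rogers numbers), is a two-proportion z-test on response rates between the incentive channel and a business-as-usual control:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two response rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

# Hypothetical pilot: 1,000 invitations each, 180 vs. 120 completed surveys.
p_test, p_ctrl, z, p = two_proportion_z(180, 1000, 120, 1000)
print(f"test {p_test:.1%} vs control {p_ctrl:.1%}, z={z:.2f}, p={p:.4f}")
```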
Another thing that can impact response rates as well as scores is how you word the invitation to your customers. Again mid-program, using a test and a control group, we decided to adopt a more informal, conversational tone in the email invitation and saw an uptick on both.
When we changed who signed the email invitation, response rates stayed the same, but scores were lower when it came from a named leader in Customer Experience vs. the more generic customer experience team. Customers saw the name of a VP and were more confident their story would get heard, so more 'help me!' scenarios were captured in the feedback.
So lots of little things can have unintended consequences. For that reason, either stick with the methodology you chose at the start, or run a very small pilot to isolate the impact of your change before you launch it, especially if anyone's compensation is tied to your customer metric.
Deciding who should be accountable for whatever customer metric you have agreed on can also be a challenging part to work through.
Some will say everyone in the company should have a CX metric as a target.
Personally, I think only those with actual decision-making power should have a target, so likely directors and up. They will already be working across the business to get things done, so they understand how their and their team's work can impact the entire customer experience.
And a company like Bain will tell you never to give individual reps a customer satisfaction or NPS target. This isn't because these folks don't have a role to play, but rather because it's much too easy to game the system, which leads to spending all your time on that vs. actually improving things. How often have you heard a representative say at the end of a call, 'What else can I do for you to give me a 10 out of 10?', or seen a cashier circle the link to a survey on a receipt for the person in front of you but not for you?
These are all subtle ways people can influence the results: only mentioning the survey to those who had a good interaction, or planting the seed that if a customer doesn't need anything else right now, they must be satisfied enough to score the rep a 10… With that, it's much easier to go back to what we said earlier: know what drives your customer success metric and then incent your people on something that ties to it, is within their control and can't be gamed. This is likely an internal metric, like resolution rates in a call centre.
So once you've established your program and have people analysing the data and acting upon customer feedback, you do to your program what your program is driving for your customers: improve upon it!
That can take a number of forms:
First and foremost, once you have it up and running, establish a closed-loop framework. Set alerts for customer results that fall below a certain threshold, for example all detractors for your priority products. Then have a manager or trained staff call those customers to find out what went wrong and try to fix it. Fixing it may include compensation, but in some cases a simple 'sorry' can be enough to repair the relationship and make the customer feel like you care and are committed. By calling the customer you can ask more pointed questions to determine the root cause, you can fix what went wrong, and you give the customer a sense that their responses aren't going into a black hole and that you are listening. As a nice side benefit, customers are usually shocked when someone actually calls them back, and very quickly you can turn a negative experience into surprise and delight. When you have the bandwidth, you can also call back promoters; there the purpose is to validate their choice, say thank you and, again, surprise and delight to build brand equity.
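The alerting side of that doesn't need to be fancy to start. Here is a sketch of the idea, where the priority products, the detractor threshold and the field names are illustrative assumptions:

```python
# Hypothetical survey responses coming in from the feedback platform.
responses = [
    {"customer": "cust-101", "product": "broadband", "score": 2,
     "comment": "no service for 3 days"},
    {"customer": "cust-102", "product": "tv", "score": 9,
     "comment": "great install"},
    {"customer": "cust-103", "product": "broadband", "score": 6,
     "comment": "slow speeds at night"},
]

PRIORITY_PRODUCTS = {"broadband"}  # where closed-loop callbacks are funded
DETRACTOR_MAX = 6                  # 0-6 on the 0-10 NPS scale

def callback_queue(responses):
    """Detractors on priority products, worst scores first."""
    flagged = [r for r in responses
               if r["score"] <= DETRACTOR_MAX and r["product"] in PRIORITY_PRODUCTS]
    return sorted(flagged, key=lambda r: r["score"])

for r in callback_queue(responses):
    print(f"call {r['customer']} (score {r['score']}): {r['comment']}")
```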
You can link your internal metrics to your customer metrics to determine how customer behaviour actually ties to customer attitudes and sentiment. Some examples might be linking call handle time or interaction reason in a call centre to NPS, the number of times a customer changes their service offering with you, how many times your marketing team has contacted a customer, and so on.
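As a toy example of that linkage, here is a first-pass sketch correlating call handle time with the score the customer later gave. The data is invented and a real analysis would control for far more, but this is the shape of the first look:

```python
import pandas as pd

# Hypothetical joined dataset: one row per surveyed call,
# with the operational metric and the survey score side by side.
calls = pd.DataFrame({
    "handle_time_min": [4, 12, 7, 25, 3, 18, 9, 30],
    "score":           [10, 7, 9, 2, 9, 5, 8, 1],
})

# First look: do longer calls go with lower scores?
print(calls.corr())

# Compare average handle time for detractors (0-6) vs. everyone else.
calls["detractor"] = calls["score"] <= 6
print(calls.groupby("detractor")["handle_time_min"].mean())
```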
And add your employee voice to the mix. You can do that in two ways:
1. We've probably all heard that happy employees make happy customers, so prove it out. Link frontline customer feedback to frontline employee satisfaction scores with your organization as a place to work, for example, and then use that as the business case for improving employee engagement programs.
2. Also empower your employees to feed back the customer input they're hearing. Set up a system or ambassador program where employees can log customer issues they've heard from friends, family or anyone they meet, with the confidence that those issues will get resolved.
That was a lot of information so I will leave you with just one thing:
It all starts and ends with people. To be successful here you need to hire motivated and curious people who are looking for change.
And with that, I’ll turn it over to questions.
Questions:
1. You say you need to hire the right people. What kinds of skills would you recommend here?
- Great question! Forrester has a great article on the perfect mix of people to run insights and, by extension, customer experience. Those are market researchers; data analytics folks who can manipulate and link data and do some predictive analysis; and then product and marketing expertise, so you have close alignment to the business and can translate your findings into relevant actions.
2. What would you say to a sceptical leadership team that doesn't want to make the investment to build a full-fledged program?
- Having top-down sponsorship and support is vital to making these programs successful. If you get pushback, suggest a pilot program with just one area, product or service, and really go deep on that one item. Set up the feedback system, analyse the data, and work with the teams to improve the things your customers say are pain points. Then link it to your internal metrics and get a data scientist to do some modelling on what one point of movement is worth to you financially. You can do that by linking it to churn, repurchase, upgrades, cost to serve, cost to acquire, etc. Once you start seeing upticks in your customer success metrics and you can say each point is worth X dollars, you will have a much stronger business case to build the program out across the rest of your organization.
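To make that modelling step concrete, here is a hedged sketch of the idea: fit a simple line between NPS and churn across periods or segments, then translate a one-point move into money. Every number below is invented, and a real model needs far more rigour (controls, segmentation, causal checks):

```python
import numpy as np

# Hypothetical observations: segment-level NPS vs. monthly churn rate.
nps   = np.array([10, 15, 20, 25, 30, 35, 40])
churn = np.array([0.062, 0.058, 0.055, 0.050, 0.047, 0.044, 0.040])

# Simple least-squares line: churn = intercept + slope * NPS.
slope, intercept = np.polyfit(nps, churn, 1)
print(f"each NPS point ~ {slope:.5f} change in monthly churn")

# Translate into money under assumed economics (both figures invented).
customers = 500_000
annual_value_per_customer = 600
saved = -slope * customers * annual_value_per_customer
print(f"one NPS point ~ ${saved:,.0f} in retained annual revenue")
```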