Customer satisfaction sits at the core of the customer experience, reflecting how much a customer likes a company’s products, services, and business activities. For instance, Americans will tell an average of nine people about a positive experience and an average of 16 about a negative one.
High levels of customer satisfaction (with pleasurable experiences) are strong predictors of customer and client
retention, loyalty, and product repurchase. Data that answers why a customer or client enjoyed their experience
helps the company recreate these experiences in the future. Effective businesses focus on creating and reinforcing
pleasurable experiences so that they might retain existing customers and add new customers.
Consumers expect an exceptional experience from organizations, and unfortunately, people talk about bad customer experiences more than they brag about good ones. Research has found that 86 percent of buyers will pay more for a better brand experience, but only 1 percent feel that vendors consistently meet expectations.
Customer satisfaction (CSAT) surveys are used to understand your customers’ satisfaction levels with your organization’s products, services, or experiences. This is one type of customer experience survey and can be used to gauge customers’ needs, understand problems with your products and/or services, or segment customers by their score. CSAT surveys often use rating scales to measure changes over time and to gain a deeper understanding of whether you’re meeting customers’ expectations.
Example question: Overall, how satisfied are you with “ABC Restaurant”?
This question reflects a consumer’s overall satisfaction with a product they have used.
The strongest predictors of customer satisfaction are the customer experiences that result in attributions of quality:
i. Overall quality
ii. Perceived reliability
iii. Extent to which the customer’s needs are fulfilled
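An overall-satisfaction rating question like the one above is typically summarized as a single CSAT score. One common convention (an assumption here, since no scale is specified above) is the “top-two-box” share on a 5-point scale:

```python
def csat_score(ratings):
    """Share of respondents who chose 4 or 5 on a 1-5 scale
    (the common 'top-two-box' convention; conventions vary)."""
    satisfied = sum(1 for r in ratings if r >= 4)
    return 100 * satisfied / len(ratings)

# 4 of 6 respondents rated 4 or 5:
print(round(csat_score([5, 4, 3, 5, 2, 4]), 1))  # 66.7
```

Some organizations instead report the mean rating; whichever convention you choose, apply it consistently so scores remain comparable over time.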
It is commonly believed that dissatisfaction is synonymous with purchase regret while satisfaction is linked to
positive ideas such as “it was a good choice” or “I am glad that I bought it.”
Example question: Would you recommend “ABC Restaurant” to your family and friends?
This single question measure is the core NPS (Net Promoter Score) measure.
Customer loyalty reflects the likelihood of repurchasing products or services. Customer satisfaction is a major predictor of repurchase, strongly influenced by explicit evaluations of product performance, quality, and value.
Loyalty is often measured as a combination of measures including overall satisfaction, likelihood of repurchase, and
likelihood of recommending the brand to a friend.
A common measure of loyalty might be the sum of scores for the following three questions:
i. Overall, how satisfied are you with [brand]?
ii. How likely are you to continue to choose/repurchase [brand]?
iii. How likely are you to recommend [brand] to a friend or family member?
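The composite above can be sketched minimally, assuming all three questions share the same rating scale (the equal-weight sum named in the text is the simplest convention; weighted variants also exist):

```python
def loyalty_score(satisfaction, repurchase, recommend):
    """Composite loyalty: equal-weight sum of the three ratings,
    each assumed to be on the same scale (e.g. 1-10)."""
    return satisfaction + repurchase + recommend

# A respondent answering 8, 9, and 7 scores 24 out of a possible 30:
print(loyalty_score(8, 9, 7))  # 24
```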
Example question: How satisfied are you with the “taste” of your entree at ABC Restaurant?
Example question: How important is “taste” in your decision to select ABC Restaurant?
Affect (liking/disliking) is best measured in the context of product attributes or benefits. Customer satisfaction is
influenced by perceived quality of product and service attributes, and is moderated by expectations of the product or
service. The researcher must define and develop measures for each attribute that is important for customer
satisfaction.
Consumer attitudes toward a product develop as a result of product information or any experience with the product, whether perceived or real.
Again, it may be meaningful to measure attitudes towards a product or service that a consumer has never used, but it
is not meaningful to measure satisfaction when a product or service has not been used.
Cognition refers to judgment: the product was useful (or not useful); fit the situation (or did not fit); exceeded the
requirements of the problem/situation (or did not exceed); or was an important part of the product experience (or
was unimportant).
Judgments are often specific to the intended use and use occasion for which the product is purchased, regardless of whether that use is correct or incorrect.
Affect and satisfaction are closely related concepts. The distinction is that satisfaction is “post experience” and
represents the emotional effect produced by the product’s quality or value.
Example question: Do you intend to return to the ABC Restaurant in the next 30 days?
When questions concern future or hypothetical behavior, consumers often indicate that “purchasing this product would be a good choice” or “I would be glad to purchase this product.” Behavioral measures also reflect the consumer’s past experience with customer service representatives.
Satisfaction can influence other post-purchase/post-experience actions like communicating to others through word
of mouth and social networks.
Additional post-experience actions might reflect heightened levels of product involvement that in turn result in
increased search for the product or information, reduced trial of alternative products, and even changes in
preferences for shopping locations and choice behavior.
Respondents give a rating between 0 (not at all likely) and 10 (extremely likely) and, depending on their response, fall into one of three categories used to establish the NPS:
- Promoters respond with a score of 9 or 10 and are typically loyal and enthusiastic customers.
- Passives respond with a score of 7 or 8. They are satisfied with your service but not happy enough to be
considered promoters.
- Detractors respond with a score of 0 to 6. These are unhappy customers who are unlikely to buy from you
again, and may even discourage others from buying from you.
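The NPS itself is the percentage of promoters minus the percentage of detractors, giving a score between -100 and +100; a minimal calculation:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
    Passives (7-8) count toward the total but neither add nor subtract."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 3 promoters and 1 detractor among 5 respondents -> 60% - 20% = 40:
print(nps([10, 9, 9, 7, 4]))  # 40.0
```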
Transactional vs Relational NPS Programs: Relational NPS surveys are deployed on a regular basis (e.g.
quarterly or annually). The goal is to get a periodic pulse on your customers and understand how they feel about
your company overall. This data can be used to check the health of customer relationships year-over-year and
provide a benchmark for company success. Transactional NPS surveys are sent out after the customer interacts
with your company (e.g. a purchase or a support call). They are used to understand customer satisfaction on a granular level and
provide feedback about a very specific topic. It’s best to use both types to understand your customer at macro and
micro levels.
CES surveys typically ask the question, “On a scale of ‘very easy’ to ‘very difficult’, how easy was it to interact with the company?” The idea is that customers are more loyal to a product or service that is easier to use. Research by Gartner shows that reducing customer effort can increase repurchase rates, lower service costs, and reduce employee attrition.
Customer churn is a key business driver and customer effort is a great indicator of loyalty. This measurement is
quick and easy for customers to evaluate, and it’s simple to implement across different service and survey channels.
CES correlates with business outcomes and is easy to track over time.
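CES is usually reported as the mean response on the ease scale. A minimal sketch, assuming a 7-point scale where higher means easier (scale details vary between teams and are an assumption here):

```python
def ces(scores, scale_max=7):
    """Mean Customer Effort Score; here higher = easier, on an
    assumed 1-to-scale_max scale (7-point scales are common)."""
    assert all(1 <= s <= scale_max for s in scores), "rating out of range"
    return sum(scores) / len(scores)

# Five support interactions rated for ease:
print(ces([7, 6, 5, 7, 3]))  # 5.6
```

Tracking this mean per touchpoint over time makes it easy to spot interactions where effort is creeping up.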
CES surveys should be deployed immediately after interactions or specific touch-points like a product purchase or
an interaction with customer service. When a customer interacts with your company, simply asking them how easy it
was to get their issue resolved can indicate if they’ll return as a customer. To take the survey one step further, you
could ask why they rated the interaction easy or difficult so you know how to improve or close the loop on the
interaction.
Automated Triggers – Surveys should be automatically sent out after an interaction with a customer service
representative or specific touch-point.
Keep it Simple – The survey should only be one or two questions and you should avoid using any leading questions.
Share Your Data – Results should be shared with those who can take action, and leadership across multiple departments should collaborate to implement a strategy. Additionally, customer service representatives should be empowered to follow up with the customer and resolve any issues that weren’t solved in the original interaction.
- Provide multiple channels for contact/feedback – Your business should be meeting customers in all
channels of digital support, so they can choose the realm they feel the most comfortable with. E.g. social
media support, email, chat, in-person support centers, and call centers.
- Use self-service tools – Many customers would rather solve the issues themselves instead of speaking to a
customer service representative. Providing self-service options makes it easier for customers to get their
question answered, reducing effort. Forms and self-help articles are a great place to start.
- Reduce wait times – Customers want issues resolved quickly and don’t want to wait on the phone to talk to
a live representative. If you have high wait times, use a callback system or employ more staff during your
busiest hours.
4
If a customer is unhappy, always close the loop with them and make sure you fully understand what you could do better to keep their business. Also keep making general improvements to products and customer service programs based on cumulative survey results.
Do’s
- Ask for an overall company rating first – This satisfaction question gives you a great initial insight and allows you to compare against industry and internal benchmarks over time.
- Allow for open-text feedback – Open-text questions let you collect open-ended responses from respondents. These may extract more details about the customer’s experiences while uncovering new or unexpected insights.
- Optimize for mobile – Many consumers now complete surveys on mobile devices or within mobile apps, so surveys must be optimized for mobile. If a survey is too complicated for a mobile respondent, participation will decrease.
Don’ts
- Ask double-barrel questions1 – These questions touch on more than one issue, but only allow for one
response. They are confusing for the respondent and you’ll get skewed data because you don’t know which
question the respondent is answering.
1 A double-barreled question combines two or more separate issues/topics but allows only one answer. These questions are also known as compound questions or double-direct questions. In research, they are often used by accident: surveyors want to explain or clarify certain aspects of their question by adding synonyms or additional information. Although this is done with good intentions, it tends to make the question confusing and, of course, double-barreled. There is no way of discovering the respondent’s true intentions from the data afterward, which renders it useless for analysis.
Examples in research --
(Q) - Is this tool interesting and useful?
This question has two parts embedded, hence the term “double-barreled”. Even though interesting and useful are both positive attributes, they are not interchangeable. Some respondents might find the tool interesting but not useful, while others might find it useful but not interesting. But how should they answer? And, more importantly, how can the surveyor interpret their answers?
It would be better to ask two separate questions, such as: (Q) - Is the tool interesting? (Q) - Is the tool useful?
Other examples:
Q - How often and how much time do you spend on each visit to a dentist?
Should be
Q - How often do you visit a dentist?
Q - How much time do you spend on a visit to a dentist?
- Make the survey too long – Most CSAT surveys should contain fewer than 10 questions. People won’t finish long surveys.
- Use internal or industry jargon – Your customers must be able to clearly understand each question without
hesitation and using internal or industry jargon is confusing to respondents.
Finally, the timing of surveys is extremely important. The experience should be fresh in the customer’s mind so you get the most honest answers. You can solicit feedback face-to-face as customers leave the store, or by email, online survey, phone, or within your company’s mobile app.
For example, let’s look at the airline industry. Customer satisfaction surveys can be sent at every touch-point
(customer interaction point) in the process (consumer journey).
- After the customer books their flight – Feedback after the initial purchase is important because you want to
understand if the person was satisfied with their checkout or purchase experience. Send an email with a link
to an online survey after the customer purchases their flight to find out how satisfied they were with the
booking process. Consumers want easy transactions, so look for ease-of-use in your data.
- After the actual flight – Post-purchase evaluations reflect the satisfaction of the individual customer at the
time of product or service delivery (or shortly thereafter). This can be a transactional NPS or CSAT survey
and sent by email.
- After a customer service encounter – If the customer initiates contact with customer service, a CES survey
should be sent immediately after the issue was resolved. For airlines, this could be a call to change a flight
date or report lost baggage. The goal is to see how much effort it took to resolve the issue.
- Six months after the flight – To measure long-term customer loyalty, relational NPS or CSAT surveys can be sent months after the transaction to see if your customers are still loyal to your brand.
- In-app mobile feedback – You can request feedback on the mobile app or customer experience through a feedback tab in the app. Getting mobile app feedback is important because only your customers can tell you what will make them more satisfied with their experience.
2) DEMOGRAPHIC QUESTIONS: These capture respondent attributes so responses can be segmented -
- Age
- Gender
- Education
- Employment Status
- Household Income
- Marital Status
- Children/dependents
- Location (PIN Code)
3) SATISFACTION CATEGORY QUESTIONS: These help identify satisfaction drivers by highlighting the areas of a customer’s experience that matter most, allowing you to align product and service priorities around them. Potential categories of drivers include -
- Overall Quality
- Value
- Purchase Experience
- Installation/Onboarding
- Warranty/Repair Experience
4) OPEN TEXT FEEDBACK QUESTION: This question allows customers to provide open-text feedback and mention specific topics or experiences for your team to review.
e.g. Please share any additional comments or experiences you would like us to know about.
5) ACTION/FOLLOW-UP QUESTIONS: A simple yes/no question asking whether a representative may reach out to the respondent to understand and resolve any pain points.
- Close the loop – Respond quickly after receiving negative feedback from your customers. This is a chance
to keep your customer loyal. 70 percent of consumers said they would be more likely to do business with
an organization again if their complaint was handled well the first time.
- Analyze for trends – Understand what metrics you’re looking to improve and see if there are patterns on
these specific items. For instance, if 30 percent of respondents say the customer service wait time was too
long, you know you need to improve in that area.
- Company-wide effort – Every department must be on board to keep the customer satisfied. If customers complain about a product feature, the product department must be willing to receive the data and fix the feature. If customers complain about the service, customer service representatives need to understand how to resolve the issues better. Make sure the right people have the right visibility with role-based CX dashboards and analytics.
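The “analyze for trends” step can be as simple as tallying coded feedback topics and computing each topic’s share of respondents. The topic labels and counts below are hypothetical:

```python
from collections import Counter

# Each response has been coded to one complaint topic (hypothetical data).
coded_responses = ["wait_time", "wait_time", "billing", "wait_time", "staff",
                   "wait_time", "billing", "wait_time", "wait_time", "staff"]

counts = Counter(coded_responses)
shares = {topic: 100 * n / len(coded_responses) for topic, n in counts.items()}

# 6 of 10 respondents flagged wait time -> a clear area to improve:
print(shares["wait_time"])  # 60.0
```

Recomputing these shares each survey cycle turns raw complaints into a trend line you can act on.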