Customer support is always a story about people. Your customers have questions or difficulties, and the support team helps them cope – it’s simple. But any team needs to grow, and while it’s usually easy for a sales or marketing team to measure productivity (all their KPIs are easy to quantify), it’s harder to set support team goals (you’re not asking them to increase their empathy by 20%).
The quality of care needs to be expressed numerically. Large companies use a standard set of metrics: response rate, number of resolved questions, number of missed questions, and quality of operators’ work (many express this indicator through NPS). We will tell you how it all works for us at GotYourBack Support Company, and how you can apply the same tools yourself.
You can track the quality of the entire team’s work over a certain period, the results of each operator or channel separately, or how your users rate the work of support.
It is logical to first look at the big picture and then study the individual indicators.
What does this metric mean: How quickly the team responds to user questions.
How to evaluate it: The less time it takes to answer, the better.
How can you improve: Find out what affects the speed (perhaps operators don’t have time to pick up a new question because they are tied up with others, or the statistics are skewed by complex questions that cannot be answered quickly) and try to fix it. Set up an auto-reply for when the operator cannot answer right away.
Speed is everything. Imagine that the user has 3 more tabs open in parallel with your competitors’ sites: whoever answers first wins. You can even start chatting directly from the search results – to do this, just connect an integration with one of the chatbots.
If an existing client writes to you, the stakes are even higher: they have paid and want their problem resolved quickly. If that doesn’t happen, they will leave – and you know that regular customers bring in more profit than new ones.
It is best if the response is instant – up to 10 seconds (we are talking about the response to the first message from the user in the chat). It is very important that the first answer is at least minimally personalized.
Bad Example: “Good afternoon! We are processing your request, please wait.”
Good Example: “Hello John! I just need half a minute to look through the catalog and point you to the right product.”
The time between subsequent replies is also important to consider, but try to maintain a balance. There are genuinely difficult questions that require a long and detailed answer. Still, it is very important to show users that you have not forgotten about them. Auto-replies will help keep their attention – if the operator does not have time to answer, the user will receive an auto-reply and understand that everything is under control.
Do not forget to set up auto-replies for non-working hours (they will not only say that the operators are away, but will also help you collect contacts so you can answer everyone in the morning). Then you can check the response rate during business and non-business hours separately.
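To make "response rate during business and non-business hours" concrete, here is a minimal sketch of how it could be computed. The chat log, the 9:00–18:00 business window, and the function names are all hypothetical assumptions for illustration.

```python
from datetime import datetime, time

# Hypothetical chat log: (question_received, first_reply_sent) timestamp pairs.
chat_log = [
    (datetime(2024, 5, 13, 10, 0, 5), datetime(2024, 5, 13, 10, 0, 12)),   # 7 s
    (datetime(2024, 5, 13, 14, 30, 0), datetime(2024, 5, 13, 14, 30, 45)),  # 45 s
    (datetime(2024, 5, 13, 22, 15, 0), datetime(2024, 5, 13, 22, 18, 0)),   # 180 s, off-hours
]

# Assumed business window; adjust to your team's schedule.
BUSINESS_START, BUSINESS_END = time(9, 0), time(18, 0)

def is_business_hours(ts):
    return BUSINESS_START <= ts.time() < BUSINESS_END

def average_first_response(log, business=True):
    """Average first-response delay in seconds for the chosen time window."""
    delays = [
        (reply - received).total_seconds()
        for received, reply in log
        if is_business_hours(received) == business
    ]
    return sum(delays) / len(delays) if delays else None
```

With the sample log above, the business-hours average is 26 seconds, while the single off-hours question took 3 minutes – exactly the kind of gap an after-hours auto-reply is meant to soften.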
What does this metric mean: Whether users ask roughly the same number of questions every day, or whether certain days bring more or fewer questions.
How to evaluate it: It’s hard to say unequivocally, but it’s still better if there are not too many questions.
How can you improve: Optimize the support schedule based on the time when the most new questions arrive.
On the one hand, a large number of new questions means, first, that your chat is working and, second, that users are interested in a dialogue and are ready to solve problems instead of dropping everything and leaving. On the other hand, it may mean that users find a lot about your site confusing – and that is a red flag.
Analyze not only the number of questions, but also their nature: perhaps one specific function causes difficulties. If the number of questions has increased dramatically, this is also a reason to think seriously.
You can also see the distribution of the number of new questions by day to highlight the days with the maximum and minimum load. For example, you may find that on release days you have more questions.
If you see a pattern (on the last Thursday of each month, the number of questions rises sharply), identify the reason and try to help the support team. During a sharp jump, they may simply not cope with the load, and this will spoil your statistics and reputation among users.
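Spotting such patterns is a simple counting exercise. The sketch below groups hypothetical question dates by weekday to surface the peak day; the data and function names are illustrative assumptions, not a real integration.

```python
from collections import Counter
from datetime import date

# Hypothetical creation dates of new questions over one week.
question_dates = [
    date(2024, 5, 6), date(2024, 5, 6), date(2024, 5, 7),
    date(2024, 5, 9), date(2024, 5, 9), date(2024, 5, 9),  # release-day spike
    date(2024, 5, 10),
]

def questions_by_weekday(dates):
    """Count new questions per weekday name."""
    return Counter(d.strftime("%A") for d in dates)

counts = questions_by_weekday(question_dates)
peak_day = max(counts, key=counts.get)
```

Running the same count over months of data (or by day of month instead of weekday) is how a "last Thursday of each month" pattern would show up.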
What does this metric mean: How many questions one user has on average.
How to evaluate it: Less is better.
How can you improve: Track bots; analyze the questions that come up most often and adjust your prompts accordingly.
Each client can ask an unlimited number of questions. It is best when there is exactly one problem per question: then it will be more convenient for operators to solve it, and it will be more convenient for you to evaluate their work. It also happens that a user fits 8 complaints into one question – for us it’s still one question (just a big one).
Therefore, there will always (or almost always) be more questions started than users opening them. But if you see a clear skew – for example, 50 dialogs from just 2 users – pay attention to it. Check whether such talkative users are bots and whether they ask reasonable questions. If someone is chatting out of boredom and taking up your support’s time, consider how you can shield your team from their messages.
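Detecting that skew can be automated. Here is a minimal sketch: it computes the average number of dialogs per user and flags anyone far above the norm as a possible bot. The dialog data, the `bot_42` account, and the threshold of 5 are all hypothetical.

```python
from collections import Counter

# Hypothetical mapping: one entry per dialog, naming the user who opened it.
dialog_owners = ["alice", "bob", "bob", "carol", "bot_42"] + ["bot_42"] * 9

def questions_per_user(owners):
    """Return per-user dialog counts and the average dialogs per user."""
    counts = Counter(owners)
    avg = len(owners) / len(counts)
    return counts, avg

def flag_talkative(counts, threshold=5):
    """Users whose dialog count far exceeds the norm - possible bots."""
    return [user for user, n in counts.items() if n >= threshold]

counts, avg = questions_per_user(dialog_owners)
suspects = flag_talkative(counts)
```

Whether a flagged user is a bot or simply a very engaged customer still takes a human look at the actual dialogs, but the flag tells you where to look.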
What does this metric mean: How many user questions the team manages to solve for a certain period.
How to evaluate it: More solved issues is better.
How can you improve: If the team can’t handle all the questions, it might be time to expand the staff. If certain questions are difficult (and come up often), run additional training on those specific issues.
This metric will help answer the question of how many questions your support team has time to process. You can analyze the number of resolved issues by day or week.
It is logical that the number of closed questions should tend to the number of open ones. If the gap is too wide, your support team is not keeping up with the load.
Often this metric is expressed as a resolution ratio – the percentage of successfully resolved issues out of the total number of issues.
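The resolution ratio mentioned above is a one-line formula. A minimal sketch, assuming the counts come from your own tracking:

```python
def resolution_ratio(resolved, total):
    """Share of successfully resolved issues, as a percentage of all issues."""
    if total == 0:
        return 0.0  # no issues at all: nothing to resolve
    return round(100 * resolved / total, 1)
```

For example, 87 resolved issues out of 100 gives a ratio of 87.0%; tracking this number week over week shows whether the team is catching up with the load or falling behind.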
What does this metric mean: How many users have you helped?
How to evaluate it: More users with resolved issues is better.
How can you improve: Constantly analyze errors and do not close a question until the user is satisfied.
This number shows how many people you made happy by solving their problem. It can be analyzed in relation to the total number of solved dialogues. If the numbers are too different, something is wrong. Perhaps users have a lot of questions along the way, or the operator closes the dialogue before making sure that the client understands everything and is completely satisfied.
On the other hand, customers who ask a lot of questions may be your most loyal users, the ones who have decided to understand the product thoroughly. Either way, it’s worth analyzing what is happening.
What does this metric mean: At what times the support team is most loaded.
How to evaluate it: You don’t evaluate it – just keep it in mind 🙂
How can you improve: Distribute the load with the busiest hours in mind.
Track support workload by hour. This is an important metric because you can use it to optimize your entire team.
Knowing at what times agents process the most requests, you can adjust schedules and direct maximum resources to the hours when the load peaks.
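An hourly load profile is just a histogram over request timestamps. The sketch below uses hypothetical timestamps; in practice the list would come from your chat or helpdesk export.

```python
from collections import Counter
from datetime import datetime

# Hypothetical incoming-request timestamps for one day.
requests = [
    datetime(2024, 5, 13, 10, 5), datetime(2024, 5, 13, 10, 40),
    datetime(2024, 5, 13, 11, 15),
    datetime(2024, 5, 13, 15, 2), datetime(2024, 5, 13, 15, 30),
    datetime(2024, 5, 13, 15, 55),
]

def load_by_hour(timestamps):
    """Count requests per hour of day (0-23)."""
    return Counter(ts.hour for ts in timestamps)

hourly = load_by_hour(requests)
busiest_hour = max(hourly, key=hourly.get)
```

In this sample the 15:00 hour carries the most requests, so that is where you would schedule the most operators.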
Keeping track of each team member’s individual performance is just as important as tracking overall performance. This is how you make support work better: by rewarding the best performers and helping those who lag behind.
We recommend paying attention to the following individual indicators:
◼️ Operator response time – once you know the team’s average response time, estimate each operator’s response time separately during business and non-business hours.
◼️ Average operator rating – shows how users rate the support work. If employees from different departments communicate with users, you can assign them to channels and evaluate the work of each department. It is convenient to look at the overall rating first, then analyze the ratings for each operator separately.
◼️ Number of questions taken to work – here you can see the number of open dialogues in which the operator participates. Pay attention to whether they cope with the number of dialogues assigned to them (compare with the number of resolved issues).
◼️ Issues resolved – the figure helps to estimate the amount of work that the employee is doing.
If you use several channels to communicate with users (email, chat, instant messengers, channels of different departments), it makes sense to collect data for each channel. Statistics grouped by channel helps you analyze the performance of agents in each channel. For this, there are all the necessary indicators: new and resolved issues, users with new and resolved issues, the speed of response during working and non-working hours and the length of the dialogue.
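Grouping those indicators by channel is a straightforward aggregation. A minimal sketch, assuming each issue record carries its channel, resolution status, and first-response time (the data and field names are illustrative):

```python
from collections import defaultdict

# Hypothetical issue records: (channel, resolved?, first_response_seconds).
issues = [
    ("email", True, 3600), ("email", False, 7200),
    ("chat", True, 15), ("chat", True, 40), ("chat", False, 30),
]

def stats_by_channel(records):
    """Aggregate new/resolved counts and average response time per channel."""
    stats = defaultdict(lambda: {"new": 0, "resolved": 0, "total_response": 0})
    for channel, resolved, response in records:
        s = stats[channel]
        s["new"] += 1
        s["resolved"] += int(resolved)
        s["total_response"] += response
    for s in stats.values():
        s["avg_response"] = s["total_response"] / s["new"]
    return dict(stats)

channel_stats = stats_by_channel(issues)
```

Even on this toy data the channels look very different: chat answers in seconds while email takes hours, which is exactly why the metrics are worth splitting per channel rather than averaging across all of them.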
After the operator has closed a question, the user can rate their work. For example, you can offer just 3 ratings: “Excellent”, “Normal” and “Bad”. The best place to start is with the unsatisfactory ratings. Questions rated “Normal” also deserve your attention, while “Excellent” ratings are purely for your enjoyment 🙂 The user can comment on a rating: comments on “Bad” and “Normal” ratings can be found in the same section.
By the way, in the same section you can open any completed question, regardless of its rating. This helps restore justice: did the operator really do a bad job, or was the user simply in a bad mood?
Support metrics help you objectively assess the situation and react in time if something goes wrong. Don’t forget to monitor the quality of user support (especially since you can conveniently and quickly run all the analytics in one place), and users will thank you for it with loyalty and high LTV.