I’ll give you an example of a customer we engage with. This is a medium-sized e-commerce customer with about $25 million in annual revenue in a highly competitive market. They have twenty chat agents during the year and almost double that during the holidays, so it’s largely seasonal, project-based work. Imagine putting a photo on a throw pillow, mug, or blanket for one of your loved ones. A lot of their business comes in November and December. This type of live chat agent training is very hard. Doing QA, that is, maintaining the quality and brand voice that are essential to live chat agent performance, is incredibly challenging when basically all of their business comes in two months of the year.
They gave us a bunch of chat data, consisting of about two hundred thousand visits. It was a medium-sized data set, about three million messages, but certainly enough to draw some insights. They told us first that they were concerned with customer satisfaction surveys. They wanted to make sure they were providing top-tier service, better than any of their competitors. Those feedback surveys included an overall rating as well as evaluations of friendliness, knowledge, and responsiveness.
Later, having seen the results, they said, “Okay, but what about orders?” They gave us data on 1.4 million product orders, with an average order size in the range of $70 and 35% of chats resulting in a sale. They handed us all this data (we didn’t have to integrate with any of their systems), and we categorized it and did keyword analysis to develop a customized approach to live chat agent training for the company.
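To give a flavor of what a basic keyword analysis looks like, here is a minimal sketch in Python. The chat transcripts and the stopword list are made up for the example; they are not the customer’s data or RapportBoost’s actual pipeline.

```python
from collections import Counter

# Illustrative stopword list; a real analysis would use a fuller one.
STOPWORDS = {"the", "a", "is", "my", "i", "to", "for", "on", "what", "can"}

# Hypothetical chat messages standing in for real transcripts.
chats = [
    "when will my order ship",
    "can i put a photo on a mug",
    "my order has not shipped",
    "what photo sizes work on a pillow",
]

# Count every non-stopword token across all chats.
words = [w for chat in chats for w in chat.split() if w not in STOPWORDS]
top_keywords = Counter(words).most_common(3)
```

Even this toy version surfaces what customers keep asking about (here, “order” and “photo”), which is the starting point for categorizing conversations.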
We think about our world in terms of things that are and are not affected by live chat agent performance. In the Non-RapportBoost bucket are age demographics, how many people are chatting at any given time, the learning curve, and the chat volume. We also look at visitor demographics: when they visited, where they’re coming from, what part of the country they’re in, and the number of times they visited. These are all visit stats. Additionally, we do some clustering of message topics.
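As a rough sketch of what clustering message topics can mean in practice, here is a tiny k-means over bag-of-words vectors using cosine similarity, in plain Python. The messages, the choice of k-means, and the deterministic initialization are all illustrative assumptions, not RapportBoost’s actual method.

```python
import math
from collections import Counter

# Hypothetical messages; the real data set was around three million.
messages = [
    "where is my order",
    "my order has not shipped yet",
    "when will my order ship",
    "i want to return my pillow",
    "can i return the mug for a refund",
]

vocab = sorted({w for m in messages for w in m.split()})

def vectorize(text):
    # Bag-of-words count vector over the shared vocabulary.
    counts = Counter(text.split())
    return [counts[w] for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def kmeans_two_topics(vectors, iters=10):
    # Deterministic init: the first message, then the message least
    # similar to it, become the two starting centroids.
    centroids = [list(vectors[0])]
    centroids.append(list(min(vectors, key=lambda v: cosine(v, centroids[0]))))
    labels = [0] * len(vectors)
    for _ in range(iters):
        # Assign each message to its most similar centroid.
        labels = [max(range(2), key=lambda c: cosine(v, centroids[c]))
                  for v in vectors]
        # Recompute each centroid as the mean of its members.
        for c in range(2):
            members = [v for v, lab in zip(vectors, labels) if lab == c]
            if members:
                centroids[c] = [sum(col) / len(members)
                                for col in zip(*members)]
    return labels

labels = kmeans_two_topics([vectorize(m) for m in messages])
```

On this toy data the shipping questions land in one cluster and the return/refund questions in the other, which is the kind of topical grouping the analysis is after.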
We think there are four variables that are affected by live chat agent performance. Number one is effort. Dozens of variables fall into this bucket: Am I using formal language? Am I asking enough questions? What’s my word count? How long are the words I’m using? Am I writing at a sixth- or eighth-grade reading level? The second is friendliness: the use of words like “I,” “we,” and “you,” and assent words such as “gotcha,” “totally,” and “cool.” All of these have a big impact on customer experience. The third is responsiveness: whether I respond to you more quickly or more slowly can affect the outcome. Finally, there’s emotion: using all those sentiment analysis APIs I mentioned, but also looking for things like exclamation marks, positive and negative emotion, emoticons, and whether or not I’m laughing at your jokes. These are all rapport-building tools we can incorporate into a live chat agent training strategy that will ultimately boost the numbers.
Transcribed from Dr. Michael Housman’s Lecture at UC Berkeley Business School in May 2017.
Learn more about the live chat agent training from the team at RapportBoost.