
Why Deep Learning Will Fail To Replace The Live Chat Agent


Deep Learning is a wildly popular technique for developing chatbots. But in its current state of development, it is not sufficiently advanced to replace the live chat agent in providing customer service. Deep Learning grew out of Machine Learning (ML), which focuses on developing algorithms that help computer systems learn automatically without being explicitly programmed. A wide range of Machine Learning algorithms has been developed, such as Linear Regression, Logistic Regression, Support Vector Machines (SVM), K-Means, Decision Trees, Random Forests, Naive Bayes, PCA, and Artificial Neural Networks (ANN). Deep Learning (DL), a technique born out of Artificial Neural Networks, is steadily displacing the others. Deep Learning uses multi-layered neural nets and learns by crunching large amounts of data. Though the core idea of DL was developed in the 1960s, it has only recently become successful thanks to the availability of powerful Graphics Processing Units (GPUs). Machine vision, machine translation, speech recognition, automated game playing, and self-driving vehicles all use Deep Learning.

Many companies want to develop chatbots that hold natural conversations with humans and replace the live chat agent, including Microsoft, Facebook (Messenger), Apple (Siri), Google, WeChat, and Slack. A new wave of startups is trying to change how consumers interact with services by building consumer apps that take the place of the customer / live chat agent interaction, like Operator; bot platforms, like Chatfuel; and bot libraries, like Howdy's Botkit. Microsoft recently released its own bot developer framework. While these companies claim to be using Deep Learning and Natural Language Processing techniques to develop chatbots, it's questionable whether Deep Learning alone will yield results that make a conversation with a chatbot indistinguishable from a conversation with a human.

If you use Deep Learning to train a neural net on a corpus to generate sentences (e.g., by training the model to predict the next word given a history of words), the chatbot will produce a lot of garbage. It might repeat the same token many times, or insert random words that don't make any sense (e.g., predicting "unknown" tokens). Without imposing significant structure on the sort of sentences the model can generate, minimizing cross-entropy can lead to some pretty funky results. Natural Language Processing can only approximate meaning. Bots such as Siri, Echo, Viv, Hound, Skype and others fall off a cliff the moment they receive a command that is not an exact match for the engine. Counting words and tracking word order, or even parsing by syntax, yields only probabilities: guesswork, at best.
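To see why unconstrained next-word prediction degenerates, consider a toy sketch. This is illustrative only: a bigram count model stands in for a neural language model, and the tiny corpus is invented. Greedily picking the most probable next word at every step quickly falls into a repetition loop, the same failure mode the paragraph above describes.

```python
from collections import Counter, defaultdict

# Invented toy corpus; a real model would train on millions of sentences.
corpus = ("the agent said the agent said the agent was "
          "very very very helpful").split()

# "Train" a bigram model by counting, a crude stand-in for a neural net
# trained to minimize cross-entropy on next-word prediction.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, steps=8):
    """Greedily pick the most probable next word at every step."""
    words = [start]
    for _ in range(steps):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # cycles: "the agent said the agent said ..."
```

With no structural constraint on the output, the generator repeats the same three-word cycle forever, because locally probable transitions never force the sentence toward a coherent ending.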

Instead, what you might do is use the neural net as a language model to score a set of sentence hypotheses generated by another system (i.e., an N-best list of sentence candidates). You can do this with a rule-based text generation system or a statistical text-synthesis system. Data Scientists use this technique in speech recognition, and it works equally well in text generation. Alternative approaches exist, such as filling in the words of a POS-tagged sentence. In all of these approaches, the key idea is finding a way to impose (sensible) sentence structure on the generated sentence.
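The N-best rescoring idea can be sketched in a few lines. Everything here is hypothetical: a smoothed bigram model plays the role of the neural language model, and the candidate list stands in for the output of an upstream rule-based or statistical generator.

```python
import math
from collections import Counter, defaultdict

# Hypothetical in-domain corpus; "</s>" marks sentence boundaries.
corpus = ("how can i help you today </s> i can help you with that </s> "
          "thank you for contacting support </s>").split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

vocab_size = len(set(corpus))

def log_prob(sentence):
    """Sum of add-one-smoothed bigram log probabilities."""
    words = sentence.split()
    score = 0.0
    for prev, nxt in zip(words, words[1:]):
        num = counts[prev][nxt] + 1
        den = sum(counts[prev].values()) + vocab_size
        score += math.log(num / den)
    return score

# An invented N-best list from some upstream generator.
candidates = [
    "help i can you with that",
    "i can help you with that",
    "i help can you that with",
]
best = max(candidates, key=log_prob)
print(best)  # the language model prefers the fluent candidate
```

The generator guarantees well-formed structure; the language model only has to rank alternatives, a much easier job than producing a fluent sentence from scratch.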

Deep Learning AI is successful when supervised, but the human brain builds categories in a mostly unsupervised way. For example, Google Brain, an AI system running on 16,000 cores, could do little more than recognize cats and human faces, and even then with poor accuracy. This is partly because Deep Learning uses highly unstructured activations (e.g., the high-level representations of "dog" and "cat" in a neural network classifier don't have to be similar at all). In contrast, the brain uses inhibitory neurons to create sparse, distributed representations that can be decomposed into their semantic aspects. This feature of the human brain is important for abstraction and reasoning by analogy.
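The contrast between dense and sparse codes can be made concrete with a small sketch. A k-winners-take-all rule is an assumed, simplified stand-in for inhibitory competition; the activation vector is invented for illustration.

```python
def k_winners_take_all(activations, k):
    """Keep only the k strongest activations, zeroing the rest.
    A crude stand-in for the inhibitory competition that is thought
    to yield sparse, distributed codes in the brain.
    (Ties at the threshold may keep slightly more than k units.)"""
    threshold = sorted(activations, reverse=True)[k - 1]
    return [a if a >= threshold else 0.0 for a in activations]

# Invented dense activation vector, like a typical hidden layer.
dense = [0.9, 0.1, 0.8, 0.05, 0.7, 0.2, 0.6, 0.3]
sparse = k_winners_take_all(dense, k=3)
print(sparse)  # only the 3 strongest units remain active
```

In the sparse code, each surviving unit carries an identifiable piece of meaning, which is what makes the representation decomposable in a way a dense, unstructured activation pattern is not.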

While the human brain has many different parts that work together, Deep Learning researchers are only just beginning to integrate memory and attention mechanisms into their architectures. The brain integrates information from many different senses, whereas most Deep Learning applications use just one type of input, such as text or images. The brain can also model sequences as categories: every verb, for example, names a temporal category. These categories are then arranged into long-term hierarchical plans. So far, no similar capabilities have been modeled in Deep Learning.
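To give a sense of what "attention over memory" means in these emerging architectures, here is a minimal sketch of content-based attention: each memory slot is weighted by its similarity to a query, and the result is a weighted average. The vectors are invented, and real systems learn these representations rather than hand-coding them.

```python
import math

def attend(query, memory):
    """Content-based attention: score each memory slot by dot-product
    similarity to the query, softmax the scores into weights, and
    return the weighted average of the slots."""
    scores = [sum(q * m for q, m in zip(query, slot)) for slot in memory]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(memory[0])
    return [sum(w * slot[i] for w, slot in zip(weights, memory))
            for i in range(dim)]

memory = [[1.0, 0.0],   # slot 0
          [0.0, 1.0]]   # slot 1
query = [5.0, 0.0]      # strongly matches slot 0
print(attend(query, memory))  # output is dominated by slot 0
```

Even this toy version shows the appeal: the model can retrieve relevant stored information on demand, a small step toward the flexible memory the brain takes for granted.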

Deep Learning is a vast field, and significant research remains to be done. At its current stage of development, DL alone cannot carry on the kind of engaging, accurate conversation with a consumer that a live chat agent can.

Speak with the team of Data Scientists at RapportBoost.AI to learn more about our live chat agent training solutions.

Dr. Michael Housman

About Dr. Michael Housman

Michael has spent his entire career applying state-of-the-art statistical methodologies and econometric techniques to large datasets in order to drive organizational decision-making and help companies operate more effectively. Prior to founding RapportBoost.AI, he was the Chief Analytics Officer at Evolv (acquired by Cornerstone OnDemand for $42M in 2015), where he helped architect a machine learning platform capable of mining databases consisting of hundreds of millions of employee records. He was named a 2014 game changer by Workforce magazine for his work. Michael is currently an equity advisor for a half-dozen technology companies based out of the San Francisco Bay Area: hiQ Labs, Bakround, Interviewed, Performiture, Tenacity, Homebase, and States Title. He was on Tony's advisory board at Boopsie from 2012 onward. Michael is a noted public speaker, has published his work in a variety of peer-reviewed journals, and has had his research profiled by The New York Times, The Wall Street Journal, The Economist, and The Atlantic. Dr. Housman received his A.M. and Ph.D. in Applied Economics and Managerial Science from The Wharton School of the University of Pennsylvania and his A.B. from Harvard University.

