The worst chatbot fails… and how to avoid them


Having been around for a few years, chatbots have had some pretty unfortunate incidents. From all-out racism brought on by ruthless trolls to flat-out terrible user experiences, chatbot fails are all too common.

That said, not all companies adopting virtual agents are doing so with “chatbots.” Chatbots are simplistic, rules-based programs. Companies like WestJet are using more versatile Conversational AI agents built using advanced machine learning and natural language understanding to provide human-like conversation and reasoning. (Learn more about the difference between chatbots and conversational AI).

Customers, however, often don’t know whether a company is leveraging a chatbot or AI. People just expect to get their questions answered and their issues resolved.

The trouble with chatbots 

Chatbots are manually programmed and usually follow a decision tree: the creators explicitly map out actions for the chatbot to take based on what a user says.
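To make the decision-tree idea concrete, here is a minimal sketch in Python. The node names, prompts, and branch labels are invented for illustration; the point is that every path has to be mapped out by hand:

```python
# Minimal decision-tree chatbot: every branch is hand-mapped by the creators.
# Node names and replies are hypothetical, for illustration only.
TREE = {
    "start": {
        "prompt": "Hi! Do you want to (a) track an order or (b) talk billing?",
        "a": "track",
        "b": "billing",
    },
    "track": {"prompt": "Please enter your order number."},
    "billing": {"prompt": "Your balance is available under Account > Billing."},
}

def respond(node: str, user_input: str) -> str:
    branches = TREE[node]
    next_node = branches.get(user_input.strip().lower())
    if next_node is None:
        # Anything outside the mapped branches falls through to a canned reply.
        return "I'm sorry. I don't understand. Can you try again?"
    return TREE[next_node]["prompt"]
```

Typing "a" moves the conversation forward, but a perfectly reasonable "I'd like to see where my package is" falls straight through to the fallback, because no one mapped that branch.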

But communication is a complex thing; consider slang, misspellings, intonation, humor and syntax. It’s pretty much impossible to pre-train a chatbot to work perfectly in every situation. I hate to admit it, but I’m a human and I don’t always pick up on sarcasm or humor in my own interactions.

As a result, chatbots trip up. For the reasons we outline below, this leaves customers frustrated or annoyed, hurts brand reputation, and creates flat-out bad user experiences.

The top 5 chatbot fails 😱😱

 

Chatbot Fail #1: Engaging in inappropriate conversation

This is what every brand fears. A “rogue” chatbot that engages in offensive conversation.

Chatbots may be pre-programmed to respond yes or no to a question they don’t understand, just for the sake of carrying on the conversation. There are numerous other ways trolls can make chatbots say inappropriate things that damage your brand.

Fix: Don’t program your bot to answer questions it doesn’t understand, or to blindly repeat what users say.

Chatbot Fail #2: Not Understanding the Basics

“I’m sorry. I don’t understand. 🙈 Can you try again?”

If you’ve interacted with a basic chatbot, I’m sure you’ve stumped it, even while following its prompts or asking a question that should be within its scope of knowledge.

For example, the Poncho bot was one of the first launched on Facebook Messenger. It was designed to tell people the weather based on their location. One problem – while it could respond to questions related to “Monday” or “Thursday,” it got thrown off by the word “weekend”.

These frustrating loops and failures to understand stem from the chatbot needing to be pre-programmed with every utterance and word. It looks for specific prompts it has been explicitly trained on in order to know what to do next.
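The Poncho-style failure can be sketched in a few lines. The day list and replies below are hypothetical, but the mechanic is the point: exact keyword lookup with a canned fallback, so any word outside the trained vocabulary, like "weekend", dead-ends:

```python
# Toy keyword lookup in the spirit of the Poncho example: the bot recognizes
# day names it was explicitly trained on, but "weekend" was never mapped.
KNOWN_DAYS = {"monday", "tuesday", "wednesday", "thursday", "friday",
              "saturday", "sunday"}

def forecast_for(utterance: str) -> str:
    for word in utterance.lower().split():
        word = word.strip(".,!?")
        if word in KNOWN_DAYS:
            return f"Fetching the forecast for {word}..."
    return "I'm sorry. I don't understand. Can you try again?"
```

"What's the weather on Monday?" works; "Will it rain this weekend?" hits the fallback, even though "weekend" only means Saturday and Sunday.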

By contrast, Conversational AI agents leverage semantics to accurately understand the context of what a person is saying, even if they have not been explicitly trained on the exact phrasing.

Fix: Invest significant time in training your chatbot on as many utterances and situations as possible. Crowd-source additional training data. Monitor engagement to continuously improve training.

Chatbot Fail #3: Not adapting to the channel

When you send a text to a friend, you probably avoid typing really long paragraphs that are hard for your friend to read. Long-form doesn’t work in a chat interface.

Yet long messages are a chatbot fail we see frequently. If important information is buried in the middle of a long block of text, it will likely be skimmed over, and users prefer to digest smaller messages in sequence.

This is an easy fix for even the most basic chatbots. Learn more about our recommendations for how to write copy for a chat interface here.

Fix: Limit the length of chatbot replies to one or two sentences.
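One way to apply this fix mechanically is to split a long reply into sentence-sized messages before sending. This is a naive sketch (the sentence splitter is simplistic, and good chat copy should still be written short in the first place):

```python
import re

def split_into_messages(reply: str, max_sentences: int = 2) -> list[str]:
    """Break one long reply into messages of at most max_sentences each."""
    # Naive sentence split: a ., ! or ? followed by whitespace ends a sentence.
    sentences = re.split(r"(?<=[.!?])\s+", reply.strip())
    return [" ".join(sentences[i:i + max_sentences])
            for i in range(0, len(sentences), max_sentences)]
```

For example, a five-sentence reply becomes three short messages the user can digest in sequence, rather than one wall of text.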

Chatbot Fail #4: Lack of Contextual Awareness

Carrying on a conversation goes much further than knowing the dictionary meaning of words; the words must be understood in the context of the conversation. A chatbot can get confused by, or completely ignore, something that was said earlier in the exchange. This is very hard to fix in chatbots. As we note in our eBook, broadband provider Comcast found that there are 1,700 different ways to say “I’d like to pay my bill.”

Fix: Take time to map out more detailed conversational trees. Communicate to your customers the limitations of chatbots to manage expectations.

Chatbot Fail #5: No Margin for Error

With chatbots, there is no margin for human error or an ability to wander off-script.

It’s human nature to mess up. Sometimes we mistype something, misspell something or simply change our minds. If a person tries to go back in a conversation, or corrects themselves with “Oops, I actually meant…”, chatbots can’t handle it. Even a common misspelling like “i need to lpace an order” can cause a bot to misclassify the intent.
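As a toy illustration, an exact-match intent table fails on that typo, while fuzzy string matching (here via Python's difflib, a drastic simplification of what real NLU does) can absorb it. The utterances and intent names are invented, and fuzzy matching only patches small typos; it doesn't solve the deeper off-script problem:

```python
import difflib

# Hypothetical trained utterance -> intent map.
INTENTS = {
    "i need to place an order": "place_order",
    "i want to cancel my order": "cancel_order",
}

def classify_exact(utterance: str):
    # Exact lookup: any typo means no match at all.
    return INTENTS.get(utterance.strip().lower())

def classify_fuzzy(utterance: str, cutoff: float = 0.8):
    # Fuzzy matching tolerates small typos like "lpace" for "place".
    match = difflib.get_close_matches(utterance.strip().lower(),
                                      list(INTENTS), n=1, cutoff=cutoff)
    return INTENTS[match[0]] if match else None
```

Exact lookup returns nothing for “i need to lpace an order”; the fuzzy version still resolves it to the order-placement intent.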

Chatbots are manually programmed and follow strict rules. There’s simply no way to pre-program a chatbot for every instance in which a person might mess up or need to correct themselves. Conversational AI, on the other hand, supports multi-turn dialogue: the ability to switch between different questions within a single conversation.

Fix: There isn’t one. This is a basic limitation of chatbots.

Chatbot fails are here to stay. Look to Conversational AI for superior customer experiences.

The rules-based nature of chatbots makes many of these fails impossible to avoid. Companies looking to deliver great user experiences that provide real value need to look to Conversational AI, which continuously improves and offers a near human-like experience. That is what’s critical to high customer satisfaction today.

Continue Reading: To learn more about how Conversational AI and chatbots differ, click here. You might also be interested in learning how to maintain brand safety with Conversational AI.
