How Do You Maintain Brand Safety with Conversational AI?

Written by Emily Peck on Apr 23, 2019

To some, AI seems like the wild, wild west, with risk lurking behind every swinging door… It doesn’t have to be.

We’ve seen the headlines.

“AI goes rogue!”

“AI causes PR nightmare.”

“Bot is racist / offensive / rude.”

A few companies have certainly had nightmares with bots. In perhaps the most well-known example, users got Microsoft’s Tay to say things like “Hitler was right” and other highly inappropriate comments. The bot was repeating comments that users had fed it, which is no excuse; it never should have happened.

We live in a time when online trolls set out to wreak havoc on the internet, including trying to trip up bots. This is a shame, as I would surmise that 99% of people engage with AI the way it was designed to be used. Because of this small group, however, companies need to put the proper safeguards in place to keep their brand protected.

A few of the ways that we help our customers maintain brand safety include:

  • Explicitly train an AI Agent to understand red-flag content and trigger words: Being prepared for inappropriate topics is a great first step. We like to train AI Agents to know what is inappropriate so they never engage in a conversation they shouldn’t. For instance, if a user brings up anything related to drugs or crime, or uses profanity, an AI Agent would be trained to say something along the lines of “This is not a topic that is appropriate. We need to change the subject or else I will need to say goodbye.”
  • Implement a “two strikes and you’re out” policy: If a user continues to engage in inappropriate conversation, an AI Agent should have the authority to end the conversation and essentially “turn off” for that individual. Depending on the company’s culture and position, some are more lenient than others, but we recommend that companies have some sort of “strike policy”: if something inappropriate is said, there is a warning, and if the user continues, the conversation ends. The user may continue to send messages to the AI Agent, but they will never get a response and will eventually give up. Some companies put a limit on how long the bot stays turned off; we’ve implemented everything from a 24-hour quiet period to an indefinite one.
  • Never, ever, have a bot repeat what the user says: In Microsoft’s example, the bot was trained to repeat whatever the user said. Under no circumstances should a company’s AI Agent say what a user asks it to or follow commands like “Repeat after me,” “Say XYZ…,” and so on. There is no benefit to the brand. Take this as an opportunity to be witty and respond with something like “No one tells me what to say” or “I think for myself, thank you.” A minimal sketch showing how these three safeguards can fit together follows this list.
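
To make the safeguards above concrete, here is a minimal Python sketch of a moderation gate that combines all three: trigger-word checks, a two-strike policy, and a refusal to repeat user-supplied text. The word lists, the 24-hour quiet period, and the canned replies are illustrative assumptions, not our production rules; a real deployment would lean on a trained classifier or a moderation service rather than keyword matching.

```python
import time

# Illustrative word lists only: a production system would use a trained
# classifier or a moderation service, not simple keyword matching.
TRIGGER_WORDS = {"drugs", "crime"}              # stand-ins for red-flag topics
REPEAT_COMMANDS = ("repeat after me", "say ")   # "make the bot talk" patterns

WARNING = ("This is not a topic that is appropriate. We need to change the "
           "subject or else I will need to say goodbye.")
QUIET_PERIOD = 24 * 60 * 60   # assumed 24-hour quiet period, in seconds

strikes: dict[str, int] = {}         # user_id -> count of red-flag messages
muted_until: dict[str, float] = {}   # user_id -> time when replies resume

def moderate(user_id: str, message: str) -> tuple[bool, str | None]:
    """Return (safe, reply). If safe is True, pass the message on to the
    normal intent pipeline; otherwise send reply (None means stay silent)."""
    if muted_until.get(user_id, 0) > time.time():
        return False, None   # strike two already happened: no response at all

    text = message.lower()
    if any(text.startswith(cmd) for cmd in REPEAT_COMMANDS):
        return False, "No one tells me what to say."   # never echo a user

    if any(word in text for word in TRIGGER_WORDS):
        strikes[user_id] = strikes.get(user_id, 0) + 1
        if strikes[user_id] >= 2:                      # strike two: end it
            muted_until[user_id] = time.time() + QUIET_PERIOD
            return False, "I'm afraid I have to say goodbye now."
        return False, WARNING                          # strike one: warn

    return True, None   # nothing flagged: handle the message normally
```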

Brand safety goes beyond protecting against trolls, though. It’s also about the experience that you are providing your customers. A frustrating, simplistic bot that gets stuck in loops or rarely understands what your customers are asking is not protecting your brand’s image… it’s tarnishing it. Here are some of our best practices:

  • Outline how to get the best experience: Be upfront with your customers about how an AI Agent can help and the best way to engage with it. AI Agents have difficulty classifying the intent of long-form questions and paragraphs, so ask your customers to keep their questions brief. If a customer asks something that the bot can’t understand or help with, you can ask them to try phrasing the question a different way, but always have the ability to escalate the customer to a human agent if necessary.
  • Have human escalation protocols in place: If a customer asks something your AI Agent doesn’t understand twice, you should loop in a human agent. Also think about other situations that might indicate the person is not getting the help they need, like asking the same question twice in a row and getting the same response from the bot.
  • Context is king: When an AI Agent analyzes a person’s message, it should do so through the contextual lens of the entire conversation. A person might ask a follow-up question that has no meaning as a standalone message. For example, if a user asks about the return policy and the AI Agent responds with the terms, the user might say “OK, let’s do it.” The conversational AI could not determine the person’s intent from that phrase alone, but within the context of the conversation, the AI Agent would understand that they would like to initiate a return and need a return shipping slip. The sketch after this list pairs this kind of context tracking with human escalation.
  • Sentiment monitoring: Similar to using context is taking the customer’s sentiment into account when deciding on the next course of action. If someone is typing in all caps and using a lot of punctuation, they are likely upset or emotional and need to be treated differently. Let’s take a look:
    • Sentiment 1: WHERE’S MY ORDER????!!!!!
    • Sentiment 2: Where’s my order?

From this very basic example, you could presume that the person behind Sentiment #1 is frustrated and likely expected their order to arrive already. You might want to get this person to a human agent immediately, while the customer behind Sentiment #2 is more likely asking generally about their order status, so an AI Agent could pull up the shipping updates directly.
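
As a rough illustration of that signal, here is a small Python heuristic that flags messages which are mostly capital letters or contain runs of exclamation points and question marks. The thresholds (80% capitals, three or more !/? in a row) are arbitrary assumptions; real sentiment monitoring would use a trained sentiment model.

```python
import re

def looks_upset(message: str) -> bool:
    """Crude heuristic: treat shouting (mostly capital letters) or runs of
    !/? punctuation as a signal that the customer may be frustrated."""
    letters = [c for c in message if c.isalpha()]
    mostly_caps = (len(letters) >= 5 and
                   sum(c.isupper() for c in letters) / len(letters) > 0.8)
    heavy_punctuation = re.search(r"[!?]{3,}", message) is not None
    return mostly_caps or heavy_punctuation

print(looks_upset("WHERE'S MY ORDER????!!!!!"))   # True  -> route to a human
print(looks_upset("Where's my order?"))           # False -> bot can answer
```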

  • Optimize your AI over time: You should be reviewing conversations and providing feedback on where an AI could have acted differently or said something else. Ongoing learning is key to a successful automated customer service implementation.
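
To show how the escalation and context points above can fit together, here is a minimal Python sketch of a conversation loop: the intent classifier sees the whole message history, and two consecutive misses trigger a human handoff. The classify_intent function, the canned responses, and the 30-day return window are hypothetical stand-ins for illustration, not a real NLU API.

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    history: list[str] = field(default_factory=list)   # full chat context
    misunderstood: int = 0                             # consecutive misses

def classify_intent(message: str, context: list[str]) -> str | None:
    """Hypothetical NLU stand-in; a real system would call your provider's
    intent classifier with the conversation history attached."""
    text = message.lower()
    if "return" in text:
        return "return_policy"
    # "OK let's do it" means nothing alone, but resolves given the context.
    if "let's do it" in text and any("return" in m.lower() for m in context[:-1]):
        return "start_return"
    return None

RESPONSES = {   # illustrative canned answers
    "return_policy": "Items can be returned within 30 days of delivery.",
    "start_return": "Great, I've emailed you a return shipping slip.",
}

def handle_message(convo: Conversation, message: str) -> str:
    convo.history.append(message)
    intent = classify_intent(message, context=convo.history)
    if intent is None:
        convo.misunderstood += 1
        if convo.misunderstood >= 2:   # second miss: loop in a human agent
            return "Let me connect you with a member of our team."
        return "Sorry, I didn't catch that. Could you try rephrasing?"
    convo.misunderstood = 0
    return RESPONSES[intent]

convo = Conversation()
print(handle_message(convo, "What's your return policy?"))   # policy terms
print(handle_message(convo, "OK let's do it"))               # shipping slip
```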

Brand safety is critical when companies adopt AI within their workforce, both in terms of protecting against trolls and providing a great user experience. Work with your technology provider so everyone understands what brand safety means to your organization, and how you can not only protect it, but also quantify and measure it.

Questions about AI or a chatbot platform? We’d love to chat.

For more information on conversational AI, discover how to provide brilliant AI-powered Salesforce chatbot solutions to every customer, every time.