In a recent announcement from the White House, the Biden-Harris administration revealed its plans to make bad chatbots a thing of the past. Part of a government-wide effort called “Time Is Money,” the idea is to take on the “limitations and shortcomings” of customer-facing chatbots. As anyone who’s used a bot can tell you, their limitations and shortcomings are indeed very real, but so are their benefits to businesses. So how can forward-looking business leaders achieve the ROI of AI and automation without incurring the wrath of regulators—or even worse, their own customers?
First, we need to look at the “Time Is Money” initiative as a wake-up call. In fact, the text of the announcement identifies specific problems customers are asking political leaders to fix when it comes to automated experiences:
- Bots that provide inaccurate information
- Bots that give customers the run-around when they seek a real person
- Ineffective, time-wasting chatbots used in lieu of customer service
- Situations where customers believe they are speaking with a human being
If your customer engagement channels are populated with AI assistants, virtual agents, or chatbots that sound anything like this, you’ve got a problem on your hands. You’ve become one of the companies that “add unnecessary headaches and hassles to people’s days and degrade their quality of life,” as the White House puts it. The good news is that with today’s technology, business leaders can stay compliant with regulations, increase efficiency and revenue, and give customers better experiences—all at the same time.
Let’s flip the four main problems that “Time Is Money” identifies: Imagine bots that help you provide accurate information, get customers to agents when needed, save everyone time, and don’t lie about being human. Delivering these experiences is not only possible today, it’s exactly what you need to do to build trust and loyalty with your customers.
- Provide accurate information
Bots that don’t help a customer resolve their issues are bad enough, but bots that deliver inaccurate, outdated, or flat-out wrong information are even worse. Human oversight can make all the difference here.
While any solid customer engagement solution should include guardrails for large language models (LLMs) and AI hallucinations, some of the world’s top brands are taking the approach of using generative AI on the back end first as they ramp up to automated experiences. This comes to life when you give human agents AI-powered recommendations, rewrites, summaries, and translations that they can use or tweak to a customer’s specific situation.
- Connect to human agents when needed
While the current generation of AI-powered bots can handle many tasks on their own, there will always be times when a customer needs the human touch. One of the main criticisms coming from both customers and regulators is that too many chatbot experiences make it difficult for customers to connect with a live person.
To avoid this, your customer conversations should always run through a system that allows human agents to seamlessly jump in and help out. Even better, the technology you’re using should proactively flag problems for agents in real time, ideally routing them to the best person for the task at hand.
- Save everyone time
The purpose of deploying AI and automation in customer service is to create efficiency, not add more layers of complexity. For example, one of the worst things you can do to a customer is make them repeat information to a human agent that they just typed out for a bot. After all, time is money.
On top of that, your employees’ time is quite literally money when it comes to your business, so the automation and AI you add to your customer engagement systems should be targeted at answering frequently asked questions, resolving simple issues through self-service, and helping human agents become more efficient. The goal is to deliver the right answer or connect the customer with the right person as quickly as possible.
- Don’t lie about being human
Finally, one of the more serious complaints that regulators are set to address is the issue of chatbots misleading customers into thinking they are talking to a human when, in fact, they are not. This sleight-of-hand is usually unconvincing, but even if your chatbots can pass the Turing Test, it’s never a good idea to deceive the people who buy your products and services.
The advice here is simple: Don’t lie to your customers. It’s a no-brainer, both in the sense that you shouldn’t have to think about it, and because any business leader who thinks this is an okay way to do business has no brain. Simple disclosures like “I’m a virtual assistant” at the top of a conversation can clear this right up. Transparency builds trust and loyalty; deception kills them.
A common thread ties all of these recommendations together: We’re not replacing human interactions, we’re making sure bots and AI work in tandem with people to get the job done. Simply put, the solution is not to turn off bots and regress to the human-only era of customer engagement. Leaving the political connotations aside for the moment, we’re not going back, no matter who ends up in the White House next.
Remember: the government isn’t cracking down on all chatbots—only bad ones. We need to look at AI and automation as more than just cost-cutting tools. They are also powerful levers for delivering better, faster, and more human experiences. Leaders who effectively blend the strengths of both human and AI agents will not only meet regulatory expectations, but exceed their customers’ as well.
John Sabino is CEO of LivePerson.