There’s nothing more overhyped and less understood in the business world right now than artificial intelligence (AI). In a time when every flashy new startup claims to be an AI vendor, a ton of myths need busting before we can apply AI where it matters most. Perhaps even more concerning: Many of these myths obscure what we should actually be learning about—responsible use of AI—and who should have a hand in building it.
As one of the first companies to bring AI to many of the world’s biggest brands, LivePerson (where I work as CMO) has been deeply involved in the fight against bias for years, including as a founding member of EqualAI. Since EqualAI’s launch five years ago, I’ve found that this nonprofit—which brings together experts across disciplines to reduce bias in AI—always has new things to teach us about what’s real and what’s not in the AI space.
Recently, one of our LivePerson HR leaders, Catherine Goetz, completed EqualAI’s Badge Program, which educates leaders across wildly different functional areas and industries about how they can establish responsible AI governance frameworks at their own organizations. Catherine’s cohort included experts and executives from across telco, consumer packaged goods, defense, security, and tech companies, just to name a few. Together, they learned about some AI myths that all of us need to understand as we apply AI to our own spheres of influence:
Myth 1: AI isn’t my problem
Reality: AI is an “everyone” problem and an “everyone” opportunity. All companies should now consider themselves AI companies to some degree, because we should all be exploring and testing how it can help us do what we do best. But with AI more accessible than ever, no company can avoid thinking deeply about its potential misuse and negative impacts.
Myth 2: Okay, so we need to do something, but our tech guys will handle it
Reality: We’re not going to code or program our way to responsible AI. Leaders across all functions (not just tech) need to play a part in establishing governance frameworks within their organizations. This obviously includes putting standards in place for product design, data integrity, and testing, among other things, but it also means carving out areas for teams like legal, HR, and recruitment to lead. Have you considered applicable privacy laws? Do you have a designated point of contact for employees and customers? All functional areas have a role to play in making sure that you can safely stand by any AI you put out into the world.
Myth 3: We don’t need DEI to make AI
Reality: Diversity, equity, and inclusion (DEI) help make AI better. Full stop. One of the ways we can be sure we don’t perpetuate historical and new forms of bias in AI is by making sure that the people developing these systems reflect the world at large, especially the populations that will use them to live, work, and play. Do you have the diverse workforce necessary to understand how your products and services impact the different kinds of people who will—if you’re a successful business—use them every day?
Putting these myths to bed requires buy-in and action from cross-functional leaders at all levels of your business. That’s why several LivePerson leaders like Catherine are now badge-certified in responsible AI governance. They’ve learned about operationalizing AI principles, implementing tools to detect risks and biases, ensuring accountability, and creating a cohesive process to address potential harms. And their roles at our company are similarly wide-ranging, spanning HR, legal, product development, and engineering.
Today, there’s a serious lack of consensus when it comes to creating (let alone following) responsible AI standards, but leaders like Catherine are helping us make progress. Most recently, she coauthored a first-of-its-kind whitepaper from EqualAI called An Insider’s Guide to Designing and Operationalizing a Responsible AI Governance Framework. Working with cross-sectoral leaders in business (including Google DeepMind and Microsoft), government, and civil society, she helped develop a framework meant to apply to organizations of any size, industry, and maturity level. Their hope is that this framework can serve as a resource for any professional on the journey toward making the world better through more responsible AI.
I think this new whitepaper is also a powerful sledgehammer for busting persistent myths about AI in general, and about who is responsible for making it responsible. AI can serve as a force for good in our world, and for our businesses, but there are profound implications if we fail to govern it effectively. Understanding that we’re all in this together will help usher us into a safer, more responsible, AI-enabled future.
Ruth Zive is chief marketing officer at LivePerson and host of the Generation AI podcast.