Nobody’s talking about this hidden threat in generative AI

Companies today are grappling with a monumental challenge: the relentless accumulation of data. According to the latest estimates, 328.77 million terabytes of data are created each day. Around 120 zettabytes of data will be generated this year, an almost incomprehensible amount.

Every sector, from industry giants to small businesses, confronts the daunting task of managing this deluge of text, audio, and video content, to name a few formats.

Managing internal and external data helps companies glean market insights, drive innovation and, importantly, protect against business risk. For instance, it allows them to monitor brand conversations to stay ahead of negative sentiment, whether directly from customers or indirectly from partners. Brand safety and suitability are a serious enough concern that marketers, media agencies, and their respective industry associations created the Global Alliance for Responsible Media (GARM) to tackle the issue.

Discovering and monitoring brand mentions for suitability has recently become one of the primary use cases for AI technology. With data creation accelerating at an unprecedented pace and showing no signs of slowing down, more and more companies are leaning on AI to detect brand suitability red flags and ultimately prevent reputational risk.

While companies in all industries face the challenge of managing this ever-increasing amount of data, the level of potential risk varies. It’s one thing to use technology to glean that customers are making fun of an advertising campaign, but quite another to pick up a conversation around product safety, or to discover that the host of the podcast you’ve partnered with to promote your brand is openly sharing ideas that go against company values.

Most companies want to avoid these problems for fear of losing customers they have spent so much time and money to acquire. But for highly regulated industries in the U.S. such as financial services, insurance, and pharmaceuticals, unsavory brand impressions can have even more devastating and long-lasting effects on a company’s reputation and bottom line—and can lead to prolonged regulatory scrutiny.

That helps to explain why only a few months ago biopharmaceutical company Gilead Sciences and New York University Langone Hospital immediately took action to suspend advertising on X when the nonprofit Media Matters for America flagged that their ads were appearing next to content celebrating Hitler and the Nazi party. Financial services company RBS took similar action in 2017 when The Times found its ads also appeared next to extremist content.

Beyond these examples of inappropriate placement on high-profile social media networks and search sites, the vast and growing volume of user-generated content makes it infeasible for humans to manually review every piece of content an ad may land in. AI is often touted as the hero that can quickly sift through oceans of information, identifying patterns, trends, and anomalies that might otherwise go unnoticed.

Within a programmatic buying environment in which transactions and placements happen in milliseconds, AI is required to ensure brands can manage reputational risk on a daily basis. There are dozens of companies that analyze a variety of content types (display, CTV, social, audio, and so on), and each is looking for a different set of features that may be relevant to advertisers. Companies including DoubleVerify, IAS, Barometer, Channel Factory, and Zefr specifically measure for the GARM Brand Safety and Suitability framework across content types so advertisers can successfully target content that meets their brand guidelines and ensure their standards are being upheld throughout the campaign. Here, AI is the main reason large brands can comfortably buy programmatically at scale in mature channels like display, and with growing confidence in emerging channels like social and digital audio.
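The pre-bid logic this kind of verification enables can be sketched simply. The following is a hypothetical illustration, not any vendor's actual product: each ad opportunity arrives with per-category risk scores (as a verification partner might supply), and the buyer bids only when every score falls within the brand's tolerance. The category names and thresholds here are illustrative, not GARM's actual taxonomy.

```python
# Hypothetical pre-bid suitability filter. Scores are assumed to be
# probabilities (0.0 = no risk, 1.0 = certain risk) supplied per
# opportunity; thresholds encode the brand's risk tolerance.

BRAND_THRESHOLDS = {"hate_speech": 0.01, "adult": 0.05, "violence": 0.10}

def should_bid(opportunity_scores: dict, thresholds=BRAND_THRESHOLDS) -> bool:
    """Bid only if every risk score is at or below the brand's tolerance."""
    return all(
        opportunity_scores.get(cat, 1.0) <= limit  # missing scores fail closed
        for cat, limit in thresholds.items()
    )

# A clean placement passes; one with elevated hate-speech risk does not.
should_bid({"hate_speech": 0.0, "adult": 0.02, "violence": 0.03})  # True
should_bid({"hate_speech": 0.2, "adult": 0.0, "violence": 0.0})    # False
```

Note the fail-closed default: an opportunity with no score for a monitored category is treated as maximally risky, which matters when decisions happen in milliseconds with no human fallback.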

However, just as AI has the potential to help companies prevent risk, generative AI—its most recent and arguably most transformative form—creates machine-generated data that actually contributes to brand suitability and reputational risk. Therein lies the paradox.

The secret sauce of generative AI is web scraping—the collection of data from unknown, decentralized internet sources. In its current state, the machine-generated content created by generative AI falls short as a dependable source of verified data for those seeking a source of truth. It is subject to almost no data quality control, posing immense risks when it comes to brand reputation.

One of many open lawsuits around generative AI even claims OpenAI’s ChatGPT and DALL·E collect people’s personal data from across the internet in violation of privacy laws. As data curveballs like deepfakes—the manipulation of facial appearances through deep generative methods—pile up, it will become trickier and trickier to understand what natural language prompts an AI was fed, and therefore nearly impossible to get to the root of the reputational risk attached to it—a brand nightmare.

How will companies manage the paradox of AI as it relates to reputational risk? We don’t yet have industry standards or regulation around the engineering of AI prompts, or around web scraping for AI use, though such regulation is approaching.

Today, brands already leverage AI to ideate, draft sketches or summaries, and assist with other tasks. However, for the majority of use cases, an additional AI or manual pass should be taken afterward to ensure alignment with brand standards.
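That second pass can itself be partially automated. The sketch below is a hypothetical, minimal version of such a gate: AI-generated draft copy is screened against a brand's blocklist before anything reaches publication, with flagged drafts routed to manual review. The function name and the guideline terms are illustrative assumptions, not part of any real brand framework.

```python
# Hypothetical post-generation suitability pass over AI-drafted copy.
# Real systems would use classifiers, not keyword matching; this only
# illustrates the gate-before-publish workflow described above.

BLOCKED_TERMS = {"guaranteed returns", "miracle cure", "risk-free"}

def suitability_pass(draft: str, blocked_terms=BLOCKED_TERMS):
    """Return (approved, flags), where flags lists any blocked phrases found."""
    lowered = draft.lower()
    flags = sorted(t for t in blocked_terms if t in lowered)
    return (len(flags) == 0, flags)

approved, flags = suitability_pass(
    "Try our new savings product with guaranteed returns!"
)
# approved is False; the draft is routed to manual review, not published.
```

In a regulated industry, the blocklist would encode compliance language (e.g., prohibited performance claims), and a failed pass would trigger human review rather than silent rejection.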

In the future, if generative AI environments are to become ad-supported, this will be predicated on the availability of brand safety and contextual targeting tools akin to those available in existing channels. In the meantime, the best thing for all parties to do is to start testing and getting familiar with new AI approaches for risk management.

As with any groundbreaking technology, AI doesn’t just solve problems—it also creates them. Reputational risk takes many forms, and, while it is a concern for all companies, those operating in highly-regulated, trust-based industries face even more serious consequences if they are unable to manage it.

Given AI’s dual role as hero and villain when it comes to reputational risk, businesses should develop a brand management strategy that accounts for both. Doing so as soon as possible is key to keeping up with the explosion in data and to sustainable enterprise risk management.

Anna Garcia is the founder and general partner of Altari Ventures.

https://www.fastcompany.com/90983147/hidden-threat-generative-ai-brand-reputation?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Published Nov. 17, 2023, 13:40:05


