What managers should know about the secret threat of employees using ‘shadow AI’

ChatGPT became the poster child for generative AI earlier this year. From writing up business plans to explaining complex topics in layman’s terms, ChatGPT has been drafted in to help with just about anything and everything. And companies small and large have been scrambling to explore and reap the benefits of generative AI ever since.

But as this new chapter of AI innovation progresses at a dizzying pace, CEOs and leaders are at risk of overlooking a form of the technology that’s been slowly creeping in through the back door: Shadow AI.

Shadow AI is dangerously overlooked

Put simply, Shadow AI is when staff bolt AI tools onto their work systems to make life easier, unbeknownst to management. This quest for efficiency is, in most cases, well intentioned, but it’s opening companies up to a new realm of cybersecurity and data privacy issues.

Shadow AI is typically being embraced by staff looking to improve process efficiency and productivity, particularly when it comes to navigating monotonous tasks or laborious processes. That might mean they’re asking AI to scan through hundreds of PowerPoint decks to find key information, or asking it to synthesize the key points from meeting minutes.

As a rule, employees aren’t purposefully making your organization vulnerable. Quite the opposite. They’re simply streamlining tasks so they can tick more off their to-do list. But with over 1 million UK adults having already used generative AI at work, the danger is that more and more workers will use models their employer hasn’t vetted and authorized for safe use, compromising data security in the process.

Two major risks

The risk of Shadow AI is two-fold.

First, employees may feed such tools sensitive company information, or leave company information open to be scraped while the technology is running in the background. For example, when an employee is using ChatGPT or Google Bard to streamline their work or clarify information, they could be inputting sensitive or confidential company information in the process. Sharing data is not always an issue in itself – companies often entrust their information to third-party tools and service providers – but problems arise when the tool in question and its data handling policies haven’t been assessed and approved by the business.

When this is the case, there’s no guarantee of where company information will end up after it is fed into an ‘insecure’ AI tool. Often, company information will be used to train the model and help shape answers for other users, and it could even become exposed in the event of a cyberattack or leak. In March, for example, OpenAI confirmed that a bug in an open-source library used by ChatGPT caused a data leak, with some users’ chat histories potentially exposed.

The second risk of Shadow AI is that, because companies are typically unaware these tools are being used, they’re unable to gauge the dangers and take steps to mitigate them. (Those dangers also include employees sourcing inaccurate information and then using it in their work.) By definition, this is something that happens in the shadows, out of business leaders’ sight. According to Gartner research, 41% of employees acquired, modified, or created technology outside of IT’s visibility in 2022. That number is expected to climb to 75% by 2027.

Shadow AI presents data and cyber security risks

And therein lies the crux of the problem. How can organizations monitor and assess the risks of something they don’t know about?

Some organizations, such as tech giant Samsung, have gone as far as banning ChatGPT from their offices after employees uploaded proprietary source code and leaked confidential company information through the public platform. Companies like Apple and JPMorgan have also limited employee use of ChatGPT. Others are burying their heads in the sand, or failing to spot the issue entirely.

What, then, should business leaders be doing to combat the risks of Shadow AI, while simultaneously ensuring that they and their teams can benefit from the efficiencies and insights that artificial intelligence can offer?

Firstly, leaders should educate teams on what safe AI practice looks like and on the risks that come with Shadow AI, and provide clear guidance on when tools like ChatGPT can and can’t be used safely at work.
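As one concrete, hypothetical illustration of such guidance (not drawn from any specific company’s policy), teams could be given a simple pre-submission check that strips obviously sensitive strings before any text reaches a public tool. The patterns and function name below are illustrative assumptions, a minimal sketch rather than a complete data-loss-prevention solution:

```python
import re

# Illustrative patterns for obviously sensitive strings; a real
# data-loss-prevention policy would be far more comprehensive.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each pattern match with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com, token sk-abc123def456ghi789"))
# Emails and key-like tokens are replaced before the text leaves the company.
```

Even a basic filter like this makes the policy tangible for staff, though it is no substitute for vetting the tool itself.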

For cases which fall into the latter camp, companies should consider offering staff private, in-house generative AI tools instead. Llama 2 and Falcon are both examples of models that can be downloaded and run securely to power generative AI tools. Azure OpenAI offers a halfway house, where data remains within the company’s Microsoft ‘tenancy’. These options avoid the data and IP risks that come with public large language models like ChatGPT, whose handling of user data isn’t fully transparent, while enabling employees to reap the benefits of generative AI.

Leaders must take control of the AI agenda in their organizations—and they must do so before staff do it for them. This way, business leaders can leverage generative AI in a way which alleviates pain points for employees, improves productivity and performance and, crucially, puts data protection above all else.


Steve Salvin is the founder and CEO of Aiimi.


https://www.fastcompany.com/90972657/what-managers-should-know-about-the-secrets-threat-of-employees-using-shadow-ai?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 2y ago | Oct 26, 2023, 09:50:04


