The next wave of AI is here: Autonomous AI agents are amazing—and scary

The relentless hype around AI makes it difficult to separate the signal from the noise. So it’s understandable if you’ve tuned out recent talk about autonomous AI agents. A word of advice: Don’t. The significance of agentic AI may actually exceed the hype.  

An autonomous AI agent can interact with the environment, make decisions, take action, and learn from the process. This represents a seismic shift in the use of AI and presents corresponding opportunities—and risks.

The P in GPT

To date, generative AI tools, largely subject to human supervision, have been designed to function by being pretrained (the P in GPT) on vast amounts of text or other defined data sources, as with large language models (LLMs), and then providing responses to inputs or prompts (a question or instruction) from users. This has proven to be an impressive way to come up with humanlike responses to queries or prompts—like a baby imitating sounds or words without really knowing what it is saying. Kind of adorable, but unlikely to conjure Newton’s Principia or a Beethoven symphony. So, are these generative tools really functioning as creative, independent beings? Doubtful. But that may be changing dramatically.

A new approach allows AI to interact directly and more autonomously with data and react in a dynamic way—a lot more like what humans do. This technology relies on autonomous AI agents, which Bill Gates believes are going to upend the software industry, bringing about the biggest revolution in computing since we went from typing commands to tapping on icons. And that may be an understatement.

AI Agents

AI agents are designed to make decisions without human intervention to perform predefined (for now) tasks. They can reach into the outside world, find data they hadn’t previously encountered, analyze it, and then take action—far more like human interaction with the environment, and less like relying on the fixed data universe of a chess program, or of a chatbot whose LLM cannot go beyond its pretrained knowledge. Sounds great. What could possibly go wrong?
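The observe-decide-act-learn loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration of the pattern, not any real framework: every name here (`fetch_observation`, `llm_decide`, `execute`) is a stand-in for what, in practice, would be calls to search APIs, a language model, and external tools.

```python
# Minimal sketch of an autonomous agent loop: observe, decide, act, learn.
# All function names are hypothetical stand-ins, not a real agent framework.

def fetch_observation(source):
    # In a real agent, this would query a search API, database, or sensor.
    return f"data from {source}"

def llm_decide(goal, observation, memory):
    # Stand-in for a model call that chooses the next action.
    return {"action": "summarize", "input": observation}

def execute(decision):
    # Carry out the chosen action in the outside world.
    return f"executed {decision['action']} on {decision['input']}"

def run_agent(goal, sources, max_steps=3):
    memory = []
    for _, source in zip(range(max_steps), sources):
        observation = fetch_observation(source)           # reach into the world
        decision = llm_decide(goal, observation, memory)  # analyze
        result = execute(decision)                        # take action
        memory.append(result)                             # learn from the process
    return memory

print(run_agent("research AI agents", ["web", "news"]))
```

The point of the sketch is the absence of a human in the loop: nothing between `llm_decide` and `execute` asks for approval, which is exactly where both the productivity gains and the risks discussed below come from.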

This is a major step forward, replacing a clever statistical approach to replicating human expression with something capable of taking in previously unknown outside stimuli, processing them, and taking action without having to be pretrained or retrained. We are removing our intermediary role in creating and governing AI’s conceptual and decision-making universe.

That’s both the point and the problem. It’s fair to say the AI baby is not just on its way to taking a few steps; it could be speeding down the highway in your new car, music blaring, swigging a bottle of tequila. 

The upside is clear. Less need for specific training and oversight. Scalability limited only by compute resources. You can remove the human intermediary and send out agents to complete vast numbers of tasks on their own. After all, they are agents; they have agency—the ability to make decisions and choices. And mistakes.

What could possibly go wrong?

Because an AI agent is software rather than a human actor, its mistakes can be instantly and almost infinitely compounded, replicated, and cascaded. Agents are also targets for hackers. There are obvious doomsday scenarios, like a rogue AI agent improperly triggering a massive wave of securities trading or unintentionally launching a military retaliation. When it comes to decisions with potentially catastrophic consequences, human oversight is by no means perfect, but most of us feel at least a modicum of comfort knowing there’s an expert human hand hovering over the go button.

There are less dramatic yet still highly impactful effects in the legal and compliance sphere that pose significant business risk. More and more companies are using AI-driven tools across the entire employee lifecycle, from selecting candidates to interviewing and hiring and continuing through performance assessment (raises, promotions, and termination). These tools are increasingly deploying AI agents. Providers often tout AI agents as supporting and improving the quality of critical HR decisions. But subtle errors in system design or implementation could lead to unfair outcomes. There’s a name for this phenomenon: algorithmic bias. At the same time, states are adopting laws penalizing both developers and users of such tools if their use results in unfair treatment of employees. And naturally, litigation is likely to follow.  

Risky Business

It is undeniable that AI agents present a significant opportunity to increase productivity by automating routine tasks and freeing people up for more creativity and problem-solving. But the risks are just as undeniable.

While jettisoning supervision and oversight may be a necessity with kids at a certain point, the metaphor only goes so far when it comes to the emancipation of AI through autonomous agents. For now, as we gleefully remove the training wheels, we should be mindful of balancing our understandable enthusiasm with reasonable caution to avoid any catastrophic crashes. 

https://www.fastcompany.com/91281577/autonomous-ai-agents-are-both-exciting-and-scary?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 10h | Feb. 22, 2025, 12:40:05

