Anthropic is giving its new Claude 3.5 Sonnet model the ability to control a user’s computer and access the internet. The move marks a major step in generative AI models’ capabilities—and raises questions about AI companies’ ability to properly mitigate the risks of more autonomous AI.
In a series of example videos Anthropic posted Tuesday on X, users ask the AI to follow the steps needed to create a personal website. In another example, a user asks Claude to help plan the logistics of a trip to watch the sunrise from the Golden Gate Bridge. In each case, the user describes what they want the model to do through text prompts.
AI companies have been stressing a desire to push large language models to become more “agentic” and autonomous. Doing so means extending the ability of the AI to control not only its own functions but also external devices.
“Instead of making specific tools to help Claude complete individual tasks, we’re teaching it general computer skills—allowing it to use a wide range of standard tools and software programs designed for people,” Anthropic said in a statement on X.
The new computer control capabilities are being rolled out to developers through an API, as a public beta. Anthropic says it wants to collect feedback on the performance and usefulness of the new capabilities.
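For developers, the beta works like Anthropic's other tool-use APIs: the request declares a "computer" tool with a screen size, and the model responds with actions (clicks, keystrokes, screenshot requests) that the developer's own code must carry out. The sketch below shows the shape of such a request; the specific model name, tool version string, and beta flag reflect the October 2024 beta and are assumptions that may have changed, so check Anthropic's current documentation before using them.

```python
# Sketch of a computer-use request for Anthropic's public beta API.
# The identifiers below (model name, tool type, beta flag) are
# assumptions based on the October 2024 beta and may change.

# The "computer" tool definition tells Claude the screen dimensions
# it can act on. The API returns tool-use actions that your own
# agent loop must execute and report back as tool results.
computer_tool = {
    "type": "computer_20241022",  # beta tool version identifier
    "name": "computer",
    "display_width_px": 1024,
    "display_height_px": 768,
}

request = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "tools": [computer_tool],
    "messages": [
        {
            "role": "user",
            "content": "Open a text editor and draft a short to-do list.",
        }
    ],
}

# With the official SDK installed and an API key configured, the
# request would be sent roughly like this (not executed here):
#
#   import anthropic
#   client = anthropic.Anthropic()
#   response = client.beta.messages.create(
#       betas=["computer-use-2024-10-22"], **request
#   )
```

The key design point is that Claude never touches the machine directly: it emits structured action requests, and the developer decides whether and how to execute each one, which is also where Anthropic's safety guidance (below) is meant to be enforced.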
The company acknowledged that Claude 3.5 Sonnet’s current ability to use computers isn’t perfect and that it will make mistakes (especially with scrolling and dragging), but it expects performance to improve rapidly in the coming months.
With greater power comes greater responsibility. Anthropic offers explicit instructions on how to mitigate the risks of giving an AI control over a computer. In the user guide, the company advises against giving Claude access to sensitive data such as user passwords and recommends limiting the number of websites the AI can access.
Its fourth point under minimizing risks states: “Ask a human to confirm decisions that may result in meaningful real-world consequences as well as any tasks requiring affirmative consent, such as accepting cookies, executing financial transactions, or agreeing to terms of service.”
Anthropic has taken a cautious first step into more autonomous AI. But the ability to manage basic tasks on a PC will expand to larger and more complex tasks, and to a wider array of devices, including phones and even home appliances. As that control extends, so does the risk. Autonomous AI could deliver a great deal of convenience, but it could also do a great deal of harm.
Expect other AI companies to begin rolling out similar functionality in the near future as part of a general move toward more agentic AI.