Google launches Gemini 2.0 AI models, and showcases their powers in new agents

Google on Wednesday gave the public and developers a taste of the second generation of its Gemini frontier models, and a preview of some of the agents it will power. 

The new Gemini 2.0 family of models is designed to power AI agents that understand more than just text, and that reason and complete tasks with more autonomy. Google described how the new models will improve an experimental agent called Project Astra, which lets AI process information seen through a camera. It previewed another experimental agent, now called Project Mariner, that's designed to perform web tasks on behalf of the user.

“[O]ur next era of models [is] built for this new agentic era,” said Google CEO Sundar Pichai in a blog post Wednesday. “With new advances in multimodality—like native image and audio output—and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant.” The term “universal assistant” implies an AI agent with artificial general intelligence (AGI), or the ability to do most tasks as well as or better than humans. Experts say the industry is anywhere from two to 10 years away from realizing that aspiration.

Google isn’t yet unveiling the largest and most capable of its Gemini 2.0 models. That may come in another announcement in January. For now it’s releasing to developers an experimental version of a smaller and faster variant called Gemini 2.0 Flash. “It’s our workhorse model with low latency and enhanced performance at the cutting edge of our technology, at scale,” Google DeepMind CEO Demis Hassabis said in a blog post.

Gemini 2.0 Flash, Hassabis says, is twice as fast as its predecessor model, 1.5 Flash, and significantly smarter. He says the new model is multimodal, meaning it can process and output text, images, and audio. The “experimental” version supports multimodal input but only text output. Gemini 2.0 Flash is also capable of calling on external tools like Google Search, or tools made by other companies, as well as executing computer code.

Consumer users can get in on the fun, too. Gemini chatbot users can now choose to have the chatbot powered by the Gemini 2.0 Flash (experimental) model. Google says it’ll put Gemini 2.0 models under the hood of more of its apps and services next year.

Gemini’s second generation is focused on powering AI agents capable of taking steps on their own and calling on resources they need. The models can take a very large set of instructions and (multimodal) file inputs from the user, then use planning, reasoning, and function-calling (such as conducting a web search) to produce an answer. 

The wider skill set is showcased in a couple of experimental agents, one for a mobile device and one for a web browser.

At the company’s developer event earlier this year Google demonstrated a multimodal agent called Project Astra that can react and reason on real-time video seen through a phone camera, as well as audio (including language) it hears through the device’s microphones. Gemini 2.0, Google says, will give the agent better conversational skills and the ability to call on Google Search and Maps. Astra is nowhere near being released to the public, however.

Gemini 2.0 will enable another experiment called Project Mariner, an agent that understands the images, text, code, and other elements within a browser window, then performs tasks based on that input via a Chrome browser extension. Google says the agent, which is available only to a group of “trusted testers,” is often slow and inaccurate today, but will improve rapidly. 

“If Gemini 1.0 was about organizing and understanding information,” Pichai said in his blog post, “Gemini 2.0 is about making it much more useful.”

https://www.fastcompany.com/91245005/google-launches-gemini-2-0-ai-models-and-showcases-their-powers-in-new-agents

Created 2mo ago | Dec 11, 2024, 16:30:05

