Google’s Gemini 2.5 Pro could be the most important AI model so far this year

Google released its new Gemini 2.5 Pro Experimental AI model late last month, and it’s quickly stacked up top marks on a number of coding, math, and reasoning benchmark tests—making it a contender for the world’s best model right now.

Gemini 2.5 Pro is a “reasoning” model, meaning its answers derive from a mix of training data and real-time reasoning performed in response to the user prompt or question. Like other newer models, Gemini 2.5 Pro can consult the web, but it also contains a fairly recent snapshot of the world’s knowledge: Its training data cuts off at the end of January 2025.

Last year, in order to boost model performance, AI researchers began shifting toward teaching models to “reason” when they’re live and responding to user prompts. This approach requires models to process and retain increasing amounts of data to arrive at accurate answers. (Gemini 2.5 Pro, for example, can handle up to a million tokens.) However, models often struggle with information overload, making it difficult to extract meaningful insights from all that context.

Google appears to have made progress on this front. The YouTube channel AI Explained points out that Gemini 2.5 Pro fared very well on a new benchmark test called Fiction.liveBench that’s designed to test a model’s ability to remember and comprehend context information. For instance, Fiction.liveBench might ask the model to read a novelette and answer questions that require a deep understanding of the story and characters. Some of the top models, including those from OpenAI and Anthropic, score well when the amount of data held in context (the context window) is relatively small. But as the context window increases to 32K, then 60K, then 120K tokens—about the size of a novelette—Gemini 2.5 Pro stands out for its superior comprehension.

That’s important because some of the most productive use cases to date for generative AI involve comprehending and summarizing large amounts of data. A service representative might depend on an AI tool to swim through voluminous manuals in order to help someone struggling with a technical problem out in the field, or a corporate compliance officer might need a long context window to sift through years of regulations and policies. 

Gemini also scored much higher than competing reasoning models on a new benchmark called MathArena, which tests models using hard questions from recent math Olympiads and contests. The test also requires that the model clearly show its reasoning as it steps toward an answer. Top models from OpenAI, Anthropic, and DeepSeek failed to break 5%, but Gemini 2.5 Pro scored an impressive 24.4%.

The new Google model also scored high on another super-hard benchmark called Humanity’s Last Exam, which is meant to show when AI models exceed the knowledge and reasoning of top experts in a given field. Gemini 2.5 Pro scored 18.8%, topped only by OpenAI’s Deep Research model. The model also now sits atop the crowdsourced benchmarking leaderboard, LMArena.

Finally, Gemini 2.5 Pro is among the top models for computer coding. It scored 70.4% on the LiveCodeBench benchmark, coming in just behind OpenAI’s o3-mini model, which scored 74.1%. Gemini 2.5 Pro scored 63.8% on SWE-bench, which measures agentic coding, while Anthropic’s latest Claude 3.7 Sonnet scored 70.3%. And Google’s model outscored Anthropic, OpenAI, and xAI models on the MMMU visual reasoning test by roughly 6 points.

Google initially released its new model to paying subscribers but has now made it accessible to all users for free.


https://www.fastcompany.com/91311063/google-gemini-2-5-pro-testing?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 18h ago | Apr 3, 2025, 22:10:02


