Earlier this week, OpenAI announced its newest AI models, o3 and o4-mini, in a blog post. These are the company’s “smartest and most capable models to date” and its first reasoning models that can also reason with images.
What does that mean? In short, these AI models can use an image—such as a photograph or a sketch—as part of an analysis. The models can also adjust, zoom in on, and rotate an image during reasoning.
Both o3 and o4-mini can do a lot more than that, too. “For the first time, our reasoning models can agentically use and combine every tool within ChatGPT, including web search, Python, image analysis, file interpretation, and image generation,” tweeted OpenAI.
Both the o3 and o4-mini models are now available to paying ChatGPT Plus, Pro, and Team users, while the older o1, o3-mini, and o3-mini-high models have been removed. OpenAI plans to release the more powerful o3-pro model to Pro users within a few weeks.
Further reading: I tried ChatGPT Pro and it honestly isn’t worth it