OpenAI’s “deep research” gives a preview of the AI agents of the future

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

OpenAI’s “deep research” gives a preview of the AI agents of the future

OpenAI this week announced its AI research assistant, which it calls “deep research.” Powered by a version of OpenAI’s o3 model (which was trained to use trial and error to work through complex questions), deep research is one of the company’s first attempts at a real “agent” capable of following instructions and working on its own.

OpenAI says deep research is built for people in fields like finance, science, policy, and engineering who need thorough, precise, and reliable research. It can also be useful for researching big-ticket purchases, like houses or cars. Because the model churns through many reasoning cycles and carries a lot of working memory during each task, it consumes a great deal of computing power on OpenAI’s servers. That’s why only the company’s $200-per-month Pro subscribers have access to the tool, and they’re limited to 100 searches per month. OpenAI was kind enough to grant me access for a week to try it out. I found a new “deep research” button just below the prompting window in ChatGPT.

I first asked it to research all the nondrug products that claim to help people with low back pain. I was thinking about consumer tech gadgets, but I hadn’t specified that. So ChatGPT was unsure about the scope of my search (and, apparently, so was I), and it asked whether I wanted to include ergonomic furniture and posture correctors. The model researched the question for 6 minutes, cited 20 sources, and returned a 2,000-word essay on all the consumer back pain devices it could find on the internet. It discussed the relative merits of heated vibration belts, contact pad systems, and transcutaneous electrical nerve stimulation (TENS) units. It even generated a grid displaying the details and pricing of 10 different devices. Not knowing a great deal about such devices, I couldn’t find any gaps in the information or any suspect statements.

I decided to try something a little harder. “I would like an executive overview of the current research into using artificial intelligence to find new cancer treatments or diagnostic tools,” I typed. “Please organize your answer so that the treatments that are most promising, and closest to being used on real patients, are given emphasis.”

Like DeepSeek’s R1 model and Google’s Gemini 2.0 Flash Thinking Experimental, OpenAI’s research tool shows you its “chain of thought” as it works toward a satisfying answer. While it searched, it telegraphed its process: “I’m working through AI’s integration in cancer diagnostics and treatment, covering imaging, pathology, genomics, and radiotherapy planning. Progressing towards a comprehensive understanding.” OpenAI also makes a nice UX choice by putting this chain-of-thought flow in a separate pane at the right of the screen instead of presenting it right on top of the research results. The only problem: You get just one chance to see it, because it disappears after the agent finishes its research.

I was surprised that OpenAI’s deep research tool took only 4 minutes to finish its work and cited only 18 sources. It created a summary of how AI is being used in cancer research, citing specific studies that validated the AI in clinical settings. It discussed trends in using AI to read medical imaging, find cancer risk in genome data, assist surgery, discover drugs, and plan radiation therapy and dosing. However, I noticed that many of the studies and FDA approvals it cited were more than 18 months old. Some of the statements in the report sounded outdated: “Notably, several AI-driven tools are nearing real-world clinical use—with some already approved—particularly in diagnostics (imaging and pathology),” it stated, but AI diagnostic tools are already in clinical use.

Before starting the research, I was aware of a new landmark study published two days ago in The Lancet medical journal about AI assisting doctors in reading mammograms (more on that below). The deep research report mentioned this same study, but it outlined preliminary results published in 2023, not the more recent results published this month.

I have full confidence in OpenAI’s deep research tool for product searches. I’m less confident about scientific research, though, only because of how dated some of the research in its report was. It’s also possible that my search was overbroad, since AI is now being used on many fronts to fight cancer. And to be clear: Two searches certainly aren’t enough to pass judgment on deep research. The number and kinds of searches you can do are practically infinite, so I’ll be testing it more while I still have access. On the whole, I’m impressed with OpenAI’s new tool—at the very least it gives you a framework, some sources, and some ideas to start you off on your own research.

AI is working alongside doctors on early breast cancer detection

A study of more than 100,000 breast images from mammography screenings in Sweden found that when an AI system assisted single doctors in reviewing mammograms, positive detections of cancer increased by 29%. The screenings were coordinated as part of the Swedish national screening program and performed at four screening sites in southwest Sweden. 

The AI system, called Transpara, was developed by ScreenPoint Medical in the Netherlands. Normally, two doctors review each mammogram together. When the AI stands in for one of them, overall screen-reading time drops by 44.2%, saving radiologists a great deal of time. The AI makes no decisions; it merely points out potential problem spots in the image and assigns a risk score. The human doctor then decides how to proceed. With a nearly 30% improvement in early detection of cancer, the AI is quite literally saving lives. Healthcare providers have been using AI image-recognition systems in diagnostics since 2017, with some success, but the results of large-scale studies are only now beginning to appear.
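
To make that division of labor concrete, here is a minimal sketch in Python of an AI triage step like the one described above: the software flags suspicious regions, assigns a risk score, and hands a short summary to the single human reader, who makes the final call. All names, data structures, and the recall threshold below are hypothetical illustrations, not Transpara’s actual interface.

    # Hypothetical sketch of AI-assisted mammogram reading: the AI only flags
    # regions and scores them; the recall decision stays with the radiologist.
    from dataclasses import dataclass

    @dataclass
    class Finding:
        region: tuple        # (x, y, width, height) of a flagged area, in pixels
        risk_score: float    # 0.0 (low suspicion) to 1.0 (high suspicion)

    def summarize_for_radiologist(findings, recall_threshold=0.7):
        """Turn the AI's flags into a short note for the single human reader."""
        if not findings:
            return "AI flagged no regions; proceed with a routine read."
        top = max(findings, key=lambda f: f.risk_score)
        suggestion = "consider recall" if top.risk_score >= recall_threshold else "routine read"
        return (f"AI flagged {len(findings)} region(s); highest risk score "
                f"{top.risk_score:.2f} ({suggestion}).")

    # Example: one flagged region with a high suspicion score.
    print(summarize_for_radiologist([Finding(region=(120, 340, 48, 48), risk_score=0.82)]))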

Google touts the profitability of its AI search ads

Alphabet announced its quarterly results earlier this week, and tucked among them was some good news about Google’s AI-generated search results (called AI Overviews). Some observers feared that Google would struggle to find ad formats that brands like within the new AI results, or that ads around the AI results would cannibalize Google’s regular search ads business. But Google may have found the right formats already, because the AI ads are selling well and are profitable, analysts say. “We were particularly impressed by the firm’s commentary on AI Overviews monetization, which is approximately at par with traditional search monetization despite its launch just a few months ago,” says Morningstar equity analyst Malik Ahmed Khan in a research brief.

Khan says Google’s AI investments paid off in the company’s revamped Shopping section within Google Search, which was upgraded last quarter with AI. The Shopping segment yielded 13% more daily active U.S. users in December 2024 compared with the same month a year earlier. Google also says that younger people who are attracted to AI Overviews end up using regular Google Search more, with their usage increasing over time. “This dynamic of AI Overviews being additive to Google Search stands at odds with the market narrative of generative AI being the death knell for traditional search,” Khan says.

Google also announced that it intends to spend $75 billion in capital expenditures during 2025, much of which will go toward new cloud capacity and AI infrastructure.

