Google is doubling down on generative AI healthcare, with a batch of new tools

On Tuesday, Google announced a suite of new and updated generative AI and large language model (LLM) tools to expand its reach into the red-hot arena of algorithmic medicine. The products range from personalized health coaching for Fitbit users to modified versions of Gemini AI that examine medical images, a tool that scored a cool 91.1% on the kind of exam medical imaging technicians would have to take as part of the U.S. Medical Licensing Exam. They also include a massive, voluntary public dermatology database called the Skin Condition Image Network (SCIN), where users can upload images of their skin (freckles, blemishes, bumps, and other unique characteristics) to expand what is still a limited medical dataset across racial, geographic, and gender demographics.

One key leap among this motley mix of algorithmic medical programs is a shift into a real-world setting: taking an LLM that, to date, had only been tested in a simulated environment with actors and putting it into an actual hospital for experimental use by doctors and patients. And Greg Corrado, senior director at Google Research, has an interesting caveat for that step-wise upgrade: it might prove useless and wind up in the dustbin.

[GIF: Google]

“If patients don’t like it, if doctors don’t like it, if it’s just not the sort of thing that language models of today are able to do, well, then we’ll back away from it,” Corrado said of AMIE (Articulate Medical Intelligence Explorer), an LLM tool within Google’s umbrella HealthLM med-tech ecosystem that is now being tested at an unnamed healthcare organization, where it mimics doctor-patient interactions and helps guide medical diagnoses. He spoke during a press webinar last week ahead of Google’s Health Check Up event on Tuesday at its New York City headquarters, where the company unveiled a raft of new tech tools across the medical spectrum that leverage everything from generative AI to LLMs built on Google’s marquee Gemini AI mothership.

Corrado’s asterisk is a sign of the delicate dance tech companies scrambling into the medical AI race must perform to stay within regulatory bounds in the still-nascent space of AI-guided medical devices, which brushes against fundamental healthcare privacy protections and, of course, the question of whether a bot is accurate enough to be entrusted with a guiding role in diagnosing a medical condition.

In this real-world case study, Corrado said, Google is hewing to all regulatory bounds because the AMIE tool isn’t actually making a diagnosis; it’s just asking patients the kinds of questions a clinician might normally ask, while a flesh-and-blood doctor stands by to assess how the algorithm is doing. In fact, the tool isn’t technically even meant to provide the diagnostic assistance that would, ostensibly, be its ultimate goal. As Corrado puts it, Google is just seeing whether the bot is useful and natural to interact with at all.

“We’re not talking about giving advice. We’re not talking about making a decision or sharing a result or anything like that. It’s actually in the conversation part, where the doctor gathering information is asking you about what’s going on with you,” he said. “We think that that scope of asking questions is the right kind of scope, where we can explore how we do in terms of being helpful and empathetic and useful to people, but in a way where we’re not giving information; we’re just trying to elicit the right sort of conversation. So we think that that’s a safe space to get started.”

But it’s a bit more complicated than that. If an AI is asking a patient questions to try to ascertain a result, some sort of diagnostic framework must be guiding how its questions progress, or why it asks one question rather than another in response to something a patient mentions. For now, however, Google is dubbing the approach a learning experiment in a gradual, step-wise process, one that might not ultimately pan out if it proves unintuitive, a poor fit for doctors or patients, or just plain useless.

The caution, however, isn’t exactly limiting Google’s ambition to extend its reach in healthcare AI and carve out its own niche in the scorching space alongside Apple, Amazon, and Microsoft.

https://www.fastcompany.com/91063451/google-generative-ai-healthcare-new-tools?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 1y | Mar 19, 2024, 19:10:02

