Google is doubling down on generative AI healthcare, with a batch of new tools

March 19, 2024

From medical imaging to personalized health coaching, Google is testing new AI tools with patients and in hospitals.

BY Sy Mukherjee

On Tuesday, Google announced a suite of new and updated generative-AI and large language model (LLM) tools, expanding its reach into the red-hot arena of algorithmic medicine. The products range from personalized health coaching for Fitbit users, to modified versions of Gemini AI that examine medical images (a tool that scored a cool 91.1% on the kind of exam medical imaging technicians would have to take as part of the U.S. Medical Licensing Exam), to a massive, voluntary public dermatology database called the Skin Condition Image Network (SCIN), where users can upload images of their skin (freckles, blemishes, bumps, and other unique characteristics) to expand what is still a limited medical database across racial, geographic, and gender demographics.

One key leap among this motley mix of algorithmic medical programs is a shift into a real-world setting—taking an LLM that, to date, had only been tested in a simulated environment with actors, and tossing it into an actual hospital for experimental use by doctors and patients. And Greg Corrado, senior director at Google Research, has an interesting caveat for that step-wise upgrade: It might prove useless and wind up in the dustbin.

“If patients don’t like it, if doctors don’t like it, if it’s just not the sort of thing that language models of today are able to do, well, then we’ll back away from it,” Corrado said during a press webinar last week ahead of Google’s Health Check Up event on Tuesday at its New York City headquarters, where the company unveiled a raft of new tech tools across the medical spectrum that leverage everything from generative AI to LLMs based on Google’s marquee Gemini AI mothership. He was referring to an LLM tool called AMIE (Articulate Medical Intelligence Explorer), part of Google’s umbrella HealthLM med-tech ecosystem, which is now being tested at an unnamed healthcare organization to mimic doctor-patient interactions and guide medical diagnoses.

Corrado’s asterisk is a sign of the delicate dance tech companies scrambling into the medical AI race must perform to stay within regulatory bounds in the still-nascent space of AI-guided medical devices, which brushes up against fundamental healthcare privacy protections and, of course, the question of whether a bot is accurate enough to be entrusted with a guiding role in diagnosing a medical condition.

In this real-world case study, Corrado says that Google is hewing to all regulatory bounds because the AMIE tool isn’t actually making a diagnosis—it’s just asking questions of patients that a clinician might normally ask (while that flesh-and-blood doctor is standing by to assess how the algorithm is doing). In fact, it’s not technically even meant to provide the diagnostic-assistance service that would, ostensibly, be its ultimate goal—Google’s just seeing if the bot is useful and natural to interact with at all, as Corrado puts it.

“We’re not talking about giving advice. We’re not talking about making a decision or sharing a result or anything like that. It’s actually in the conversation part, where the doctor gathering information is asking you about what’s going on with you,” he says. “We think that that scope of asking questions is the right kind of scope, where we can explore how we do in terms of being helpful and empathetic and useful to people, but in a way where we’re not giving information; we’re just trying to elicit the right sort of conversation. So we think that that’s a safe space to get started.”

But it’s a bit more complicated than that. If an AI is asking questions of a patient to try to ascertain a result, some sort of diagnostic framework must be guiding how its questions progress, or why it asks one question in response to something a patient mentions. For now, however, Google is dubbing the approach a learning experiment in a gradual, step-wise process, one that might not ultimately work at all if it proves unintuitive, an unnatural fit for doctors or patients, or just plain useless.

The caution, however, isn’t exactly limiting the scope of Google’s ambition as it races alongside Apple, Amazon, and Microsoft to carve out its own niche in the scorching healthcare-AI space.



Sy Mukherjee is a freelance journalist who has covered the health industry for more than a decade. 


Fast Company – technology