
Leana Wen: The new Dr. Google is in. Here’s how to use it.

Thought Leader: Leana Wen
October 14, 2025

Yes, artificial intelligence can help patients with their health, when used with caution.

Chances are high that you have used artificial intelligence to research your own medical concerns or will be tempted to in the future. A recent survey found that more than 1 in 3 Americans have already used chatbots for this purpose, including nearly half of those under 35.

This shouldn’t be surprising. Doctors themselves are increasingly turning to AI in their work, including to help them improve the accuracy of their diagnoses.

But chatbots come with real risks. Without access to a person’s full medical history, they can miss critical context and give misleading advice. They can also generate “hallucinations”: plausible-sounding statements that have no basis in fact. And because they deliver responses with conviction, wrong answers can sound convincing and lead patients to delay needed care.

Used thoughtfully, though, AI can help patients manage their health more effectively. Here is some guidance on how you can best use these tools to support your care — and when you should proceed with caution.

Adam Rodman, an internist and director of AI programs at Beth Israel Deaconess Medical Center, believes the most valuable way patients can use AI is to better understand their health. Medical visits are often short, and patients might not have time to absorb all the information discussed. Afterward, they can turn to a chatbot to explain a diagnosis, clarify medical jargon and expand on points that need more context.

“It’s actually probably a good idea to put your last doctor’s note in and say, ‘Can you explain this to me?’” Rodman said.

Chatbots tend to excel at providing clear, accessible explanations. They can be especially helpful for people with rare diseases, whose conditions send even specialists to reference materials, and for patients with high-stakes diagnoses such as cancer or dementia. These tools can help patients gain a more complete understanding of their condition, including how it progresses, which treatments are most effective and what new therapies should be considered.

Beyond post-visit explanations, AI can also help patients prepare for medical appointments. They can ask as many questions as they want in advance and have the chatbot organize and prioritize them for discussion with their doctor. Because time with the clinician is so limited, patients can also practice describing their symptoms concisely with the chatbot. And those referred to specialists can turn to AI to learn what the visit might involve and what tests or procedures to expect.

Whether AI should be used for self-diagnosis is far less clear. On one hand, chatbots can serve as a useful second opinion that confirms an existing diagnosis and offers reassurance. They can also surface other possible explanations for patients to discuss at their next medical visit.

Eric Topol, a cardiologist and director of the Scripps Research Translational Institute, told me that in his practice, he has seen patients use chatbots to identify conditions that other doctors had missed. He considers AI a major advancement beyond search engines, which, until recently, were what patients used when they weren’t sure about their diagnosis. Unlike search engines that return generic results, chatbots integrate the specific details provided to generate responses tailored to each situation.

“What’s nice is that they’re free, immediately accessible and have up-to-date medical information that oftentimes doctors are not keeping up with,” Topol said.

Rodman takes a more guarded approach. He’s comfortable with patients using chatbots for a second opinion so long as they are sure to follow up with their clinician. “I really don’t think the tech is at the point that it should be making autonomous decisions without a health care provider in the loop,” he said. He especially warns against using AI for potentially life-threatening problems such as chest pain or stroke-like symptoms. Patients experiencing these should seek immediate medical care.

Both experts caution against sharing sensitive personal data with chatbots. It’s unclear how technology companies might store, use or sell the information that people share with them. Users should remove identifying details such as names, birth dates and addresses and avoid uploading entire medical records.

To minimize the risk of hallucinations, Topol advises comparing answers from multiple models. ChatGPT is perhaps the best-known chatbot; others include Gemini, Claude and Perplexity. Asking the same question across different platforms, or simply rephrasing it within the same one, can help spot inconsistencies. “Usually, they will cancel out if there’s a misdirection, mistake or overconfidence,” he said.

All of this reflects what AI tools can currently do. The technology is evolving rapidly, and future versions might be able to help monitor patients at home and generate customized treatment plans in ways that are hard to imagine today. But even then, the purpose of AI should be to complement clinician judgment, not to replace the human relationships at the heart of medicine.

