Whether AI should be used for self-diagnosis is far less clear. On one hand, chatbots can serve as a useful second opinion, confirming an existing diagnosis and offering reassurance. They can also surface other possible explanations that patients can discuss at their next medical visit.
Eric Topol, a cardiologist and director of the Scripps Research Translational Institute, told me that in his practice, he has seen patients use chatbots to identify conditions that other doctors had missed. He considers AI a major advancement beyond search engines, which, until recently, were what patients used when they weren’t sure about their diagnosis. Unlike search engines that return generic results, chatbots integrate the specific details provided to generate responses tailored to each situation.
“What’s nice is that they’re free, immediately accessible and have up-to-date medical information that oftentimes doctors are not keeping up with,” Topol said.
Rodman takes a more guarded approach. He’s comfortable with patients using chatbots for a second opinion so long as they follow up with their clinician. “I really don’t think the tech is at the point that it should be making autonomous decisions without a health care provider in the loop,” he said. He especially warns against using AI for potentially life-threatening symptoms, such as chest pain or stroke-like symptoms; patients experiencing these should seek immediate medical care.
Both experts caution against sharing sensitive personal data with chatbots. It’s unclear how technology companies might store, use or sell the information that people share with them. Users should remove identifying details such as names, birth dates and addresses and avoid uploading entire medical records.
To minimize the risk of hallucinations, Topol advises comparing answers from multiple models. ChatGPT is perhaps the best-known chatbot; others include Gemini, Claude and Perplexity. Asking the same question across different platforms, or simply rephrasing it within the same one, can help spot inconsistencies. “Usually, they will cancel out if there’s a misdirection, mistake or overconfidence,” he said.
All of this reflects what AI tools can currently do. The technology is evolving rapidly, and future versions might be able to help monitor patients at home and generate customized treatment plans in ways that are hard to imagine today. But even then, the purpose of AI should be to complement clinician judgment, not to replace the human relationships at the heart of medicine.