The first is explaining lab results. AI tools can draft language that helps clinicians explain results to patients. The technology can also put results in the context of patients’ medical history, such as what medications they are taking and whether they have diabetes. A doctor still must check that language, but it could get information to patients faster and reduce clinicians’ workload, Sharp said.
The second is explaining billing. “Patients write to us and say, ‘Why was this charged? I have a co-pay. Why didn’t my co-pay cover this?’” Sharp said, so his hospital is testing AI to respond to these questions. Again, a human must check all responses before sending them to patients.
The third is help with messaging. This was something Kristin, a preventive cardiologist from Virginia, brought up: “My hospital has integrated AI into patient messages we receive through electronic health records,” she wrote. “I’ve found the AI drafts to be consistently very good. It goes into more detail than I would normally write, and it saves me a lot of time. (I estimate as much as an hour a day.)”
Sharp said that during the pandemic, many more patients started emailing their providers with questions they previously would have asked at appointments. This “created a burden on our teams to respond appropriately and with empathy and accuracy,” he said. His health system is among many that have started using AI to draft physician responses.
The technology has improved so much that it can now pull in patient data when responding to queries. For instance, Sharp explained, “when the patient says, ‘In my last visit, my doctor told me I need to change a medication, but I can’t remember what that is,’ the draft will naturally surface that information.”
This saves time, but it also alleviates what Sharp calls cognitive burden. “We all know how hard it is to write an important email,” he said. “Now imagine doing that dozens and dozens of times a day.” He told me Stanford’s internal research shows that the AI-generated responses tend to be longer and more empathetic than what clinicians would write.
Anthony from Maryland is still unsure about this technology. “I’m retired from clinical practice, but my son is a doctor who raves about AI improving his work-life balance,” he wrote. “Here’s my worry: When administrators see that doctors are more efficient, won’t they just make them see more patients? Could this be a temporary benefit for doctors that ends up making it worse for them in the long run?”
I thought this was a great point, and so did Sharp. His answer, though, surprised me: “Let me be a little controversial and say, if we can see more patients, we probably should,” he said. “In my practice, I would love to see more patients in my day if I can do so in a way where I can be wholly present and meet their needs completely and not burn myself out.”
And herein lies the promise, and peril, of AI in health care. As Anthony accurately identified, a solution to make care more efficient could be misused and worsen clinician burnout. Or it could increase access to care while helping providers alleviate clerical burdens. The outcome depends not only on advances in technology but also on how we as a society choose to deploy them.