
When it is time to have a difficult conversation with a dying patient about whether to insert a feeding tube, Dr. Jonathan Chen, an internist at Stanford, practices first with a chatbot.
He asks the bot to play the role of a physician while he becomes the patient. Then they switch roles.
It makes him uneasy.
The chatbot is remarkably good at finding the right words. Doctors also know that artificial intelligence now excels at diagnosing conditions, interpreting scans and images — in many cases outperforming human physicians — and at responding to patient messages or drafting insurance appeals when treatments are denied.
So the question becomes unavoidable: What is the doctor’s role now?
“These systems are existentially threatening,” Dr. Chen said. “They threaten doctors’ identity and purpose.”
Dr. Harlan Krumholz, a Yale cardiologist and adviser to OpenEvidence, an A.I. platform for clinicians, agrees. “A.I.’s reasoning and diagnostic abilities are already surpassing what doctors can do in certain domains,” he said.
Yet few of the physicians thinking most deeply about A.I.’s impact on medicine are suggesting surrender. Instead, they are grappling with a profound transformation of medical work.
What A.I. Can Do — And What It Can’t
A.I., said Dr. Robert Califf, former commissioner of the Food and Drug Administration, is taking over what he calls the “dirty work” of medicine: clinical documentation, administrative tasks and data synthesis.
But knowing everything is not the same as understanding a patient.
Dr. Lee Schwamm, a neurologist and associate dean at Yale School of Medicine, offered a simple example. A patient says they felt “dizzy” and their arm felt “dead.” Those words can mean radically different things, depending on context, tone and subtle physical cues.
Determining whether this is a stroke — a medical emergency — requires experience, judgment and the ability to extract meaning from incomplete or ambiguous information.
“A chatbot is very good at pattern matching,” Dr. Schwamm said. “But it can only work with the data it’s given. It can’t discover missing information on its own.”
And when patients face devastating news, they often need something no algorithm can provide.
“In the end, you want to look someone in the eye,” he said, “and explain that they have six months to live.”
Redesigning the System, Not Replacing Doctors
Still, A.I. is already reshaping who sees which patients.
Dr. John Erik Pandolfino, a specialist in gastroesophageal reflux disease at Northwestern University, created an A.I. triage tool that directs less severe cases away from his clinic. Patients with serious symptoms are seen immediately; others receive faster care from nurses or physician assistants.
The result: fewer patients overall, but those who truly need specialist expertise get it sooner.
Eventually, Dr. Pandolfino expects A.I. to guide non-specialists through complex diagnoses — a development that may make some specialists less essential.
“When the algorithm does this better than a human,” he said, “I’ll have to find something else to do.”
The Risk of Automating a Broken System
Not everyone is optimistic.
Researchers warn that A.I. systems can replicate existing biases in medicine, paying less attention to women, minorities or patients with poor health literacy.
“The real danger isn’t A.I. itself,” said Dr. Leo Anthony Celi of MIT. “It’s deploying it to optimize a deeply broken system instead of reimagining it.”
Others worry about overreliance.
“The last thing you want,” said Dr. Jeffrey Linder of Northwestern, “is a doctor who turns off their brain and lets A.I. decide everything.”
Yet even critics acknowledge that the current system is failing patients — with long wait times, physician shortages and administrative overload.
Medicine is changing. The question is not whether doctors will disappear, but what kind of doctors society still needs.
“Even if A.I. has read all the medical literature,” said Dr. Joshua Steinberg, a primary care physician in New York, “I’ll still be the expert on my patients.”