What Makes a Good Doctor in the Age of AI?
I am currently taking a course through Harvard Medical School called Leading AI Innovation in Healthcare. A couple of weeks ago, I had the chance to hear Dr. Dereck Paul, co-founder of Glass Health, speak about the future of AI in medicine. His talk was fascinating. He’s one of those rare speakers who can communicate both the depth of innovation and the ethical tension that comes with it. Glass Health has built an AI tool that helps physicians make clinical decisions and draft documentation. It’s an impressive system: it can sift through massive amounts of data to assist in diagnosing and treating patients faster, often more accurately, and with a broader scope than most humans could ever match.
But here’s what caught me off guard: one of the findings they shared was that the tool performs best without the doctor in the loop. In other words, clinical decisions are more accurate when the AI is left to do the work on its own. For patients, that should be cause for celebration. Better outcomes. Fewer errors. Faster diagnoses. Imagine getting top-tier care in a rural clinic that rivals what you’d get at the best teaching hospital. Imagine eliminating the wasted time and cost of misdiagnosis and trial-and-error treatments. Imagine not needing to bounce from specialist to specialist just to get a complete picture.
But at the same time, it left me uneasy.
At the end of the talk, I asked a question that had been sitting with me for a while: If our best doctors today got there through years of experience, working through complex cases, learning through repetition and failure, what happens when AI takes over that core part of training? Will the doctors of the future actually be less qualified than the doctors of today?
Dr. Paul’s answer was honest. And a little jarring: yes.
That hit hard. I’ve thought about it before, but hearing it said plainly by the CEO of an AI company made it real. We all want our doctors to use the best tools available. When we’re sick, it’s not about ego or sentiment. It’s about getting better. If AI can make a better call on what’s wrong with us, we should use it.
Still, that answer raises a huge question: if AI is doing the hard, technical thinking, what will define a good doctor in the future? The answer, I think, is this: the future of medicine won’t be about having the most knowledge. It will be about having the most humanity. That means empathy. Communication. Humor. Patience. The ability to build trust and make someone feel safe when they’re at their most vulnerable.
These aren’t fringe qualities anymore. They’re becoming the core of what makes a great doctor.
This isn’t a new problem, either. When my daughter was a baby, we had a pediatric specialist who was technically brilliant but completely tone-deaf. He’d share grim statistics about child mortality as casually as you’d mention the weather. No warmth. No sense of timing. And yet, because he was so skilled, we kept seeing him until he left the practice.
In a future where AI does the diagnosing and planning, doctors like that may not be valued the way they used to be. And that’s probably a good thing.
Medical schools, and really any training program for knowledge-based professions, need to take this seriously. If AI can outthink us, we’d better out-feel it. These so-called soft skills aren’t optional anymore. They’re the frontline skills.
Not just for doctors. For lawyers, psychologists, accountants. Anyone whose job has traditionally relied on deep, domain-specific knowledge.
Because when the data work is done by machines, the human work is what remains.