AI Clinical Notes: Why My Heart Says “Yes” but My Head Says “Probably Not”

It’s no secret: writing clinical notes is one of the most universally dreaded tasks in our profession. Even the most seasoned therapists groan at the prospect of sitting down at the end of a packed day to hammer out documentation. And for trainees? Let’s just say my own history includes a few stern supervisor conversations about falling behind on notes.

Enter the new wave of AI tools for therapists. These programs can listen to a session—or accept an uploaded recording—and generate a perfectly formatted clinical note. Many go a step further, offering treatment planning suggestions and ideas for in-session exercises. At first glance, this seems like a dream. Delegate the busywork to AI and focus entirely on the work that matters: being present, attuned, and responsive with your clients. What could be better?

Especially in settings like college counseling centers—where insurance isn’t billed and notes often live a lonely life in an untouched file—documentation becomes a formality. In these contexts, the clinical note exists primarily as a safeguard, something you hope never has to be reopened. So why not let AI take care of this largely symbolic chore?

But as with most seemingly perfect solutions, there’s another side to the coin.

Yes, there are the obvious concerns about privacy and data security—serious issues, especially when dealing with sensitive mental health information. But there’s also a more subtle discomfort: the idea of allowing a machine to determine what’s important to include and what’s just “yada yada yada” (for the Seinfeld fans out there). Will it know what really matters?

More pressing, though, is the question of what we lose when we no longer write our own notes.

Because here’s the thing: as burdensome and repetitive as documentation can feel, the act of reflecting on a session and choosing what to include forces us to think. It’s a cognitive exercise in clinical reasoning. It pushes us to analyze, synthesize, and commit to a conceptualization—whether it’s during an intake or a high-stakes follow-up. I can’t count the number of times I’ve come away from a session feeling one way, only to feel differently after writing the note. The very act of writing helped me clarify risk, redefine the core issue, or realize something I missed in the moment.

If we hand off that process to an AI—even a very good one—do we risk weakening our clinical muscles over time?

This concern becomes even more significant when we consider how unlikely it is that, in the chaos of a typical clinical day, we’ll always thoroughly review an AI-generated note. Best practice dictates we should. Real life suggests otherwise. It’s easy to imagine a future where therapists rely on these tools without reflection, rubber-stamping notes to get out the door in time to pick up the kids.

And what about training? I wouldn’t dream of letting a therapist-in-training use these tools. Writing notes isn’t just a task—it’s a skill-building exercise, a diagnostic refinement process, and a way of metabolizing the clinical encounter. If we remove that step, are we shortchanging their development?

The truth is, I want these tools to be standard in our profession. I’ve tested them and been genuinely impressed. They’re fast, accurate, and often beautifully written. I love the idea of reclaiming the time and emotional energy spent on documentation. But I’m not yet convinced that the trade-off is worth it. Not for trainees, and maybe not even for seasoned clinicians.

So here we are: caught between a burdensome, often low-utility task and a seductive technological solution that might quietly erode something essential. The question isn’t just whether AI can write our notes—it can. The real question is: Should it? And if we say yes, how do we safeguard the deeper clinical thinking that note-writing has quietly sustained for decades?

We’re standing at an inflection point. Let’s just be sure we don’t automate away the very practices that made us good therapists in the first place.
