Top Three Objections to AI in Mental Health and Why They Miss the Point

There’s growing pushback against using AI for mental health, and I understand why. Nobody wants to hand over their demons to a cold algorithm. Yet straw‑man arguments about the technology’s worst iterations are drowning out real conversation about a tool with tremendous potential for good. Yes, risks exist. Yes, we need guardrails and caution. But pretending vast numbers of people aren’t already using AI for mental health reasons is naïve at best and negligent at worst. Let’s unpack three common objections, ground them in reality, and then act.

1. “AI could never be a therapist—only humans can truly empathize.”

The objection: Therapy requires genuine human empathy. Algorithms have no beating heart.

Response: Fair. AI shouldn’t pretend to replace a human therapist. But insisting that traditional in-office therapy is the only pathway to improved mental health is flat-out wrong. Millions of people struggle without ever setting foot in a counselor’s office: long wait lists, no insurance, no local providers. Would you tell them, “Too bad, you’re on your own”? Of course not. We already accept self-help books, peer support groups, and meditation apps. AI can act like an interactive self-help workbook: guiding exercises, offering psychoeducation, nudging you back on track when you falter. It’s not “therapy”; it’s support. And for people who would otherwise have nothing, it could make a huge difference.

2. “AI companies just want profit—so they’ll skimp on safety, privacy, and ethics.”

The objection: For-profit AI apps will cut corners on data security, overpromise clinical effectiveness, and leave vulnerable people at risk.

Response: If we pretend AI isn’t already here, we lose the chance to drive the industry toward best practices. People are using ChatGPT, Gemini, and countless unregulated bots for mental health chats; study after study confirms it. So instead of burying our heads in the sand, we need to champion companies that incorporate:

  • Transparent disclaimers: “I’m not a licensed therapist; this is not a substitute for professional treatment.”

  • Strict data protection: Encrypt all user data; prohibit secondary use without consent.

  • Safety screening: Assess user suitability; flag crises and refer to appropriate resources.

  • Independent oversight: Regular third‑party audits; FDA‑style reviews where appropriate.

Building well‑regulated AI is harm reduction. Banning it only hands the market to bad actors.

3. “AI will steal therapists’ jobs.”

The objection: If AI can carry on a conversation, therapists will be out of work and users will receive substandard care.

Response: Lots of jobs will change or vanish in the AI era, and mental health is no exception. But is that inherently bad? Here’s what I saw as a college counseling center director: students struggling, and being told they needed therapy whenever they felt anxious or down. The result? Wait lists, skyrocketing costs, and an industry that pathologizes perfectly normal emotions. We need more options, not fewer. AI can deliver low-intensity interventions, such as CBT exercises, stress-management techniques, and resilience building, at a fraction of the cost and time. And that might actually free up therapists to focus on the most complex, high-risk cases. What’s terrible about that? If people can get immediate, affordable help for everyday struggles, isn’t that a win?

What you can do today:

  • Developers: Solicit input from clinicians and embed safety and ethics feedback into your design.

  • Clinicians: Learn how AI works and engage in shaping its use in mental health.

  • Policymakers: Collaborate with providers to set baseline regulations and enforce consequences for negligent practices.

I’m not naïve about the pitfalls. We absolutely must wrestle with privacy, clinical validity, bias, and liability. Ethical, regulated, transparent AI tools are non-negotiable. But ignoring the tidal wave of AI in mental health, pretending it won’t or shouldn’t happen, only ensures that the worst, most reckless players fill the void. Let’s channel our frustration into building and promoting the companies and standards that will do this right. Because denying AI’s potential won’t stop it; it’ll just leave the field to the bad actors.
