Should Publishers of Self-Help Depression Workbooks Be Held Responsible if Someone Using Them Dies by Suicide?
I’m being deliberately provocative with this title to make a point.
AI mental health tools should not be treated differently from self-help books, as long as they are thoughtfully designed to support users ethically and do not falsely claim to be therapists.
No reasonable person believes the publisher of a self-help workbook should be held legally responsible if a reader, despite the reader's best efforts, continues to struggle or even dies by suicide.
People have free will. They seek out resources to help themselves. Many of these resources, especially those grounded in psychological science, have provided hope and healing to millions. And yet, when it comes to AI-driven mental health tools, the conversation is shifting toward fear and liability. Some argue that if an AI mental health support tool interacts with a struggling person who later harms themselves, the creators of the tool should be held responsible.
I believe this is a mistake.
AI Tools Must Never Pretend to Be Therapists
First, let’s be absolutely clear:
AI should never call itself a “therapist” or anything that implies it is one.
The term "therapist" is not just a casual label. It refers to a licensed professional role, one that carries a complex set of ethical obligations, legal responsibilities, and clinical standards. It also conveys an expectation of human connection: a genuine relational bond that is fundamental to the therapeutic process.
For these reasons, many practitioners are understandably uncomfortable, even outraged, when AI products use the term “therapist” or similar language. It’s not just a technicality. Misusing the term could mislead vulnerable users into believing they are receiving real clinical care, delay them from seeking appropriate human support, and create false reassurance at moments when real risk demands human intervention.
Labeling an AI support tool as a “therapist” would be unethical and potentially dangerous. Responsible developers must describe these tools accurately for what they are: advanced, engaging, psychoeducational self-help resources.
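To make that concrete, here is a minimal, purely illustrative sketch of how a tool's opening message might foreground this framing. The wording, names, and structure below are hypothetical assumptions for illustration, not drawn from any real product:

```python
# Illustrative only: a hypothetical session-opening disclosure for a
# self-help AI tool. Names and wording are assumptions, not a real product's.

DISCLOSURE = (
    "I'm an automated self-help resource, not a therapist and not a "
    "substitute for professional care. If you are in crisis, please contact "
    "a licensed professional or a local crisis line."
)


def open_session(user_name: str) -> str:
    """Build the first message of a session, always leading with the disclosure."""
    return f"Hi {user_name}. {DISCLOSURE} What would you like to work on today?"


if __name__ == "__main__":
    print(open_session("Alex"))
```

The point is not the code itself but the design choice it represents: the tool states what it is before it does anything else.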
What Good AI Mental Health Tools Can Offer
When developed responsibly, AI mental health tools represent the next evolution of self-help: dynamic, personalized, and adaptive.
Good AI support tools can:
Teach evidence-based strategies drawn from cognitive behavioral therapy (CBT), acceptance and commitment therapy (ACT), dialectical behavior therapy (DBT), and other therapies
Deliver psychoeducation about emotional health
Provide motivational encouragement tailored to the user
Suggest pathways to real-world support when needed
Recognize patterns of high-risk behavior and prompt users to seek appropriate professional care
Unlike static materials such as workbooks or one-size-fits-all meditation apps, thoughtfully designed AI tools can adapt to a user's situation in real time. They cannot and should not replace therapy, but they can act as accessible companions to healing, offering guidance, support, and encouragement when it is most needed.
To fulfill their promise without creating harm, AI mental health tools must be transparent about what they are, avoid misleading or exaggerated claims, build in ethical safeguards to detect concerning patterns, and consistently encourage users to seek human connection and professional support when appropriate.
When these principles are followed, AI mental health support becomes not only safer but also genuinely helpful.
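As one concrete illustration of the "ethical safeguards" principle, here is a minimal, hypothetical sketch of a screening layer that watches for high-risk language and redirects the user toward human support. The phrase list, message wording, and structure are assumptions for illustration only; a real system would rely on clinically validated risk detection and human oversight, not a keyword list:

```python
# Hypothetical safeguard layer: screen each user message for high-risk
# language and, if found, respond by pointing the user to human support.
# The phrase list and wording are illustrative assumptions; a production
# tool would use clinically validated risk detection, not keyword matching.
from dataclasses import dataclass
from typing import Optional

HIGH_RISK_PHRASES = ("end my life", "kill myself", "no reason to live")

ESCALATION_MESSAGE = (
    "It sounds like you may be going through something serious. I'm a "
    "self-help tool, not a therapist, and this is a moment for human support. "
    "Please reach out to a licensed professional or a local crisis line now."
)


@dataclass
class SafeguardResult:
    escalate: bool                  # True when the tool should step back and redirect
    response: Optional[str] = None  # message shown instead of normal content


def screen_message(user_message: str) -> SafeguardResult:
    """Return an escalation response if the message contains high-risk language."""
    text = user_message.lower()
    if any(phrase in text for phrase in HIGH_RISK_PHRASES):
        return SafeguardResult(escalate=True, response=ESCALATION_MESSAGE)
    return SafeguardResult(escalate=False)


if __name__ == "__main__":
    result = screen_message("Lately it feels like there's no reason to live.")
    if result.escalate:
        print(result.response)
```

Even a sketch this simple makes the underlying principle visible: the tool's first responsibility at moments of apparent risk is to step aside and point toward human care.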
Balancing Innovation, Ethics, and Access
If we insist that AI mental health tools must meet the same standards as licensed therapy before being made available, we set an unrealistic and counterproductive expectation. By that standard, we would also have to question the existence of self-help books, meditation apps, podcasts, and countless other resources that offer support but do not replace professional care.
The reality is simple: Support, even if imperfect, matters. Resources, even if not therapy, can save lives.
Importantly, people are already turning to general-purpose AI models for emotional support—models that have not been specifically designed or tuned for mental health needs, and that lack critical safeguards. Ignoring this trend does not protect vulnerable users. In this context, encouraging the development of responsible, purpose-built AI mental health tools is a harm reduction strategy. It ensures that when people seek help through AI, they find tools that are thoughtfully built, ethically guided, and designed to promote real-world human connection when needed.
When we confuse support with treatment, or punish those who offer thoughtful tools, we risk leaving people more alone, not less.
AI, responsibly developed, offers a profound opportunity to expand access, empower individuals, and support mental health at scale.
We should protect that opportunity, not destroy it out of fear.