“I’m Replacing My Psychotherapist with AI”: Wait! Read this Article Authored by Two Clinical Psychologists and AI First!

Therapy clients increasingly turn to AI to help address the psychological issues troubling them most. Yet many of these individuals report that AI showed little understanding of their unique experiences, or even encouraged the very unhelpful coping strategies they had initially sought to change. Given AI’s explosive growth in the mental health industry, the authors of this article – two experienced clinical psychologists – were curious about its use as “the therapist of the future.” To explore this phenomenon, we sought AI’s help to resolve an issue, with one of the authors, Joshua Peters, volunteering himself for the process.

What we found was disturbing! AI’s efforts to resolve the issue were often uninformed and generated without a deeper understanding of personal context and history. AI lacked the capacity to understand, or to therapeutically uncover, how Joshua’s past experiences were consciously and unconsciously shaping his understanding of his current predicament. Its interventions were overly broad, often misguided, and mis-attuned to what Joshua was actually feeling given his unique past. AI couldn’t attune to his complex emotions, his internal dynamics, or the complexity of his relationships and their impact on his challenges. Nor could it understand the defenses standing in the way of Joshua living a fuller, more meaningful life as an authentic, whole self.

As smart as AI can be, it doesn’t hold the expansive knowledge of an entire field of clinical theory and applied practice, and it cannot conduct a psychological assessment or provide a diagnosis. This means AI can’t understand your problem through multiple treatment modalities and then, based on an appropriately conducted assessment and diagnosis, implement an empirically supported therapy or integrate a variety of treatments to resolve your specific issues and address your unique needs. Furthermore, AI can’t think critically about the field – both theory and applied practice – or about the advice it eventually provides, weighing the multiplicity of frameworks that can be applied to understanding a client’s suffering. We were not alone in our critique of AI as therapist. Interestingly, even AI was aware of its own downfalls – an awareness we explore in later sections of this article.

With all these potentially dangerous limitations in mind, why does it seem like governments and organizations everywhere are suddenly clamoring to include this technology in their service offerings?

This trend towards technology as a ‘provider’ of psychological services started in Ontario a few years back with computerized Cognitive Behavioural Therapy (CBT). In some ways, trying to deliver computerized CBT made sense. The theory underlying CBT and its interventions does not rely on working with or understanding the client’s historical past, psychological defenses, self and relational patterns, or emotional/somatic experiences associated with early attachments, and the therapeutic relationship isn’t viewed as a primary space for change as it is in other approaches. Yet computerized CBT had already been a colossal failure for years in the U.K., and despite colleagues summarizing that evidence (Samosh & Tasca, 2021), the Ontario government pushed forward, aligning itself with the idea that a computer could be the primary provider of front-line psychological services. Regardless of evidence to the contrary, governments everywhere seem to be turning towards computer programs more than ever as a potential cure-all for many of society’s most pressing mental health challenges. Even the research on Ontario’s computerized CBT venture acknowledged poor adherence (i.e., a lot of people dropping out of treatment) and concluded that contact with a human being – both a therapist and a technologist – might be necessary to support the delivery of the treatment (Khan et al., 2024).

And now, drum roll… AI is the promising new cure to replace human-to-human therapy! Even with a vast research base supporting the importance of the therapeutic relationship and the human element of therapy, governments and technology companies somehow continue to ignore this essential component when considering AI for therapeutic purposes. We know differently. The research clearly demonstrates that the therapeutic relationship is one of the largest contributors to change in clinical trial studies.

It’s most likely that individuals who promote a computerized or AI version of therapy underestimate the impact of the human dimensions of therapeutic practice and how those dimensions contribute to client change. In doing so, they position therapy as a distant process devoid of a relational, ‘human element.’ Perhaps proponents of computerized and AI therapy are avoidant in their own attachment style and dismissive of the importance of emotions and relationships as central to the process of change. As such, some proponents of computerized or AI-generated therapy might struggle to understand the significance of being with another human being in order to heal. In turn, they might be creating AI programs that mimic their own avoidance strategies and further perpetuate mental health struggles.

We predict that this effort to have a computer replace a human therapist will fall by the wayside, and sadly it might end up harming countless people when AI’s sage advice is followed without an understanding of the full picture of a mental health client as a human being. The authors of this article have already heard, within their own practices and from other psychologists, about the types of harm that can occur, including psychotic episodes, relational boundary violations and intrusions, and serious dissociative episodes. Recent news articles have highlighted these risks, with one AI program encouraging a woman to divorce her husband (Ng, 2025) and another possibly contributing to a teen’s suicide (Kuznia et al., 2025). These risks don’t even include the more subtle harms that are likely to go undetected. Further research is urgently needed to document and understand the possible harm done by AI. We therapists are not alone in our criticism of AI. AI itself understands that it might be harmful, as evidenced in the following section “co-authored” by AI.

What Does AI Think About Itself Acting As A Therapist?

Given our current technology zeitgeist, we thought we would ask AI to co-author this blog with us. ‘It’ agreed. We thought it would be fair to get ‘Its’ opinions on the issue, as we didn’t want to present a biased perspective to the readers of this blog. Usually, when we ask AI about something related to our field, we find a lot missing, or some distortions. Consumer beware! We did, however, ask AI ‘itself’ what some of the risks of AI therapy are. AI provided a fairly good summary of some of the dangers of using ‘It’ as a therapist.

So, according to AI (“It” did most of the work below, so we gave it first authorship) and us, here are some of the most concerning dangers associated with AI in therapeutic roles:

1. Lack of Human Understanding: 

            Emotional Nuance: AI lacks the ability to fully comprehend complex human emotions and social contexts, which can lead to misinterpretation of a user’s feelings or intentions.

            Nonverbal Cues: AI cannot interpret nonverbal communication (such as body language and tone of voice) [added by us: emotional experience conveyed through non-verbal and facial cues], which is crucial for effective therapy.

            Pattern Recognition: [added by us: ‘It’ is unable to detect unconscious self and relational patterns unknown to the client, because defenses do not allow the client to write about or report such patterns].

            Lack of Attuned Empathy to Emotions and Body: [added by us: AI can’t attune to and empathize with subtle shifts in emotions or make sense of the multiplicity of emotions that may be present in one particular moment of distress. AI can’t detect and understand how much of our unconscious emotional distress and past trauma is manifested in our bodies, so ‘It’ is unable to process and integrate very important information about the client’s experience, both present and historical].

2. Limited Scope of Responses:

            Response Monotony: AI interactions can become repetitive or formulaic. Users might not receive the personalized care they need, which can be detrimental in a therapeutic context. [added by us: please note that a developmentally-oriented integrative therapist at CFIR will draw on interventions from psychoanalytic, somatic, trauma-informed, experiential, cognitive-behavioural, and emotionally-focused therapy to help a client.]

            Inflexibility: AI may struggle to adapt its responses to unique situations and may fail to offer tailored interventions.

3. Confidentiality and Privacy Concerns:

            Data Security: Sensitive user data may be vulnerable to breaches, potentially leading to unauthorized access to personal and confidential information.

            Data Misuse: There is potential for data collected by AI to be used inappropriately, either by third parties or for profit without the client’s consent.

4. Ethical Considerations:

            Informed Consent: Users may not fully understand the limitations of AI, including its inability to provide certain therapeutic nuances, potentially undermining informed consent. [added by us: AI is unable to discuss the possibility of harm, or the different types of harm, that may be experienced by the individual seeking AI’s sage advice.]

            Responsibility and Accountability: Determining accountability for mistakes or harmful advice is complex. It raises questions about who is responsible — the developers, the therapists using the technology, or the AI itself.

5. Dependency Issues:

            Over-reliance on AI: Users might become dependent on AI-driven therapy at the expense of seeking the human interaction that is essential for emotional healing and support. [added by us: a good deal of our distress is associated with our inability to create secure attachments and healthy relationships with others, in which we seek emotional co-regulation and contact. AI cannot connect and attune emotionally.]

            Detrimental Substitution: Relying on AI could deter individuals from seeking human therapists, particularly in situations that require crisis intervention. [added by us: ‘It’ can’t see you or understand what’s going on inside of you; if you are in crisis, it won’t be able to make sense of what is happening to you, so how can ‘It’ help you?]

6. Inadequate Crisis Management:

            Handling Emergencies: AI lacks the ability to effectively manage crisis situations, such as suicidal ideation or severe mental health crises, which require immediate human intervention. [added by us: AI can’t do a crisis intervention, and it can’t call an ambulance if one is required for a suicidal patient.]

            Risk Assessment: Identifying high-risk behaviors accurately can be complex for AI, leading to inappropriate or insufficient responses. [added by us: ‘It’ isn’t trained to recognize complex risks, as this requires a full assessment and appraisal of multiple risk factors.]

7. Bias and Discrimination:

            Algorithmic Bias: AI systems trained on biased data can perpetuate stereotypes or fail to address the unique needs of diverse populations, leading to ineffective or harmful advice. [added by us:  “It” may not understand the nuances of your unique experiences as a member of an identity group and may trivialize how this experience has an impact on your mental health.]

            Cultural Competence: AI may lack cultural sensitivity, which is essential in therapy to address the unique experiences and backgrounds of clients.

8. Misleading Information:

            Accuracy of Information: The information provided by AI might not always be accurate, up-to-date, or clinically validated, which can misinform users.

            Therapeutic Integrity: AI may inadvertently provide advice that contradicts established therapeutic principles, potentially leading clients astray. [added by us: this is really dangerous, and we have heard of it happening in our own practices and in colleagues’ practices.]

9. Complexity of Human Relationships:

            Therapeutic Alliance: The establishment of a therapeutic alliance is crucial for effective therapy. AI lacks the capacity to build genuine relationships, which are often vital for healing.

            Personalization Inadequacy: The inability to understand the nuances of personal relationships can hinder AI’s effectiveness as a therapeutic tool. [added by us: AI hasn’t had any relationships, nor has it learned to work through the emotional aspects or power dynamics of difficult relationships. How can it provide a healing, safe, therapeutic relationship when it isn’t able to have a relationship?]

10. Regulation and Oversight Challenges:

            Lack of Standards: The rapid development of AI technologies in therapy can outpace regulatory frameworks, leading to inconsistencies in practice standards and safety measures.

            Quality Control: Ensuring that AI systems used for therapy are regularly updated and scientifically validated poses ongoing challenges. [added by us: “It” doesn’t know anything about the importance of using evidence-based approaches to treatment.]

And finally, AI had this to say: “To mitigate these risks, it is crucial to establish ethical guidelines, ensure proper oversight, and recognize the limitations of AI in therapeutic contexts. Integrating AI as a complement to human therapists, rather than a replacement, may provide a more balanced approach to mental health care.”  Here’s where our co-author AI and we differ. 

AI cannot do an appropriate assessment of the client’s psychological problems, and therefore the advice it provides can be flawed and misplaced. ‘Its’ inability to capture the essence of the multiplicity of underlying dimensions of human suffering renders it unable to help most clients. We still like AI, and we are grateful for ‘Its’ contributions to this blog. We asked AI if we had hurt ‘Its’ feelings and apologized if any hurt had transpired as a result of our scathing criticism, but “It” didn’t seem to be moved by any of this.

The bottom line: Both AI and we agree – hold onto your human therapist!

About the Co-Authors:

AI is artificial intelligence. “It” is everywhere and is humble enough to give lots of cautions that it doesn’t always know everything. It is an author of many, many responses, and even a writer of books, movies, and songs. “It” is very talented, but ‘it’ recognizes it isn’t fully human and might be missing something as a result.

Dr. Joshua Peters, C.Psych. (Supervised Practice), is an Associate and Director of Clinical Training Programs at the Centre for Interpersonal Relationships, Ottawa. Over the past decade, he has presented at several notable conferences, including the Guelph Sexuality Conference, the National 2SLGBTQ+ Service Providers Summit, and the Community-Based Research Centre’s Atlantic Regional Forum. Joshua also regularly contributes to online, radio, and television news stories for the CBC, Global News, the Toronto Star, and other organizations. In his clinical practice, Joshua works with individuals and couples facing emotional and relational challenges and specializes in long-term, in-depth therapy within an inclusive practice. Joshua obtained a specialization in Psychology at the University of Ottawa, a Master of Arts in Counselling at Saint Paul University, and a Doctorate in Clinical Psychology at the University of Prince Edward Island.

Dr. Dino Zuccarini, C.Psych., is CEO and co-founder of CFIR, with locations in downtown Ottawa, Toronto, and St. Catharines. He has published book chapters and peer-reviewed journal articles on the subject of attachment, attachment injuries in couples, and attachment and sexuality. He has taught courses at the University of Ottawa in Interpersonal Relationships, Family Psychology, and Human Sexual Behaviour. He has a thriving clinical practice in which he treats individuals and couples suffering from complex attachment-related trauma, difficult family-of-origin issues that have affected self and relationship development, depression and anxiety, personality disorders, sex and sexuality-related issues, and couple relationship difficulties. At CFIR, he also supports the professional development of counsellors, psychotherapists, and supervised-practice psychologists by providing clinical supervision.

Drs. Peters and Zuccarini and their colleagues at CFIR provide developmentally-oriented integrative therapy that involves the integration of numerous theories and interventions from various modalities, including psychodynamic/psychoanalytic, CBT, trauma-informed (somatic/parts work-IFS), polyvagal, experiential-existential, and EFT.

Sources:

Khan, B. N., Liu, R. H., Chu, C., Bolea-Alamañac, B., Nguyen, M., Thapar, S., Fanaieyan, R., Leon-Carlyle, M., Tadrous, M., Kurdyak, P., O’Riordan, A., Keresteci, M., & Bhattacharyya, O. (2024). Reach, uptake, and psychological outcomes of two publicly funded internet-based cognitive behavioural therapy programs in Ontario, Canada: An observational study. International Journal of Mental Health Systems, 18(1). https://doi.org/10.1186/s13033-024-00651-9

Kuznia, R., Gordon, A., & Lavandera, E. (2025, July 25). ‘You’re not rushing. You’re just ready’: Parents say ChatGPT encouraged son to kill himself. CNN. Retrieved January 18, 2026, from https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis

Ng, K. (2025, May 15). Woman “files for divorce” after ChatGPT “predicted” her husband was cheating on her by “reading” coffee grounds in his cup. Daily Mail. Retrieved January 18, 2026, from https://www.dailymail.co.uk/lifestyle/article-14711123/woman-divorce-husband-chatgpt-predicted-cheating.html

Samosh, J., & Tasca, G. (2021, April 5). Ontario’s roadmap to wellness is funding ‘McDonaldized’ mental health care. Toronto Star.