How are you using AI? Protecting the client/practitioner relationship in a modern age

Sep 29, 2025

I need to start by saying - I don’t hate AI. In fact, I suspect in the years that come, there will be some incredibly client centred and accessibility-based strengths that come from the newest technological advances. However, the other week I was sitting with my clinical supervisor, musing over the fact that while I don’t hate AI, I am sure troubled by the rampant use of it when it comes to client interface, relational processes like charting, and the idea that we need to get more efficient when it comes to “charting” as an administrative task.

I suppose that’s where this blog began: a moment of appreciation for what technology is capable of, and a moment of fear when I consider the implications of one of the fastest-moving, most aggressively rolled-out technological developments the world has ever seen.

Artificial Intelligence (AI) is rapidly transforming industries, from finance to healthcare, and social work is no exception. With its ability to automate documentation, analyze behavioural patterns, and even simulate therapeutic dialogue, AI offers compelling possibilities for streamlining a number of areas of the profession.

Generative AI tools, such as large language models, can draft clinical notes, suggest treatment goals, and flag inconsistencies in documentation. These functions are particularly attractive in high-volume environments where social workers face mounting caseloads and burnout.¹

Yet beneath the surface of efficiency lies a web of ethical complexities that demand careful scrutiny. Many experts argue that the risks currently outweigh the rewards, and that AI should not yet be used in social work practice.²


The Temptation of AI in Charting

AI tools like ChatGPT, Copilot, and other generative models can:

  • Draft clinical notes from recorded session transcripts

  • Suggest treatment goals based on client history

  • Flag potential risks or inconsistencies in documentation

  • Reduce administrative burden, freeing up time for direct client care

These capabilities promise to alleviate the administrative load that often detracts from relational work. However, social work is not merely transactional. It is deeply relational, context-sensitive, and ethically grounded. Automating aspects of documentation may inadvertently erode the reflective and interpretive nature of the work, which is central to therapeutic engagement.³

This concern aligns with CASW’s Guiding Principle 1.1, which emphasizes respect for the inherent dignity and worth of all people, and Principle 1.6, which upholds the right to voluntary, informed consent.⁵


Ethical Complexities in AI-Driven Charting

Despite its potential, AI introduces serious ethical dilemmas that cannot be ignored: confidentiality and privacy, informed consent and client autonomy, bias and misrepresentation, the loss of human judgment, and frankly - liability and accountability.


Confidentiality and Privacy

AI systems often rely on cloud-based platforms and large datasets. This raises concerns about:

  • Unauthorized access to sensitive client data

  • Breaches of confidentiality

  • Compliance with privacy laws such as HIPAA (U.S.) or PIPEDA (Canada)

Social workers are ethically bound to protect client information. CASW’s Principle 3.4 calls for transparency about the limits of confidentiality and the preservation of privacy in electronic services.⁵


Informed Consent and Client Autonomy

Clients may not fully understand how their data is being used or processed by AI systems - and clinicians may not be fully clear on this themselves, or able to communicate it. Without clinician clarity and transparent disclosure, the principle of informed consent is undermined. Ethical practice demands that clients are made aware of how technology interfaces with their care.¹

This directly relates to CASW’s Principle 1.6, which affirms the right of service users to make decisions based on voluntary consent.⁵


Bias and Misrepresentation

AI models are trained on historical data, which may contain systemic biases. This can lead to:

  • Misrepresentation or stereotyping of marginalized groups

  • Inaccurate or unfair documentation

  • Reinforcement of institutional inequities

CASW’s Principle 2.1 advocates for social justice and equitable access to services, while Principle 2.2 calls for the protection of Indigenous Peoples from systemic racism and discrimination.⁵ As clinicians responsible for holding these values, where does our responsibility lie to avoid technology that risks reinforcing harmful biases?


Loss of Human Judgment

Charting is not just clerical. It’s interpretive. Social workers use clinical judgment to capture nuance, emotion, and context. AI lacks emotional intelligence, cultural competence, and ethical reasoning. Delegating this task to machines risks flattening the complexity of human experience.³ Moreover, it raises the question: what are clients getting from their service provider? If, as service providers, we sanitize the process of our own reflections and knowledge as clinicians - are we even providing clients the service they are seeking out?

This concern reflects CASW’s Value 7: Professional Integrity, which emphasizes self-awareness, reflection, and ethical decision-making.⁵

This may make me sound like I’m generations away from current therapists, but I remember recording and transcribing my first sessions as a counsellor - being essentially forced to relive the session. You know what else I remember? Discovering how much I had missed the first time around.

Self-reflection is core to the clinician’s side of the therapeutic bond. Without this skill we are nothing to our clients - except potentially harmful. Moreover, skills have to be practiced in order to stay sharp, so what reps are we giving up when we don’t hold up our end of things post-session?


Accountability and Liability

We can’t forget that our charts are also our written record. Many will remember the old adage that “if it’s not written down, it didn’t happen”. This begs another question, though: “if it does get written down, did it happen like that?”

If an AI-generated chart leads to a harmful decision, who is responsible? The social worker? The software developer? The agency? This ambiguity poses legal and professional risks.

The NASW Code of Ethics emphasizes accountability, and CASW’s Principle 3.5 reinforces the need for transparency and accountability in professional conduct.¹,⁵ Where is that accountability when chart content that isn’t in the client’s best interest ultimately causes harm? When that happens, and the train has left the station, the dust is settling, the injury has been done - how will you feel as a clinician?

Why AI Should Not Be Used - Yet

Given these concerns, many scholars and practitioners advocate for a moratorium on AI in social work, especially for charting, until robust safeguards are in place.

The rationale includes:

  • Preserving the integrity of the therapeutic relationship

  • Avoiding premature reliance on unproven tools

  • Upholding professional standards

  • Ensuring equitable outcomes²,³,⁵

Preserving Relational Integrity in a Digital Landscape

Ultimately, the client/practitioner relationship is built on trust, empathy, and attunement - qualities that cannot be replicated by algorithms. While AI may assist with administrative tasks, it must not interfere with the relational core of practice. Research in psychotherapy and social work consistently shows that the therapeutic alliance is one of the strongest predictors of positive outcomes.⁴

CASW’s Principle 1.1 and Value 6 emphasize the centrality of human dignity and relational ethics in practice.⁵

Moreover, clients may feel alienated or dehumanized if they perceive that their practitioner is relying on cloud-generated responses to understand their lived experience. The ethical imperative, then, is to ensure that AI augments, not replaces, the practitioner’s presence, empathy, and judgment.³


Building Ethical Infrastructure for AI Integration

To protect the client/practitioner relationship in the age of AI, social work must proactively build an ethical infrastructure that includes:

  • Transparent AI policies

  • Client-centered consent protocols

  • Interdisciplinary oversight

  • Ongoing evaluation

As AI becomes more embedded in practice settings, these safeguards are not optional - they are essential.²,⁵


Moving Forward: A Human-Centered Approach

Rather than rushing to adopt AI in charting and practice, the social work profession must:

  • Develop clear ethical guidelines for AI use

  • Invest in training social workers to critically evaluate AI tools

  • Advocate for transparency and accountability in AI development

  • Prioritize human judgment and relational ethics over technological convenience

As Reamer cautions, AI may eventually enhance social work practice, but only if it aligns with the profession’s core values and ethical standards.¹


Conclusion

So I’ll finish by saying, AI is not inherently unethical, but its application in social work demands a level of caution, reflection, and regulation that we have not yet achieved.

On a personal note, I’m working on the annoyance I feel every time I want to use an ‘em dash’ and I know it’s become the hallmark of a generated statement - and I have to refrain.

For now, from a practical and ethical perspective, I’m taking the stance that the best reflection, engagement, and charting tool remains the thoughtful, empathetic, and ethically grounded human mind.

Perhaps like the technology, my perspective will change, but for now - what you’re getting with WCC is human generated.

Warmly, Erin

References (AMA Style)

  1. Reamer FG. Artificial Intelligence in Social Work: Emerging Ethical Issues. Int J Soc Work Values Ethics. 2023;20(2):52-71. https://jswve.org/volume-20/issue-2/item-05/

  2. Ontario College of Social Workers and Social Service Workers. The Use of Artificial Intelligence in Practice. April 18, 2024. Accessed September 28, 2025. https://www.ocswssw.org/2024/04/18/the-use-of-artificial-intelligence-in-practice/

  3. EBSCOEIJER. Artificial Intelligence and Social Work (AI): Modern Ethical Concerns. Self: Ethics, Equity, and Justice Review. 2025;3(1). https://ebscoeijer.org/index.php/self/article/view/146

  4. Norcross JC, Wampold BE. Evidence-Based Therapy Relationships: Research Conclusions and Clinical Practices. Psychotherapy. 2011;48(1):98-102. doi:10.1037/a0022161

  5. Canadian Association of Social Workers. CASW Code of Ethics, Values and Guiding Principles 2024. Accessed September 28, 2025. https://www.casw-acts.ca/en/casw-code-ethics-2024