AHPRA has released updated guidance for psychologists in the form of fact sheets supporting the new draft professional competencies for psychology, due to come into effect on 1 December 2025. This is the third article in our coverage of the AHPRA guidelines and looks in detail at the implications for compliance with Australian privacy and data retention laws. In its Draft Competencies, AHPRA specifically contemplates that practitioners may need to consider training on how to evaluate the efficacy and security of an artificial intelligence system.

This article sets out the key considerations for practitioners when assessing the efficacy of any software application (including apps or web-based applications) that relies on or utilises artificial intelligence as part of its service (referred to in this article as “AI software”). In considering the efficacy of AI software, we are asking whether it performs in the manner in which it is intended to perform. This is highly relevant to these systems because, despite the fact that they are termed a form of ‘intelligence’, they are highly sophisticated language emulation models and do not apply any form of judgement or higher-level reasoning to the material they are provided.


Bias in AI Software Systems

Given that AI software is input- and algorithm-driven, it is only capable of processing data in accordance with its internal rules. Consequently, there are shortcomings in its processing which might not be immediately apparent.

It’s important to understand the different types of bias that can occur in AI software systems:

  • Data bias: Occurs when the training data is not representative of the population the AI software will serve.
  • Algorithmic bias: Arises from the design or implementation of the AI software leading to unfair outcomes.
  • User bias: Results from how users interact with the AI software system, potentially reinforcing existing biases.


Psychologists and AI: Data Bias

Large Language Models have been trained on data that is accessible in the public domain (both uncopyrighted and, frequently, copyrighted). As at 2023, the cut-off on which most models in use in 2025 base their training data, the internet was predominantly written in English by first-world authors. It is highly unlikely that the data used to train an AI software system will reflect the diversity of the patient population (e.g., age, gender, ethnicity, socioeconomic status, and medical conditions). There is a high likelihood that some groups will be underrepresented in, or missing entirely from, the training data.
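Vendors rarely disclose their training data, but where demographic metadata about a training corpus is available (or can be requested), a simple representativeness check can reveal underrepresented or missing groups. The Python sketch below is purely illustrative: the field names, figures and the 10% tolerance are assumptions, not a description of any real product.

```python
from collections import Counter

# Hypothetical demographic labels attached to training transcripts; the field
# names, figures and the 10% tolerance below are illustrative assumptions only.
training_records = [
    {"language": "English", "age_band": "18-34"},
    {"language": "English", "age_band": "18-34"},
    {"language": "English", "age_band": "35-54"},
    {"language": "Spanish", "age_band": "55+"},
]

# The population the clinic actually serves (shares sum to 1.0).
clinic_population = {"English": 0.60, "Spanish": 0.25, "Mandarin": 0.15}

def representation_gaps(records, expected_shares, key="language", tolerance=0.10):
    """Flag groups whose share of the training data falls well short of their
    share of the clinic population, or that are missing from it entirely."""
    counts = Counter(record[key] for record in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in expected_shares.items():
        observed = counts.get(group, 0) / total
        if observed + tolerance < expected:
            gaps[group] = {"expected": expected, "observed": round(observed, 2)}
    return gaps

print(representation_gaps(training_records, clinic_population))
# -> {'Mandarin': {'expected': 0.15, 'observed': 0.0}}
```

Even a coarse check of this kind makes the gap concrete: a group that makes up a meaningful share of the clinic's clients may be entirely absent from the data the tool learned from.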

Consider, for example, an AI software system designed to summarise patient notes. The tool transcribes and condenses session content into key points, helping therapists save time on documentation. However, the AI was trained on data drawn primarily from sessions conducted in English and with younger adults, leading to biases in how it processes and summarises information from non-English speakers, older adults, or individuals with unique speech patterns.

Such a tool may have a number of issues as a result of this data bias:

Data Bias in Language Comprehension

The AI software may fail to accurately transcribe and summarise sessions conducted in languages other than English or with heavy accents. For example, a Spanish-speaking patient describing their anxiety might be misinterpreted, leading to a summary that omits critical details or misrepresents their concerns. As a result, the therapist might miss important insights, leading to less effective treatment planning.

Data Bias in Age Demographic Variance

The AI tool may fail to capture the nuances of how older adults express themselves, such as focusing more on physical symptoms (e.g., fatigue, pain) rather than emotional states. For instance, an older adult discussing grief over the loss of a spouse might have their summary focus on physical complaints rather than the underlying emotional distress.

Data Bias in Cultural Variance

The AI tool might not recognise culturally specific expressions or idioms, leading to summaries that strip away important cultural context. For example, a patient from a collectivist culture might describe their stress in terms of family obligations, but the AI might summarise this as generic “work-related stress”, losing the cultural nuance.

Data Bias and Stereotypes in the Training Data

If the AI tool is trained on biased data, it might reinforce stereotypes in its summaries. Given that much of the material available in the public domain is dated, there is a high likelihood that the training data will contain implicit stereotypes about how various demographics use language. For example, it might over-emphasise emotional language in women’s sessions while downplaying similar expressions in men’s sessions, perpetuating gender biases.

Data Bias and Minority Populations

Potentially, all of the above data biases may be present when generating summaries for Indigenous clients.

The potential for these biases to emerge in extractive AI software systems (for instance, note-taking or summarisation software) is very high. It should be noted that this form of bias is very difficult to mitigate and, in practice, can only be managed.


Psychologists and AI: Algorithmic Bias

In addition to data bias, algorithmic bias can emerge in AI software systems. All algorithms are subject to internal settings which prioritise, or apply weight to, the interpretation of certain types of language or emotional expressions. The consequence of these hidden algorithmic settings is that client-input data may be interpreted in ways that produce summaries which disproportionately emphasise or de-emphasise specific content. It should be noted that this effect is inherent to the design rather than malicious.

Such a tool may have a number of issues as a result of this algorithmic bias:

Algorithmic Bias in Weighing Emotional Content

The algorithm may be designed to prioritise “high-impact” words and phrases, such as those associated with negative emotions (e.g., “sad,” “angry,” “hopeless”). As a result, the summary of such a transcript may disproportionately focus on negative aspects of the session, even if the patient also discussed positive developments or coping strategies. For example, a patient who shares both their struggles with anxiety and their progress in managing it might have their summary focus solely on the anxiety, missing the progress entirely.
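A minimal sketch of this effect, using a toy extractive summariser with invented keyword weights (not any particular vendor’s algorithm), shows how weighting “high-impact” words can crowd positive content out of a short summary.

```python
import re

# Invented keyword weights; real systems use opaque learned weights, but the
# effect sketched here is the same: negatively framed sentences score higher
# and crowd positive content out of a short summary.
HIGH_IMPACT_WORDS = {"anxious": 3, "hopeless": 3, "sad": 2, "angry": 2}

def naive_extractive_summary(transcript, max_sentences=1):
    """Return the highest-scoring sentences by 'high-impact' keyword weight."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", transcript) if s.strip()]

    def score(sentence):
        words = re.findall(r"[a-z']+", sentence.lower())
        return sum(HIGH_IMPACT_WORDS.get(word, 0) for word in words)

    return sorted(sentences, key=score, reverse=True)[:max_sentences]

session = (
    "I still feel anxious and hopeless before work meetings. "
    "I used the breathing exercise every day this week. "
    "It helped, and I managed to speak up twice without panicking."
)
print(naive_extractive_summary(session))
# -> ['I still feel anxious and hopeless before work meetings.']
# The client's progress with the breathing exercise is dropped entirely.
```

The point of the sketch is not the code itself but the design choice it exposes: whenever a summariser must compress a session into a few points, something decides which points survive, and that decision embeds a value judgement the therapist never sees.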

Algorithmic Bias in Considering Idioms or Expressions

The algorithm might struggle to recognise and summarise subtle or nuanced expressions, such as metaphors, indirect statements, or culturally specific language. For instance, a patient who says, “I feel like I’m carrying the weight of the world on my shoulders” might have this statement summarised as “patient reports feeling tired”, losing the emotional depth and context.


Mitigating Bias Risk in AI Software Systems

To address the bias issues set out above, practices should:

  1. Never rely solely on AI-generated summaries. Critically assess each summary for accuracy and completeness as soon as possible after the session, and query whether the notes accurately represent the tone and content of the session.
  2. Acknowledge that all training data has inherent bias. Select AI software that corrects for potential bias in its training data and that allows therapists to flag inaccuracies or biases in the summaries, which can be used to improve the algorithm over time (a minimal logging sketch follows this list).
  3. Train therapists on tool limitations: educate therapists about the potential biases and limitations of the AI tool to encourage critical evaluation of its outputs. Share appropriately anonymised examples.
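As a purely illustrative sketch of points 1 and 2, a practice could keep a simple structured review log each time a clinician checks an AI-generated summary. The field names below are hypothetical, and any real register would need to be designed around the practice’s privacy and record-keeping obligations.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

# Hypothetical, minimal record for contemporaneous review of an AI summary.
# Field names are illustrative only; a real register would need to be designed
# around the practice's privacy and record-keeping obligations.
@dataclass
class SummaryReview:
    session_id: str                  # internal identifier only, no client details
    reviewed_by: str                 # reviewing clinician
    summary_accepted: bool           # did the summary reflect the tone and content?
    corrections: list[str] = field(default_factory=list)  # what was missed or distorted
    suspected_bias: str | None = None                      # e.g. "negative content over-weighted"
    reviewed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

review = SummaryReview(
    session_id="S-1042",
    reviewed_by="clinician-07",
    summary_accepted=False,
    corrections=["Omitted client's reported progress with sleep routine"],
    suspected_bias="negative content over-weighted",
)

# Append to a simple local audit log (one JSON record per line).
with open("summary_reviews.jsonl", "a", encoding="utf-8") as log:
    log.write(json.dumps(asdict(review)) + "\n")
```

Kept consistently, a log of this kind gives the practice evidence of contemporaneous human review and a record of recurring bias patterns that can be raised with the software vendor.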


Conclusion

As set out above, an AI software tool that summarises session notes from client data can produce inaccurate summaries, miss critical details, and potentially result in unfavourable client outcomes. By addressing bias through contemporaneous review and human oversight, therapy practices can ensure that AI tools assist rather than distort therapy practice.
