As an emerging technology, Artificial Intelligence (AI) has significant potential for Psychologists, particularly in relation to note-taking and record-keeping, but also in relation to assessments and report drafting. AHPRA has released updated guidance for Psychologists in the form of fact sheets supporting the new draft of the Professional competencies for psychology, due to come into effect on 1 December 2025. This first article covers some of the risks and benefits associated with the use of AI by professionals.
Artificial Intelligence for Psychologists: First, the Risks
By now, most practitioners would be familiar with some of the risks of utilising AI in professional practice. For instance:
- In Victoria, a Melbourne lawyer was referred to the Victorian legal complaints body for professional misconduct after admitting to using artificial intelligence software to generate false case citations, resulting in a court adjournment. Specifically, the lawyer used a generative AI feature of the legal software Leap to draft submissions for the court, and the tool hallucinated a number of case citations.
- In New York, a federal district court judge imposed sanctions on two New York lawyers who submitted a legal brief that included fabricated case citations generated by ChatGPT. “We made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth,” the firm’s statement said.
Generative AI
In both cases, the practitioners in question appeared to be labouring under the impression that generative AI tools were drafting submissions that contained factually correct arguments and references. These stories illustrate the core risk of generative AI – tools designed to process and generate text based on the data on which they were trained. While they can provide helpful and informative responses, it is essential to recognise that they are principally language emulators and cannot apply independent judgement. With this in mind, hallucinations are not deviations from how AI models work; they are a continuation of the logic of language emulation.
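The point about language emulation can be made concrete with a toy sketch. The miniature "corpus" below and everything generated from it are invented for illustration: a model that merely learns which word tends to follow which will produce fluent, legal-sounding text with no notion of whether any of it is true.

```python
import random

# Toy bigram "language emulator": it learns only which word follows which,
# then generates plausible-sounding text with no concept of factual truth.
corpus = (
    "the court held that the duty of care was breached "
    "the court found that the duty was owed to the plaintiff "
    "the judge held that the claim was dismissed"
).split()

# Build a table of possible next words for each word seen in the corpus.
bigrams = {}
for a, b in zip(corpus, corpus[1:]):
    bigrams.setdefault(a, []).append(b)

random.seed(1)
word = "the"
output = [word]
for _ in range(12):
    # Pick any word that ever followed the current one; fall back to "the".
    word = random.choice(bigrams.get(word, ["the"]))
    output.append(word)

print(" ".join(output))  # fluent, court-flavoured text – not a real holding
```

Even this trivial emulator stitches together sentences that read like legal findings; a large model does the same thing with vastly more data, which is why its fabrications look so convincing.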
The potential risks arising from generative AI have been covered in close detail in the World Health Organization (WHO) report entitled “Ethics and Governance of Artificial Intelligence for Health: Guidance on Large Multi-Modal Models”, which specifically addresses the risk of AI displacing independent medical judgement.
Extractive AI
The issue, however, is not limited to generative AI: extractive AI has distinct but parallel issues. Extractive artificial intelligence does not generate content; rather, it operates by compiling specific, relevant information from various sources, acting much more like a sophisticated filter. Summary tools generally rely on extractive AI. The extractive approach, however, is not without its own cautions:
- Recent findings from an in-depth study by ASIC indicate that AI-generated summaries, specifically those produced by the Llama2-70B model, fall short of human-quality summaries. This suggests that while AI has made significant strides, it still has limitations when it comes to complex tasks such as document summarisation. By far the biggest weakness of the AI summaries was “a limited ability to analyze and summarize complex content requiring a deep understanding of context, subtle nuances, or implicit meaning,” ASIC writes.
- An in-depth analysis by Forbes identified theoretical limits on the extent to which AI can effectively summarise medical or health notes, as distinct from merely simplifying them. Health summarisation requires deep contextual knowledge, without which the filtering mechanism may miss critical elements of the record, with the risk that the tool filters out the pertinent details.
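The filtering behaviour behind that risk can be sketched in a few lines. The function, the clinical note text, and the frequency-scoring approach below are all invented for illustration and are far cruder than real summarisation tools, but they share the same filtering logic: sentences are scored and the low-scoring ones are simply dropped.

```python
import re
from collections import Counter

# Minimal sketch of an extractive summariser: score each sentence by the
# frequency of its words across the whole text, and keep only the top ones.
def extractive_summary(text: str, n_sentences: int = 1) -> str:
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> int:
        # A sentence built from frequently repeated words scores highly;
        # a one-off detail scores low, however important it may be.
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    chosen = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    # Emit the chosen sentences in their original order.
    return " ".join(s for s in sentences if s in chosen)

notes = (
    "Client reported improved sleep this week. "
    "Client reported reduced anxiety at work and much improved sleep. "
    "A brief mention of a new medication was made in passing."
)
print(extractive_summary(notes, 1))
# The repeated sleep/anxiety themes dominate; the single mention of a new
# medication is filtered out entirely.
```

The one-line summary keeps the high-frequency themes and silently discards the medication sentence – which, clinically, may be the most important detail in the record. This is precisely the risk the Forbes analysis describes: filtering without contextual understanding.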
To that extent, enthusiasm about AI transforming the profession is somewhat naive. Researchers in the field have cautioned that there are conceptual limits on the level of judgement AI is capable of exercising. While the allied health professions may be augmented by AI, they are not capable of being replaced by it.
Artificial Intelligence for Psychologists: The Benefits
Notwithstanding these caveats, to the extent that practitioners can navigate these shortcomings, AI tools offer scope for significant benefit. Some strong use cases include:
- Summarisation (or, strictly, simplification) of factual material from a session, where the practitioner inputs rough notes, a recording, or a transcript into the AI program;
- Drafting of introductions or conclusions to reports summarising material that has preceded or follows;
- AI may be able to provide limited levels of targeted psycho-education to clients; and,
- AI may assist in client care by enabling the creation of personalised treatment plans with greater ease and speed. AI may be able to suggest additional treatment strategies.
It should be noted that in all use cases, independent judgement ought to be applied by the practitioner prior to any reliance. In the next article in this series we will take a closer look at the guidance provided by AHPRA in its forthcoming Professional competencies for psychology, which stresses the necessity of independent judgement.
Therapas also provides a Template Client Agreement for Psychologists that specifically contemplates the necessary disclosures and consents for use of the most popular tools for Psychologists.