Artificial Intelligence (AI) has the potential to significantly transform the manner in which health practitioners provide health services. AI offers a significant opportunity to minimise the administrative burden associated with note-taking and with certain forms of drafting.
AHPRA has released updated guidance for psychologists in the form of fact sheets supporting the new draft professional competencies for psychology, due to come into effect on 1 December 2025. This is the third article in our coverage of the AHPRA guidelines, and it looks in detail at the implications for compliance with Australian privacy and data retention laws.
The new professional competencies for psychology make it clear that practitioners need to be confident that their use of these tools does not breach their privacy obligations, namely that the software complies with Australian privacy and data storage laws. It should be noted that privacy laws differ from the general principles of confidentiality by which practitioners are bound by virtue of their professional obligations.
Artificial Intelligence for Psychologists: Privacy Obligations
The Privacy Act 1988 (Cth) places significant emphasis on the protection of personal information. This legislation applies to any organisation providing a health service, regardless of size, that collects or holds personal health information. Relevantly for practitioners, personal information includes an individual’s name, signature, address, phone number, or date of birth, along with sensitive information such as information about their health, their religious beliefs, and their political opinions.
The definition of ‘health service’ under the Act is broad, encompassing a wide range of activities, from traditional medical care provided by doctors and hospitals to services offered by complementary therapists, gyms, and even some childcare centres. Essentially, any entity involved in assessing, maintaining, or improving a person’s physical or psychological health falls within the scope of the Act. Necessarily, this includes psychology, counselling, occupational therapy, and other similar practices.
The Privacy Act contains the Australian Privacy Principles (APPs), which outline stringent guidelines for how organisations may collect, use, and disclose personal information.
The Act distinguishes between personal information and sensitive information, with the latter attracting higher-level obligations. In practice, however, because almost all data collected by a health practice will include sensitive information, the distinction is of little consequence for health practitioners.
Key provisions include an individual’s right to access and correct their personal information. The Australian Privacy Principles are clearly written and easily accessible, and it is recommended that practitioners take the time to read through them on a regular basis.
OAIC Guidance
A principal consideration for practitioners exploring the use of Artificial Intelligence in their practice is ensuring compliance with the Australian Privacy Principles. The Office of the Australian Information Commissioner (OAIC) plays a role in overseeing the handling of health information, including the management of My Health Records and the use of healthcare identifiers. The OAIC has released helpful guidelines on maintaining compliance with privacy laws in the use of commercially available AI tools.
Organisations exploring the use of AI products should adopt a ‘privacy by design’ approach to identify and mitigate potential privacy risks. Organisations must ensure transparency by updating their privacy policies and notifications to clearly communicate how they use AI systems. This includes:
- Clearly identifying any public-facing AI tools (e.g., chatbots) as AI-driven to users.
- Establishing robust policies and procedures for AI system usage to promote transparency and uphold strong privacy governance.
The OAIC’s guidance is consistent with AHPRA’s: practices should consider adopting a privacy policy covering all uses of AI.
Privacy Risks When Using AI
Organisations must be mindful of how they handle personal information when using AI systems. Privacy obligations apply to:
- Any personal information input into an AI system.
- Output data generated by AI systems, if it contains personal information.
- Inferred, incorrect, or artificially generated data (e.g., hallucinations, deepfakes) that relates to an identified or reasonably identifiable individual, which is treated as personal information.
If personal information is input into an AI system, APP 6 requires organisations to use or disclose the information only for the primary purpose for which it was collected, unless an exception applies, for example where:
- They have the individual’s consent.
- The individual would reasonably expect the secondary use, and it is directly related to the primary purpose of collection.
Secondary use may be considered reasonably expected if it was explicitly outlined in a notice at the time of collection or in the organisation’s privacy policy. Updates to APP 5 notices, privacy policies, or other communications after collection may also influence this assessment. However, given the privacy risks associated with AI, establishing reasonable expectations for secondary uses can be challenging.
In the health context, all personal information collected will necessarily include ‘sensitive information’. To mitigate regulatory risks, organisations should:
- Seek consent for secondary AI-related uses where reasonable expectations cannot be clearly established.
- Provide individuals with a meaningful and informed opportunity to opt out.
Best Practices for AI Usage
Avoid entering personal information, particularly sensitive information, into publicly available AI chatbots or generative AI tools due to the significant and complex privacy risks involved.
If AI systems generate or infer personal information, this constitutes a collection of personal information and must comply with APP 3. Organisations must ensure that such generation is:
- Reasonably necessary for their functions or activities.
- Conducted by lawful and fair means.
The use of AI in decisions that may have a legal or similarly significant effect on an individual’s rights is considered a high privacy risk activity. In such cases, organisations should:
- Ensure the accuracy and appropriateness of the AI system for its intended purpose.
- Implement additional safeguards to mitigate risks.
By adhering to these principles, organisations can responsibly leverage AI while minimising privacy risks and ensuring compliance with regulatory obligations.
Conclusion
Therapas provides a template Artificial Intelligence Policy for Psychologists, Counsellors, and other Allied Health professionals that is compliant with the Australian Privacy Principles and integrates best-practice protocols for the use of these tools.