Currently, the adoption of AI technology varies across the Australian Public Service (APS). The policy aims to standardise the government’s approach by setting baseline requirements on governance, assurance, and transparency. This will reduce obstacles to government adoption by boosting agencies’ confidence in their AI strategies and promoting safe and beneficial uses for the public good.
One significant barrier to successful AI adoption is public scepticism about how the government uses AI, including concerns about data use, transparency, accountability, and the impact of AI-assisted decision-making on individuals. The policy addresses these issues through mandatory and recommended measures, such as performance monitoring, greater transparency about AI use, and standardised governance.
What are some of the Policy Principles?
- Safely engage with AI. Improve productivity, decision-making, policy outcomes, and service delivery for Australians.
- Explain, justify, and own decisions. APS officers must explain, justify, and take ownership of AI-assisted advice and decisions.
- Clear accountabilities. Establish clear accountabilities for AI adoption and understand its use.
- Build AI capability for the long term. Develop and maintain AI capabilities within the APS for sustained benefits.
What are the mandatory requirements and recommended actions?
Government agencies must appoint accountable officials within 90 days of the policy's implementation. Responsibilities may be assigned to an individual or the chair of a body, and can be divided among officials or mapped to existing roles such as Chief Information Officer or Chief Data Officer, according to agency preference.
Accountable officials are responsible for:
- Implementing this policy within their agencies.
- Informing the Digital Transformation Agency (DTA) about new high-risk AI use cases.
- Serving as contact points for whole-of-government AI coordination.
- Participating in government-wide AI forums and processes.
- Staying updated with evolving requirements.
Private organisations and government agencies are strongly recommended to provide:
- AI fundamentals training for all staff within 6 months of policy implementation, aligned with policy guidance.
- Additional role-specific training for staff involved in AI procurement, development, training, and deployment.
Private organisations and government agencies should:
- Identify where and how AI is used within the agency and develop an internal register.
- Integrate AI considerations into existing frameworks such as privacy, security, record-keeping, cyber, and data management.
By prioritising these ethical principles and mandatory training requirements, Australia aims to establish a responsible AI ecosystem that benefits society and mitigates risks. These measures help ensure that AI technologies are developed and used ethically, transparently, and accountably.
Why should embedding the ethical principles of AI be your top priority?
The ethical principles of AI aim to ensure that these systems are human-centred, fair, reliable, safe, and transparent, while protecting the privacy of the people they serve.
These principles are your compass. Whilst there’s been a lot of noise and disruption in the AI landscape, the principles of ethical and responsible use of AI haven’t changed. These should be your guide to making AI decisions at all levels and will support behaviours aligned to any internal policies and other approaches based on the mandatory obligations.
References for this article
- Australia’s AI Ethics Principles | Australia’s Artificial Intelligence Ethics Framework | Department of Industry, Science and Resources
- National Framework for Artificial Intelligence in Government | Senator the Hon Katy Gallagher, Minister for Finance, Minister for Women, Minister for the Public Service, Senator for the ACT | Finance Ministers
- Artificial intelligence technologies could be classified by risk, as government consults on AI regulation – ABC News