Policy for the responsible use of AI in government
Version 1.1
Need to know
This policy took effect on 1 September 2024.
It applies to all non-corporate Commonwealth entities, with some exceptions.
Departments and agencies must meet the mandatory requirements for:
- accountable official(s), by 30 November 2024 (within 90 days of the policy taking effect)
- transparency statements, by 28 February 2025 (within 6 months of the policy taking effect).
Explore the policy
Explore the principles and requirements of the policy under the ‘enable, engage and evolve’ framework.
Policy introduction
The increasing adoption of artificial intelligence (AI) is reshaping the economy, society and government. While the technology is moving fast, the lasting impacts of AI on the activities of government are likely to be transformational.
This policy provides a framework to position the Australian Government as an exemplar under its broader safe and responsible AI agenda.
AI has immense potential to improve social and economic wellbeing. Development and deployment of AI is accelerating, and it already permeates institutions, infrastructure, products and services across the economy and in government.
For government, the benefits of adopting AI include more efficient and accurate agency operations, better data analysis and evidence-based decisions, and improved service delivery for people and business. Many areas of the Australian Public Service (APS) already use AI to improve their work and engagement with the public.
To unlock innovative use of AI, Australia needs a modern and effective regulatory system. Internationally, governments are introducing new regulations to address AI’s distinct risks, focused on preventative, risk-based guardrails that apply across the supply chain and throughout the AI lifecycle.
The Australian Government’s consultations on safe and responsible AI show our current regulatory system is not fit for purpose to respond to the distinct risks that AI poses.
The consultations also found that the public expects government to be an exemplar of safe and responsible adoption and use of AI technologies. Public trust in AI, and in government's use of it, is low, which acts as a handbrake on adoption. Preparedness and maturity in managing AI vary across the APS. AI technologies change at speed and scale, presenting further risks if government does not act quickly to mitigate them.
This means government has an elevated level of responsibility for its use of AI and should be held to a higher standard of ethical behaviour.
The Australian Government’s interim response to the consultations included a commitment to creating a regulatory environment that builds community trust and promotes innovation and adoption. It outlines pathways to ensure the design, development and deployment of AI in legitimate but high-risk settings is safe and can be relied upon, while ensuring AI in low-risk settings can continue largely unimpeded.
This policy is a first step in the journey to position government as an exemplar in its safe and responsible use of AI, in line with the Australian community’s expectations. It sits alongside whole-of-economy measures such as mandatory guardrails and voluntary industry safety measures.
The policy aims to create a coordinated approach to government’s use of AI and has been designed to complement and strengthen – not duplicate – existing frameworks in use by the APS.
In recognition of the speed and scale of change in this area, the policy is designed to evolve over time as the technology changes, leading practices develop, and the broader regulatory environment matures.