• Policy and standards supporting the Australian Government's use of artificial intelligence.

  • This policy aims to ensure that government plays a leadership role in embracing AI for the benefit of Australians while ensuring its safe, ethical and responsible use, in line with community expectations.

  • Embrace the benefits

    This policy provides a unified approach for government to engage with AI confidently, safely and responsibly, and realise its benefits.

    The adoption of AI technology and capability varies across the APS. This policy is designed to unify government’s approach by providing baseline requirements on governance, assurance and transparency of AI.

    This will remove barriers to government adoption by giving agencies confidence in their approach to AI and incentivising safe and responsible use for public benefit. 

  • Strengthen public trust

    This policy aims to strengthen public trust in government’s use of AI by providing enhanced transparency, governance and risk assurance.

    One of the biggest challenges to the successful adoption of AI is a lack of public trust in government’s adoption and use of it, which acts as a handbrake on adoption. The public is concerned about how their data is used, about a lack of transparency and accountability in how AI is deployed, and about the way decision-making assisted by these technologies affects them.

    This policy addresses these concerns through mandatory and optional measures for agencies, such as monitoring and evaluating performance, being more transparent about AI use and adopting standardised governance.

  • Adapt over time

    This policy aims to embed a forward-leaning, adaptive approach for government’s use of AI that is designed to evolve and develop over time.

    AI is a rapidly changing technology and the scale and nature of change is uncertain. This policy has been designed to ensure a flexible approach to the rapidly changing nature of AI and requires agencies to pivot and adapt to changes in the technological and policy environment.

  • Implementation

    Application

    This policy took effect on 1 September 2024.

    Consistent with other whole-of-government digital policies, all non-corporate Commonwealth entities (NCEs), as defined by the Public Governance, Performance and Accountability Act 2013, must apply this policy.

    Corporate Commonwealth entities are also encouraged to apply this policy.


    National security carveouts

    This policy does not apply to the use of AI in the defence portfolio.

    This policy does not apply to the ‘national intelligence community’ (NIC) as defined by Section 4 of the Office of National Intelligence Act 2018.

    The NIC includes:

    • Office of National Intelligence (ONI)
    • Australian Signals Directorate (ASD)
    • Australian Security Intelligence Organisation (ASIO)
    • Australian Secret Intelligence Service (ASIS)
    • Australian Geospatial-Intelligence Organisation (AGO)
    • Defence Intelligence Organisation (DIO)
    • Australian Criminal Intelligence Commission (ACIC)
    • the intelligence role and functions of the Australian Transaction Reports and Analysis Centre (AUSTRAC), Australian Federal Police (AFP), the Department of Home Affairs and the Department of Defence.

    Defence and members of the NIC may voluntarily adopt elements of this policy where they are able to do so without compromising national security capabilities or interests.

  • Existing frameworks

    The challenges raised by government use of AI are complex and inherently linked with other issues, such as:

    • the APS Code of Conduct
    • data governance
    • cyber security
    • privacy
    • ethics practices.

    This policy has been designed to complement and strengthen – not duplicate – existing frameworks, legislation and practices that touch upon government’s use of AI.

    This policy must be read and applied alongside existing frameworks and laws to ensure agencies meet all their obligations.

  • Artificial intelligence definition

    While there are various definitions of what constitutes AI, for the purposes of this policy agencies should apply the definition provided by the Organisation for Economic Co-operation and Development (OECD):

    An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

    Agencies may refer to further explanatory material on the OECD website.

    Given the rapidly changing nature of AI, agencies should keep up to date on any updates or changes to this definition. The definition in this policy will be reviewed as the broader, whole-of-economy regulatory environment matures to ensure an aligned approach.

  • Enable and prepare

    Principles

    • Safely engage with AI to enhance productivity, decision-making, policy outcomes and government service delivery for the benefit of Australians.
    • Be able to explain, justify and take ownership of advice and decisions made using AI.
    • Have clear accountabilities for the adoption of AI and understand its use.
    • Build AI capability for the long term.
  • Mandatory requirements

    Accountable officials

    Agencies must designate accountability for implementing this policy to accountable official(s) within 90 days of this policy taking effect.

    The responsibilities may be vested in an individual or in the chair of a body. The responsibilities may also be split across officials or existing roles (such as Chief Information Officer, Chief Technology Officer or Chief Data Officer) to suit agency preferences.

    The responsibilities of the accountable officials are to:

    • be accountable for implementation of this policy within their agencies
    • notify the Digital Transformation Agency (DTA), by emailing ai@dta.gov.au, where the agency has identified a new high-risk use case. This information will be used by the DTA to build visibility and inform the development of further risk mitigation approaches. Agencies may wish to use the risk matrix to determine risk ratings.
    • be a contact point for whole-of-government AI coordination
    • engage in whole-of-government AI forums and processes
    • keep up to date with changing requirements as they evolve over time. 

    Agencies must notify the DTA at ai@dta.gov.au when they designate, or make any changes to, their accountable official(s).
