Policy for the responsible use of AI in government
Version 1.1
Need to know
This policy took effect on 1 September 2024.
It applies to all non-corporate Commonwealth entities, with some exceptions.
Departments and agencies must meet the mandatory requirements for:
- accountable official(s), by 30 November 2024 (within 90 days of the policy taking effect)
- transparency statements, by 28 February 2025 (within 6 months of the policy taking effect).
Supporting the policy for responsible use of AI in government
Use the following information to support your agency’s implementation of the policy for responsible use of AI in government.
Standard for accountable officials
Version 1.1
How to apply
Choose a suitable accountable official
Agencies may choose accountable officials (AOs) who suit the agency’s context and structure.
The responsibilities may be vested in an individual or in the chair of a body. The responsibilities may also be split across officials or existing roles (such as Chief Information Officer, Chief Technology Officer or Chief Data Officer) to suit agency preferences.
— Enable and prepare, Policy for the responsible use of AI in government
As policy implementation is not solely focused on technology, AOs may also be selected from business or policy areas. AOs should have the authority and influence to effectively drive the policy’s implementation in their agency.
Agencies may choose to share AO responsibilities across multiple leadership positions.
Agencies are to email ai@dta.gov.au with the contact details of all their AOs both on initial selection and when the accountable roles change.
Implementing the policy
AOs are accountable for their agency’s implementation of the policy. Agencies should implement the entire policy as soon as practical, with consideration to the agency’s context, size and function.
The mandatory actions of the policy must be implemented within their specified timelines.
The policy provides a coordinated approach for the use of AI across the Australian Government. It builds public trust by supporting the Australian Public Service (APS) to engage with AI in a responsible way. AOs should assist in delivering its aims by:
- uplifting governance of AI adoption in their agency
- embedding a culture that fairly balances AI risk management and innovation
- enhancing the response and adaptation to AI policy changes in their agency
- facilitating their agency’s involvement in cross-government coordination and collaboration.
AOs should also consider the following activities:
- developing a policy implementation plan
- monitoring and measuring the implementation of each policy requirement
- strongly encouraging implementation of AI fundamentals training for all staff
- strongly encouraging additional training for staff in consideration of their role and responsibilities, such as those responsible for the procurement, development, training and deployment of AI systems
- encouraging the implementation of further actions suggested in the policy
- encouraging the development or alignment of an agency-specific AI policy
- reviewing policy implementation regularly and providing feedback to the DTA.
In line with the Standard for Transparency Statements, AOs should provide the DTA with a link to their agency transparency statement, whenever it is published or updated, by emailing ai@dta.gov.au.
Reporting high-risk use cases
AOs must send information to the DTA, by emailing ai@dta.gov.au, in the event their agency identifies a new high-risk use case. This includes when an existing AI use case is re-assessed as high-risk.
The information should include:
- the type of AI
- intended application
- why the agency arrived at a ‘high-risk’ assessment
- any sensitivities.
This is not intended to prevent agencies from adopting the use case. Instead, it will help government develop risk mitigation approaches and maintain a whole-of-government view of high-risk use cases.
While the risk matrix explains how to approach a risk assessment, it is the responsibility of agencies to determine the risks to assess for.
Acting as the agency’s contact point
At times, the DTA will need to collect information and coordinate activities across government to mature the whole-of-government approach and policy.
AOs are the primary point of contact within their agency. They should facilitate connection to the appropriate internal areas and allow for information collection and agency participation in these activities.
Engaging with forums and processes
AOs must participate in, or nominate a delegate for, whole-of-government forums and processes which support collaboration and coordination on current and emerging AI issues. These forums will be communicated to AOs as they emerge.
Keeping up-to-date with changes
The policy will evolve as technology, leading practices and the broader regulatory environment mature. While the DTA will communicate changes, AOs should keep themselves and stakeholders in their agency up to date on:
- changes to the policy
- impacts of policy requirements.
Questions about policy implementation
AOs can contact the DTA with questions about policy implementation by emailing ai@dta.gov.au.
Related frameworks for AI
While this section lists frameworks that are related to AI, it is not exhaustive. Agencies should consider what existing frameworks apply to them and their specific AI use cases.
Risk assessment for use of AI
Risk matrix
Likelihood \ Consequence   Insignificant   Minor    Moderate   Major    Severe
Almost certain             Medium          Medium   High       High     High
Likely                     Medium          Medium   Medium     High     High
Possible                   Low             Medium   Medium     High     High
Unlikely                   Low             Low      Medium     Medium   High
Rare                       Low             Low      Low        Medium   Medium

Figure 1: Risk matrix for use of AI
Using the risk matrix, determine the severity of each risk. In considering consequence and likelihood, consult relevant stakeholders. The risk assessment should reflect the intended scope, function and risk controls of the AI use case.
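As an illustration only, the Figure 1 matrix can be expressed as a simple lookup table. This sketch is not part of the policy; the function and variable names are hypothetical, and agencies remain responsible for determining which risks to assess.

```python
# Figure 1 risk matrix encoded as a lookup table (illustrative sketch only).
# Column order follows the consequence ratings; row keys are likelihood ratings.
CONSEQUENCES = ["Insignificant", "Minor", "Moderate", "Major", "Severe"]

RISK_MATRIX = {
    "Almost certain": ["Medium", "Medium", "High", "High", "High"],
    "Likely":         ["Medium", "Medium", "Medium", "High", "High"],
    "Possible":       ["Low", "Medium", "Medium", "High", "High"],
    "Unlikely":       ["Low", "Low", "Medium", "Medium", "High"],
    "Rare":           ["Low", "Low", "Low", "Medium", "Medium"],
}

def risk_severity(likelihood: str, consequence: str) -> str:
    """Return the severity rating for a likelihood/consequence pair."""
    return RISK_MATRIX[likelihood][CONSEQUENCES.index(consequence)]

print(risk_severity("Possible", "Major"))  # High
```

A lookup of this kind only mechanises the final step of reading the matrix; the substantive judgement, assigning a likelihood and consequence rating in consultation with stakeholders, sits outside it.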
The following are examples of risks that an agency can consider as part of their assessment.
What is the risk that the use of AI:
- negatively affects public accessibility or inclusivity of government services
- unfairly discriminates against individuals or communities
- perpetuates stereotypes or demeaning representations of individuals or communities
- causes harm to individuals, communities, businesses or the environment
- results in privacy concerns due to the sensitivity of the data being manipulated, parsed or transformed by the system
- results in security concerns due to the sensitivity or classification of the data being manipulated, parsed or transformed by the system
- results in security concerns due to the implementation, sourcing or characteristics of the system
- influences decision-making that affects individuals, communities, businesses or the environment
- poses a reputational risk or undermines public confidence in government
- results in intellectual property concerns due to the system manipulating, transforming or reproducing material for which a third party owns copyright.
Agencies should refer to existing risk management frameworks, such as the Commonwealth Risk Management Policy and internal agency risk management approaches, for guidance in assessing the concepts under the risk matrix at Figure 1.
Procurement
Commonwealth Procurement Rules and other procurement policies.
Privacy
Cyber and protective security
2023-2030 Australian Cyber Security Strategy