• 8. Contestability

    8.1    Notification of AI affecting rights

    You should notify individuals, groups, communities or businesses when an administrative action materially influenced by an AI system has a legal or similarly significant effect on them. This notification should state that the action was materially influenced by an AI system and include information on available review rights and whether and how the individual can challenge the action. 

    An action produces a legal effect when it affects the legal status or rights of an individual, group, community or business, and includes:

    • provision of benefits granted by legislation
    • contractual rights.

    An action produces a similarly significant effect when it affects the circumstances, behaviours or choices of an individual, group, community or business, and includes:

    • denial of consequential services or support, such as housing, insurance, education enrolment, criminal justice, employment opportunities and health care services
    • provision of basic necessities, such as food and water.

    A decision may be considered to have been materially influenced by an AI system if:

    • the decision was automated by an AI system, with little to no human oversight
    • a component of the decision was automated by an AI system, with little to no human oversight (for example, a computer makes the first 2 limbs of a decision, with the final limb made by a human)
    • the AI system is likely to influence decisions that are made (for example, the output of the AI system recommended a decision to a human for consideration or provided substantive analysis to inform a decision). 

    ‘Administrative action’ is any of the following:

    • making, refusing or failing to make a decision
    • exercising, refusing or failing to exercise a power
    • performing, refusing or failing to perform a function or duty.

    Note: this guidance is designed to supplement, not replace, existing administrative law requirements pertaining to notification of administrative decisions. The Attorney-General’s Department is leading work to develop a consistent legislative framework for automated decision making (ADM), as part of the government’s response to recommendation 17.1 of the Robodebt Royal Commission Report. The Australian Government AI assurance framework will continue to evolve to ensure alignment as this work progresses.

    8.2    Challenging administrative actions influenced by AI

    Individuals, groups, communities or businesses subject to an administrative action materially influenced by an AI system that has a legal or similarly significant effect on them should be provided with an opportunity to challenge this action. This is an important administrative law principle. See guidance on section 8.1 above for assistance interpreting terminology.

    Administrative actions may be subject to both merits review and judicial review. Merits review considers whether a decision made was the correct or preferable one in the circumstances, and includes internal review conducted by the agency and external review processes. Judicial review examines whether a decision was legally correct.

    You should ensure that review rights that ordinarily apply to human-made decisions or actions are not impacted or limited because an AI system has been used.

    Notifications discussed at section 8.1 should include information about available review mechanisms so that people can make informed decisions about disputing administrative actions.

    You will need to ensure a person within your agency is able to answer questions in a court or tribunal about an administrative action taken by an AI system if that matter is ultimately challenged. Review mechanisms also impact on the obligation to provide reasons. For example, the Administrative Decisions (Judicial Review) Act 1977 gives applicants a right to reasons for administrative decisions.

  • 9. Accountability

    9.1    Establishing responsibilities

    Establishing clear roles and responsibilities is essential for ensuring accountability in the development and use of AI systems. In this section, you are asked to identify the individuals responsible for 3 key aspects of your AI system:

    Use of AI insights and decisions

    The person responsible for the application of the AI system’s outputs, including making decisions or taking actions based on those outputs.

    Monitoring the performance of the AI system

    The person responsible for overseeing the ongoing performance and safety of the AI system, including monitoring for errors, biases or unintended consequences. 

    Data governance

    The person responsible for the governance of the data used for operating, training or validating the AI system. 

    Where feasible, it is recommended that these 3 roles not all be held by the same person. The responsible officers should be appropriately senior, skilled and qualified for their respective roles. 

    9.2    Training of AI system operators

    AI system operators play a crucial role in ensuring the responsible and effective use of AI. They must have the necessary skills, knowledge and judgment to understand the system’s capabilities and limitations, how to appropriately use the system, interpret its outputs and make informed decisions based on those outputs.

    In your answer, describe the process for ensuring AI system operators are adequately trained and skilled. This may include:

    Initial training

    What training do operators receive before being allowed to use the AI system? Does this training cover technical aspects of the system, as well as ethical and legal considerations?

    Ongoing training

    Is there a process for continuous learning and skill development? How are operators kept up to date with changes or updates to the AI system?

    Evaluation

    Are operators’ skills and knowledge assessed? Are there any certification or qualification requirements?

    Support

    What resources and support are available to operators if they have questions or encounter issues?

    Consider whether this needs to be tailored to the specific needs and risks of your AI system or proposed use case or whether general AI training requirements are sufficient.

  • 10. Human-centred values

    10.1    Incorporating diversity

    Diversity of perspective promotes inclusivity, mitigates biases, supports critical thinking and should be incorporated in all AI system lifecycle stages. 

    AI systems require input from stakeholders from a variety of backgrounds, including different ethnicities, genders, ages, abilities and socio-economic statuses. This also includes people with diverse professional backgrounds, such as ethicists, social scientists and domain experts relevant to the AI application. Determining which stakeholders and user groups to consult, which data to use, and the optimal team composition will depend on your AI system. 

    The following examples demonstrate the often-unintended negative consequences of AI systems that failed to adequately incorporate diversity into relevant lifecycle stages:

    • AI systems ineffective at predicting recidivism outcomes for defendants of colour and underestimating the health needs of patients from marginalised racial and ethnic backgrounds.
    • AI job recruitment systems unfairly affecting employment outcomes.
    • Algorithms used to prioritise patients for high-risk care management programs that were less likely to refer black patients than white patients with the same level of health.
    • An AI system designed to detect cancers that showed bias towards lighter skin tones, stemming from a failure to collect a sufficiently diverse set of skin tone images, potentially delaying life-saving treatments.

    Resources, including approaches, templates and methods to ensure sufficient diversity and inclusion of your AI system, are described in the NAIC’s Implementing Australia’s AI Ethics Principles report.

    10.2    Human rights obligations

    You should consult an appropriate source of legal advice or otherwise ensure that your AI use case and use of data align with human rights obligations. If you have not done so, explain your reasoning.

    It is recommended that you complete this question after you have completed the previous sections of the assessment. This will provide more complete information to enable an assessment of the human rights implications of your AI use case.

    In Australia, it is unlawful to discriminate on the basis of a number of protected attributes including age, disability, race, sex, intersex status, gender identity and sexual orientation in certain areas of public life, including education and employment. Australia's federal anti‑discrimination laws are contained in the following legislation:

    • Age Discrimination Act 2004
    • Disability Discrimination Act 1992
    • Racial Discrimination Act 1975
    • Sex Discrimination Act 1984.

    Human rights are defined in the Human Rights (Parliamentary Scrutiny) Act 2011 as the rights and freedoms contained in the 7 core international human rights treaties to which Australia is a party, namely the: 

    • International Covenant on Civil and Political Rights (ICCPR).
    • International Covenant on Economic, Social and Cultural Rights (ICESCR).
    • International Convention on the Elimination of All Forms of Racial Discrimination (CERD).
    • Convention on the Elimination of All Forms of Discrimination against Women (CEDAW).
    • Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment (CAT).
    • Convention on the Rights of the Child (CRC).
    • Convention on the Rights of Persons with Disabilities (CRPD).

  • 11. Internal review and next steps

    11.1    Legal review of AI use case

    If the threshold assessment in section 3 results in a risk rating of ‘medium’ or ‘high’, your AI use case must undergo legal review to ensure that the use case and associated use of data meet legal requirements.

    The nature of the legal review is context dependent. Without limiting the scope of legal review, examples of potentially applicable legislation, policies and frameworks are outlined at Attachment A of the Policy for the responsible use of AI in government.

    If there are significant changes to the AI use case (including changes introduced due to recommendations from internal or external review), then the advice should be revisited to ensure the AI use case and associated use of data continues to meet legal requirements.

    11.2    Risk summary table

    To complete the risk summary table, list any: 

    • risks assessed in section 3 (the threshold assessment) as ‘medium’ or ‘high’ 
    • instances where you have answered ‘no’ to questions in sections 4 to 10. You are encouraged to identify risk treatments in relation to these; however, you do not need to assign a residual risk rating to those risks
    • additional risks that have been identified throughout the assessment process 
    • risk treatments identified during internal review (section 11.3) and, if applicable, external review (section 11.4) – using the risk matrix in section 3 to assess residual risk.

    11.3    Internal review of AI use case

    This requires an internal agency governance body designated by your agency’s Accountable Authority to review the assessment and the risks outlined in the risk summary table. 

    The governance body may decide to accept any ‘medium’ risks, to recommend risk treatments, or decide not to accept the risk and recommend not proceeding with the AI use case. You should list the recommendations of your agency governance body in the text box provided.

    11.4    External review of AI use case

    If, following internal review (section 11.3), there are any residual risks with a ‘high’ risk rating, your agency should consider whether the AI use case and this assessment would benefit from external review. This external review may recommend further risk treatments or adjustments to the use case.

    In line with the APS Strategic Commissioning Framework, consider whether someone in the APS could conduct this review or whether the nature of the use case and identified risks warrant independent outside review and expertise. 

    Your agency must consider recommendations of an external review, decide which to implement, and whether to accept any residual risk and proceed with the use case. If applicable, you should list any recommendations arising from external review in the text box provided and record the agency's response to these recommendations.

  • Attachment


    Risk consequence rating advice

    Negatively affecting public accessibility or inclusivity of government services

    Insignificant
    • Insignificant compromises to accessibility or inclusivity of services.
    • Minor technical issues causing brief inconvenience but no actual barriers to access or inclusion.
    • Issues rapidly resolved with minimal impact on user experience.
    Minor
    • Limited, reversible compromises to accessibility or inclusivity of services.
    • Some people experience difficulties accessing services due to technical issues or design oversights.
    • Barriers are short-term and addressed once identified, with additional support provided to people affected.
    Moderate
    • Many compromises are made to the accessibility or inclusivity of services.
    • Considerable access challenges for a modest number of users.
    • Resolving access issues requires substantial effort and resources.
    • Certain groups may be disproportionately impacted.
    • Affected users experience frustration and delays in receiving services.
    Major
    • Extensive compromises are made to the accessibility or inclusivity of services, which may include some essential services.
    • Ongoing delays that require external technical assistance to resolve.
    • Widespread inconvenience, frustration, public distress and potential legal implications.
    • Vulnerable user groups disproportionately impacted.
    Severe
    • Widespread irreversible ongoing compromises are made to the accessibility or inclusivity of services, including some essential services.
    • Majority of users, especially vulnerable groups affected.
    • Essential services inaccessible for extended periods, causing significant public distress, legal implications, and a loss of trust in government efficiency.
    • Comprehensive and immediate actions are urgently needed to rectify the situation.

    Unfair discrimination against individuals, communities or groups

    Insignificant
    • Negligible instances of discrimination, with virtually no discernible effect on individuals, communities, or groups.
    • Issues are proactively identified and rapidly addressed before causing harm.
    Minor
    • Limited instances of unfair discrimination occur, affecting a small number of individuals.
    • Relatively isolated cases, and corrective measures minimise their impact.
    Moderate
    • Moderate levels of discrimination leading to noticeable harm to certain individuals, communities, or groups.
    • These incidents raise bias and fairness concerns and require targeted interventions.
    Major
    • Significant discrimination results in major, tangible harm to individuals and multiple communities or groups.
    • Rebuilding trust requires substantial reforms and remediation efforts.
    Severe
    • Pervasive and systemic discrimination causes severe harm across a broad spectrum of the population, particularly marginalised and vulnerable groups.
    • Public outrage, potential legal action, and a profound loss of trust in government.
    • Immediate, sweeping reforms and accountability measures are required.

    Perpetuating stereotyping or demeaning representations of individuals, communities or groups

    Insignificant
    • Inadvertently reinforce mild stereotypes, but these instances are quickly identified and rectified with no lasting harm or public concern.
    Minor
    • Isolated cases of stereotyping affecting a limited number of community members, with some noticing and raising concerns.
    • Prompt action mitigates the issue, preventing broader impact.
    Moderate
    • Moderate stereotyping by AI systems leads to noticeable public discomfort and criticism.
    • Disproportionately affecting certain communities or groups.
    • Requires targeted corrective measures to address and prevent recurrence.
    Major
    • Significant and widespread reinforcement of harmful stereotypes and demeaning representations.
    • Causes public outcry and damages the relationship between communities and government entities.
    • Urgent, comprehensive strategies are needed to rectify these representations and restore trust.
    Severe
    • Pervasive and damaging stereotyping severely harms multiple communities, leading to widespread distress.
    • Potential legal consequences, and a profound breach of trust in government use of technology.
    • Requires immediate, sweeping actions to address the harm, including system overhauls and public apologies.

    Harm to individuals, communities, groups, businesses or the environment

    Insignificant
    • Inconsequential glitches with no real harm to the public, business operations or ecosystems.
    • Easily managed through routine measures.
    Minor
    • Isolated incidents mildly affecting the public.
    • Slight inconveniences or disruptions to businesses, leading to manageable financial costs.
    • Limited manageable environmental disturbances affecting local ecosystems or resource consumption.
    Moderate
    • Noticeable negative effects on the public.
    • Businesses face operational challenges or financial losses, affecting their competitiveness.
    • Obvious environmental degradation, including pollution or habitat disruption, prompting public concern.
    Major
    • Significant public harm causing distress and potentially lasting damage.
    • Significant harm to a wide range of businesses, resulting in substantial financial losses, layoffs, and long-term reputational damage.
    • Compromises ecosystem wellbeing causing substantial pollution, loss of biodiversity, and resource depletion.
    Severe
    • Widespread, profound harm and severe distress affecting broad segments of the public.
    • Profound damage across the business sector, leading to bankruptcies, major job losses, and a lasting negative impact on the economy.
    • Comprehensive environmental destruction, leading to critical loss of biodiversity, irreversible ecosystem damage, and severe resource scarcity.

    Compromising privacy due to the sensitivity, amount or source of the data being used by an AI system

    Insignificant
    • Insignificant data handling errors occur without compromising sensitive information.
    • Incidents are quickly rectified, maintaining public trust in data security.
    Minor
    • Isolated exposure of limited sensitive data affects a small group of individuals.
    • Swift actions taken to secure the data and prevent further incidents.
    Moderate
    • Breach of moderate amounts of sensitive data, leading to privacy concerns among the affected populace.
    • Some individuals experience inconvenience and distress.
    Major
    • Serious misuse of sensitive private data affects a large segment of the population, leading to widespread privacy violations and a loss of public trust.
    • Comprehensive measures are urgently required to secure data and address the privacy breaches.
    Severe
    • Significant potential to expose sensitive information of a vast number of individuals, causing severe harm and identity-theft risks.
    • Use of sensitive personal information in a way that is likely to draw public criticism, with limited ability for individuals to choose how their information is used.
    • Significant potential to harm trust in government-information handling with potential for lasting consequences.

    Raising security concerns due to the sensitivity or classification of the data being used by an AI system

    Insignificant
    • Inconsequential security lapses occur without actual misuse of sensitive data.
    • Quickly identified and corrected with no real harm done.
    • These types of incidents may serve as prompts for reviewing security protocols.
    Minor
    • A limited security breach involves unauthorised access to protected data affecting a small number of records with minimal impact.
    • Immediate actions secure the breach, and affected individuals are notified and supported.
    • Incident is catalyst for review of security protocols.
    Moderate
    • Security incident leads to the compromise of a moderate volume of sensitive data, raising concerns over data protection and privacy.
    • The breach necessitates a thorough investigation and enhanced security measures.
    Major
    • A significant security breach results in extensive unauthorised access to sensitive or protected data, causing considerable concern and distress among the public.
    • Urgent security upgrades and support measures for impacted individuals are implemented to restore security and trust.
    Severe
    • A massive security breach exposes a vast amount of sensitive and protected data, leading to severe implications for national security, public safety, and individual privacy.
    • This incident triggers an emergency response, including legal actions, a major overhaul of security systems, and long-term support for those affected.

    Raising security concerns due to implementation, sourcing or characteristics of the AI system

    Insignificant
    • Inconsequential security concerns arise due to characteristics of the AI system, such as software bugs, which are promptly identified and fixed with no adverse effects on overall security.
    • These issues may serve as lessons, leading to slight improvements in the system's security framework.
    Minor
    • Certain characteristics of the AI system lead to vulnerabilities that are exploited in a limited manner, causing minor security breaches.
    • Immediate remediation measures are taken, and the system is updated to prevent similar issues.
    Moderate
    • A moderate security risk is realised when intrinsic features of the AI system allow for unintended access or data leaks.
    • Incident affects a noticeable but contained component of the AI system.
    • Prompts a comprehensive security review of the AI system and the implementation of more robust safeguards.
    Major
    • Significant security flaws in the AI system's design result in major breaches, compromising a large amount of data and severely affecting system integrity.
    • Incident leads to an urgent overhaul of security measures and protocols, alongside efforts to mitigate the damage.
    Severe
    • Critical security vulnerabilities inherent to the AI system lead to widespread breaches, exposing vast quantities of sensitive data and jeopardising national security or public safety.
    • The incident results in severe consequences, necessitating emergency responses, extensive system redesigns, and long-term efforts to recover from the breach and prevent recurrence.

    Influencing decision-making that affects individuals, communities, groups, businesses or the environment

    Insignificant
    • Decisions lead to negligible errors, swiftly identified and corrected with no harm to the public, business operations or the environment.
    • Incidents may serve as a learning opportunity for system improvement.
    Minor
    • Decisions result in minor inconveniences or errors affecting the public, business operations or finances or slight environmental impacts.
    • All impacts reversible with prompt action.
    Moderate
    • Decisions cause moderate harm to the public, business operations or finances or noticeable environmental degradation.
    • Targeted interventions are required to mitigate these effects.
    Major
    • Significant harm to the public, substantial business financial losses or operational disruptions, or significant environmental damage.
    • Loss of confidence in government, operations, service delivery and partnerships.
    • Significant harm to a wide range of businesses, resulting in substantial financial losses, layoffs, and long-term reputational damage.
    • Compromises ecosystem wellbeing causing substantial pollution, loss of biodiversity, and resource depletion.
    Severe
    • AI's influence on critical decision-making processes leads to severe and widespread harm to public, business operations or finances or the environment.
    • Potentially endangering lives or significantly impacting public safety, rights and trust.
    • Causes massive job losses, undermining business economic stability and viability.
    • Catastrophic loss of ecosystems, endangered species, and long-term ecological imbalance or severe resources depletion.

    Posing a reputational risk or undermining public confidence in the government

    Insignificant
    • Isolated reputational issues arise, quickly addressed and explained.
    • Causes negligible damage to public trust in government capabilities.
    Minor
    • Small-scale AI mishaps lead to brief public concern, slightly denting the government's reputation.
    • Prompt clarification and corrective measures minimise long-term impact on public confidence.
    • Seen by the government as poor management.
    Moderate
    • Misapplications result in moderate public dissatisfaction and questioning of government oversight.
    • Requires remedial actions to mend trust and address concerns.
    • Seen by government and opposition as failed management.
    Major
    • Widespread public scepticism and criticism, substantially affecting the government’s image.
    • Requires substantial efforts to rebuild public confidence through transparency, accountability, and improvement of AI governance.
    • High profile negative stories, seen by government and opposition as significant failed management.
    Severe
    • Severe misuse or failure of AI systems leads to profound public distrust and criticism.
    • Significantly undermining confidence in government effectiveness and integrity.
    • Requires comprehensive, long-term strategies for rehabilitation of public trust, including systemic changes and ongoing engagement.
    • Seen by government and opposition as catastrophic failure of management.
    • Minister expresses loss of confidence or trust in agency.

    Risk likelihood table

    • Almost certain (91% and above): the risk is almost certain to eventuate within the foreseeable future.
    • Likely (61–90%): the risk will probably eventuate within the foreseeable future.
    • Possible (31–60%): the risk may eventuate within the foreseeable future.
    • Unlikely (5–30%): the risk may eventuate at some time but is not likely to occur in the foreseeable future.
    • Rare (less than 5%): the risk will only eventuate in exceptional circumstances or as a result of a combination of unusual events.
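
    Where an agency estimates probability numerically, the bands above can be sketched as a simple mapping. The helper below is illustrative only (the function name and code form are not part of the framework; the table above is authoritative):

```python
def likelihood_band(probability: float) -> str:
    """Map a probability estimate (0.0-1.0) to the likelihood bands in the table.

    Illustrative sketch only; thresholds mirror the risk likelihood table.
    """
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    if probability >= 0.91:
        return "Almost certain"
    if probability >= 0.61:
        return "Likely"
    if probability >= 0.31:
        return "Possible"
    if probability >= 0.05:
        return "Unlikely"
    return "Rare"
```

    For example, an estimated 50% chance falls in the ‘Possible’ band.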
  • 2. Purpose and expected benefits

  • 1. Basic information

    1.1    AI use case profile

    Complete the information below:
    • Name of AI use case.
    • Reference number.
    • Lead agency.
    • Assessment contact officer (name and email).
    • Executive sponsor (name and email).

    1.2    AI use case description

    In plain language, briefly explain how you are using or intend to use AI. 200 words or less.

    1.3    Type of AI technology

    Briefly explain what type of AI technology you are using or intend to use. 100 words or less.

    1.4    Lifecycle stage

    These stages can take place in an iterative manner and are not necessarily sequential. They are adapted from the OECD’s definition of the AI system lifecycle. Refer to guidance for further information. Select only one.

    Which of the following lifecycle stages best describes the current stage of your AI use case?

    • Early experimentation (note: assessment not required).
    • Design, data and models
    • Verification and validation
    • Deployment
    • Operation and monitoring
    • Retirement

    1.5    Review date

    Assessments must be reviewed when a use case moves to a different stage of its lifecycle or when significant changes occur to its scope, function or operational context. Refer to the guidance and, if in doubt, consult the DTA.

    Indicate the date or milestone that will trigger the next review of the AI use case.

    1.6    Assessment review history

    Record the review history for this assessment. Include the review dates and brief summaries of changes arising from reviews (50 words or less).

  • 3. Threshold assessment

    3.1    Risk assessment

    Using the risk matrix, determine the severity of each of the risks in the table below, accounting for any risk mitigations and treatments. Provide a rationale and an explanation of relevant risk controls that are planned or in place. The guidance document contains consequence and likelihood descriptors and other information to support the risk assessment. 

    The risk assessment should reflect the intended scope, function and risk controls of the AI use case. Keep the rationale for each risk rating clear and concise, aiming for no more than 200 words per risk. 

    Risk matrix
    Likelihood / Consequence: Insignificant | Minor | Moderate | Major | Severe
    • Almost certain: Medium | Medium | High | High | High
    • Likely: Medium | Medium | Medium | High | High
    • Possible: Low | Medium | Medium | High | High
    • Unlikely: Low | Low | Medium | Medium | High
    • Rare: Low | Low | Low | Medium | Medium
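
    For teams recording many risks, the matrix can be encoded as a lookup. This is an illustrative sketch only; the matrix in this guidance, not this code, is the authoritative source, and the names used are taken from the tables above:

```python
# Illustrative encoding of the risk matrix as a lookup table.
LIKELIHOODS = ["Rare", "Unlikely", "Possible", "Likely", "Almost certain"]
CONSEQUENCES = ["Insignificant", "Minor", "Moderate", "Major", "Severe"]

# Each row is ordered to match CONSEQUENCES.
MATRIX = {
    "Almost certain": ["Medium", "Medium", "High", "High", "High"],
    "Likely":         ["Medium", "Medium", "Medium", "High", "High"],
    "Possible":       ["Low", "Medium", "Medium", "High", "High"],
    "Unlikely":       ["Low", "Low", "Medium", "Medium", "High"],
    "Rare":           ["Low", "Low", "Low", "Medium", "Medium"],
}

def risk_rating(likelihood: str, consequence: str) -> str:
    """Return the low/medium/high rating for a likelihood-consequence pair."""
    return MATRIX[likelihood][CONSEQUENCES.index(consequence)]
```

    For instance, a ‘Possible’ likelihood with a ‘Major’ consequence yields a ‘High’ rating.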

    What is the risk (low, medium or high) of the use of AI:

    • Negatively affecting public accessibility or inclusivity of government services?
    • Unfairly discriminating against individuals, communities or groups?
    • Perpetuating stereotyping or demeaning representations of individuals, communities or groups?
    • Harming individuals, communities, groups, organisations or the environment?
    • Raising privacy concerns due to the sensitivity, amount or source of the data being used by an AI system?
    • Raising security concerns due to the sensitivity or classification of the data being used by an AI system?
    • Raising security concerns due to the implementation, sourcing or characteristics of the AI system?
    • Influencing decision-making that affects individuals, communities, groups, organisations or the environment?
    • Posing a reputational risk or undermining public confidence in the government?

    3.2    Assessment contact officer recommendation

    If the assessment contact officer is satisfied that all risks in the threshold assessment are low, then they may recommend that a full assessment is not needed and that the agency accept the low risk.

    If one or more risks are medium or above, then a full assessment must be completed, unless you amend the AI use scope, function or risk controls such that the assessment contact officer is satisfied that all risks in the threshold assessment are low. 

    You may decide not to accept the risk and not proceed with the AI use case. 

    The assessment contact officer recommendation should include:

    • the statement ‘a full assessment is/is not necessary for this use case’
    • comments (optional)
    • name and position
    • date.

    3.3    Executive sponsor endorsement

    The executive sponsor endorsement should include:

    • the statement ‘I have reviewed the recommendation, am satisfied by the supporting analysis and agree that a full assessment is/is not necessary for this use case’
    • comments (optional)
    • name and position
    • date.
  • 4.  Fairness

  • Under Australia’s AI Ethics Principles, AI systems should, throughout their lifecycle, be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.

  • For each of the following questions, indicate either yes, no or N/A, and explain your answer.

    4.1    Defining fairness

    Do you have a clear definition of what constitutes a fair outcome in the context of your use of AI?

    Where appropriate, you should consult relevant domain experts, affected parties and stakeholders to determine how to contextualise fairness for your use of AI. Consider inclusion and accessibility. Consult the guidance document for prompts and resources to assist you.

    4.2    Measuring fairness

    Do you have a way of measuring (quantitatively or qualitatively) the fairness of system outcomes?

    Measuring fairness is an important step in identifying and mitigating fairness risks. A wide range of metrics are available to address various concepts of fairness. Consult the guidance document for resources to assist you.
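As a purely illustrative sketch of one such quantitative metric, the snippet below computes the demographic parity difference — the largest gap in favourable-outcome rates between groups. The group names and outcome data are hypothetical; the appropriate metric for your use case should be chosen in consultation with domain experts and the guidance document.

```python
# Sketch: demographic parity difference for a binary outcome,
# illustrating one quantitative fairness metric. All data are hypothetical.

def selection_rate(outcomes):
    """Proportion of favourable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(outcomes_by_group):
    """Largest gap in selection rate between any two groups.
    0.0 means identical rates; larger values indicate greater disparity."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical outcomes (1 = favourable decision) for two groups
outcomes = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # selection rate 0.625
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # selection rate 0.375
}
print(demographic_parity_difference(outcomes))  # 0.25
```

Other metrics (for example, equalised odds or qualitative stakeholder feedback) may be more appropriate depending on the concept of fairness you defined at 4.1.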

  • 5.  Reliability and safety

  • Under Australia's AI Ethics Principles, AI systems should throughout their lifecycle reliably operate in accordance with their intended purpose.

  • For each of the following questions, indicate either yes, no or N/A, and explain your answer.

    5.1    Data suitability

    If your AI system requires the input of data to operate, or you are training or evaluating an AI model, can you explain why the chosen data is suitable for your use case?

    Consider data quality and factors such as accuracy, timeliness, completeness, consistency, lineage, provenance and volume.

    5.2    Indigenous data 

    If your AI system uses Indigenous data, including where any outputs relate to Indigenous people, have you ensured that your AI use case is consistent with the Framework for Governance of Indigenous Data?

    Consider whether your use of Indigenous data and AI outputs is consistent with the expectations of Indigenous people and with the Framework for Governance of Indigenous Data (GID). See the definition of Indigenous data in the guidance material.

    5.3    Suitability of procured AI model

    If you are procuring an AI model, can you explain its suitability for your use case?  

    Your use case may involve multiple models or a class of models, including open-source models, application programming interfaces (APIs) or otherwise sourced or adapted models. Factors to consider are outlined in the guidance.

    5.4    Testing

    Outline any areas of concern in results from testing. If testing is yet to occur, outline the elements to be considered in the testing plan (for example, the model’s accuracy).
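As an illustrative sketch only, the snippet below shows the kind of accuracy check a testing plan might specify: comparing model predictions against a labelled evaluation set and flagging results below an agreed threshold. The labels, predictions and threshold are hypothetical.

```python
# Sketch: measuring a model's accuracy against a labelled evaluation set,
# one element a testing plan might specify. Data are hypothetical.

def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    assert len(predictions) == len(labels)
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

labels      = ["approve", "refuse", "approve", "approve", "refuse"]
predictions = ["approve", "refuse", "refuse", "approve", "refuse"]

acc = accuracy(predictions, labels)  # 0.8 (4 of 5 correct)
# Record as an area of concern if below an agreed threshold, e.g. 0.95
print(f"accuracy = {acc:.2f}")
```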

    5.5    Pilot

    Have you conducted, or will you conduct, a pilot of your use case before deploying?

    If answering ‘yes’, explain what you have learned or hope to learn in relation to reliability and safety and, if applicable, outline how you adjusted the use of AI. 

    5.6    Monitoring

    Have you established a plan to monitor and evaluate the performance of your AI system?

    If answering ‘yes’, explain how you will monitor and evaluate performance. 
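One way such a monitoring plan can operate is sketched below: live performance is compared against the baseline measured at deployment, and an alert is raised when degradation exceeds a tolerance. The baseline, tolerance and alert wording are hypothetical placeholders, not prescribed values.

```python
# Sketch: an ongoing monitoring check that compares live accuracy against
# the baseline measured before deployment and raises an alert when
# performance degrades beyond a tolerance. Values are hypothetical.

BASELINE_ACCURACY = 0.92   # measured during pre-deployment testing
TOLERANCE = 0.05           # acceptable drop before escalation

def check_performance(live_accuracy):
    """Return an alert message if live performance has degraded, else None."""
    if BASELINE_ACCURACY - live_accuracy > TOLERANCE:
        return (f"ALERT: accuracy {live_accuracy:.2f} is more than "
                f"{TOLERANCE:.2f} below baseline {BASELINE_ACCURACY:.2f}")
    return None

print(check_performance(0.90))  # None: within tolerance
print(check_performance(0.80))  # returns an alert message
```

In practice such a check would feed into the intervention and disengagement processes described at 5.7.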

    5.7    Preparedness to intervene or disengage

    Have you established clear processes for human intervention or safely disengaging the AI system where necessary (for example, if stakeholders raise valid concerns with insights or decisions or an unresolvable issue is identified)?  

    See guidance document for resources to assist you in establishing appropriate processes.

  • 6.  Privacy protection and security

  • Under Australia's AI Ethics Principles, AI systems should throughout their lifecycle respect and uphold privacy rights and data protection, and ensure data security.

  • For each of the following questions, indicate either yes, no or N/A, and explain your answer.

    6.1    Minimise and protect personal information

    Are you satisfied that any collection, use or disclosure of personal information is necessary, reasonable and proportionate for your AI use case?  

    See guidance on data minimisation and privacy enhancing technologies.

    6.2    Privacy assessment

    Has the AI use case undergone a Privacy Threshold Assessment or Privacy Impact Assessment?

    6.3    Authority to operate

    Has the AI system been authorised or does it fall within an existing authority to operate in your environment, in accordance with Protective Security Policy Framework (PSPF) Policy 11: Robust ICT systems?

    Engage with your agency’s IT Security Adviser and consider the latest security guidance and strategies for AI use (such as Engaging with AI from the Australian Signals Directorate).

  • 7. Transparency and explainability

  • Under Australia's AI Ethics Principles, there should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI and can find out when an AI system is engaging with them.

  • For each of the following questions, indicate either yes, no or N/A, and explain your answer.

    7.1    Consultation

    Have you consulted stakeholders representing all relevant communities or groups that may be significantly affected throughout the lifecycle of the AI use case?

    Refer to the list of stakeholders identified in section 2. Seek out community representatives with the appropriate skills, knowledge or experience to engage with AI ethics issues. Consult the guidance document for prompts and resources to assist you.

    7.2    Public visibility

    Will appropriate information (such as the scope and goals) about the use of AI be made publicly available?

    See guidance document for advice on appropriate transparency mechanisms, information to include and factors to consider in deciding to publish or not publish AI use information.

    7.3    Maintain appropriate documentation and records

    Have you ensured that appropriate documentation and records will be maintained throughout the lifecycle of the AI use case?

    Ensure you comply with requirements for maintaining reliable records of decisions, testing and the information and data assets used in an AI system. This is important to enable internal and external scrutiny, continuity of knowledge and accountability.

    7.4    Disclosing AI interactions and outputs

    Will people directly interacting with the AI system or relying on its outputs be made aware of the interaction or that they are relying on AI generated output? How?

    Consider members of the public or government officials who may interact with the system, or decision makers who may rely on its outputs. 

    7.5    Offer appropriate explanations

    If your AI system will materially influence administrative action or decision making by or about individuals, groups, organisations or communities, will your AI system allow for appropriate explanation of the factors leading to AI generated decisions, recommendations or insights?

  • 8. Contestability

  • Under Australia's AI Ethics Principles, when an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.

  • For each of the following questions, indicate either yes, no or N/A, and explain your answer.

    8.1    Notification of AI affecting rights

    Will individuals, groups, organisations or communities be notified if an administrative action with a legal or similarly significant effect on them was materially influenced by the AI system?

    See guidance document for help interpreting ‘administrative action’, ‘materially influenced’ and ‘legal or similarly significant effect’ as well as recommendations for notification content.

    8.2    Challenging administrative actions influenced by AI

    Is there a timely and accessible process to challenge the administrative actions discussed at 8.1?

    Administrative law is the body of law that regulates government administrative action. Access to review of government administrative action is a key component of access to justice. Consistent with best practice in administrative action, ensure that no person could lose a right, privilege or entitlement without access to a review process or an effective way to challenge an AI generated or informed decision. 
