• Under Australia's AI Ethics Principles, there should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI and can find out when an AI system is engaging with them.

  • For each of the following questions, indicate either yes, no or N/A, and explain your answer.

    7.1    Consultation

    Have you consulted stakeholders representing all relevant communities or groups that may be significantly affected throughout the lifecycle of the AI use case?

    Refer to the list of stakeholders identified in section 2. Seek out community representatives with the appropriate skills, knowledge or experience to engage with AI ethics issues. Consult the guidance document for prompts and resources to assist you.

    7.2    Public visibility

    Will appropriate information (such as the scope and goals) about the use of AI be made publicly available?

    See guidance document for advice on appropriate transparency mechanisms, information to include and factors to consider in deciding to publish or not publish AI use information.

    7.3    Maintain appropriate documentation and records

    Have you ensured that appropriate documentation and records will be maintained throughout the lifecycle of the AI use case?

    Ensure you comply with requirements for maintaining reliable records of decisions, testing and the information and data assets used in an AI system. This is important to enable internal and external scrutiny, continuity of knowledge and accountability.

    7.4    Disclosing AI interactions and outputs

    Will people directly interacting with the AI system or relying on its outputs be made aware of the interaction or that they are relying on AI generated output? How?

    Consider members of the public or government officials who may interact with the system, or decision makers who may rely on its outputs.

    7.5    Offer appropriate explanations

    If your AI system will materially influence administrative action or decision making by or about individuals, groups, organisations or communities, will your AI system allow for appropriate explanation of the factors leading to AI generated decisions, recommendations or insights?

  • 8. Contestability

  • Under Australia's AI Ethics Principles, when an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.

  • For each of the following questions, indicate either yes, no or N/A, and explain your answer.

    8.1    Notification of AI affecting rights

    Will individuals, groups, organisations or communities be notified if an administrative action with a legal or similarly significant effect on them was materially influenced by the AI system?

    See guidance document for help interpreting ‘administrative action’, ‘materially influenced’ and ‘legal or similarly significant effect’ as well as recommendations for notification content.

    8.2    Challenging administrative actions influenced by AI

    Is there a timely and accessible process to challenge the administrative actions discussed at 8.1?

    Administrative law is the body of law that regulates government administrative action. Access to review of government administrative action is a key component of access to justice. Consistent with best practice in administrative action, ensure that no person could lose a right, privilege or entitlement without access to a review process or an effective way to challenge an AI generated or informed decision. 

  • 9. Accountability

  • Under Australia's AI Ethics Principles, those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled. 

  • 9.1    Establishing responsibilities 

    Identify who will be responsible for:

    • use of AI insights and decisions
    • monitoring the performance of the AI system
    • data governance.

    Where feasible, it is recommended that the same person does not hold all 3 of these roles. The responsible officers should be appropriately senior, skilled, and qualified. 

    9.2    Training of AI system operators

    For question 9.2, indicate either yes, no or N/A, and explain your answer.

    Is there a process in place to ensure operators of the AI system are sufficiently skilled and trained?

    With all automated systems, there is always the risk of overreliance on results. It is important that the operators of the system, including any person who exercises judgment over the use of insights, or responses to alerts, are appropriately trained on the use of the AI system. Training should be sufficient to understand how to appropriately use the AI system, and to monitor and critically evaluate outcomes.

  • 10. Human-centred values

  • Under Australia's AI Ethics Principles, AI systems should throughout their lifecycle respect human rights, diversity and the autonomy of individuals. 

  • For each of the following questions, indicate either yes, no or N/A, and explain your answer.

    10.1    Incorporating diversity

    Are you satisfied that you have incorporated diversity and people with appropriately diverse skills, experience and backgrounds throughout the lifecycle of your AI use case?

    Consider how you have incorporated diversity of perspective throughout the lifecycle of your AI use case – for example, through the choice of data, the composition of development and deployment teams, and the stakeholder and user groups you choose to consult.

    10.2    Human rights obligations

    Have you consulted an appropriate source of legal advice or otherwise ensured that your AI use case and the use of data align with human rights obligations?

    It is recommended you answer this question after completing the previous sections of the assessment. This approach will enable a more considered assessment of the human rights implications of your AI use case.

  • 11. Internal review and next steps

    11.1    Legal review of AI use case

    This section must be completed by a qualified legal adviser. Ensure any supporting legal advice is available for the remaining review steps. Repeat this step if there are significant changes.
    The response to this section should include:

    • the statement ‘I am/am not satisfied that the AI use case and the use of data meet legal requirements’
    • comments (optional)
    • name and position of legal adviser
    • date.

    11.2    Risk summary table

    In the table below, list any risks identified in section 3 (the threshold assessment) or subsequently as having a risk severity of ‘medium’ or ‘high’. Also list any instances where you have answered ‘no’ in any of the questions in sections 4 to 10.

    As you proceed through internal review (section 11.3) and, if applicable, external review (section 11.4), list any agreed risk treatments and assess residual risk using the risk matrix in section 3. 

    Risk summary table

    Risk | Risk treatments | Residual risk
    [Example] | [Example] | [Example]

    11.3    Internal review of AI use case

    An internal agency governance body designated by your agency’s Accountable Authority must review the assessment and the risks outlined in the risk summary table.

    The governance body may decide to accept any ‘medium’ risks, recommend risk treatments, or decline to accept the risk and recommend not proceeding with the AI use case.

    List recommendations of your agency governance body below.

    11.4    External review of AI use case

    If, following internal review (section 11.3), there are any residual risks with a ‘high’ risk rating, consider whether the AI use case and this assessment would benefit from external review. 

    If an external review recommends further risk treatments or adjustments to the use case, your agency must consider these recommendations, decide which to implement, and whether to accept any residual risk and proceed with the use case.

    If applicable, list any recommendations arising from external review below and record the agency response to these recommendations.

    The assessment should answer the following questions about the external review.

    • Has your AI use case been subject to external review? Answer yes, no or not applicable. 
    • Who conducted the external review?
    • What date was an external review last completed?
    • What are the external review recommendations? 
    • For each recommendation, what is the agency response? 

  • The majority of trial participants are positive about Copilot

    Most trial participants are optimistic about Copilot and wish to continue using it.

    Trial participants had high expectations prior to the start of the trial. As shown in Figure 1, the majority of survey respondents (77%) who completed both the pre-use and post-use surveys reported an optimistic opinion of Copilot. This indicates that the initial high expectations held by trial participants have largely been met.

    A chart showing a slight increase in participants' positive sentiment from before to after the trial.
    Figure 1 | Pre-use and post-use survey responses to 'Which of the following best describes your sentiment about using Microsoft 365 Copilot?' by respondents who completed both (n=330)
  • Trial participants, regardless of job family, consistently praised Copilot for automating time-consuming menial tasks such as searching for information, composing emails or summarising long documents. They also acknowledged it was a safer alternative to accessing other forms of AI.

    Positive sentiment was not uniform across all Microsoft products or activities

    There were mixed opinions on the usefulness and performance of Copilot across Microsoft applications. 

    While the majority of pulse survey respondents were positive about Copilot’s functionality in Word and Teams, capabilities in other Microsoft products were viewed less favourably, in particular Excel. As shown in Figure 7, Excel had the largest proportion of negative sentiment, with almost a third of respondents reporting that it did not meet their expectations.

    A graph showing which Microsoft programs with Copilot integration met participants’ expectations. Teams and Word received the most positive responses overall, while Excel received the most negative.
    Figure 7 | Pulse survey responses to 'How little or how much do you agree with the following statement: Copilot has met my expectations', by Microsoft product (n=1,141)
