• Approach and methodology

    A mixed-methods approach was adopted for the evaluation.

    Over 2,000 trial participants from more than 50 agencies contributed to the evaluation. The final report draws on document/data review, consultations and surveys.

    Document/data review

    The evaluation synthesised existing evidence, including:

    • government research papers on Copilot and generative AI
    • the trial issue register
    • 6 agency-led internal evaluations.

    Consultations

    It also involved thematic analysis through:

    • 24 outreach interviews conducted by the DTA
    • 17 focus groups facilitated by Nous Group
    • 8 interviews facilitated by Nous Group.

    Surveys

    Analysis was conducted on data collected from:

    • 1,556 respondents to the pre-use survey
    • 1,159 respondents to the pulse survey
    • 831 respondents to the post-use survey.
  • Appendix

    Methodological limitations

    Evaluation fatigue may have reduced participation in engagement activities.

    Several agencies conducted their own internal evaluations over the course of the trial and did not participate in the Digital Transformation Agency’s overall evaluation.

    Mitigations: where possible, the evaluation has drawn on agency-specific evaluations to complement findings.

    The non-randomised sample of trial participants may not reflect the views of the entire APS.

    Participants self-nominated to be involved in the trial, contributing to a degree of selection bias. The representation of APS job families and classifications in the trial differs from the proportions in the overall APS.

    Mitigations: the over- and under-representation of certain groups has been noted. Statistical significance and standard error were calculated, where applicable, to ensure robustness of results.

    There was an inconsistent roll-out of Copilot across agencies.

    Agencies began the trial at different stages, meaning there was not an equal opportunity to build capability or identify use cases. Agencies also used different versions of Copilot due to frequent product releases.

    Mitigations: the evaluation distinguishes between functionality limitations of Copilot and features that have been disabled by an agency.

    Measuring the impact of Copilot relied on trial participants’ self-assessment of productivity benefits.

    Trial participants were asked to estimate the scale of Copilot’s benefits, which may naturally under- or overestimate its impact.

    Mitigations: where possible, the evaluation has compared productivity findings against other evaluations and external research to verify their validity.

    Statistical significance of outcomes

    The trial of Copilot for Microsoft 365 involved the distribution of 5,765 Copilot licences across 56 participating agencies. Through engagement activities (consultations and surveys), the evaluation gathered experiences and sentiment from over 2,000 trial participants representing more than 45 agencies. Insights were further strengthened by the findings of internal evaluations completed by certain agencies. The sample size was sufficient to ensure 95% confidence intervals of reported proportions (at the overall level) were within a margin of error of 5%.
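
    As a rough check on that margin-of-error claim, the sketch below computes the 95% margin of error for a reported proportion under simple random sampling, using the survey sample sizes from this appendix. The normal-approximation formula and the worst-case proportion of 0.5 are assumptions made for illustration; they are not taken from the evaluation.

    ```python
    import math

    def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
        """95% margin of error for a proportion: z * sqrt(p * (1 - p) / n).

        p = 0.5 is the most conservative (widest) case.
        """
        return z * math.sqrt(p * (1 - p) / n)

    # Survey sample sizes reported in this appendix.
    for name, n in [("pre-use", 1556), ("pulse", 1159), ("post-use", 831)]:
        print(f"{name}: n = {n}, 95% margin of error = {margin_of_error(n):.1%}")

    # Even the smallest survey (post-use, n = 831) gives a margin of error of
    # roughly 3.4%, consistent with the stated margin of error of 5%.
    ```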

    There were 3 questions asked in the post-use survey that were originally included in either the pre-use or pulse survey. These questions were repeated to compare trial participants’ responses before and after the trial and measure the change in sentiment. A t-test was used to determine whether changes were statistically significant at a 5% level of significance.
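
    For illustration only, the sketch below shows how such a before-and-after comparison might be tested at the 5% level. The sentiment scores are invented, and the evaluation does not state which test variant was used; a two-sample (Welch's) t-test is assumed here because the pre-use and post-use surveys were not answered by identical groups of respondents.

    ```python
    from scipy import stats

    # Hypothetical 1-5 sentiment scores; not data from the evaluation.
    pre_use = [3, 4, 3, 2, 4, 3, 3, 4, 2, 3]
    post_use = [4, 4, 5, 3, 4, 4, 5, 3, 4, 4]

    # Welch's two-sample t-test (an assumption; the report does not specify
    # the exact test variant used).
    t_stat, p_value = stats.ttest_ind(pre_use, post_use, equal_var=False)

    # A change in sentiment is treated as statistically significant at the
    # 5% level when p < 0.05.
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}, significant: {p_value < 0.05}")
    ```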

  • A t-test is a statistical method to test whether the difference between 2 groups, such as ‘before’ and ‘after’ samples, is statistically significant.

  • The survey aligned with the APS Job Family Framework, and APS job families and classifications were aggregated in survey analysis to reduce standard error and ensure statistical robustness (a sketch of this effect follows below). Post-use survey responses from the Trades and Labour, and Monitoring and Audit job families were excluded from reporting as their sample sizes were less than 10, but their responses were still included in aggregate findings.

    For APS classifications, APS 3-6 have been aggregated.
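
    A minimal sketch of why that aggregation helps, under a standard simple-random-sampling assumption: the standard error of a reported proportion shrinks as a group's sample size grows, so pooling small job families into the groups in Table A yields tighter estimates. The sample sizes and proportion below are illustrative only.

    ```python
    import math

    def standard_error(p: float, n: int) -> float:
        """Standard error of a sample proportion: sqrt(p * (1 - p) / n)."""
        return math.sqrt(p * (1 - p) / n)

    p = 0.6  # illustrative proportion agreeing with a survey question

    # Two small job families reported separately (illustrative sizes)...
    for n in (12, 18):
        print(f"n = {n}: standard error = {standard_error(p, n):.3f}")

    # ...versus the same responses pooled into one aggregated group.
    print(f"n = {12 + 18}: standard error = {standard_error(p, 12 + 18):.3f}")
    ```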

    Survey participation by APS classification and job family

    Table A: Aggregation of APS job families for survey analysis

    Group | Job families
    Corporate | Accounting and Finance; Administration; Communications and Marketing; Human Resources; Information and Knowledge Management; Legal and Parliamentary
    ICT and Digital Solutions | ICT and Digital Solutions
    Policy and Program Management | Policy; Portfolio, Program and Project Management; Service Delivery
    Technical | Compliance and Regulation; Data and Research; Engineering and Technical; Intelligence; Science and Health

    Table B: Participation in surveys according to APS level classification

    Classification | Percentage of all APS employees | Percentage of pre-use survey respondents | Percentage of post-use survey respondents
    SES | 1.9 | 4.7 | 5.3
    EL 2 | 9.0 | 20.0 | 20.2
    EL 1 | 20.8 | 36.9 | 34.0
    APS 6 | 23.4 | 23.4 | 22.3
    APS 5 | 14.7 | 8.5 | 9.6
    APS 3-4 | 26.0 | 6.0 | 7.4
    APS 1-2 | 4.2 | 10.5 | 1.1

    Table C: Participation in surveys according to job family

    Job family | Percentage of all APS employees | Percentage of pre-use survey respondents | Percentage of post-use survey respondents
    Accounting and Finance | 5.1 | 5.3 | 3.5
    Administration | 11.4 | 9.0 | 8.9
    Communication and Marketing | 2.5 | 4.9 | 5.8
    Compliance and Regulation | 10.3 | 6.6 | 6.5
    Data and Research | 3.7 | 9.9 | 8.3
    Engineering and Technical | 1.8 | 1.3 | 1.5
    Human Resources | 3.9 | 5.3 | 5.0
    ICT and Digital Solutions | 5.0 | 19.6 | 22.3
    Information and Knowledge Management | 1.1 | 2.5 | 1.6
    Intelligence | 2.4 | 0.9 | 2.1
    Legal and Parliamentary | 2.6 | 4.1 | 3.5
    Monitoring and Audit | 1.5 | 1.1 | 1.0
    Policy | 7.9 | 13.7 | 14.4
    Portfolio, Program and Project Management | 8.3 | 8.6 | 7.5
    Science and Health | 4.2 | 1.6 | 2.1
    Senior Executive | 2.1 | 2.3 | 1.5
    Service Delivery | 25.5 | 2.7 | 4.0
    Trades and Labour | 0.7 | 0.9 | -

    Participating agencies

    Table D: List of participating agencies by portfolio

    Agriculture, Fisheries and Forestry
    • Department of Agriculture, Fisheries and Forestry
    • Grains Research and Development Corporation
    • Regional Investment Corporation
    • Rural Industries Research and Development Corporation (trading as AgriFutures Australia)

    Attorney-General’s
    • Australian Criminal Intelligence Commission
    • Australian Federal Police
    • Australian Financial Security Authority
    • Office of the Commonwealth Ombudsman

    Climate Change, Energy, the Environment and Water
    • Australian Institute of Marine Science
    • Australian Renewable Energy Agency
    • Department of Climate Change, Energy, the Environment and Water
    • Bureau of Meteorology

    Education
    • Australian Research Council
    • Department of Education
    • Tertiary Education Quality and Standards Agency

    Employment and Workplace Relations
    • Comcare
    • Department of Employment and Workplace Relations
    • Fair Work Commission

    Finance
    • Commonwealth Superannuation Corporation
    • Department of Finance
    • Digital Transformation Agency

    Foreign Affairs and Trade
    • Australian Centre for International Agricultural Research
    • Australian Trade and Investment Commission
    • Department of Foreign Affairs and Trade
    • Tourism Australia

    Health and Aged Care
    • Australian Digital Health Agency
    • Australian Institute of Health and Welfare
    • Department of Health and Aged Care

    Home Affairs
    • Department of Home Affairs (Immigration and Border Protection)

    Industry, Science and Resources
    • Australian Building Codes Board
    • Australian Nuclear Science and Technology Organisation
    • Commonwealth Scientific and Industrial Research Organisation
    • Department of Industry, Science and Resources
    • Geoscience Australia
    • IP Australia

    Infrastructure, Transport, Regional Development, Communications and the Arts
    • Australian Transport Safety Bureau

    Parliamentary Departments (not a portfolio)
    • Department of Parliamentary Services

    Social Services
    • Australian Institute of Family Studies
    • National Disability Insurance Agency

    Treasury
    • Australian Prudential Regulation Authority
    • Australian Securities and Investments Commission
    • Australian Charities and Not-for-profits Commission
    • Australian Taxation Office
    • Department of the Treasury
    • Productivity Commission

  • Supporting the policy for responsible use of AI in government

  • Guidance for staff training on AI

    Version 1.1

    Use the following information to support your agency’s implementation of the policy for responsible use of AI in government.

    The policy recognises that AI is used in many areas of the APS and everyday life. As adoption grows, staff at all levels will interact with AI and its outputs, directly or indirectly.

    The policy strongly recommends that agencies implement AI fundamentals training for all staff, regardless of their role. Agencies should also consider if it is appropriate for staff to complete annual refresher training. 

    Providing foundational training will help staff become confident using AI in a responsible way. 

    What’s covered in the training

    The Digital Transformation Agency (DTA) has developed AI fundamentals training for agency use. The training will be updated periodically to address changes in the AI landscape, such as evolutions in the technology and its use in government.

    The training provides knowledge on AI that includes:

    • an introduction to AI
    • an explanation of generative AI 
    • foundations of safe and responsible use.

    As staff will likely interact with generative AI now and in the future, it is an area of focus for the training.

    The training is designed for all staff regardless of their experience using AI. It takes approximately 20 to 30 minutes to complete.

    It does not cover advanced topics such as model training, system development or instructions for specific technologies or platforms.

    Implement the training

    Agencies’ Learning and Development specialists can access the training for download through the APS Learning Bank. The training is provided in a format compatible with most e-learning platforms. Alternatively, agencies can choose to let their staff access the module directly through APSLearn.

    Agencies can use the training module as provided, or choose to modify it or incorporate it into an existing training program based on their specific context and requirements. 

    Agencies are encouraged to consider additional training for staff based on their roles and responsibilities, such as those responsible for the procurement, development, training and deployment of AI systems.

    Report completion rates

    Where an agency implements the training, accountable officials should monitor completion rates and provide this information if requested by the DTA. This is in line with the activities to measure the implementation of the policy under the Standard for accountable officials.

    Questions about implementation

    Agencies can contact the DTA with questions about implementing the training by emailing ai@dta.gov.au.
