Approach and methodology
A mixed-methods approach was adopted for the evaluation.
Over 2,000 trial participants from more than 50 agencies contributed to the evaluation. The final report draws on document/data review, consultations and surveys.
Document/data review
The evaluation synthesised existing evidence, including:
- government research papers on Copilot and generative AI
- the trial issue register
- 6 agency-led internal evaluations.
Consultations
It also involved thematic analysis through:
- 24 outreach interviews conducted by the DTA
- 17 focus groups facilitated by Nous Group
- 8 interviews facilitated by Nous Group.
Surveys
Analysis was conducted on data collected from:
- 1,556 respondents to the pre-use survey
- 1,159 respondents to the pulse survey
- 831 respondents to the post-use survey.
Appendix
Methodological limitations
Evaluation fatigue may have reduced participation in engagement activities.
Several agencies conducted their own internal evaluations over the course of the trial and did not participate in the Digital Transformation Agency’s overall evaluation.
Mitigations: where possible, the evaluation has drawn on agency-specific evaluations to complement findings.
The non-randomised sample of trial participants may not reflect the views of the entire APS.
Participants self-nominated to be involved in the trial, contributing to a degree of selection bias. The representation of APS job families and classifications in the trial differs from the proportions in the overall APS.
Mitigations: the over- and under-representation of certain groups has been noted. Statistical significance and standard error were calculated, where applicable, to ensure the robustness of results.
There was an inconsistent rollout of Copilot across agencies.
Agencies began the trial at different stages, meaning there was not an equal opportunity to build capability or identify use cases. Agencies also used different versions of Copilot due to frequent product releases.
Mitigations: the evaluation distinguishes between functionality limitations of Copilot and features that were disabled by an agency.
Measuring the impact of Copilot relied on trial participants’ self-assessment of productivity benefits.
Trial participants were asked to estimate the scale of Copilot’s benefits, which may naturally under- or overestimate its impact.
Mitigations: where possible, the evaluation has compared productivity findings against other evaluations and external research to verify their validity.
Statistical significance of outcomes
The trial of Copilot for Microsoft 365 involved the distribution of 5,765 Copilot licenses across 56 participating agencies. Through engagement activities (consultations and surveys), the evaluation gathered the experiences and sentiment of over 2,000 trial participants representing more than 45 agencies. Insights were further strengthened by the findings of internal evaluations completed by certain agencies. The sample size was sufficient to ensure that 95% confidence intervals of reported proportions (at the overall level) were within a margin of error of 5%.
There were 3 questions in the post-use survey that were originally included in either the pre-use or pulse survey. These questions were repeated to compare trial participants’ responses before and after the trial and to measure the change in sentiment. A t-test was used to determine whether changes were statistically significant at a 5% level of significance.
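The following is an illustrative sketch only, not the evaluation’s analysis code: the respondent-level data are not published here, so the sentiment scores are randomly generated stand-ins, and only the three survey sample sizes come from this appendix. It shows how the 5% margin-of-error claim can be checked for each survey, and how a t-test at the 5% level of significance could compare a repeated question across the pre-use and post-use surveys.

```python
# Illustrative sketch only: apart from the survey sample sizes quoted above,
# all values here are hypothetical stand-ins.
import math

import numpy as np
from scipy import stats

# 1. Margin of error of a reported proportion at 95% confidence,
#    using the worst case p = 0.5.
z = 1.96
for n in (1556, 1159, 831):  # pre-use, pulse and post-use sample sizes
    moe = z * math.sqrt(0.5 * 0.5 / n)
    print(f"n = {n}: margin of error is about {moe:.1%}")  # each is within 5%

# 2. A t-test on a repeated sentiment question (scored 1 to 5), comparing
#    pre-use and post-use respondents at the 5% level of significance.
rng = np.random.default_rng(0)
pre_scores = rng.integers(1, 6, size=1556)   # hypothetical pre-use responses
post_scores = rng.integers(1, 6, size=831)   # hypothetical post-use responses
t_stat, p_value = stats.ttest_ind(pre_scores, post_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}; significant at 5%: {p_value < 0.05}")
```

Whether the report used an independent or a paired t-test is not specified here; the independent (Welch) form is shown because the pre-use and post-use samples differ in size and composition.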
The surveys aligned with the APS Job Family Framework, and APS job families and classifications were aggregated in the survey analysis to reduce standard error and ensure statistical robustness. Post-use survey responses from the Trades and Labour and Monitoring and Audit job families were excluded from family-level reporting because their sample sizes were less than 10, but their responses were still included in aggregate findings.
For APS classifications, APS 3-6 have been aggregated.
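As a minimal sketch of the aggregation and suppression rules described above (the responses table and its job_family column are hypothetical, and the group mapping follows Table A below):

```python
# Minimal sketch, not the evaluation's code: `responses` and its `job_family`
# column are hypothetical; the group mapping follows Table A.
import pandas as pd

JOB_FAMILY_GROUPS = {
    "Accounting and Finance": "Corporate",
    "Administration": "Corporate",
    "Communications and Marketing": "Corporate",
    "Human Resources": "Corporate",
    "Information and Knowledge Management": "Corporate",
    "Legal and Parliamentary": "Corporate",
    "ICT and Digital Solutions": "ICT and Digital Solutions",
    "Policy": "Policy and Program Management",
    "Portfolio, Program and Project Management": "Policy and Program Management",
    "Service Delivery": "Policy and Program Management",
    "Compliance and Regulation": "Technical",
    "Data and Research": "Technical",
    "Engineering and Technical": "Technical",
    "Intelligence": "Technical",
    "Science and Health": "Technical",
}

def summarise_post_use(responses: pd.DataFrame) -> tuple[pd.Series, pd.Series]:
    """Return counts by job family (suppressing families with fewer than 10
    respondents) and counts by aggregated group."""
    family_counts = responses["job_family"].value_counts()
    reportable_families = family_counts[family_counts >= 10]
    # Families not listed in Table A (for example Senior Executive) would need
    # an explicit mapping before grouping; they are left out of this sketch.
    group_counts = responses["job_family"].map(JOB_FAMILY_GROUPS).value_counts()
    return reportable_families, group_counts
```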
Survey participation by APS classification and job family
Table A: Aggregation of APS job families for survey analysis

Group | Job families
Corporate | Accounting and Finance; Administration; Communications and Marketing; Human Resources; Information and Knowledge Management; Legal and Parliamentary
ICT and Digital Solutions | ICT and Digital Solutions
Policy and Program Management | Policy; Portfolio, Program and Project Management; Service Delivery
Technical | Compliance and Regulation; Data and Research; Engineering and Technical; Intelligence; Science and Health
Table B: Participation in surveys according to APS level classification

Classification | Percentage of all APS employees | Percentage of pre-use survey respondents | Percentage of post-use survey respondents
SES | 1.9 | 4.7 | 5.3
EL 2 | 9.0 | 20.0 | 20.2
EL 1 | 20.8 | 36.9 | 34.0
APS 6 | 23.4 | 23.4 | 22.3
APS 5 | 14.7 | 8.5 | 9.6
APS 3-4 | 26.0 | 6.0 | 7.4
APS 1-2 | 4.2 | 10.5 | 1.1

Table C: Participation in surveys according to job family

Job family | Percentage of all APS employees | Percentage of pre-use survey respondents | Percentage of post-use survey respondents
Accounting and Finance | 5.1 | 5.3 | 3.5
Administration | 11.4 | 9.0 | 8.9
Communication and Marketing | 2.5 | 4.9 | 5.8
Compliance and Regulation | 10.3 | 6.6 | 6.5
Data and Research | 3.7 | 9.9 | 8.3
Engineering and Technical | 1.8 | 1.3 | 1.5
Human Resources | 3.9 | 5.3 | 5.0
ICT and Digital Solutions | 5.0 | 19.6 | 22.3
Information and Knowledge Management | 1.1 | 2.5 | 1.6
Intelligence | 2.4 | 0.9 | 2.1
Legal and Parliamentary | 2.6 | 4.1 | 3.5
Monitoring and Audit | 1.5 | 1.1 | 1.0
Policy | 7.9 | 13.7 | 14.4
Portfolio, Program and Project Management | 8.3 | 8.6 | 7.5
Science and Health | 4.2 | 1.6 | 2.1
Senior Executive | 2.1 | 2.3 | 1.5
Service Delivery | 25.5 | 2.7 | 4.0
Trades and Labour | 0.7 | 0.9 | -

Participating agencies
Table D: List of participating agencies by portfolio

Agriculture, Fisheries and Forestry
- Department of Agriculture, Fisheries and Forestry
- Grains Research and Development Corporation
- Regional Investment Corporation
- Rural Industries Research and Development Corporation (trading as AgriFutures Australia)

Attorney-General’s
- Australian Criminal Intelligence Commission
- Australian Federal Police
- Australian Financial Security Authority
- Office of the Commonwealth Ombudsman

Climate Change, Energy, the Environment and Water
- Australian Institute of Marine Science
- Australian Renewable Energy Agency
- Department of Climate Change, Energy, the Environment and Water
- Bureau of Meteorology

Education
- Australian Research Council
- Department of Education
- Tertiary Education Quality and Standards Agency

Employment and Workplace Relations
- Comcare
- Department of Employment and Workplace Relations
- Fair Work Commission

Finance
- Commonwealth Superannuation Corporation
- Department of Finance
- Digital Transformation Agency

Foreign Affairs and Trade
- Australian Centre for International Agricultural Research
- Australian Trade and Investment Commission
- Department of Foreign Affairs and Trade
- Tourism Australia

Health and Aged Care
- Australian Digital Health Agency
- Australian Institute of Health and Welfare
- Department of Health and Aged Care

Home Affairs
- Department of Home Affairs (Immigration and Border Protection)

Industry, Science and Resources
- Australian Building Codes Board
- Australian Nuclear Science and Technology Organisation
- Commonwealth Scientific and Industrial Research Organisation
- Department of Industry, Science and Resources
- Geoscience Australia
- IP Australia

Infrastructure, Transport, Regional Development, Communications and the Arts
- Australian Transport Safety Bureau

Parliamentary Departments (not a portfolio)
- Department of Parliamentary Services

Social Services
- Australian Institute of Family Studies
- National Disability Insurance Agency

Treasury
- Australian Prudential Regulation Authority
- Australian Securities and Investments Commission
- Australian Charities and Not-for-profits Commission
- Australian Taxation Office
- Department of the Treasury
- Productivity Commission
Supporting the policy for responsible use of AI in government
Guidance for staff training on AI
Version 1.1
Use the following information to support your agency’s implementation of the policy for responsible use of AI in government.
The policy recognises that AI is used in many areas of the APS and everyday life. As adoption grows, staff at all levels will interact with AI and its outputs, directly or indirectly.
The policy strongly recommends that agencies implement AI fundamentals training for all staff, regardless of their role. Agencies should also consider whether it is appropriate for staff to complete annual refresher training.
Providing foundational training will help staff become confident using AI in a responsible way.
What’s covered in the training
The Digital Transformation Agency (DTA) has developed AI fundamentals training for agency use. The training will be updated periodically to address changes in the AI landscape, such as evolutions in the technology and its use in government.
The training provides knowledge on AI that includes:
- an introduction to AI
- an explanation of generative AI
- foundations of safe and responsible use.
As staff will likely interact with generative AI now and in the future, it is an area of focus for the training.
The training is designed for all staff regardless of their experience using AI. It takes approximately 20 to 30 minutes to complete.
It does not cover advanced topics such as model training, system development or instructions for specific technologies or platforms.
Implement the training
Agencies’ Learning and Development specialists can download the training through the APS Learning Bank. The training is provided in a format compatible with most e-learning platforms. Alternatively, agencies can choose to let their staff access the module directly through APSLearn.
Agencies can use the training module as provided, or choose to modify it or incorporate it into an existing training program based on their specific context and requirements.
Agencies are encouraged to consider additional training for staff based on their roles and responsibilities, such as those responsible for the procurement, development, training and deployment of AI systems.
Report completion rates
Where an agency implements the training, accountable officials should monitor completion rates and provide this information if requested by the DTA. This is in line with the activities to measure the implementation of the policy under the Standard for accountable officials.
Questions about implementation
Agencies can contact the DTA with questions about implementing the training by emailing ai@dta.gov.au.