Many focus group participants and post-use survey respondents found it difficult to prepare data in Excel for effective prompting, in particular meeting the requirement for data to be structured in tables. In addition, focus group participants noted that Copilot often either could not complete the requested action or would not fully perform the function it was asked to perform (for example, only returning the suggested formula for users to enter rather than performing the analysis automatically).
There were also issues with accessing Copilot functionality in MS Outlook. Copilot features in Outlook were only available through either the newest Outlook desktop application or the web version of Copilot. Focus group participants noted that trial participants were unlikely to have the newest version of Outlook and were therefore unable to access these features. To work around this accessibility barrier, several post-use survey respondents and focus group participants noted they copied content from Outlook into Word or Teams and then prompted Copilot for assistance. The inability to access Copilot in Outlook likely weighed down participant sentiment for this product.
There has been a reduction in positive sentiment across most activities that trial participants had expected Copilot to assist with.
Trial participants who completed both the pre-use and post-use survey recorded a reduction in positive sentiment across most activities Copilot was expected to support. As shown in Figure 8, although the sentiment remains positive for most of these categories, their initial expectations of Copilot were unmet.
Figure 8 | Comparison of combined 'somewhat agree', 'agree' and 'strongly agree' responses to pre-use survey statement 'I believe Copilot will…' and post-use survey statement 'Using Copilot has…', by type of observation (n=330)
Data table for figure 8
Comparison of combined 'somewhat agree', 'agree' and 'strongly agree' responses to pre-use survey statement 'I believe Copilot will…' and post-use survey statement 'Using Copilot has…', by type of observation (n=330).
Sentiment | Pre-trial expectation, agreed | Post-trial experience, agreed
Help me spend less mental effort on tedious or mundane tasks* | 84% | 73%
Be a net positive on my work | 78% | 76%
Improve the speed at which I complete tasks | 75% | 74%
Make valuable suggestions to enhance my work* | 75% | 71%
Allow me to quickly find the information I am looking for* | 72% | 64%
Free up more focus time for important work* | 61% | 55%
Improve the quality of my work | 62% | 63%
Allow me to spend less time in emails* | 43% | 30%
Allow me to reduce task switching | 29% | 28%
Allow me to attend fewer meetings* | 25% | 12%
*Statistically significant change (p-value < 0.05)
It is likely that trial participants' expectations were heightened prior to the commencement of the trial. During consultations, it was noted that the features of Copilot (and generative AI more broadly) were marketed as being able to save participants significant time, thereby heightening expectations. These expectations appear to have been tempered following Copilot's use.
Of note, the most significant reductions in positive sentiment were observed in activities where survey respondents had the lowest expectations. There was a 32% decrease in the positive belief that Copilot would allow participants to 'spend less time in emails' and a 54% reduction in the belief that it would allow them to 'attend fewer meetings'. Even with low expectations, survey respondents did not perceive that Copilot was able to assist with these activities.
Other generative AI tools may be more effective than Copilot at meeting users' bespoke needs
The small proportion of trial participants who used other generative AI products in a work capacity found that those tools met their needs slightly better than Copilot.
Of the post-use survey respondents, 16% reported that they use other generative AI products to support their role. Post-use survey respondents reported using the following tools:
- Versatile LLMs – ChatGPT, Gemini, Claude, Amazon Q, Meta AI
- Development tools – GitHub Copilot, Azure
- Image generators/editors – Midjourney, DALL-E 2, Adobe, Canva
As shown in Table 2, 44% of post-use respondents who used other generative AI tools considered that those tools met their needs better than Copilot did.
Table 2. Post-use survey responses to 'How does Microsoft 365 Copilot compare to other generative AI products you have used?'
Sentiment | Response (%)
Other generative AI products meet my needs significantly more than Copilot | 13%
Other generative AI products meet my needs slightly more than Copilot | 31%
Copilot and other generative AI products meet my needs to the same extent | 32%
Copilot meets my needs slightly more than other generative AI products | 13%
Copilot meets my needs significantly more than other generative AI products | 11%
Copilot offers general functionality, matching most features of other publicly available generative AI products, but it may not offer an equivalent level of sophistication or depth across all these features. Post-use survey respondents noted that other generative AI products were used for discrete use cases where Copilot was considered less advanced, such as writing and reviewing code, producing more complex written documents, generating images for internal presentations and searching research databases.
Despite the positive sentiment, the actual use of Copilot is moderate
A third of post-use survey respondents used Copilot daily.
While survey respondents were generally positive towards Copilot, the majority did not use it daily. As shown in Table 3, only a third of post-use survey respondents used Copilot on a daily basis.
Table 3. Post-use survey responses to 'How frequently did you use Copilot for Microsoft 365 during the trial?' (n=811)
Frequency | Responses (%)
Not at all | 1%
A few times a month | 21%
A few times a week | 46%
A few times a day | 26%
Most of the day | 6%
The moderate frequency of usage was broadly consistent across APS classifications and job families. There was, however, variance across cohorts in the features used. For example, a higher proportion of SES and EL2 staff used the Teams meeting summarisation feature compared with other APS classifications. This could reflect the higher number of meetings these cohorts need to attend, and therefore the greater usage of, and potential benefit from, meeting summaries for these classifications.
Post-use survey respondents who only used Copilot a few times a month highlighted that they stopped using it because they had a poor first experience with the tool, or because it took more time to verify and edit outputs than it would have taken to create them without Copilot. Similarly, other post-use survey respondents remarked that they did not feel confident using the tool and could not find time for training among other work commitments and time pressures.
Focus group participants also remarked that they often forgot Copilot was embedded in Microsoft 365 applications, as it was not readily apparent in the user interface. Consequently, they neglected to use features, for example forgetting to record meetings for transcription and summarisation. The Commonwealth Scientific and Industrial Research Organisation (CSIRO) identified through internal research with CSIRO trial participants that the user interface at times made it difficult to find features (CSIRO 2024:28). Given that one of the arguable advantages of Copilot is its integration with existing Microsoft workflows, its reported lack of visibility amongst users largely diminishes its greatest value-add.
Overall, the usage of Copilot is in its infancy within the APS. Due to a combination of user capability, user interface, perceived benefit of the tool and convenience, Copilot is yet to be ingrained in the daily habits of APS staff.
Copilot was predominantly used to summarise and re-write content
Copilot use is concentrated in MS Teams and Word.
While Copilot provides support in a range of activities, the use of Copilot amongst trial participants was concentrated in a few activities. As shown in Figure 9, over 70% of post-use survey respondents used Copilot in Teams to summarise meetings, and in Word to summarise documents or to re-write content.
Figure 9 | Post-use survey responses, grouped by Microsoft application, to 'How frequently did you use the following features?' of ‘a few times a month’ or more frequent
Data table for figure 9
Post-use survey responses, grouped by Microsoft application, to 'How frequently did you use the following features?' of ‘a few times a month’ or more frequent.
Feature | Used at least once | Microsoft application
General queries | 58% | Copilot chat
Interface with other Microsoft products | 49% | Copilot chat
Document search/retrieval | 46% | Copilot chat
Data analysis | 20% | Excel
Formula assistance | 19% | Excel
Insight generation | 16% | Excel
Summarisation | 3% | Loop
Draft page content | 3% | Loop
Content editing | 2% | Loop
Idea generation | 2% | Loop
Summarisation | 9% | OneNote
Content generation | 6% | OneNote
Task management | 6% | OneNote
Collaboration enhancement | 3% | OneNote
Drafting assistance | 43% | Outlook
Email summarisation | 41% | Outlook
Meeting follow-up | 23% | Outlook
Content creation | 33% | PowerPoint
Design suggestions | 29% | PowerPoint
Summarisation | 26% | PowerPoint
Meeting summaries | 72% | Teams
Real-time answers | 54% | Teams
Task management | 39% | Teams
Content summarisation | 3% | Whiteboard
Content organisation | 2% | Whiteboard
Idea generation | 2% | Whiteboard
Interactive collaboration | 2% | Whiteboard
Summarisation | 71% | Word
Rewrite suggestions | 71% | Word
Tone adjustments | 52% | Word
Formatting assistance | 47% | Word
The main use cases illustrate the current perceived strengths of Copilot's predictive engine: it is adept at natural language processing and synthesis, where it can output human-like text in response to provided information and prompts.
Of note, the use of Copilot within Whiteboard and Loop was particularly low. This is not surprising when compared with usage trends before the trial. Also of note is the relatively lower use of Copilot to create content and ideas compared with its use to summarise information.
There was a positive relationship between the provision of training and capability to use Copilot.
There was no standard approach to training trial participants in how to use Copilot. Participants adopted a combination of methods based on their perceived capability and the resources provided to them. The 4 main training options available to trial participants were:
- accessing Copilot resources on the Internet
- hands-on experimentation with Copilot
- attending agency-facilitated Copilot training
- attending Microsoft-led Copilot training.
Almost half of all post-use survey respondents felt fairly or very confident in their skills and abilities to use Copilot. As depicted in Figure 10, the combined proportion of 'fairly confident' and 'very confident' responses is 16 percentage points higher among respondents who accessed 3 or more forms of training than overall. This indicates a positive correlation between the amount of training participants received and their ability to use Copilot. The importance of training was also highlighted in CSIRO's evaluation, which reported that trial participants needed additional training and/or resources to support advanced features and usage (CSIRO 2024:11).
Data table for figure 10
Post-use survey responses to 'Do you agree with the following statement: I feel confident in my skills and abilities to use Copilot', by amount of training received (n=810).
Amount of training | Not at all confident | Not very confident | Moderately confident | Fairly confident | Very confident
Overall (n=810) | 2% | 13% | 35% | 35% | 14%
One form of training (n=276) | 4% | 23% | 35% | 28% | 10%
Two forms of training (n=282) | 1% | 13% | 40% | 34% | 12%
Three or more forms of training (n=252) | 0% | 4% | 31% | 44% | 21%
The general sentiment among focus group participants who were involved in their agency’s implementation (namely chief technology officers (CTOs)/chief information officers (CIOs) and Copilot Champions) was that training requirements were greater than anticipated.
Complicating this is the diverse digital literacy and maturity of staff. Some agencies were better positioned to manage this than others but the positive relationship between perceived capability and amount of training suggests that a concerted, material and ongoing effort is needed to build confidence. A one-off session is unlikely to have a lasting impact on a user’s skills and abilities.
Trial participants broadly found Copilot training useful but noted areas for improvement.
The majority of post-use survey respondents (76%) who attended either agency or Microsoft-led training found the sessions useful. Anecdotal evidence from focus groups, however, suggested that more could be done to personalise training, particularly the training delivered by Microsoft.
Focus group participants believed that Microsoft training was too focused on the features of Copilot, rather than its applications and use cases. Participants also noted that some Microsoft trainers did not understand the APS context and could not answer targeted questions.
To supplement Microsoft-led sessions, almost all agencies that participated in the evaluation offered some form of training. The quality and exhaustiveness of this training, however, varied according to the time and resource constraints of agencies. Some focus group participants had dedicated resources to lead the training effort, while others were encouraged to learn Copilot through hands-on use.
Training was most effective when tailored to APS and agency context.
One focus group participant found Microsoft's industry-specific advice and prompt library a useful aid to upskilling, while others expressed a desire for cheat sheets with tailored prompts aligned to their roles. Several focus group participants remarked that they gained the most knowledge on impactful use cases through forums their agency created, such as 'lunch and learns', webinars, 'promptathons' or similar.
A more flexible, community-of-practice approach was also seen as an effective training method, as it provided a means to identify and propagate highly relevant use cases for Copilot. In general, there appears to be strong demand for training, even amongst cohorts with a high proportion of individuals already experienced in generative AI. This included a desire for a wide range of training and information sources, supplemented by opportunities to share use cases and build broad skills in generative AI.
There are opportunities to further explore use cases in the APS
There were a few novel use cases for Copilot in the APS.
Some trial participants identified novel use cases which, while highly specific to their roles, highlight the potential of Copilot to support higher-order and more bespoke activities. These included:
- Writing and reviewing PowerShell script (for task automation)
- Assessing documents against a rubric or criteria
- Converting technical documentation into plain language (to distribute to a broader audience)
- Drafting content for internal exercises e.g. phishing simulations
- Drafting content for business cases and Cabinet Submissions
- Converting information into standard forms and templates (for processing and assessment).
The presence of novel use cases highlights that there are opportunities to innovatively use Copilot beyond its summarisation, information search and content drafting features.
References
- Commonwealth Scientific and Industrial Research Organisation (2024) ‘Copilot for Microsoft 365; Data and Insights’, Commonwealth Scientific and Industrial Research Organisation, Canberra, ACT, 28.
- Department of Industry, Science and Resources (2024) ‘DISR Internal Mid-Trial Survey Insights’, Department of Industry, Science and Resources, Canberra, ACT, 2.
Key insights
The majority of post-use survey respondents agreed that Copilot improved the speed at which they could complete tasks (69%) and uplifted the quality of their work (61%).
Approximately 65% of the managers in the post-use survey found that Copilot had a positive impact on the quality and efficiency of their team members, in particular by helping team members to quickly produce briefing materials and by uplifting the quality of written outputs.
Copilot contributed the most perceived time savings in tasks related to summarisation, information searches and preparing first drafts, with estimated savings of around an hour a day on those tasks.
The ICT and Digital Solutions job family perceived the largest efficiency gains, of around one hour a day, across summarisation activities and preparing first drafts of documents. Across APS classifications, APS 3-6 and EL 1 staff perceived similar time savings of an hour a day in summarisation tasks and creating first drafts.
40% of trial participants reported being able to reallocate their time to higher value activities. Trial participants also remarked on the ability to spend more time on face-to-face activities such as staff engagement, culture building and mentoring, and taking more time to build relationships with end users and stakeholders.
The quality of Copilot's output limited the scale of productivity benefits. Overall, Copilot's improvements to work quality were more subdued than its improvements to work efficiency. While the majority of trial participants viewed Copilot as effective at developing first drafts of documents and lifting overall quality, editing was almost always needed to tailor content for the audience or context, thereby reducing total efficiency gains.
Copilot is perceived to improve the efficiency and quality of outputs
The majority of post-use survey respondents perceived that Copilot positively affected their productivity.
Trial participants generally perceived that Copilot had a positive impact on 2 key measures of productivity – efficiency and quality. As shown in Figure 11, the majority of post-use survey respondents agreed that Copilot improved the speed at which they could complete tasks (69%) and uplifted the quality of their work (61%).
Data table for figure 11
Post-use responses to 'What extent do you agree with the following statements: using Copilot has improved the…', from respondents who completed both pre- and post-use surveys (n=330).
Sentiment | Strongly disagree | Disagree | Somewhat disagree | Neutral | Somewhat agree | Agree | Strongly agree
speed at which I complete tasks | 4% | 5% | 7% | 16% | 26% | 26% | 17%
quality of my work | 4% | 6% | 7% | 22% | 26% | 24% | 11%
Totals may amount to less or more than 100% due to rounding.
Managers have also noticed productivity improvements within their teams.
Approximately 65% of the managers in the post-use survey found that Copilot had a positive impact on the quality and efficiency of their team members. As shown in Figure 12, 3% or less of this cohort believed Copilot had a negative effect on their team.
Data for figure 12
Post-use survey responses to 'What is the impact of Copilot on…', from respondents who manage staff (n=209).
Sentiment | Negative | Somewhat negative | Neutral | Somewhat positive | Positive
quality of your team's output (n=209) | 0% | 3% | 32% | 47% | 17%
efficiency of your staff (n=208) | 0% | 2% | 31% | 49% | 17%
Totals may amount to less or more than 100% due to rounding.
Manager respondents in the post-use survey indicated that Copilot helped team members to quickly produce briefing materials and added value to written deliverables. Some managers in focus groups thought that Copilot made writing more consistent across their teams and lifted the overall standard of work.
Efficiencies are concentrated in a few tasks
Copilot contributed the highest perceived time savings in tasks related to summarisation, preparing first drafts and information searches.
Post-use survey respondents perceived that Copilot contributed the highest time savings in activities related to summarising information, preparing first drafts and searching for information. Respondents estimated that Copilot saved up to an hour a day on these activities, as shown in Table 4. These figures are approximations and likely represent the upper bound of the time savings Copilot could contribute (assuming APS employees perform the tasks every day).
Table 4. Averaged post-use survey responses to 'On average, how many hours per day has Copilot helped you save in the following areas', from respondents who completed both pre- and post-use surveys (n=330)
Activity | Hours
Communicating through digital means other than meetings | 0.5
Summarising existing information | 1.1
Preparing first draft of a document | 1.0
Searching for information required for a task | 0.8
Undertaking preliminary data analysis | 0.5
Undertaking preliminary data analysis | 0.6
Preparing meeting minutes | 0.9
Table notes:
- Hours saved on tasks were approximated by first calculating the midpoint (mean) of each time bracket specified in the question (e.g. 0, 1-4, 5-8, 9-12…).
- The midpoint was then multiplied by the number of respondents in each bracket to determine the total time saved on the activity. The total time was then divided by the number of respondents to estimate the average time saved per respondent.
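The short sketch below illustrates this bracket-midpoint method; the bracket midpoints and respondent counts are hypothetical values chosen for the example only, not trial data.
# Illustrative sketch (Python) of the bracket-midpoint estimate described in the table notes.
# Bracket midpoints and respondent counts are hypothetical, not trial data.
bracket_midpoints = {"0": 0.0, "1-4": 2.5, "5-8": 6.5, "9-12": 10.5}
respondents_per_bracket = {"0": 40, "1-4": 220, "5-8": 60, "9-12": 10}
# Total estimated hours = sum of (bracket midpoint x respondents in that bracket)
total_hours = sum(bracket_midpoints[b] * n for b, n in respondents_per_bracket.items())
total_respondents = sum(respondents_per_bracket.values())
# Average estimated hours saved per respondent
print(round(total_hours / total_respondents, 2))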
Productivity benefits were concentrated in a narrow set of tasks that are commonly undertaken by APS staff
In activities where Copilot was perceived to save a significant proportion of time – preparing meeting minutes, summarising information and preparing slides – AI assistants could, in future, become the primary means of completing these tasks, although the need for human involvement and accountability remains.
The time savings associated with these activities were also observed in agency evaluations. The Australian Taxation Office (ATO) saw the greatest proportional efficiencies in these activities (Australian Taxation Office 2024:3), and Department of Home Affairs trial participants observed that Copilot may provide time savings in scribing, minute-taking, writing up action items and transcribing (Department of Home Affairs 2024:10). For other tasks, such as 'summarising existing information' and 'preparing first draft of a document', Copilot was perceived to reduce the time spent by between 50% and 70%.
Finally, there is an interesting intersection between time saved and usage. For example, PowerPoint was not frequently used by trial participants, but where it was used it saved a significant proportion of time. The ATO identified a similar insight in its evaluation, where the highest absolute time savings were in data visualisation, taking nearly an hour off the activity (Australian Taxation Office 2024:3). This implies that, for those who use a broad range of Microsoft products and Copilot functionality, the potential time savings from applications such as PowerPoint could be significant.
Copilot’s impact on efficiency varied according to job requirements.
The ICT and Digital Solutions job family experienced the most efficiency gains.
The ICT and Digital Solutions job family estimated the highest efficiency savings across all the activities listed in the post-use survey. As shown in Table 5, this job family reported a saving of around an hour a day when performing summarisation and document drafting activities.
Table 5. Averaged post-use survey responses to 'On average, how many hours per day has Copilot helped you save in the following areas', by APS job family
Activity | Average | Corporate | ICT and digital solutions | Policy and program management | Technical
Searching for information required for a task (n=718) | 0.76 | 0.7 | 0.85 | 0.68 | 0.86
Summarising existing information (n=735) | 1.03 | 1 | 1.06 | 0.99 | 1.08
Preparing meeting minutes (n=608) | 0.94 | 0.82 | 1.06 | 0.91 | 1
Preparing first draft of a document (n=715) | 0.99 | 0.94 | 1.12 | 0.96 | 0.96
Undertaking preliminary data analysis (n=586) | 0.59 | 0.67 | 0.69 | 0.57 | 0.43
Preparing slides (n=605) | 0.59 | 0.55 | 0.64 | 0.55 | 0.63
Communicating through digital means other than meetings (n=680) | 0.49 | 0.45 | 0.54 | 0.51 | 0.46
Attending meetings (n=713) | 0.37 | 0.33 | 0.48 | 0.41 | 0.26
Writing or reviewing code in a programming language (n=393) | 0.5 | 0.48 | 0.58 | 0.3 | 0.6
Table 6. Averaged post-use survey responses to 'On average, how many hours per day has Copilot helped you save in the following areas', by APS classification
Activity | Average | APS 3-6 | EL 1 | EL 2 | SES
Searching for information required for a task (n=690) | 0.73 | 0.83 | 0.84 | 0.63 | 0.61
Summarising existing information for various purposes (n=708) | 0.99 | 1.06 | 1.07 | 0.97 | 0.86
Preparing meeting minutes (n=582) | 0.95 | 0.99 | 0.97 | 0.89 | 0.95
Preparing first draft of a document (n=687) | 0.93 | 1.1 | 1.09 | 0.76 | 0.78
Undertaking preliminary data analysis (n=561) | 0.57 | 0.64 | 0.67 | 0.45 | 0.52
Preparing slides (n=575) | 0.61 | 0.66 | 0.6 | 0.51 | 0.68
Communicating through digital means other than meetings (n=651) | 0.48 | 0.56 | 0.54 | 0.39 | 0.44
Attending meetings (n=682) | 0.37 | 0.38 | 0.48 | 0.28 | 0.35
Writing or reviewing code in a programming language (n=370) | 0.40 | 0.68 | 0.57 | 0.21 | 0.12
Within agencies, APS 3 to 4 staff (often graduates) are usually expected to lead notetaking and summarisation tasks, as well as create the first drafts of documents. APS staff at more junior levels may not yet possess the capability to complete these tasks efficiently, so it is likely that Copilot augments their ability to a greater extent than it does for more experienced employees.
Around 40% of trial participants reported the ability to reallocate their time to higher value activities.
For some trial participants, Copilot was seen as a facilitator for engagement in more substantive and complex work. As shown in Figure 13, 41% of post-use survey respondents believed Copilot enabled them to spend more time on higher-value tasks.
Data for figure 13
Post-use survey responses to 'What extent do you agree with the following statement: Copilot has enabled me to allocate my time to perform tasks that are higher value and/or more complex' (n=807).
Sentiment | Strongly disagree | Disagree | Neutral | Agree | Strongly agree
Response | 4% | 10% | 44% | 32% | 9%
Totals may amount to less or more than 100% due to rounding.
Post-use survey respondents remarked that they spent less time playing 'corporate archaeologist' searching for information and documents, and more time on strategic thinking and deep analysis.
Data for figure 14
Post-use survey responses reporting time savings of 0.5 hours or more (n=795) and overall agreement of improved quality of work (n=801), by type of activity.
Activity | Improved quality | Some time saved
Summarising existing information for various purposes | 69% | 76%
Preparing the first draft of a document | 58% | 67%
Preparing meeting minutes | 60% | 68%
Searching for information required for a task | 54% | 62%
Undertaking preliminary data analysis | 32% | 44%
Preparing slides | 35% | 40%
Communicating through digital means other than meetings | 31% | 35%
Writing or reviewing code in a programming language | 30% | 30%
A concern voiced by many focus group and post-use survey participants was that Copilot could not emulate the standard style of Australian Government documents. Some of these participants highlighted that heavy re-work was needed to meet the tone expected by senior stakeholders within their agency and of government more broadly.
For this reason, focus group participants noted they would not use Copilot for important documents or communications. Some trial participants acknowledged that Copilot could get closer to the desired output through follow-up prompts and clarifications, but this was not viewed as being worth the additional effort.
Copilot’s unpredictability and inaccuracy limited the scale of productivity benefits.
The unpredictability of Copilot affected trial participants' trust and their productivity gains. Generative AI is a non-deterministic form of AI, meaning it will almost always produce a different output even when given exactly the same prompt. Copilot is trained to predict patterns rather than understand facts, which sometimes leads it to return plausible-sounding but inaccurate information, referred to as a 'hallucination'.
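As a toy illustration of this non-determinism, the sketch below samples the next word from a fixed probability distribution, so the same prompt can produce different continuations across runs; the vocabulary and probabilities are invented for the example and do not reflect Copilot's actual model.
# Toy illustration (Python): the next word is sampled from a probability distribution,
# which is why identical prompts can yield different outputs. Values are invented.
import random
next_word_probs = {"approved": 0.4, "rejected": 0.35, "deferred": 0.25}
def sample_next_word():
    words = list(next_word_probs)
    weights = list(next_word_probs.values())
    return random.choices(words, weights=weights, k=1)[0]
prompt = "The committee has "
print(prompt + sample_next_word())  # may differ between runs
print(prompt + sample_next_word())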
Many trial participants across focus groups and the post-use survey commented that, due to fears of hallucinations, they combed through Copilot's outputs to verify their accuracy. In some cases, this involved reading the entire document Copilot produced to check for errors, which significantly reduced any efficiency gains.
As shown in Table 7, up to 7% of post-use survey respondents reported that Copilot added time to tasks, in part due to the effort required to verify outputs. Distrust of Copilot's outputs also surfaced in DISR's internal mid-trial survey insights, with 60% of trial participants claiming they had to make a moderate to significant number of edits to outputs (Department of Industry, Science and Resources 2024:6).
Table 7. Post-use survey responses reporting that Copilot added time to an activity, by type of activity
Activity | Added time
Preparing slides (n=620) | 7%
Undertaking preliminary data analysis (n=603) | 6%
Writing or reviewing code in a programming language (n=620) | 6%
Attending meetings (n=739) | 4%
Summarising existing information for various purposes (n=759) | 3%
Preparing the first draft of a document (n=739) | 3%
Searching for information required for a task (n=744) | 3%
Communicating through digital means other than meetings (n=705) | 3%