• Australians now have unprecedented transparency into the performance of the government’s digital projects

    Work to improve assurance of digital projects is ensuring reliable assessments of delivery confidence are regularly undertaken. These assessments show most projects are on track to deliver expected outcomes on budget and on schedule.

    Bar graphs showing information about projects by tier and delivery confidence. For the data contained in the bar graphs, see the image description below.

  • Image description

    There are three bar graphs in the image. 

    1. Image 1: Diagram header: 'Tier 1 and 2 projects including an assessment of delivery confidence'. The diagram indicates that in 2024, 52.1% of projects included an assessment of delivery confidence, in comparison with 98.4% in 2025.
    2. Image 2: Diagram header: 'Independent delivery confidence ratings'. The diagram indicates that in 2025, 80.3% of projects included independent delivery confidence ratings (no comparison data for 2024 is provided).
    3. Image 3: Diagram header: 'Projects reporting Medium-High or above delivery confidence'. The diagram indicates that in 2024, 31.3% of projects reported Medium-High or above delivery confidence, in comparison with 61.3% in 2025.
  • Information management for records created using AI technologies

    Guidance on identifying and managing records created by, or relating to, AI technologies employed by Australian Government agencies.

    These materials are hosted on the National Archives of Australia website.

  • Disclaimer

    “Certain numbers in this report have been rounded to one decimal place. Due to rounding, some totals may not correspond with the sum of the separate figures.” 

  • Image description

    The diagram indicates that a total of 29 projects have entered assurance oversight since February 2024, with a total budget of $7.1 billion.

    High delivery confidence – 5 projects with a total budget of $0.3 billion.

    Medium-High delivery confidence – 17 projects with a total budget of $5.6 billion.

    Medium delivery confidence – 5 projects with a total budget of $0.6 billion.

    Medium-Low delivery confidence – 2 projects with a total budget of $0.6 billion.

  • Understanding overall changes in delivery confidence to target engagement and reforms

    Most (75.9%) of the 29 Tier 1 and 2 projects entering oversight since February 2024 report a High or Medium-High delivery confidence. These projects commonly report the following factors as contributing to their delivery confidence rating at the start: establishing effective governance early; having well-prepared documentation and artefacts; and ensuring experienced and capable personnel were ready.
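
    As a cross-check, the 75.9% figure can be reproduced from the delivery confidence distribution in the image description above. A minimal arithmetic sketch in Python (all figures are taken from this report):

    ```python
    # Delivery confidence distribution for the 29 Tier 1 and 2 projects
    # entering oversight since February 2024 (from the image description above).
    projects = {"High": 5, "Medium-High": 17, "Medium": 5, "Medium-Low": 2}

    total = sum(projects.values())                                     # 29 projects
    high_or_medium_high = projects["High"] + projects["Medium-High"]   # 22 projects

    share = round(100 * high_or_medium_high / total, 1)
    print(f"{high_or_medium_high} of {total} projects ({share}%) report High or Medium-High")
    # -> 22 of 29 projects (75.9%) report High or Medium-High
    ```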

    This is an early sign that investment to strengthen digital project design processes is increasing overall delivery confidence. Projects often start with lower levels of delivery confidence, but the recent emphasis on ensuring mature planning is in place before projects start appears to be paying dividends, with more than three-quarters of these new projects entering oversight reporting High or Medium-High confidence. This contrasts with the United Kingdom, where ‘it is not unusual for projects to be rated as Red earlier in their lifecycle, when scope, benefits, costs and delivery methods are still being explored’ (Infrastructure and Projects Authority 2024, p. 13).

    Reforms supporting success – partnering with industry to deliver digital projects

    Recognising the crucial role of technology vendors in delivering the Australian Government’s ambitions for digital transformation, the Digital and ICT Investment Oversight Framework includes ‘sourcing’ as an area of focus. As part of this, the DTA coordinates marketplaces and agreements designed to enable agencies to easily access technology goods and services to support their digital projects. In 2023–24, the Australian Government sourced more than $6.4 billion of digital products and services from industry via these marketplaces and agreements. By accessing these arrangements through the BuyICT platform, agencies benefited from the Australian Government’s collective buying power and strengthened terms and conditions.

    The DTA’s latest ICT labour hire and professional services panel, the Digital Marketplace Panel 2, adopts the APS Career Pathfinder dataset and the Skills Framework for the Information Age (SFIA) to classify ICT labour hire opportunities. The classification of roles and greater panel pricing transparency provide clearer signals about in-demand skills, their costs and potential shortages, which will inform delivery capacity and confidence in digital projects. The most in-demand digital and ICT roles sourced by the APS include software engineer, solution architect and business analyst.

  • Summary of requirements in the standard

    The statements and criteria of this standard are organised by stage of the AI lifecycle, including those that apply across all lifecycle stages.

    Lifecycle stage: Across all lifecycle stages

    Statement Number 1. Define an operational model

    Recommended
    • Identify a suitable operational model to design, develop and deliver the system securely and efficiently.
    • Consider the technology impacts of the operating model.

    Statement Number 2. Define the reference architecture

    Recommended
    • Evaluate existing reference architectures.
    • Monitor emerging reference architectures to evaluate and update the AI system.

    Statement Number 3. Identify and build people capabilities

    Recommended
    • Identify and assign AI roles to ensure a diverse team of professionals with specialised skills.
    • Build and maintain AI capabilities by undertaking regular training and education of staff and stakeholders.
    • Mitigate staff overreliance on, or misuse of, AI by conducting regular reviews and audits.

    Statement Number 4. Enable AI auditing

    Required
    • Perform model-specific audits.
    Recommended
    • Develop an auditable AI system.

    Statement Number 5. Provide explainability based on the use case

    Required
    • Explain the AI technology used, including the limitations and capabilities of the system.
    Recommended
    • Explain predictions and decisions made by the AI system.
    • Explain data usage and sharing.
    • Explain the AI model.
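
    Statement 5's recommendations can be supported by reporting which input features most influence a model's outputs. The sketch below is illustrative only and is not part of the standard; it uses scikit-learn's permutation importance on a synthetic dataset.

    ```python
    # Illustrative only: explain which features most influence a model's predictions.
    from sklearn.datasets import make_classification
    from sklearn.inspection import permutation_importance
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=6, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Permutation importance: how much does shuffling each feature degrade accuracy?
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for i, importance in enumerate(result.importances_mean):
        print(f"feature_{i}: importance {importance:.3f}")
    ```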

    Statement Number 6. Manage system bias 

    Required
    • Identify sources of bias.
    • Assess identified bias.
    • Manage identified bias across the AI system lifecycle.
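
    As one illustration of assessing identified bias, the sketch below (not part of the standard; the data is made up) computes the gap in positive-outcome rates between two groups, a simple demographic parity check.

    ```python
    # Illustrative only: a simple demographic parity check on model outcomes.
    import numpy as np

    # Hypothetical decisions (1 = positive outcome) and a group label per person.
    outcomes = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
    groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

    rate_a = outcomes[groups == "A"].mean()   # positive-outcome rate for group A
    rate_b = outcomes[groups == "B"].mean()   # positive-outcome rate for group B
    parity_gap = abs(rate_a - rate_b)

    print(f"group A: {rate_a:.2f}, group B: {rate_b:.2f}, gap: {parity_gap:.2f}")
    # A large gap flags the system for further assessment; it is not, by itself,
    # proof of unfair bias.
    ```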

    Statement Number 7. Apply version control practices

    Required
    • Apply version management practices to the end-to-end development lifecycle.
    Recommended
    • Use metadata in version control to distinguish between production and non-production data, models and code.
    • Use a version control toolset to improve usability for users.
    • Record version control information in audit logs.
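
    A minimal sketch of how version metadata and an audit-log entry might be recorded for a model artefact. The file names, fields and log format are illustrative assumptions, not requirements of the standard.

    ```python
    # Illustrative only: record version metadata for a model artefact in an audit log.
    import datetime
    import hashlib
    import json
    from pathlib import Path

    def version_record(artefact_path: str, version: str, environment: str) -> dict:
        """Build a version record that ties metadata to the exact artefact contents."""
        digest = hashlib.sha256(Path(artefact_path).read_bytes()).hexdigest()
        return {
            "artefact": artefact_path,
            "version": version,
            "environment": environment,   # e.g. "production" or "non-production"
            "sha256": digest,
            "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }

    # Hypothetical artefact and log file names.
    Path("model-v1.2.0.bin").write_bytes(b"placeholder artefact for the sketch")
    record = version_record("model-v1.2.0.bin", "1.2.0", "non-production")
    with open("audit-log.jsonl", "a") as log:
        log.write(json.dumps(record) + "\n")
    ```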

    Statement Number 8. Apply watermarking techniques

    Required
    • Apply watermarks to generated media content to acknowledge provenance and provide transparency.
    • Apply watermarks that are WCAG compatible where relevant.
    Recommended
    • Use watermarking tools based on the use case and content risk.
    • Assess watermarking risks and limitations.
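
    A minimal sketch of applying a visible watermark and provenance metadata to generated image content with Pillow. The image, metadata keys and model name are illustrative assumptions; production use would typically rely on dedicated provenance tooling chosen for the use case and content risk.

    ```python
    # Illustrative only: visible watermark plus provenance metadata on a generated PNG.
    from PIL import Image, ImageDraw
    from PIL.PngImagePlugin import PngInfo

    # Stand-in for an AI-generated image; in practice this would be the model output.
    img = Image.new("RGBA", (400, 300), (30, 30, 30, 255))

    # Visible watermark: semi-transparent text acknowledging AI generation.
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    ImageDraw.Draw(overlay).text((10, 10), "AI-generated content", fill=(255, 255, 255, 160))
    marked = Image.alpha_composite(img, overlay).convert("RGB")

    # Provenance metadata embedded in the file itself (hypothetical keys).
    meta = PngInfo()
    meta.add_text("ai-generated", "true")
    meta.add_text("generator", "example-model")
    marked.save("generated-watermarked.png", pnginfo=meta)
    ```
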
  • Whole of AI lifecycle

    Statement Number 1. Define an operational model

    Recommended
    • Criterion 1: Identify a suitable operational model to design, develop, and deliver the system securely and efficiently.
    • Criterion 2: Consider the technology impacts of the operating model.  
    • Criterion 3: Consider technology hosting strategies.

    Statement Number 2. Define the reference architecture

    Required
    • Criterion 4: Evaluate existing reference architectures.
    Recommended
    • Criterion 5: Monitor emerging reference architectures to evaluate and update the AI system.

    Statement Number 3. Identify and build people capabilities

    Required
    • Criterion 6: Identify and assign AI roles to ensure a diverse team of business and technology professionals with specialised skills.
    • Criterion 7: Build and maintain AI capabilities by undertaking regular training and education of end users, staff, and stakeholders.
    Recommended
    • Criterion 8: Mitigate staff over-reliance on, under-reliance on, and aversion to AI.

    Statement Number 4. Enable AI auditing

    Required
    • Criterion 9: Provide end-to-end auditability.
    • Criterion 10: Perform ongoing data-specific checks across the AI lifecycle.
    • Criterion 11: Perform ongoing model-specific checks across the AI lifecycle.
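
    As an illustration of end-to-end auditability (Criterion 9), the sketch below wraps an inference call in structured audit events using Python's standard logging module. The event fields and the stand-in model are assumptions for the example, not part of the standard.

    ```python
    # Illustrative only: structured audit events around an AI inference call.
    import datetime
    import json
    import logging
    import uuid

    logging.basicConfig(level=logging.INFO, format="%(message)s")
    audit = logging.getLogger("ai.audit")

    def audited_predict(model, features: dict) -> dict:
        """Wrap a prediction with audit events so inputs and outputs are traceable."""
        request_id = str(uuid.uuid4())
        at = datetime.datetime.now(datetime.timezone.utc).isoformat()
        audit.info(json.dumps({"event": "inference_request", "id": request_id,
                               "at": at, "features": features}))
        prediction = model.predict(features)   # hypothetical model interface
        audit.info(json.dumps({"event": "inference_response", "id": request_id,
                               "prediction": prediction}))
        return {"id": request_id, "prediction": prediction}

    class EchoModel:
        """Stand-in model for the sketch."""
        def predict(self, features: dict) -> str:
            return "approve" if features.get("score", 0) > 0.5 else "refer"

    print(audited_predict(EchoModel(), {"score": 0.7}))
    ```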

    Statement Number 5. Provide explainability based on the use case

    Required
    • Criterion 12: Explain the AI system and technology used, including the limitations and capabilities of the system.
    Recommended
    • Criterion 13: Explain outputs made by the AI system to end users.
    • Criterion 14: Explain how data is used and shared by the AI system.

    Statement Number 6. Manage system bias 

    Required
    • Criterion 15: Identify how bias could affect people, processes, data, and technologies involved in the AI system lifecycle.
    • Criterion 16: Assess the impact of bias on your use case.
    • Criterion 17: Manage identified bias across the AI system lifecycle.

    Statement Number 7. Apply version control practices

    Required
    • Criterion 18: Apply version management practices to the end-to-end development lifecycle.
    Recommended
    • Criterion 19: Use metadata in version control to distinguish between production and non-production data, models, and code.
    • Criterion 20: Use a version control toolset to improve usability for users.
    • Criterion 21: Record version control information in audit logs.

    Statement Number 8. Apply watermarking techniques

    Required
    • Criterion 22: Apply visual watermarks and metadata to generated media content to provide transparency and provenance, including authorship.
    • Criterion 23: Apply watermarks that are WCAG compatible where relevant.
    • Criterion 24: Provide visual and accessible content to indicate when a user is interacting with an AI system.
    Recommended
    • Criterion 25: For hidden watermarks, use watermarking tools based on the use case and content risk.
    • Criterion 26: Assess watermarking risks and limitations.
  • Design

    Statement Number 9. Conduct pre-work

    Required
    • Criterion 27: Define the problem to be solved, its context, intended use, and impacted stakeholders.
    • Criterion 28: Assess AI and non-AI alternatives.
    • Criterion 29: Assess environmental impact and sustainability.
    • Criterion 30: Perform cost analysis across all aspects of the AI system.
    • Criterion 31: Analyse how the use of AI will impact the solution and its delivery.

    Statement Number 10. Adopt a human-centred approach

    Required
    • Criterion 32: Identify human values requirements.
    • Criterion 33: Establish a mechanism to inform users of AI interactions and output, as part of transparency.
    • Criterion 34: Design AI systems to be inclusive and ethical, and to meet accessibility standards using appropriate mechanisms.
    • Criterion 35: Design feedback mechanisms.
    • Criterion 36: Define human oversight and control mechanisms.
    Recommended
    • Criterion 37: Involve users in the design process.

    Statement Number 11. Design safety systemically

    Required
    • Criterion 38: Analyse and assess harms.
    • Criterion 39: Mitigate harms by embedding mechanisms for prevention, detection, and intervention.
    Recommended
    • Criterion 40: Design the system to allow calibration at deployment.

    Statement Number 12. Define success criteria

    Required
    • Criterion 41: Identify, assess, and select metrics appropriate to the AI system.
    Recommended
    • Criterion 42: Reevaluate the selection of appropriate success metrics as the AI system moves through the AI lifecycle.
    • Criterion 43: Continuously verify correctness of the metrics.
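
    One way to identify and assess candidate success metrics (Criterion 41) for a classification use case is to compute several of them side by side. A minimal scikit-learn sketch with illustrative labels, not taken from any real system:

    ```python
    # Illustrative only: compare candidate success metrics for a classifier.
    from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

    y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # hypothetical ground-truth labels
    y_pred = [1, 0, 1, 0, 1, 1, 1, 0]   # hypothetical model predictions

    metrics = {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
    }
    for name, value in metrics.items():
        print(f"{name}: {value:.2f}")
    # Which metric matters most depends on the use case, for example recall in
    # safety-critical screening.
    ```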

  • Lifecycle stage: Design

    Statement Number 9. Conduct pre-work

    Required
    • Define the problem to be solved, its context, intended use and expected outcomes.
    • Identify and document user groups, stakeholders, processes, data, systems, operating environment and constraints.
    • Assess AI and non-AI alternatives.
    • Conduct experimentation and trade-off analysis.
    • Analyse how the use of AI will impact the solution and its delivery.

    Statement Number 10. Adopt a human-centred approach throughout design

    Required
    • Identify human values requirements.
    • Provide transparent user interfaces.
    • Design AI systems to be inclusive and meet accessibility standards.
    • Design feedback mechanisms.
    Recommended
    • Involve users in the design process.
    • Define user control mechanisms.
    • Allow users to personalise their experience.
    • Design the system to allow for calibration at deployment where parameters are critical to the performance, reliability, and safety of the AI system.

    Statement Number 11. Design safety systemically

    Required
    • Analyse, assess and mitigate harms relevant to the AI use case by identifying sources and embedding mechanisms for prevention, detection, and intervention.

    Statement Number 12. Define success criteria

    Required
    • Identify, assess, and select metrics appropriate to the AI system.
    Recommended
    • Reevaluate the selection of appropriate success metrics as the AI system moves through the AI lifecycle.
  • Data

    Statement Number 13. Establish data supply chain management processes

    Required
    • Criterion 44: Create and collect data for the AI system and identify the purpose for its use.
    • Criterion 45: Plan for data archival and destruction.
    Recommended
    • Criterion 46: Analyse data for use by mapping the data supply chain and ensuring traceability.
    • Criterion 47: Implement practices to maintain and reuse data.

    Statement Number 14. Implement data orchestration processes

    Required
    • Criterion 48: Implement processes to enable data access and retrieval, encompassing the sharing, archiving, and deletion of data.
    Recommended
    • Criterion 49: Establish standard operating procedures for data orchestration.
    • Criterion 50: Configure integration processes to integrate data in increments.
    • Criterion 51: Implement automation processes to orchestrate the reliable flow of data between systems and platforms.
    • Criterion 52: Perform oversight and regular testing of task dependencies.
    • Criterion 53: Establish and maintain data exchange processes.

    Statement Number 15. Implement data transformation and feature engineering practices

    Recommended
    • Criterion 54: Establish data cleaning procedures to manage any data issues.
    • Criterion 55: Define data transformation processes to convert and optimise data for the AI system.
    • Criterion 56: Map the points where transformation occurs between datasets and across the AI system.
    • Criterion 57: Identify fit-for-purpose feature engineering techniques.
    • Criterion 58: Apply consistent data transformation and feature engineering methods to support data reuse and extensibility.
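
    A minimal sketch of applying consistent, reusable data transformations (Criterion 58) with scikit-learn's ColumnTransformer. The dataset and column names are illustrative assumptions.

    ```python
    # Illustrative only: consistent, reusable data transformation and feature engineering.
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    df = pd.DataFrame({                      # hypothetical training data
        "age": [34, 29, 51, 45],
        "segment": ["a", "b", "b", "c"],
    })

    # Numeric features are standardised; categorical features are one-hot encoded.
    transform = ColumnTransformer([
        ("numeric", StandardScaler(), ["age"]),
        ("categorical", OneHotEncoder(handle_unknown="ignore"), ["segment"]),
    ])

    # Fit once on training data; apply the same transform() to new data later.
    features = transform.fit_transform(df)
    print(features.shape)                    # (4, 4): 1 scaled column + 3 one-hot columns
    ```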

    Statement Number 16. Ensure data quality is acceptable

    Required
    • Criterion 59: Define quality assessment criteria for the data used in the AI system.
    Recommended
    • Criterion 60: Implement data profiling activities and remediate any data quality issues.
    • Criterion 61: Define processes for labelling data and managing the quality of data labels.
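
    A minimal data-profiling sketch with pandas, checking missing values and duplicate records against a simple quality threshold. The dataset and the 10% threshold are illustrative assumptions, not figures from the standard.

    ```python
    # Illustrative only: basic data-profiling checks before data enters an AI system.
    import pandas as pd

    df = pd.DataFrame({                       # hypothetical dataset
        "customer_id": [1, 2, 2, 4, 5],
        "age": [34, None, 29, 51, 45],
        "segment": ["a", "b", "b", None, "c"],
    })

    missing_share = df.isna().mean()          # proportion of missing values per column
    duplicate_rows = df.duplicated().sum()    # count of exact duplicate records

    print(missing_share)
    print(f"duplicate rows: {duplicate_rows}")

    # Flag columns breaching an illustrative 10% missing-data threshold for remediation.
    needs_remediation = missing_share[missing_share > 0.10]
    print("columns needing remediation:", list(needs_remediation.index))
    ```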

    Statement Number 17. Validate and select data

    Required
    • Criterion 62: Perform data validation activities to ensure data meets the requirements for the system’s purpose.
    • Criterion 63: Select data for use that is aligned with the purpose of the AI system.

    Statement Number 18. Enable data fusion, integration and sharing

    Recommended
    • Criterion 64: Analyse data fusion and integration requirements.
    • Criterion 65: Establish an approach to data fusion and integration.
    • Criterion 66: Identify data sharing arrangements and processes to maintain consistency.

    Statement Number 19. Establish the model and context dataset

    Required
    • Criterion 67: Measure how representative the model dataset is.
    • Criterion 68: Separate the model training dataset from the validation and testing datasets.
    • Criterion 69: Manage bias in the data.
    Recommended
    • Criterion 70: For generative AI, build reference or contextual datasets to improve the quality of AI outputs.
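
    As an illustration of Criterion 68, the sketch below separates a dataset into training, validation and test sets with scikit-learn; the proportions and synthetic data are illustrative assumptions.

    ```python
    # Illustrative only: separate training data from validation and test data.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

    # Hold out 30% of the data, then split the holdout evenly into validation and test.
    X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.30, random_state=0)
    X_val, X_test, y_val, y_test = train_test_split(X_hold, y_hold, test_size=0.50, random_state=0)

    print(len(X_train), len(X_val), len(X_test))   # 700, 150, 150
    ```

    Keeping the test set untouched until final evaluation helps give an unbiased measure of performance.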