1. Basic information
1.1 AI use case profile
This section is intended to record basic information about the AI use case.
Name of AI use case
Choose a clear, simple name that accurately conveys the nature of the use case.
Reference number
Assign a unique reference number for your assessment. Unless otherwise advised by your agency or the Digital Transformation Agency (DTA), we recommend using an abbreviation of your agency’s name, followed by the date (YYMMDD) on which work first began on this assessment, and a sequence number if multiple assessments start on the same day. This is intended to assist with internal record keeping and engagement with the DTA.
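The recommended convention can be sketched as follows. This is a minimal illustration only; the agency abbreviation, separator and two-digit sequence padding shown here are assumptions, not a prescribed format:

```python
from datetime import date

def make_reference(agency_abbr: str, start: date, sequence: int = 1) -> str:
    """Build an assessment reference from an agency abbreviation, the date
    work first began (formatted YYMMDD) and a sequence number used when
    multiple assessments start on the same day."""
    return f"{agency_abbr}-{start.strftime('%y%m%d')}-{sequence:02d}"

# For example, a second assessment begun on 15 January 2025 by an agency
# abbreviated 'DTA':
print(make_reference("DTA", date(2025, 1, 15), 2))  # → DTA-250115-02
```

Any consistent variant of this pattern serves the same record-keeping purpose, provided the resulting reference is unique within your agency.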
Lead agency
This should be the agency with primary responsibility for the AI use case. Where 2 or more agencies are jointly leading, nominate one as the contact point for the assessment.
Assessment contact officer
This should be the officer with primary responsibility for the completion and accuracy of the AI assurance assessment.
Executive sponsor
This should be the SES officer with primary responsibility for reviewing and signing off on the AI use case assessment.
AI use case description
Briefly explain how you are using or intending to use AI. This should be targeted at the level of an ‘elevator pitch’ that gives the reader a clear idea of the kind of AI use intended, without going into unnecessary technical detail. You may wish to include a high‑level description of the problem that the AI use case is trying to solve, the way AI will be used and the outcome it is intended to achieve (drawing on your answers in section 2). Use simple, clear language, avoiding technical jargon where possible.
Type of AI technology
Briefly explain what type of AI technology you are using or intend to use (for example, supervised or unsupervised learning, computer vision, natural language processing, generative AI).
While this may require a more technical answer than the use case description, aim to be clear and concise with your answer and use terms that a reasonably informed person with experience in the AI field would understand.
1.2 Lifecycle stage
The lifecycle stages and the guidance below are adapted from the OECD’s definition of the AI system lifecycle.
Early experimentation
Intended to cover experimentation that does not:
- commit you to proceeding with a use case or to any design decisions that would affect implementation later
- commit you to expending significant resources or time
- risk harming anyone
- introduce or exacerbate any privacy or cybersecurity risks
- produce outputs that will form the basis of policy advice, service delivery or regulatory decisions.
Design, data and models
A context-dependent phase encompassing planning and design, data collection and processing, and model building.
‘Planning and design of an AI system’ involves articulating the system’s concept and objectives, underlying assumptions, context and requirements, and potentially building a prototype.
‘Data collection and processing’ includes gathering and cleaning data, performing checks for completeness and quality, and documenting the metadata and characteristics of the data set.
‘Model building and interpretation’ involves the creation, adaptation or selection of models and algorithms, their calibration and/or training and interpretation.
Verification and validation
Involves executing and tuning models, with tests to assess performance across various dimensions and considerations.
Deployment
Deployment into live production involves piloting, checking compatibility with legacy systems, managing organisational change and evaluating user experience.
Operation and monitoring
Involves operating the AI system and continuously assessing the recommendations and impacts (intended and unintended) in light of objectives and ethical considerations. This phase identifies problems and adjusts by reverting to other phases or, if necessary, retiring an AI system from production.
Retirement
Involves ceasing operation or development of a system and may include activities such as evaluation, decommissioning and data migration.
These phases are often iterative and not necessarily sequential. The decision to retire an AI system from operation may occur at any point in the operation and monitoring phase.
1.3 Review date
Include the estimated date when this assessment will next need to be reviewed. For example, ‘Moving to deployment – Q3 2026’.
The triggers for a review are:
- an AI use case moving to a different stage of its lifecycle (for example, from ‘design, data and models’ to ‘verification and validation’)
- a significant change to the scope, function or operational context of the use case.
Agencies may choose to conduct reviews at regular intervals even if the above review triggers have not been met, in line with internal policies and risk tolerance. For assistance in determining the next appropriate review date, consult the DTA.
1.4 Assessment review history
For each review of the assessment, record the review date and summarise the change or changes arising from the review.