5.  Reliability and safety

Under Australia's AI Ethics Principles, AI systems should, throughout their lifecycle, reliably operate in accordance with their intended purpose.

For each of the following questions, indicate either yes, no or N/A, and explain your answer.

5.1    Data suitability

If your AI system requires the input of data to operate, or you are training or evaluating an AI model, can you explain why the chosen data is suitable for your use case?

Consider data quality and factors such as accuracy, timeliness, completeness, consistency, lineage, provenance and volume.

5.2    Indigenous data 

If your AI system uses Indigenous data, including where any outputs relate to Indigenous people, have you ensured that your AI use case is consistent with the Framework for Governance of Indigenous Data?

Consider whether your use of Indigenous data and AI outputs is consistent with the expectations of Indigenous people, and the Framework for Governance of Indigenous Data (GID). See definition of Indigenous data in guidance material.

5.3    Suitability of procured AI model

If you are procuring an AI model, can you explain its suitability for your use case?  

May include multiple models or a class of models. Includes using open-source models, application programming interfaces (APIs) or otherwise sourcing or adapting models. Factors to consider are outlined in guidance.

5.4    Testing

Outline any areas of concern in the results of testing. If testing is yet to occur, outline the elements to be considered in your testing plan (for example, the model's accuracy).

5.5    Pilot

Have you conducted, or will you conduct, a pilot of your use case before deploying?

If answering ‘yes’, explain what you have learned or hope to learn in relation to reliability and safety and, if applicable, outline how you adjusted the use of AI. 

5.6    Monitoring

Have you established a plan to monitor and evaluate the performance of your AI system?

If answering ‘yes’, explain how you will monitor and evaluate performance. 

5.7    Preparedness to intervene or disengage

Have you established clear processes for human intervention or safely disengaging the AI system where necessary (for example, if stakeholders raise valid concerns with insights or decisions or an unresolvable issue is identified)?  

See guidance document for resources to assist you in establishing appropriate processes.

6.  Privacy protection and security
