7. Transparency and explainability
For each of the following questions, indicate either yes, no or N/A, and explain your answer.
7.1 Consultation
Have you consulted, throughout the lifecycle of the AI use case, stakeholders representing all relevant communities or groups that may be significantly affected?
Refer to the list of stakeholders identified in section 2. Seek out community representatives with the appropriate skills, knowledge or experience to engage with AI ethics issues. Consult the guidance document for prompts and resources to assist you.
7.2 Public visibility
Will appropriate information (such as the scope and goals) about the use of AI be made publicly available?
See the guidance document for advice on appropriate transparency mechanisms, information to include and factors to consider when deciding whether or not to publish AI use information.
7.3 Maintain appropriate documentation and records
Have you ensured that appropriate documentation and records will be maintained throughout the lifecycle of the AI use case?
Ensure you comply with requirements for maintaining reliable records of decisions, testing and the information and data assets used in an AI system. This is important to enable internal and external scrutiny, continuity of knowledge and accountability.
7.4 Disclosing AI interactions and outputs
Will people directly interacting with the AI system, or relying on its outputs, be made aware that they are interacting with AI or relying on AI-generated output? How?
Consider members of the public or government officials who may interact with the system, or decision makers who may rely on its outputs.
7.5 Offer appropriate explanations
If your AI system will materially influence administrative action or decision making by or about individuals, groups, organisations or communities, will it allow for appropriate explanation of the factors leading to AI-generated decisions, recommendations or insights?