7. Transparency and explainability

7.1    Consultation

You should consult with a diverse range of internal and external stakeholders at every stage of your AI system’s deployment to help identify potential biases, privacy concerns, and other ethical and legal issues present in your AI use case. This process can also help foster transparency, accountability, and trust with your stakeholders and can help improve their understanding of the technology’s benefits and limitations. Refer to the stakeholders you identified in section 2.4.

If your project has the potential to significantly impact Aboriginal and Torres Strait Islander peoples or communities, it is critical that you meaningfully consult with relevant community representatives.

Consultation resources

APS Framework for Engagement and Participation – sets principles and standards that underpin effective APS engagement with citizens, community and business and includes practical guidance on engagement methods.

Office of Impact Analysis Best Practice Consultation guidance note – provides a detailed explanation of the application of the whole-of-government consultation principles outlined in the Australian Government Guide to Policy Impact Analysis.

AIATSIS Principles for engagement in projects concerning Aboriginal and Torres Strait Islander peoples – provides non-Indigenous policy makers and service designers with the foundational principles for meaningfully engaging with Aboriginal and Torres Strait Islander peoples on projects that impact their communities.

7.2    Public visibility 

Where appropriate, you should make the scope and goals of your AI use case publicly available. You should consider publishing relevant, accessible information about your AI use case in a centralised location on your agency website.

Note: All agencies in scope of the Policy for the responsible use of AI in government are required to publish an AI transparency statement. More information on this requirement can be found in the policy and associated guidance. You may wish to include information about your use case in your agency’s AI transparency statement.

Considerations for publishing

In some circumstances it may not be appropriate to publish detailed information about your AI use case. When deciding whether to publish this information you should balance the public benefits of AI transparency with the potential risks as well as compatibility with any legal requirements around publication. 

For example, you may choose to limit the amount of information you publish or not publish any information at all if:

  • the AI use case is still in the experimentation phase 
  • publishing may have negative implications for national security
  • publishing may have negative implications for criminal intelligence activities
  • publishing may significantly increase the risk of fraud or non-compliance
  • publishing may significantly increase the risk of cybersecurity threats
  • publishing may jeopardise commercial competitiveness.

You may also wish to refer to the exemptions under the Freedom of Information Act 1982 in considering whether it is appropriate to publish information about your AI use case.

7.3    Maintain appropriate documentation and records

Agencies should comply with legislation, policies and standards for maintaining reliable and auditable records of decisions, testing, and the information and data assets used in an AI system. This will enable internal and external scrutiny, continuity of knowledge and accountability. This will also support transparency across the AI supply chain – for example, this documentation may be useful to any downstream users of AI models or systems developed by your agency.

Agencies should document AI technologies they are using to perform government functions, as well as essential information about AI models, their versions, creators and owners. In addition, artefacts used and produced by AI – such as prompts, inputs and raw outputs – may constitute Commonwealth records under the Archives Act 1983 and may need to be kept for the periods identified in records authorities issued by the National Archives of Australia (NAA).

To identify their legal obligations, business areas implementing AI in agencies may want to consult with their information and records management teams. The NAA can also provide advice on how to manage data and records produced by different AI use cases. 

The NAA Information Management Standard for Australian Government outlines principles and expectations for the creation and management of government business information. Further guidance relating to AI records is available on the NAA website under Information Management for Current, Emerging and Critical Technologies.
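As an illustration only, the sketch below shows one way a system might capture AI interaction records in an append-only JSON Lines log so they remain auditable. The field names and format are assumptions for the example, not NAA requirements; consult your records authority for what must actually be kept.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_interaction_records.jsonl")  # illustrative location

def record_ai_interaction(model_name: str, model_version: str,
                          prompt: str, raw_output: str) -> None:
    """Append one AI interaction to an auditable, append-only log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_name": model_name,
        "model_version": model_version,
        "prompt": prompt,          # prompts and inputs may be Commonwealth records
        "raw_output": raw_output,  # raw outputs may be records too
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

record_ai_interaction("example-model", "1.2.0",
                      "Summarise this report...", "The report finds...")
```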

AI documentation types

Where suitable, you should consider creating the following forms of documentation for any AI system you build. If you are procuring an AI system from an external provider, it may be appropriate to request these documents as part of your tender process.

System factsheet/model card

A system factsheet (sometimes called a model card) is a short document designed to provide an overview of an AI system to non-technical audiences (such as users, members of the public, procurers, and auditors). These factsheets usually include information about the AI system’s purpose, intended use, limitations, training data, and performance against key metrics. 

Examples of system factsheets include Google Cloud Model Cards and IBM AI factsheets.
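To make the typical contents concrete, here is a minimal sketch of a system factsheet expressed as a Python data structure. The fields mirror the items described above; the names and values are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class SystemFactsheet:
    """Minimal model card capturing the fields described above."""
    purpose: str
    intended_use: str
    limitations: list[str]
    training_data: str
    performance_metrics: dict[str, float] = field(default_factory=dict)

factsheet = SystemFactsheet(
    purpose="Triage incoming correspondence by topic",
    intended_use="Internal routing; outputs reviewed by staff",
    limitations=["English-language text only", "Not for decision-making"],
    training_data="De-identified correspondence, 2020-2023",
    performance_metrics={"accuracy": 0.91, "macro_f1": 0.88},
)
```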

Datasheets

Datasheets are documents completed by dataset creators to provide an overview of the data used to train and evaluate an AI system. Datasheets provide key information about the dataset including its contents, data owners, composition, intended uses, sensitivities, provenance, labelling and representativeness.

Examples of datasheets include Google’s AI data cards and Microsoft’s Aether Data Documentation template.
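By way of illustration, a datasheet's key information could be captured in a simple template like the one below. The keys mirror the items listed above and the values are invented for the example.

```python
# Illustrative datasheet template; keys follow the information listed above.
datasheet = {
    "contents": "500,000 de-identified service requests",
    "data_owner": "Example Agency data custodian",
    "composition": {"text_records": 500_000, "labelled": 120_000},
    "intended_uses": ["Training topic classifiers"],
    "sensitivities": ["Free-text fields may contain personal information"],
    "provenance": "Extracted from case-management system, 2020-2023",
    "labelling": "Manual labels applied by trained reviewers",
    "representativeness": "Skewed towards metropolitan service centres",
}
```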

System decision registries

System decision registries record key decisions made during the development and deployment of an AI system. These registries contain information about what decisions were made, when they were made, who made them and why they were made (the decision rationale). 

Examples of decision registries include Atlassian’s DACI decision documentation template and Microsoft’s Design Decision Log.
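The sketch below shows one lightweight way such a registry could be kept, as a CSV file with a row per decision. The approach and column names are illustrative only.

```python
import csv
from datetime import date
from pathlib import Path

REGISTRY = Path("ai_decision_registry.csv")
COLUMNS = ["date", "decision", "decided_by", "rationale"]

def log_decision(decision: str, decided_by: str, rationale: str) -> None:
    """Append a key development or deployment decision to the registry."""
    new_file = not REGISTRY.exists()
    with REGISTRY.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        if new_file:
            writer.writeheader()  # write the header once, on first use
        writer.writerow({
            "date": date.today().isoformat(),
            "decision": decision,
            "decided_by": decided_by,
            "rationale": rationale,
        })

log_decision("Adopt human review of all outputs",
             "AI governance board",
             "Outputs inform decisions affecting the public")
```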

Documentation in relation to reliability and safety

It is also best practice to maintain documentation on the testing, piloting, and monitoring and evaluation of your AI system and use case, in line with the practices outlined in section 5.

See Implementing Australia’s AI Ethics Principles for more on AI documentation.

7.4    Disclosing AI interactions and outputs

You should design your use case to inform people (including members of the public, APS staff and decision-makers) that they are interacting with an AI system or are being exposed to content that has been generated by AI.

When to disclose use of AI

You should ensure that you disclose when a user is directly interacting with an AI system, especially: 

  • when AI plays a significant role in critical decision-making processes
  • when AI has potential to influence opinions, beliefs or perceptions
  • where there is a legal requirement regarding AI disclosure
  • where AI is used to generate recommendations for content, products or services.

You should ensure that you disclose when someone is being exposed to AI-generated content and:

  • any of the content has not been through a contextually appropriate degree of fact-checking and editorial review by a human with the appropriate skills, knowledge or experience in the relevant subject matter
  • the content purports to portray real people, places or events or could be misinterpreted that way
  • the intended audience for the content would reasonably expect disclosure.

Exercise judgment and consider the level of disclosure that the intended audience would expect, including where AI-generated content has been through rigorous fact-checking and editorial review. Err on the side of greater disclosure – norms around appropriate disclosure will continue to develop as AI-generated content becomes more ubiquitous.

Mechanisms for disclosure of AI interactions

When designing or procuring an AI system, you should consider the most appropriate mechanism(s) for disclosing AI interactions. Some examples are outlined below:

Verbal or written disclosures

Verbal or written disclosures are statements that are heard by or shown to users to inform them that they are interacting with (or will be interacting with) an AI system.

For example, disclaimers, warnings, specific clauses in privacy policies and/or terms of use, content labels, visible watermarks, by-lines, physical signage and communication campaigns.

Behavioural disclosures

Behavioural disclosure refers to the use of stylistic indicators that help users identify that they are engaging with AI-generated content. These indicators should generally be used in combination with other forms of disclosure.

For example, using clearly synthetic voices, formal and structured language, or robotic avatars.

Technical disclosures

Technical disclosures are machine-readable identifiers for AI‑generated content.

For example, inclusion in metadata, technical watermarks, cryptographic signatures.

Agencies should consider using AI systems that use industry-standard provenance technologies, such as those aligned with the standard developed by the Coalition for Content Provenance and Authenticity (C2PA).
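As a simplified illustration of the idea (not the C2PA standard itself), the sketch below attaches machine-readable provenance metadata to generated content and signs it with an HMAC so downstream systems can detect tampering. The field names and key handling are assumptions for the example.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"replace-with-a-securely-managed-key"  # assumes key management exists

def attach_provenance(content: str, generator: str, model_version: str) -> dict:
    """Wrap AI-generated content with signed, machine-readable provenance."""
    metadata = {
        "generator": generator,
        "model_version": model_version,
        "ai_generated": True,
        "content_sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
    }
    # Sign the metadata; a verifier recomputes this HMAC over the metadata
    # (minus the signature field) and compares.
    payload = json.dumps(metadata, sort_keys=True).encode("utf-8")
    metadata["signature"] = hmac.new(SIGNING_KEY, payload,
                                     hashlib.sha256).hexdigest()
    return {"content": content, "provenance": metadata}

record = attach_provenance("Generated summary text...",
                           "example-agency-assistant", "2.1")
```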

7.5    Offer appropriate explanations

Explainability refers to accurately and effectively conveying an AI system’s decision process to a stakeholder, even if they do not fully understand the specifics of how the model works. Explainability facilitates transparency, independent expert scrutiny and access to justice.

You should be able to clearly explain how a government decision or outcome has been made or informed by AI to a range of technical and non-technical audiences. You should also be aware of any requirements in legislation to provide reasons for decisions, both generally and in relation to the particular class of decisions that you are seeking to make using AI.

Explanations may apply globally (how a model broadly works) or locally (why the model has come to a specific decision). You should determine which is more appropriate for your audience. 

Principles for providing effective explanations

Effective explanations are:

  • Contrastive: outline why the AI system output one outcome instead of another.
  • Selective: focus on the most relevant factors contributing to the AI system’s decision process.
  • Consistent with the audience’s understanding: align with the audience’s level of technical (or non-technical) background.
  • Generalisable to similar cases: help the audience predict what the AI system will do in comparable cases.
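To show how these principles can shape an explanation in practice, here is a toy sketch for a hypothetical single-rule eligibility model. The rule, threshold and wording are entirely illustrative and not drawn from any real program.

```python
INCOME_THRESHOLD = 50_000  # illustrative decision rule, not a real policy

def explain_decision(income: float) -> str:
    """Produce a contrastive, selective explanation for a single-rule model."""
    if income >= INCOME_THRESHOLD:
        return (f"Approved because income (${income:,.0f}) meets the "
                f"${INCOME_THRESHOLD:,} threshold.")
    # Contrastive: state what would have changed the outcome.
    shortfall = INCOME_THRESHOLD - income
    return (f"Not approved because income (${income:,.0f}) is "
            f"${shortfall:,.0f} below the ${INCOME_THRESHOLD:,} threshold. "
            f"Applications with income at or above ${INCOME_THRESHOLD:,} "
            f"are approved.")

print(explain_decision(42_000))
```

The explanation is contrastive (it states the alternative outcome and what would change it), selective (it focuses on the single decisive factor) and generalises to similar cases.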

You may wish to refer to Interpretable Machine Learning: A Guide for Making Black Box Models Explainable for further advice and examples.

Tools for explaining non-interpretable models

While explanations for interpretable models (i.e. low-complexity models with clear parameters) are relatively straightforward, in practice most AI systems have low interpretability and require effective post-hoc explanations that strike a balance between accuracy and simplicity. Among other matters, agencies should also consider what timeframes for providing explanations are appropriate in the context of their use case.

Below are some tools and approaches that can assist with developing explanations. Note that explainable AI algorithms are not the only way to improve system explainability; designing effective explanation interfaces, for example, can also help.

  • Local explanations
  • Global explanations
  • Example-based methods (for example, contrastive and counterfactual explanations, and data explorers/visualisation)
  • Model-agnostic methods
  • Feature-importance methods
  • Methods designed specifically for neural-network interpretation
  • Methods designed specifically for deep learning in cloud environments
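To illustrate one model-agnostic, feature-importance approach, the sketch below applies scikit-learn’s permutation_importance to a non-interpretable model. It assumes scikit-learn is available and uses synthetic data; it is one of many possible tools, not a recommended default.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real (non-interpretable) use case.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Global explanation: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f} "
          f"(+/- {result.importances_std[i]:.3f})")
```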

Advice on appropriate explanations is available in the NAIC’s Implementing Australia’s AI Ethics Principles report.

8. Contestability
