Risk assessment for use of AI
Risk matrix
Likelihood / Consequence | Insignificant | Minor | Moderate | Major | Severe
---|---|---|---|---|---
Almost certain | Medium | Medium | High | High | High
Likely | Medium | Medium | Medium | High | High
Possible | Low | Medium | Medium | High | High
Unlikely | Low | Low | Medium | Medium | High
Rare | Low | Low | Low | Medium | Medium
Figure 1: Risk matrix for use of AI
Using the risk matrix, determine the severity of each risk. When considering consequence and likelihood, consult with relevant stakeholders. The risk assessment should reflect the intended scope, function and risk controls of the AI use case.
The following are examples of risks that an agency can consider as part of its assessment.
What is the risk that the use of AI:
- negatively affects public accessibility or inclusivity of government services
- unfairly discriminates against individuals or communities
- perpetuates stereotypes or demeaning representations of individuals or communities
- causes harm to individuals, communities, businesses or the environment
- results in privacy concerns due to the sensitivity of the data being manipulated, parsed or transformed by the system
- results in security concerns due to the sensitivity or classification of the data being manipulated, parsed or transformed by the system
- results in security concerns due to the implementation, sourcing or characteristics of the system
- influences decision-making that affects individuals, communities, businesses or the environment
- poses a reputational risk or undermines public confidence in government
- results in intellectual property concerns due to the system manipulating, transforming or reproducing material for which a third party owns copyright.
Agencies should refer to existing risk management frameworks, such as the Commonwealth Risk Management Policy and internal agency risk management approaches, for guidance on assessing likelihood and consequence under the risk matrix at Figure 1.