4. Fairness
4.1 Defining fairness
Fairness is a core principle in the design and use of AI systems, but it is a complex and contextual concept. Australia’s AI Ethics Principles state that AI systems should be inclusive and accessible and should not involve or result in unfair discrimination. However, there are different and sometimes conflicting definitions of fairness, and people may disagree on what is fair.
For example, there is a distinction between individual fairness (treating individuals similarly) and group fairness (similar outcomes across different demographic groups). Different approaches to fairness involve different trade‑offs and value judgements. The most appropriate fairness approach will depend on the specific context and objectives of your AI use case.
When defining fairness for your AI use case, you should be aware that AI models are typically trained on broad sets of data that may contain bias. Bias can arise when data is incomplete or unrepresentative, or when it reflects societal prejudices. AI models may reproduce biases present in their training data, which can lead to misleading or unfair outputs, insights or recommendations.
This may disproportionately impact some groups, such as First Nations people, people with disability, LGBTIQ+ communities and multicultural communities. For example, an AI tool used to screen job applicants might systematically disadvantage people from certain backgrounds if trained on hiring data that reflects past discrimination.
When defining fairness for your AI use case, it is recommended that you:
- consult relevant domain experts, affected parties and stakeholders (such as those you have identified at section 2.4) to help you understand the trade‑offs and value judgements that may be involved
- document your definition of fairness in your response to section 4.1, including how you have balanced competing priorities and why you believe it to be appropriate to your use case
- be transparent about your fairness definition and be open to revisiting it based on stakeholder feedback and real‑world outcomes.
You should also ensure that your definition of fairness complies with anti‑discrimination laws. In Australia, it is unlawful to discriminate on the basis of a number of protected attributes including age, disability, race, sex, intersex status, gender identity and sexual orientation in certain areas of public life, including education and employment. Australia’s federal anti‑discrimination laws are contained in the following legislation:
- Age Discrimination Act 2004
- Disability Discrimination Act 1992
- Racial Discrimination Act 1975
- Sex Discrimination Act 1984.
Resources
- Resources on fairness in AI from the OECD Catalogue of Tools & Metrics for Trustworthy AI
- Fairness Assessor Metrics Pattern from the CSIRO Data61 Responsible AI Pattern Catalogue
4.2 Measuring fairness
You may be able to use a combination of quantitative and qualitative approaches to measuring fairness. Quantitative fairness metrics allow you to compare outcomes across different groups and assess them against your fairness criteria. Qualitative assessments, such as stakeholder engagement and expert review, can provide additional context and surface issues that metrics alone might miss.
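As a minimal sketch of what a quantitative comparison can look like, the example below computes two commonly cited group fairness measures (a demographic parity difference and a disparate impact ratio) for a binary decision across two groups. The decision data and group labels are invented for illustration only; the metrics you choose should follow from the fairness definition you documented at 4.1.

```python
# A minimal sketch of a quantitative group fairness comparison, assuming
# binary decisions (1 = favourable outcome) and a single sensitive attribute
# with two groups. All data below is invented for illustration only.

def selection_rate(decisions):
    """Proportion of favourable outcomes in a group."""
    return sum(decisions) / len(decisions)

# Hypothetical screening decisions for two demographic groups.
group_a = [1, 0, 1, 1, 0, 1, 1, 0]
group_b = [0, 0, 1, 0, 0, 1, 0, 0]

rate_a = selection_rate(group_a)
rate_b = selection_rate(group_b)

# Demographic parity difference: gap between the groups' favourable-outcome rates.
parity_difference = abs(rate_a - rate_b)

# Disparate impact ratio: the lower selection rate divided by the higher one.
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"Selection rate, group A: {rate_a:.2f}")                    # 0.62
print(f"Selection rate, group B: {rate_b:.2f}")                    # 0.25
print(f"Demographic parity difference: {parity_difference:.2f}")   # 0.38
print(f"Disparate impact ratio: {impact_ratio:.2f}")               # 0.40
```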
Quantifying fairness
The specific quantitative metrics you use to measure fairness will depend on the definition of fairness you have adopted for your use case. When selecting fairness metrics, you should:
- choose metrics that align with your fairness definition, recognising the trade‑offs between different fairness criteria and other objectives like accuracy
- confirm whether you have appropriate data to assess those metrics, including sensitive attributes where appropriate (see Australian Privacy Principle 3.3)
- set clear and measurable acceptance criteria (see guidance for 5.4)
- establish a plan for monitoring these metrics (see 5.6) and processes for remediation, intervention or safely disengaging the AI system if those thresholds are not met (a simple sketch of such a threshold check follows this list).
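The sketch below illustrates the last two points: pre‑agreed acceptance criteria are recorded as thresholds, and observed metric values are checked against them as part of routine monitoring. The metric names, threshold values and response shown here are illustrative assumptions only, not recommended values.

```python
# A minimal sketch of checking observed fairness metrics against pre-agreed
# acceptance criteria during monitoring. The metric names, thresholds and
# response are illustrative assumptions, not recommended values.

ACCEPTANCE_CRITERIA = {
    # metric name: maximum acceptable value
    "demographic_parity_difference": 0.10,
    "equal_opportunity_difference": 0.10,
}

def breached_criteria(observed_metrics):
    """Return a description of each metric that exceeds its threshold."""
    breaches = []
    for metric, threshold in ACCEPTANCE_CRITERIA.items():
        value = observed_metrics.get(metric)
        if value is not None and value > threshold:
            breaches.append(f"{metric}: {value:.2f} exceeds threshold {threshold:.2f}")
    return breaches

# Example monitoring run with hypothetical observed values.
breaches = breached_criteria({
    "demographic_parity_difference": 0.14,
    "equal_opportunity_difference": 0.06,
})

if breaches:
    # In practice a breach would trigger the documented remediation,
    # intervention or disengagement process, not just a console message.
    print("Fairness acceptance criteria breached:")
    for breach in breaches:
        print(" -", breach)
else:
    print("All fairness metrics are within acceptance criteria.")
```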
For examples of commonly used fairness metrics, see the Fairness Assessor Metrics in CSIRO Data61’s Responsible AI Pattern Catalogue.
Qualitatively assessing fairness
Consider the following qualitative approaches, which can help you overcome data limitations and surface issues that metrics may overlook.
Stakeholder engagement
Consult affected communities, stakeholders and domain experts to understand their perspectives and identify potential issues.
User testing and feedback
Test your AI system with diverse users and solicit their feedback on the fairness and appropriateness of the system’s outputs. Seek out the perspectives of marginalised groups and those groups that may be impacted by the AI system.
Expert review
Engage experts, such as AI ethicists or accessibility and inclusivity specialists, to review the fairness of your system's outputs and your overall fairness approach, and to identify potential gaps or unintended consequences.
Resources
- Implementing Australia’s AI Ethics Principles: provides tools and techniques for measuring and minimising bias in AI systems
- List of fairness metrics at Supplementary Table 1 of the research paper A translational perspective towards clinical AI fairness
- Resources on fairness in AI from the OECD Catalogue of Tools & Metrics for Trustworthy AI
- Fairness Assessor Metrics Pattern from the CSIRO Data61 Responsible AI Pattern Catalogue