Understanding Bias in AI
Bias is an unfair preference or prejudice for or against something or someone. Biases can be explicit, in that you are consciously and deliberately aware of them — or implicit, in that you are not.
They are essentially patterns of thinking that influence our perceptions, decisions and interactions.
Generative AI mirrors the complexities of the human mind — the positives and the negatives. And so, AI is also prone to producing biased outputs.
To understand AI bias, it is essential to distinguish between explicit and implicit biases, because each seeps into AI systems differently.
Explicit biases are conscious, intentional, deliberate. They are the prejudices or beliefs we are aware of and might openly express. They often reflect our conscious stance on various issues.
In AI, explicit biases might manifest when the data used to train the system is intentionally skewed, or when the system is designed to produce outcomes that favour certain groups over others.
For instance, when companies rely on AI-driven hiring systems to screen job applicants, some models are intentionally designed to favour candidates from specific universities. When the AI evaluates resumes, it assigns higher scores to candidates who list those universities.
This is explicit bias — it represents a conscious decision made to favour certain candidates over others, rather than completing an unbiased assessment of skills and suitability.
While explicit bias can still compromise fairness, implicit bias is arguably more insidious, as it can influence outcomes without our awareness.
If explicit bias is the visible tip of the iceberg, implicit bias is the murky, uncertain mass beneath.
Implicit biases operate under the radar. They are unconscious attitudes or stereotypes that impact our decisions and behaviours in ways we might not recognise. They are often reflections of societal norms or stereotypes that have developed over time.
And they can creep into AI.
In our own experiment with DALL·E, an advanced AI image generator, we input the prompt "leader" and received an image of a man in a business suit.
And when we changed the prompt to “female leader”, it yielded an image of Cleopatra.
The AI's portrayal of a generic leader as a man in a suit, and a female leader as a historical figure, reflects ingrained societal stereotypes.
These biases originate from the AI's training data. In this case, OpenAI's models were trained on the text and code available on the internet, much of which was produced by educated, white, American men. Hence: bias.
Since AI has no awareness or intent of its own, the onus rests on the companies deploying these platforms to build safeguards that minimise implicit bias, and that work is underway.
Understanding where bias can occur helps in combating it.
In the training pipeline of Large Language Models (LLMs), bias can creep in at various stages. Primarily, this occurs during the collection or preparation of data.
Generative AI learns from the data it's fed, making the quality, diversity, and representativeness of this data extremely important. If the data used to train an AI algorithm is not diverse or inclusive, the AI's outputs may reflect these limitations.
Amazon's recruiting tool, for example, which used artificial intelligence to score job candidates, was found to favour men over women. This occurred because Amazon's models were trained on patterns in the resumes of its existing engineers, most of whom were men.
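What such a check might look like in practice can be shown with a few lines of code. Below is a minimal Python sketch, using a small, entirely hypothetical set of training records tagged with a demographic attribute; the point is simply that counting who is represented in the data, before any model is trained, can surface obvious imbalances like the one in Amazon's case.

```python
from collections import Counter

# Hypothetical training records, each tagged with a demographic attribute.
# In a real pipeline these would be loaded from the dataset used to train the model.
training_records = [
    {"candidate_id": 1, "gender": "male"},
    {"candidate_id": 2, "gender": "male"},
    {"candidate_id": 3, "gender": "female"},
    {"candidate_id": 4, "gender": "male"},
    {"candidate_id": 5, "gender": "male"},
]

# Count how often each group appears in the training data.
counts = Counter(record["gender"] for record in training_records)
total = sum(counts.values())

# Report each group's share. A large imbalance is a warning sign that the
# model may learn patterns that disadvantage the under-represented group.
for group, count in counts.items():
    print(f"{group}: {count / total:.0%} of training records")
```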
Bias may also creep in at the point of data labelling, the process whereby raw data is annotated to make it understandable for AI. If the annotators' perspectives or interpretations shape the labels, this can influence the way the AI behaves.
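One simple, if rough, way to surface this is to have more than one annotator label the same items and measure how often they agree. The sketch below uses two hypothetical annotators; low agreement suggests the labels are carrying individual interpretation rather than a shared standard.

```python
# Hypothetical sentiment labels assigned to the same five items by two annotators.
annotator_a = ["positive", "negative", "positive", "positive", "negative"]
annotator_b = ["positive", "positive", "negative", "positive", "negative"]

# Simple percentage agreement: the share of items the annotators label identically.
matches = sum(a == b for a, b in zip(annotator_a, annotator_b))
agreement = matches / len(annotator_a)

print(f"Annotators agree on {agreement:.0%} of items")
```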
We may not identify AI bias until the point of deployment.
New systems must be tested with diverse inputs and monitored for biased outcomes to ensure implicit biases are identified and corrected.
For example, AI models built to predict liver disease from blood tests have been found to be twice as likely to miss disease in women as in men, indicating a gender bias in the algorithms. This was only identified once the system was in use, flagging a need for the training data to be interrogated.
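Checks of this kind can be approximated with very little code. The sketch below uses a tiny, hypothetical validation set to compare how often a model misses genuine cases for each group, mirroring the kind of gap reported in the liver disease study.

```python
# Hypothetical validation records: the patient's sex, the true label
# (1 = disease present, 0 = absent) and the model's prediction.
validation = [
    {"sex": "female", "actual": 1, "predicted": 0},
    {"sex": "female", "actual": 1, "predicted": 1},
    {"sex": "female", "actual": 0, "predicted": 0},
    {"sex": "male",   "actual": 1, "predicted": 1},
    {"sex": "male",   "actual": 1, "predicted": 1},
    {"sex": "male",   "actual": 0, "predicted": 0},
]

def miss_rate(records, group):
    """Share of genuine cases in a group that the model failed to detect."""
    positives = [r for r in records if r["sex"] == group and r["actual"] == 1]
    missed = [r for r in positives if r["predicted"] == 0]
    return len(missed) / len(positives) if positives else 0.0

for group in ("female", "male"):
    print(f"{group}: {miss_rate(validation, group):.0%} of genuine cases missed")
```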
Biases can manifest in various forms, each with its own set of challenges and implications.
Understanding these biases is crucial for teachers who want to create a fair and inclusive learning environment.
- Selection bias occurs when the data used to train an AI system isn't representative of the population or reality it's meant to model (see the sketch after this list). This form of bias was observed in a fraud detection model implemented by a regional bank. The model oversampled applicants over the age of 45, creating an unfair association between age and fraudulent activity and leading to such customers being increasingly denied loans.
- Confirmation bias occurs when a model is overly reliant on trends or beliefs already present in the training data, leading it to reinforce existing biases and overlook new, emerging patterns. This can lead to an ‘echo chamber’ effect, where the AI system provides information that confirms existing views.
- Measurement bias happens when the data collected is inaccurate, often due to errors or faults within the measurement process. For example, a blood pressure cuff giving falsely high readings would lead to an overestimation of medical risk, creating measurement bias if used to train a model.
- Stereotyping bias occurs when an AI system reinforces harmful stereotypes, often by overgeneralising from the data it was trained on. Stable Diffusion, a generative image model, has been found to perpetuate such bias, exaggerating racial and gender stereotypes in the images it creates.
- Out-group homogeneity bias describes the tendency to perceive members of an ‘out-group’, or minority group, as more similar to each other than members of an ‘in-group’. This can produce or reinforce stereotypical and simplified representations of people from underrepresented groups, as seen in the example of the DALL·E model consistently rendering Indigenous Australian men as living in the desert, wearing face paint.
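To make the first of these, selection bias, more concrete, the sketch below compares the share of a group in a hypothetical training sample with its share in the population the model is meant to represent. The figures are invented; the point is that a large gap between the two is a warning sign before any model is trained.

```python
# Hypothetical figures: the share of loan applicants over 45 in the collected
# training sample versus in the bank's actual customer base.
sample_share_over_45 = 0.62      # proportion of the training sample
population_share_over_45 = 0.31  # proportion of the real customer base

# A large gap means the sample does not reflect the population: selection bias.
gap = sample_share_over_45 - population_share_over_45
print(f"Customers over 45 are over-represented by {gap:.0%} in the training sample")
```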
Teachers are uniquely positioned to use AI bias as a teaching tool to promote fairness and inclusivity.
AI's role in the education system is steadily growing and will only become more significant.
When these systems are deployed by those who understand and can navigate the bias, they become extremely powerful and useful learning tools.
As teachers, our role is to create a classroom that critically analyses bias:
- Integrate discussions about AI bias into the curriculum, focussing on technology, ethics, and critical thinking subjects.
- When using Generative AI in class, proactively highlight what aspects of bias students should be aware of.
- Ensure students comprehend the inherent biases in these systems and understand that AI outputs are shaped by the quality and nature of the input data.
- Utilise real-life instances of AI bias, like the DALL·E and Amazon cases, to illustrate the real-world impact of bias in AI systems.
- Encourage students to critically analyse AI-generated content by questioning missing perspectives, especially from marginalised communities, and considering alternative explanations.
- Assign tasks in which students compare AI-generated content with other sources, such as community resources, to assess its accuracy, uncover biases, and build an understanding of how reliable AI outputs are.
- Prompt students to reflect on their learning activities, using statements like "I used to think... now I think..." to help them recognise the significance of critically evaluating AI outputs and acknowledging their evolving viewpoints.
- Guide students in contemplating potential rules and safeguards that could make AI systems less biased, less harmful, and more accountable, thereby fostering responsible decision-making and ethical awareness concerning AI bias.
Transparency in AI operations needs to be prioritised.
Transparency enables users to understand how decisions are made. This fosters trust and enables the identification and correction of biases.
When determining whether an AI system promotes transparency and accountability, teachers should look for systems that:
- Allow teachers to understand and scrutinise the AI system's decision-making processes.
- Incorporate mechanisms for teachers to address and correct algorithmic bias by validating or overriding AI decisions.
- Prioritise continuous monitoring, feedback and improvement.
- Implement regular assessments to identify and address biases or inaccuracies in outputs.
- Establish feedback loops that allow students and teachers to report concerns or biases observed (see the sketch after this list).
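As a rough illustration of the last two points, the sketch below shows one way a classroom tool might log AI suggestions, teacher decisions and reported concerns. Everything here is hypothetical, but keeping such a record creates a simple feedback loop: patterns in the overrides and concerns can reveal bias over time.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionLog:
    """A hypothetical log pairing each AI suggestion with the teacher's decision."""
    entries: list = field(default_factory=list)

    def record(self, ai_suggestion, teacher_decision, concern=None):
        # Store the suggestion, what the teacher actually did, and any concern raised.
        self.entries.append({
            "ai_suggestion": ai_suggestion,
            "teacher_decision": teacher_decision,
            "concern": concern,
        })

    def override_rate(self):
        # The share of AI suggestions the teacher chose not to accept.
        if not self.entries:
            return 0.0
        overridden = [e for e in self.entries if e["ai_suggestion"] != e["teacher_decision"]]
        return len(overridden) / len(self.entries)

log = DecisionLog()
log.record("flag essay as off-topic", "accept as on-topic",
           concern="penalised a non-standard dialect")
log.record("mark answer correct", "mark answer correct")
print(f"Teacher overrode {log.override_rate():.0%} of AI suggestions")
```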
Understanding and mitigating bias in the field of AI is a moral imperative.
Understanding what bias is, and being able to identify it, is the first step. As teachers, we have the ability to educate the next generation on what comes next: mitigation.
We can, and should, be equipping students with the tools they need to engage with this technology in a positive way.