Have you ever heard someone make a statement or argument so convincing that you assume it must be true? But then… you later find out… it’s not?
It’s not really lying, because the person genuinely believes what they are telling you. They are simply wrong.
Well, AI can do the same thing — and it is called AI hallucination.
Consider the 'Emu War'. This real event involved Australian soldiers and an overpopulation of emus in Western Australia in 1932. However, if prompted incorrectly, AI might fabricate details around this event — it might present you with a tale involving emus, soldiers and, perhaps, a medieval knight.
While imaginative, such fabrications distort the truth, which presents obvious issues for students (and, well, anyone) relying on ChatGPT as a single source of truth.
Understanding what causes these hallucinations, and making our own adjustments where possible, can help limit their impact.
One cause of AI hallucination is insufficient or low-quality training data, which can produce unreliable or biased models and degrade performance. While data curation is a concern for the OpenAIs of this world, understanding this limitation helps us explain and emphasise the importance of source verification and cross-referencing in research.
Overfitting is another cause of AI hallucination. It occurs when an AI model is so closely tailored to its training data that it loses the ability to generalise.
For example, if a model is trained on a dataset that predominantly contains photos of dogs in the outdoors, it may learn to associate the presence of grass with the image of a dog. As a result, when presented with a new image of a dog in a different context, such as inside a room, the model may be unable to identify the dog because of the lack of grass.
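For readers who want to see the mechanics, here is a minimal sketch of overfitting in Python (using only numpy; the numbers and the sine-curve “ground truth” are invented purely for illustration). A high-degree polynomial matches a handful of noisy training points almost perfectly, yet typically does worse than a simpler model on new data drawn from the same pattern:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training data": ten noisy samples of a simple underlying pattern.
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, size=10)

# New, unseen inputs from the same underlying pattern.
x_new = np.linspace(0, 1, 100)
y_true = np.sin(2 * np.pi * x_new)

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_error = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    new_error = np.mean((np.polyval(coeffs, x_new) - y_true) ** 2)
    # The degree-9 fit "memorises" the noise: tiny training error,
    # but usually a much larger error on data it has not seen before.
    print(f"degree {degree}: training error {train_error:.4f}, "
          f"error on new data {new_error:.4f}")
```

The dog-and-grass example above is the image-recognition version of the same failure: the model has latched onto incidental details of its training data rather than the underlying pattern.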
In addition to these model-side causes, AI hallucinations can also result from poor prompting by the user. Slang, for example, may not be understood by the model, yielding irrelevant or hallucinated responses.
This can undermine a student’s grasp of a topic, and it poses a threat to academic integrity if inaccurate information is cited in assignments or exams.
The latter is more common than you would expect.
I asked ChatGPT to produce a citation justifying a two-day work week for teachers. The result:
“Smith, J., & Johnson, L. (2023). Rethinking the Educational Workweek: The Case for a Two-Day Schedule for Teachers. Journal of Progressive Education, 58(2), 145-163.”

“Two more!” I requested. It obliged:
“Johnson, L., & Carter, H. (2024). "Balancing Teacher Workload and Learning Outcomes in a Shortened Workweek," Educational Reform Quarterly, 11(1), 88-104.”
“Martinez, S. (2023). "Exploring the Impact of Reduced Teaching Hours on Teacher Well-being and Student Achievement," Global Education Review, 19(4), 200-225.”
But when I went to verify the sources, they did not exist (sadly…!). ChatGPT had simply done what I asked: it produced citations, not actual research articles. This is a classic example of poor prompting resulting in fictitious information.
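There is also a quick, practical way to check whether a reference like this exists at all. The sketch below (in Python) queries Crossref, a free public index of scholarly publications; the function name is my own, and the endpoint and query parameters reflect Crossref’s public REST API as commonly documented, so confirm the details against crossref.org before relying on it:

```python
import requests

def crossref_matches(title: str, rows: int = 3) -> list[str]:
    """Return titles of the closest records Crossref can find for a claimed citation."""
    response = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    response.raise_for_status()
    items = response.json()["message"]["items"]
    return [item["title"][0] for item in items if item.get("title")]

# One of the citations ChatGPT produced above:
claimed = ("Rethinking the Educational Workweek: "
           "The Case for a Two-Day Schedule for Teachers")
print(crossref_matches(claimed))
```

If nothing resembling the claimed title comes back, treat the citation with suspicion and check the journal’s own website or a library database before using it.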
If AI is to become a fixture in the way students learn and write, it is our role as teachers to educate them on how to put it to best use.
Guiding and prompting AI is a skill in and of itself: it requires the ability to write concisely, offer context and use direct, instructional language. Honing this skill will benefit students both in their schooling and when they enter the workforce.
When working with AI tools, students should consider the following (a worked example follows this list):
Providing all relevant information to the model, including any specific data and sources, so that the tool has the context it needs to generate accurate results.
Looking for ways to reinforce the context of a prompt, such as through Retrieval-Augmented Generation (RAG) or by including several examples in the prompt itself.
Creating data templates for numerical tasks. Providing a structured data template (like a table) can guide the AI in making correct calculations, reducing chances of numerical hallucinations.
Assigning the AI a specific ‘role’, such as a climate scientist explaining the Great Barrier Reef’s bleaching, to narrow the scope of its responses and encourage factual accuracy.
Communicating clearly what is and is not wanted. Telling the tool what to leave out can be effective: for example, asking for a social analysis of housing affordability in Sydney without focusing on political aspects.
Using simple, direct language. Clear, concise, easy-to-understand prompts (no slang!) help minimise the risk of misinterpretation and hallucination.
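Put together, these tips amount to giving a prompt a clear structure. Here is a minimal sketch in Python that simply assembles the text of such a prompt; the topic, role, source and wording are placeholders, and the result could be pasted into ChatGPT or sent through whichever tool a school uses:

```python
# Assemble a structured prompt: a role, context and sources, a data
# template, and explicit exclusions. Everything here is illustrative.

role = "You are a climate scientist explaining coral bleaching to Year 9 students."

context = (
    "Use only the source material below. If the answer is not in it, "
    "say so rather than guessing.\n"
    "Source: the reef-monitoring summary supplied by the teacher."
)

data_template = (
    "Report any figures in this table:\n"
    "| Region | Observed change | Evidence from the source |\n"
    "|--------|-----------------|--------------------------|"
)

exclusions = "Do not discuss political parties or policy debates."

question = "Question: What has happened to coral cover on the Great Barrier Reef?"

prompt = "\n\n".join([role, context, data_template, exclusions, question])
print(prompt)  # paste into ChatGPT, or send through an API of your choice
```

The point is not the code itself but the habit it encourages: every prompt states who the AI is, what it may draw on, how to present the answer, and what to leave out.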
Cross-checking information and scrutinising sources keeps a ‘human in the loop’, and that human oversight is critical to responsible AI use.
In Australian classrooms, this means teaching students to ‘fact check’ AI with reliable sources — official Australian curriculum textbooks, government websites, and peer-reviewed journals are a good place to start.
This practice not only improves accuracy but also instils a habit of seeking multiple perspectives and sources. As we enter the age of deepfakes and disinformation, that skill set is more important than ever.