What do teachers need to know about deepfakes?
As humans, we can detect deepfake speech only about 73% of the time. And as the technology improves, that figure is likely to fall.
AI systems like GPT-4 and Midjourney are blurring the line between truth and deception, making it increasingly difficult to discern what's real and what's not.
In short, our inability to tell the difference between ‘real’ and ‘fake’ digital content is not our fault — it is a reflection of the ever-evolving AI landscape we find ourselves in.
So… what are deepfakes?
Deepfakes, a portmanteau of 'deep learning' and 'fake', are synthetic media that manipulate or fabricate visual and audio content with a high degree of accuracy.
At the heart of deepfakes is the ability to convincingly portray individuals doing or saying things — using their voice intonations and mannerisms — that they never actually did.
These can be harmless, as in this example featuring Tom Cruise, or they can depict a person making statements that are entirely out of character, factually incorrect, and even incriminating. Our inability to reliably identify a deepfake makes the latter all the more terrifying.
In 2023, students in Carmel, New York, were accused of creating a deepfake AI video of a middle school principal making racist and violent comments and threats. The video caused significant damage to the principal's reputation, despite being entirely fabricated.
How does deepfake technology actually work?
At the core of deepfake technology are two pivotal components: the generator and the discriminator.
The generator is essentially an AI ‘artist’ with devious intent. It is a computer system that can create counterfeit images or videos from scratch, or alter existing ones. This ‘generator’ component has been trained on a vast dataset of real images or videos, and exists to fulfil a single goal: to produce a creation so refined, so sophisticated, that it is indistinguishable from the real thing.
On the other side is the discriminator. This component of the technology acts as a discerning critic — it scrutinises the generator’s creations, identifying and flagging signs of forgery.
The interplay between these two components is continuous. The discriminator identifies a fake; the generator takes this as feedback and adjusts its technique, over and over again.
This iterative contest is the basis of Generative Adversarial Networks (GANs), the architecture that has historically driven the ever-increasing sophistication of deepfakes.
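To make the interplay concrete, here is a minimal, illustrative sketch of a GAN training loop in PyTorch. The tiny networks and the random stand-in for 'real' data are placeholder assumptions; a production deepfake system trains far larger models on millions of images or video frames.

```python
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32

# The 'artist': turns random noise into a candidate fake.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))

# The 'critic': outputs a real-vs-fake logit for each sample.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(batch, data_dim)  # placeholder for real training data
    fake = generator(torch.randn(batch, latent_dim))

    # 1. Train the critic to accept real samples and flag fakes.
    d_loss = (loss_fn(discriminator(real), torch.ones(batch, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(batch, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the artist to fool the just-updated critic.
    g_loss = loss_fn(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Each pass through the loop is one round of the contest described above: as the critic gets better at spotting forgeries, the artist is forced to produce more convincing ones.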
However, the recent advancement of AI image generation has changed the game. More and more, we are seeing the circulation of deepfakes that differ from those produced by GANs. Midjourney, for instance, is built on a diffusion model, a newer image generation technique driven by natural language commands, or, more simply, prompts.
Diffusion models represent a significant advancement for deepfake technology because they inherently understand the context of your command. A text encoder converts the prompt into a guidance signal; the model then starts from pure random noise and removes that noise step by step, steering every step towards a high-resolution image that matches the prompt.
The outputs produced by diffusion models are, put bluntly, better. They are more realistic and more accurate, and further blur the line between real and artificial creations.
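Midjourney's own system is proprietary, so as a rough, hedged illustration, here is the same prompt-to-image idea using Stable Diffusion, a comparable open diffusion model, via the `diffusers` library. The checkpoint name and prompt are illustrative choices, not anything Midjourney itself uses.

```python
# Requires: pip install diffusers transformers accelerate torch
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available diffusion model; this checkpoint is an
# illustrative stand-in for Midjourney's closed-source system.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# The pipeline first encodes the prompt (the 'context'), then runs the
# denoising loop: 30 steps from pure noise towards a matching image.
image = pipe(
    "a photorealistic portrait of a news presenter in a studio",
    num_inference_steps=30,
).images[0]
image.save("generated.png")
```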
Despite originating in the world of tech, deepfakes are now having real impacts in the ‘real world’.
When used with creative intent, deepfakes present the opportunity to enhance storytelling and improve content. The entertainment industry is already putting this into practice.
Video game developers, for instance, use this technology to clone and alter actors' voices to create highly immersive gaming experiences.
In film, producers are able to de-age actors or generate lifelike performances from those who have passed away, offering intriguing possibilities for on-screen narratives.
However, the darker side of deepfakes is more concerning, especially in the context of spreading misinformation.
The realistic nature of deepfakes makes them potent tools for propagating baseless rumours or misconceptions, with the potential to cause significant societal harm.
In the political arena, deepfakes can fabricate scenarios or speeches, potentially swaying public opinion or disrupting elections. This video of Boris Johnson produced in 2022 demonstrates exactly that, and it was generated with technology far inferior to what is used today.
Moreover, the personal implications of deepfakes can be devastating. Individuals have been targets of deepfake-generated content, placing them in compromising, yet entirely fictional, scenarios.
This form of digital manipulation has found a particularly insidious application in the creation of deepfake pornography which, terrifyingly, has even occurred amongst students.
So, as with many AI advancements, the real-world scenarios in which deepfakes are applied are dual-sided.
On the one hand, there's the potential for groundbreaking applications in entertainment and even education. On the other, there's a stark reminder of the technology's capacity to harm individuals and disrupt societal structures.
As deepfakes grow more prominent, so too will their impact on teachers.
In a classroom setting, where our role is to present accurate and reliable information, deepfakes introduce more than a few layers of doubt and complexity.
As a History and Politics teacher, I often used Google to help me find lesson resources. As they become more convincing and more prolific, deepfakes will call into question the reliability of the online resources I would once have trusted.
And so, as we encounter more and more deepfaked content, it is inevitable that some teachers will mistake it for genuine material and incorporate it into their lesson plans. This will not be the product of lazy teaching; it will be the result of near-perfect deepfake technology. And if it can fool educated teachers with discerning eyes, then our students, particularly the younger ones, will struggle to tell the difference even more.
Outside the classroom, the personal reputation of educators is at stake, too.
Deepfakes can fabricate scenarios involving teachers, leading to false accusations and damaging their credibility and professional standing.
We have already seen this take place: in 2024, an American principal was dismissed over a recording that he claims was deepfaked by students. If you listen to the recording, it is hard to believe it was faked. But really, who knows?
So, how can you identify and combat deepfakes?
Deepfake detection education has to be the first line of defence.
We must educate students about the existence and nature of deepfakes. We must teach them to spot signs of deepfakes, such as unnatural movements, inconsistencies in audio, or oddities in lighting and skin texture. We can introduce students to tools and resources that can help in the identification of deepfakes, such as Sentinel, FakeCatcher by Intel, or Microsoft's Video Authenticator Tool.
We need to accept that this technology is here, and arm our students with the information they need to combat it.
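None of the tools above exposes a simple public library you can import, but the general shape of automated detection is worth showing students. The sketch below samples video frames with OpenCV (a real library) and scores them with `model.score_frame`, a hypothetical stand-in for a trained deepfake classifier.

```python
# Requires: pip install opencv-python
import cv2


def sample_frames(path: str, every_n: int = 30):
    """Yield every n-th frame of a video as an image array."""
    cap = cv2.VideoCapture(path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:
            yield frame
        index += 1
    cap.release()


def average_fake_score(path: str, model) -> float:
    """Average the per-frame 'fake' probability from a classifier.

    `model.score_frame` is a hypothetical interface: real detectors
    (including the commercial tools named above) wrap similar
    per-frame scoring inside their own products.
    """
    scores = [model.score_frame(frame) for frame in sample_frames(path)]
    return sum(scores) / len(scores) if scores else 0.0
```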
In class, we should be emphasising digital integrity.
It will be important for us, as educators, to cultivate a strong sense of digital ethics. In class, we need to discuss the moral implications and potential harm caused by creating and spreading deepfakes. This could be achieved by unpacking and interrogating targeted questions with students, such as:
- What are the potential negative consequences of deepfakes, and how do they pose severe threats to individuals' privacy and reputation?
- How can the creation and distribution of deepfakes lead to the spread of misinformation, manipulation of public opinion, and damage to reputations, and what ethical implications does this carry?
- In what ways can deepfakes be used to deceive, manipulate, and intimidate individuals, leading to distrust, uncertainty, and potential harm?
- What measures can be taken to increase public awareness about the dangers of deepfakes, and how can people be educated about the ethical implications and potential harm they cause?
- What are the moral and ethical responsibilities of the creators and distributors of deepfakes, and what measures can be implemented to ensure that AI technology and other advanced technologies remain a force for good?
Alongside these discussions, establish clear guidelines and consequences for the misuse of digital tools, emphasising respect and responsibility in the digital world.
Critical analysis skills should be explicitly taught.
The need for robust media literacy skills is more prominent than ever. Students need to understand the world they are operating in, and they need to question it:
- Schools need to build a classroom culture that values truth and critical inquiry.
- Students should be taught to critically analyse the sources of digital content, and their ability to discern the reliability of such content should be positively reinforced.
- We need to encourage a sceptical approach to sensational or controversial content, prompting students to verify information through credible sources. The CRAAP test, originally designed for evaluating information sources, can be adapted to assess the output of AI systems, including generative AI; a simple sketch of that adaptation follows below.
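As a hedged sketch of that adaptation, here is a toy CRAAP checklist in Python. The questions and the simple pass/fail scoring are illustrative choices for classroom discussion, not an official rubric.

```python
# An illustrative CRAAP checklist adapted for AI-generated content.
CRAAP_QUESTIONS = {
    "Currency": "Is the content recent, and could events have changed since?",
    "Relevance": "Does it actually address the topic at the depth you need?",
    "Authority": "Can you trace it to a named, credible source?",
    "Accuracy": "Can each claim be verified against independent sources?",
    "Purpose": "Who benefits if you believe this, and why was it made?",
}


def craap_score(answers: dict) -> float:
    """Return the fraction of CRAAP criteria a piece of content passes."""
    return sum(bool(answers.get(k)) for k in CRAAP_QUESTIONS) / len(CRAAP_QUESTIONS)


# Example: a convincing video that fails the Authority and Accuracy checks.
print(craap_score({"Currency": True, "Relevance": True,
                   "Authority": False, "Accuracy": False, "Purpose": True}))  # 0.6
```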
This technology is not all bad.
By understanding deepfakes, fostering critical thinking, and promoting digital ethics, teachers have the opportunity to significantly shape the way their students interact with deepfake technology. As with everything, education is key, and with it we can look to nurture a future where technology serves to enlighten, not to mislead.