How to cite content produced by Generative AI
There is no doubt that artificial intelligence is redefining the boundaries of content creation. In fact, 89% of students are all over it — admitting to using ChatGPT to help with homework.
So, whatever we’re worried about, it’s already here.
The conversation needs to shift to how academic integrity can be preserved in a world where we have access to an on-the-spot pro in research, drafting and editing.
One important consideration in this mission is citation — the way we credit AI for its work.
The fluidity and sophistication of AI tools blur the lines of authorship and originality.
If you are interacting with an AI tool to create a piece of writing, who really did the work? Which ideas are the work of the AI, and which are yours?
And then, of course, what source — or sources — has the AI drawn upon?
These are not simple questions. In fact, they demand a reevaluation of our approach to academic integrity.
Citing AI-generated content correctly is not just about adhering to stylistic norms.
It is not about what footnotes to put where; it's about preserving the essence of scholarly work — transparency, accountability, and respect for intellectual property.
Each citation, reference, and acknowledgment in a piece of academic writing contributes to its integrity and credibility. They ensure credit is given where it's due, distinguish original thought from referenced material, and provide a trail for readers to trace the genesis of ideas.
In the instance of AI, citation is less about accountability and more about transparency. No matter how human it may seem, AI lacks consciousness — it cannot take responsibility for the ideas it produces.
Instead, by citing AI, we are acknowledging the process through which a piece of writing has been created.
Unlike content derived directly from human authors, AI-generated content is the product of complex algorithms processing vast datasets. This can inadvertently lead to the replication of existing texts.
Differentiating between human-generated, original work and AI-assisted content is, therefore, an ethical imperative.
Through citations, the intellectual labour that powers AI tools is respected, and the collaborative nature of knowledge creation is acknowledged.
While official guidance is still being developed, there are some simple steps that can be taken to recognise AI's role in content creation.
At a basic level, we should be documenting the following elements when using AI tools (a simple way to record them is sketched after this list):
- Tool name and version (e.g., ChatGPT 3.5).
- The time and date that the AI tool was used.
- The exact prompt or query that was entered into the tool.
- The response provided by the AI, saved for reference.
- Any follow-up interactions with the tool.
- The name of the person who used the tool.
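For anyone who wants to make this systematic, here is a minimal sketch in Python of a record that captures the elements above. The field names and example values are purely illustrative assumptions, not something mandated by any citation style.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime
import json


@dataclass
class AIUsageRecord:
    """One logged interaction with a generative AI tool."""
    tool_name: str          # e.g. "ChatGPT"
    tool_version: str       # e.g. "3.5"
    used_at: datetime       # time and date the tool was used
    prompt: str             # the exact prompt or query entered
    response: str           # the response returned, saved for reference
    follow_ups: list[str] = field(default_factory=list)  # any follow-up exchanges
    used_by: str = ""       # name of the person who used the tool

    def to_json(self) -> str:
        """Serialise the record so it can be kept alongside your notes."""
        record = asdict(self)
        record["used_at"] = self.used_at.isoformat()
        return json.dumps(record, indent=2)


# Illustrative usage with made-up values.
record = AIUsageRecord(
    tool_name="ChatGPT",
    tool_version="3.5",
    used_at=datetime(2024, 3, 8, 14, 30),
    prompt="Summarise the main causes of the 1929 Wall Street Crash.",
    response="(full response text saved here)",
    used_by="A. Student",
)
print(record.to_json())
```

Keeping a structured log like this means that, whichever citation style you end up following, the details it asks for are already to hand.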
Each of the three key citation styles (MLA, APA and Chicago) offers different guidelines on how AI-generated content should be cited.
The Modern Language Association (MLA) offers a flexible template of core elements to evaluate and cite sources, including AI-generated content. This flexibility allows writers to apply its style to new types of sources.
The MLA recommends taking the following approach when citing AI:
- The ‘author’ element should not reference the AI tool. This stance aligns with the broader consensus in the academic community that, while AI can generate content, it lacks the consciousness and accountability typically associated with authorship. Instead, AI can be acknowledged through narrative citations or detailed descriptions throughout the work.
- The ‘title of source’ element should describe what the AI tool generated, including information regarding the prompt if not addressed in the main text.
- The ‘title of container’ element should be used to specify the name of the AI tool (e.g., ChatGPT).
- The ‘version’ element should clearly name the version of the AI tool used (e.g., ChatGPT 3.5).
- The ‘publisher’ element should identify the name of the company or entity that created the AI tool (e.g., OpenAI for ChatGPT).
- The ‘date’ element can be used to record the date that the AI-generated content was produced.
- The ‘location’ element can be populated with a general URL for the AI tool. If a unique URL has been generated for the specific interaction, include this instead.
By applying these elements in this way, you ensure that AI-generated content is cited with the same rigour and transparency as traditionally sourced material.
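To make the mapping concrete, here is a small illustrative sketch of how those core elements might be assembled into a single works-cited entry. The exact punctuation and ordering should always be checked against the MLA's published examples; the function and its output below are an assumed structure, not an official template.

```python
def mla_citation(title_of_source: str, container: str, version: str,
                 publisher: str, date: str, location: str) -> str:
    """Assemble the MLA core elements into one entry.

    The author element is deliberately omitted, in line with MLA's advice
    not to treat the AI tool as an author.
    """
    return (f'"{title_of_source}" prompt. '
            f"{container}, {version}, {publisher}, {date}, {location}.")


# Illustrative values only.
print(mla_citation(
    title_of_source="Describe the symbolism of the green light in The Great Gatsby",
    container="ChatGPT",
    version="3.5",
    publisher="OpenAI",
    date="8 Mar. 2024",
    location="chat.openai.com/chat",
))
```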
On the other hand, the American Psychological Association (APA) suggests describing the use of AI tools in research. It recommends detailing how the tool was used in the ‘method’ section of a paper — providing the prompt used along with any relevant text generated in response.
Where content has been produced using AI, the APA recommends crediting the creator of the algorithm (e.g., OpenAI for ChatGPT) with a reference list entry and an in-text citation.
The Chicago Manual of Style recommends acknowledging AI tools in the text, with the option for a more formal citation in footnotes or endnotes, depending on the writing context. It suggests using language such as “the following summary was generated by ChatGPT” throughout a piece of writing, crediting where and how AI has been used.
For more formal works or research papers, Chicago suggests a numbered footnote or endnote to cite the AI-generated content (for example: "1. Text generated by ChatGPT, OpenAI, [date], [URL].").
Similar to MLA and APA, Chicago emphasises the importance of ensuring the prompts used to instruct AI are outlined — either within your writing itself, or within the citation note.
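Because the footnote format above is essentially a fill-in-the-blanks template, it can be produced mechanically. The sketch below simply slots values into the template quoted earlier; the date and URL shown are illustrative placeholders, not prescribed values.

```python
def chicago_ai_footnote(number: int, tool: str, developer: str,
                        date: str, url: str) -> str:
    """Build a numbered Chicago-style footnote for AI-generated text,
    following the template quoted above."""
    return f"{number}. Text generated by {tool}, {developer}, {date}, {url}."


print(chicago_ai_footnote(1, "ChatGPT", "OpenAI", "March 8, 2024",
                          "https://chat.openai.com/chat"))
```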
AI-generated content should be subject to the same scrutiny as traditional sources — if not more.
AI tools are powerful, but they are not infallible.
AI generates its content based on vast datasets, which may not always be accurate or up-to-date. Scrutinising and cross-referencing information with other reputable sources is essential to ensuring its reliability.
Furthermore, where AI references secondary sources, or summarises a range of materials provided, these should be verified to ensure your work is accurately grounded.
As explained in my article on the architecture and function of AI models, we also need to recognise and be alert to AI’s limitations. AI lacks the ability to critically analyse or provide a truly original perspective; it should therefore be considered a tool to augment research and writing, not replace it.
The landscape of AI in academic writing is rapidly evolving, and so are the guidelines for using — and citing — AI-generated content.
As educators, writers and students, it is our responsibility to take a proactive approach to artificial intelligence: understand how it works, stay informed about the latest recommendations for use and citation, and read the advice published by academic institutions.
Using AI tools is actually okay; good, even. Students and academic writers are already using them to support their work, and will continue to do so. What’s important is making sure we apply the same rigour to verifying the information they produce, and provide readers with transparency about how they have informed our work.