Generative AI Considerations in Academic Research
Introduction
Generative artificial intelligence (GenAI) has without a doubt changed the nature of work across many fields, including academic research. GenAI systems are trained on vast amounts of data and use machine learning processes to produce text, audio, or visual content in response to user prompts. AI has been around for decades assisting in the background of everyday tasks like grammar checking, navigating with maps, and voice and facial recognition on cellphones and other devices. However, the technological advancements that led to the release of user-friendly tools like ChatGPT have transformed how humans interact with AI, and specifically GenAI, in recent years.
Currently, the landscape of GenAI tools, policies and regulations, and professional norms for academic researchers is evolving rapidly. This state of flux leaves researchers with questions about how they can, should, and should not use GenAI technologies in their work. Therefore, the purpose of this guidance is to increase researchers’ awareness of the opportunities and limitations of using GenAI in academic research. The internal and external policies, guides, and resources shared on this page should inform responsible decision-making about integrating GenAI tools into research activities.
Considerations for Using GenAI in Academic Research
The use of GenAI in academic research is a hotly debated topic because it offers unique and competing benefits and challenges. All members of the research team should be aware of the opportunities and risks involved in using GenAI before integrating this technology into their workflows and research activities.
Opportunities
Efficiency
GenAI tools can complete certain tasks, such as summarizing text and writing code, in less time than humans and with comparable outcomes. By automating or speeding up administrative and time-consuming tasks, researchers have more time to dedicate to other important and intellectually stimulating work.
Scale
GenAI can facilitate engagement with data sets and literature at a much larger scale compared to manual processes. This can expand researchers’ capabilities for data collection, analysis, and presentation, and allow for deeper exploration of research topics. Developing tools for testing research processes, products, and systems (e.g., prototypes, simulations, test environments) is also made easier with AI.
Creativity
With its powers of speed and pattern recognition, GenAI can enhance the development of insights by illuminating connections between topics and stimulating novel ideas. Through critical engagement with GenAI tools, researchers can more effectively consider questions and concepts through multiple perspectives. This can foster interdisciplinary work and introduce new methodological approaches in established fields.
Accessibility
GenAI tools can help level the playing field in academic publishing by assisting with translation and editorial corrections for authors whose first language is not English. GenAI tools can also make tasks that previously required a high degree of experience and expertise, like coding and audiovisual content creation, more accessible to a greater number of researchers.
Risks
Data ownership
Researchers need to be aware of how GenAI tools use information, as some terms of use may violate legal or copyright requirements. Broader and yet unresolved legal issues exist around the copyright of data that AI systems are trained on or have access to, and the content that they produce based on these data, including patent claims.
Data privacy
GenAI systems may use the text or data that users put into chatbots to train their algorithms. As a result, data that one user inputs may end up in the output of another user’s chat. Researchers must therefore stay informed of their evolving legal and ethical responsibilities when it comes to data privacy, and exercise caution when inputting data to avoid privacy breaches.
Bias
GenAI systems produce content based on data that they have been trained on. Without critical evaluation and intervention, biases (e.g., overt or covert sexism and racism) or omissions (e.g., perspectives that challenge dominant ways of thinking) in those training data can be reflected in the generated content and perpetuate harm.
Accuracy and validity
GenAI systems may produce plausible yet inaccurate or invalid responses to user prompts. These fabricated or inaccurate responses can be referred to as ‘hallucinations’. Researchers may encounter hallucinations when GenAI outputs cite sources (e.g., articles, books) that don’t exist. It is the duty of researchers to verify the accuracy of generated content.
Transparency and replicability
Many GenAI systems are referred to as ‘black boxes’ because the algorithms that underlie their decision-making processes are not easily understood. These hidden internal workings challenge the replicability of content generation (e.g., the same user prompt can result in different responses each time it is entered) and make transparency in the research process difficult to report.
Academic integrity and authorship
GenAI systems may produce word-for-word copies of text from sources, which could lead to plagiarism if left unchecked by researchers. There is also consensus that GenAI tools do not meet the criteria for authorship in peer-reviewed journals, making researchers responsible for any content they submit.
Professional identity
Since GenAI is positioned as a tool that can read, write, and ‘think’, many researchers wonder about the impact that using GenAI will have on their professional identity as academics. GenAI cannot replace human experience and expertise, and it should only be integrated into research activities in ethical and legal ways that align with one’s personal and professional values, style, methodological approaches, insight generation strategies, and other unique ways of working and creating.
Environmental impact
An often-hidden risk of using GenAI tools is the environmental impact. Using GenAI is estimated to require 6-10 times more energy than traditional web searches and requires significant amounts of fresh water to cool system processors. Researchers may consider these impacts when selecting research tools.
Academic Perspectives and Commentary
Explore some perspectives on the benefits and risks of using GenAI from academics in these engaging commentaries and discussions.
AI and Science: What 1,600 Researchers Think
Van Noorden, R., & Perkel, J.M. (2023, September 27). AI and science: what 1,600 researchers think. Nature.
This article from Nature surveyed over 1,600 researchers to gauge their views on the impact of AI in scientific research. While many scientists are enthusiastic about AI’s potential to revolutionize their work by speeding up data processing and computations, saving time or money, and making it possible to process new kinds of data, there are significant concerns. Researchers worry that AI could lead to more reliance on pattern recognition without understanding, entrench biases, and make fraud easier. The article underscores the dual nature of AI in science, highlighting both its promising advancements and the notable risks.
Pitfalls and Potentials: Integrating Generative AI into Your Research and Scholarship
University of British Columbia (2024, February 15). Webinar Recording
In a conversation between Dr. Jeff Clune and Dr. Gail Murphy from the University of British Columbia, they discuss some basics of how GenAI works and its implications for researchers. They tackle questions about intellectual property and emphasize that prompting GenAI systems should be an iterative process, where users collaborate with the system to achieve desired results. Dr. Clune highlights the importance of not blindly trusting the accuracy of GenAI outputs, urging researchers to take responsibility for any content derived from these systems. They also discuss the ethical use of GenAI, noting its potential to enhance productivity and creativity, while stressing that researchers remain accountable for their own work.
Using ChatGPT (ChattieG) to Write Good
Mewburn, I. (2023, May 2). Using ChatGPT (ChattieG) to write good. The Thesis Whisperer.
This light-hearted article from The Thesis Whisperer explores the practical applications and ethical considerations of using tools like ChatGPT in academic settings. It highlights how GenAI can assist researchers by generating ideas, summarizing concepts, and drafting or revising text. The author emphasizes the importance of using GenAI responsibly, ensuring that it complements rather than replaces critical thinking and originality. The article also discusses potential pitfalls, such as over-reliance on AI and the risk of perpetuating biases. Overall, it provides a balanced view of how GenAI can be a valuable tool in academia when used thoughtfully and ethically, humorously recommending to “imagine [GenAI] as a talented, but easily misled, intern/research assistant.”
Academic Authors ‘Shocked’ After Taylor &amp; Francis Sells Access to Their Research to Microsoft AI
Battersby, M. (2024, July 19). Academic authors ‘shocked’ after Taylor & Francis sells access to their research to Microsoft AI. The Bookseller.
This article from The Bookseller reports on the controversy surrounding Taylor & Francis’ decision to sell access to its authors’ research to Microsoft for AI development, without informing the authors. The deal, worth nearly $10 million, has left many academics feeling betrayed and concerned about the lack of transparency and consent. Authors were not given the opportunity to opt out and are not receiving additional compensation for the use of their work. The Society of Authors has expressed its concern over publishers making such deals without consulting the creators, highlighting the ethical implications and the need for better communication and policies to protect authors’ rights.
Artificial Intelligence is Creating a New Colonial World Order
Hao, K. (2022, April 19). Artificial Intelligence is Creating a New Colonial World Order. MIT Technology Review.
Discover how artificial intelligence is reshaping global power dynamics in MIT Technology Review’s compelling series on AI colonialism. The four articles explore the complex interplay between technology and power and bring to light examples of how AI technology further dispossesses communities. By drawing unsettling parallels with historical acts of colonialism, this series underscores the need for ethical and equitable AI development and applications in our future.
What Now? AI Podcast: Innovation for Good
University of Toronto. (2024, April 18). What Now? AI Podcast Episode: Innovation for Good.
This episode of U of T’s What Now? AI podcast explores the impact of AI on drug discovery and healthcare. Dr. Christine Allen discusses how AI accelerates drug formulation, potentially reducing the high failure rates in clinical trials. She highlights the ability of AI to customize drug formulations for special populations, such as pediatric patients, ensuring better and faster therapeutic outcomes. Dr. Andrew Pinto emphasizes the role of AI in addressing social determinants of health, automating administrative tasks, and improving equity in healthcare delivery. The discussion underscores the importance of ethical considerations and the need for regulatory frameworks to ensure AI benefits all patient groups without exacerbating disparities.
Policies and Guidelines for Researchers
Because GenAI technologies are changing rapidly, policies and guidance are emerging and being updated regularly. Below we list several important policies and guidelines, but we urge readers to consult the original sources directly to ensure the requirements they follow are up to date.
Last updated: November 14, 2024
Guidance on the use of Artificial Intelligence in the development and review of research grant proposals
Government of Canada. Published November 2024
The Canadian Institutes of Health Research (CIHR), the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences and Humanities Research Council of Canada (SSHRC), and the Canada Foundation for Innovation (CFI) released new policy guidance in November 2024. The guidance addresses the use of generative AI in preparation and evaluation of grant applications:
- Applicants must state if and how generative AI has been used in the development of their application and are required to follow specific instructions, which will be provided for each funding opportunity as they become available.
- Use of publicly available generative AI tools for evaluating grant applications is strictly prohibited.
The guidance is based on advice provided by a panel of external experts established in November 2023.
Authorship and AI tools
Committee on Publication Ethics (COPE) position statement. Published February 2023
The position states that AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work. Authors who use AI tools must be transparent in disclosing which tools were used and how they were used. Authors are fully responsible for the content of their manuscript, including those parts produced by an AI tool.
Artificial Intelligence in Research: Policy Considerations and Guidance
National Institutes of Health. Updated August 2024
NIH’s robust system of policies and requirements helps guide the research community on using emerging technologies, including AI, responsibly. Researchers can find policies related to participant protection, data management and sharing, health information, intellectual property, peer review, and other important research considerations.
Explore these internal resources to learn more about how U of T is advancing AI research and innovation. You’ll also find guidance and best practices for using GenAI tools in your research: