Generative AI Considerations in Academic Research

Introduction

Generative artificial intelligence (GenAI) has undoubtedly changed the nature of work across many fields, including academic research. GenAI systems are trained on vast amounts of data and use machine learning processes to produce text, audio, or visual content in response to user prompts. AI has assisted in the background of everyday tasks for decades, such as grammar checking, navigating with maps, and voice and facial recognition on cellphones and other devices. However, the technological advancements that led to the release of user-friendly tools like ChatGPT have transformed how humans interact with AI, and specifically GenAI, in recent years.

Currently, the landscape of GenAI tools, policies and regulations, and professional norms for academic researchers is evolving rapidly. This state of flux leaves researchers with questions about how they can, should, and should not use GenAI technologies in their work. Therefore, the purpose of this guidance is to increase researchers’ awareness of the opportunities and limitations of using GenAI in academic research. The internal and external policies, guides, and resources shared on this page should inform responsible decision-making about integrating GenAI tools into research activities.

Considerations for Using GenAI in Academic Research

The use of GenAI in academic research is a hotly debated topic because it offers unique and competing benefits and challenges. All members of the research team should be aware of the opportunities and risks involved in using GenAI before integrating this technology into their workflows and research activities.

Opportunities

Efficiency

GenAI tools can complete certain tasks such as summarizing text and writing code in less time and with comparable outcomes to humans. By automating or speeding up administrative and/or time-consuming tasks, researchers have more time to dedicate to other important and intellectually stimulating tasks.

Scale

GenAI can facilitate engagement with data sets and literature at a much larger scale compared to manual processes. This can expand researchers’ capabilities for data collection, analysis, and presentation, and allow for deeper exploration of research topics. Developing tools for testing research processes, products, and systems (e.g., prototypes, simulations, test environments) is also made easier with AI.

Creativity

With its powers of speed and pattern recognition, GenAI can enhance the development of insights by illuminating connections between topics and stimulating novel ideas. Through critical engagement with GenAI tools, researchers can more effectively consider questions and concepts through multiple perspectives. This can foster interdisciplinary work and introduce new methodological approaches in established fields.

Accessibility

GenAI tools can help level the playing field in academic publishing by assisting with translation and editorial corrections for authors whose first language is not English. GenAI tools can also make tasks that previously required a high degree of experience and expertise, like coding and audiovisual content creation, more accessible to a greater number of researchers.

Risks

Data ownership

Researchers need to be aware of how GenAI tools use information, as some terms of use may violate legal or copyright requirements. Broader and yet unresolved legal issues exist around the copyright of data that AI systems are trained on or have access to, and the content that they produce based on these data, including patent claims.

Data privacy

GenAI systems may use the text or data that users put into chatbots to train their algorithms. As a result, data that one user inputs may end up in the output of another user’s chat. Researchers must therefore stay informed of their evolving legal and ethical responsibilities when it comes to data privacy, and exercise caution when inputting data to avoid privacy breaches.

Bias

GenAI systems produce content based on data that they have been trained on. Without critical evaluation and intervention, biases (e.g., overt or covert sexism and racism) or omissions (e.g., perspectives that challenge dominant ways of thinking) in those training data can be reflected in the generated content and perpetuate harm.

Accuracy and validity

GenAI systems may produce plausible yet inaccurate or invalid responses to user prompts. These fabricated or inaccurate responses are often referred to as ‘hallucinations’. Researchers may encounter hallucinations when GenAI outputs cite sources (e.g., articles, books) that do not exist. Researchers have a duty to verify the accuracy of generated content.

Transparency and replicability

Many GenAI systems are referred to as ‘black boxes’ because the algorithms that underlie their decision-making processes are not easily understood. These hidden internal workings challenge the replicability of content generation (e.g., the same user prompt can result in different responses each time it’s entered) and make transparency in the research process difficult to report.

Academic integrity and authorship

GenAI systems may produce word-for-word copies of text from sources, which could lead to plagiarism if left unchecked by researchers. There is also consensus that GenAI tools do not meet the criteria for authorship in peer-reviewed journals, making researchers responsible for any content they submit. 

Professional identity

Since GenAI is positioned as a tool that can read, write, and ‘think’, many researchers wonder what impact using GenAI will have on their professional identity as academics. GenAI cannot replace human experience and expertise, and should only be integrated into research activities in ethical and legal ways that align with one’s personal and professional values, style, methodological approaches, insight-generation strategies, and other unique ways of working and creating.

Environmental impact

An often-hidden risk of using GenAI tools is their environmental impact. A GenAI query is estimated to require 6 to 10 times more energy than a traditional web search, and GenAI systems require significant amounts of fresh water to cool their processors. Researchers should weigh these impacts when selecting research tools.

Academic Perspectives and Commentary

Explore some perspectives on the benefits and risks of using GenAI from academics in these engaging commentaries and discussions.

How Are Researchers Using AI? Survey Reveals Pros and Cons for Science

Naddaf, M. (2025, February 4). How are researchers using AI? Survey reveals pros and cons for science. Nature.

A survey of nearly 5,000 researchers from over 70 countries, conducted by Wiley, reveals the growing acceptance and use of generative AI tools in scientific research. More than half of the respondents believe AI outperforms humans in tasks like summarizing research findings and detecting errors. However, only 45% of the researchers have used AI tools in their work, mostly for tasks related to writing and preparing manuscripts. The article highlights the imminent integration of AI in research and the need for support to navigate its possibilities.

Researchers Study the Role of AI in Academic Writing

University of Waterloo. (2025, January 29). Researchers study the role of AI in academic writing.

Research from the University of Waterloo highlights AI’s potential to enhance academic writing and peer reviewing. AI-augmented abstracts were rated by experienced peer reviewers as more honest and compelling than purely human-written ones. However, reviewers noted that content completely generated by AI often lacked human insight. The study emphasizes the need for a balance between AI and human expertise, suggesting ethical guidelines and transparency in AI use for research publications. Read the journal article or listen to the authors discuss their findings in this podcast episode.

Pitfalls and Potentials: Integrating Generative AI into Your Research and Scholarship

University of British Columbia. (2024, February 15). Webinar Recording.

In this conversation, Dr. Jeff Clune and Dr. Gail Murphy of the University of British Columbia discuss some basics of how GenAI works and its implications for researchers. They tackle questions about intellectual property and emphasize that prompting GenAI systems should be an iterative process, where users collaborate with the system to achieve desired results. Dr. Clune highlights the importance of not blindly trusting the accuracy of GenAI outputs, urging researchers to take responsibility for any content derived from these systems. They also discuss the ethical use of GenAI, noting its potential to enhance productivity and creativity, while stressing that researchers remain accountable for their own work.

Using AI in Peer Review Is a Breach of Confidentiality

Lauer, M., Constant, S., & Wernimont, A. (2023, June 23). Using AI in Peer Review Is a Breach of Confidentiality. Extramural Nexus.

Using case examples, this article warns National Institutes of Health (NIH) peer reviewers that while it may be tempting to use AI to save time while writing critiques, feeding application materials to an AI tool strictly violates NIH’s guidance on Integrity and Confidentiality in NIH Peer Review. Applicants must trust that peer reviewers are using their own expertise and judgment to evaluate the proposal, and that their research ideas will remain confidential.

Academic Authors ‘shocked’ After Taylor & Francis Sells Access to Their Research to Microsoft AI

Battersby, M. (2024, July 19). Academic authors ‘shocked’ after Taylor & Francis sells access to their research to Microsoft AI. The Bookseller.

This article from The Bookseller reports on the controversy surrounding Taylor & Francis’ decision to sell access to its authors’ research to Microsoft for AI development, without informing the authors. The deal, worth nearly $10 million, has left many academics feeling betrayed and concerned about the lack of transparency and consent. Authors were not given the opportunity to opt out and are not receiving additional compensation for the use of their work. The Society of Authors has expressed its concern over publishers making such deals without consulting the creators, highlighting the ethical implications and the need for better communication and policies to protect authors’ rights.

Artificial Intelligence is Creating a New Colonial World Order

Hao, K. (2022, April 19). Artificial Intelligence is Creating a New Colonial World Order. MIT Technology Review.

Discover how artificial intelligence is reshaping global power dynamics in MIT Technology Review’s compelling series on AI colonialism. The four articles explore the complex interplay between technology and power and bring to light examples of how AI technology further dispossesses communities. By drawing unsettling parallels with historical acts of colonialism, this series underscores the need for ethical and equitable AI development and applications in our future.  

What Now? AI Podcast: Innovation for Good

University of Toronto. (2024, April 18). What Now? AI Podcast Episode: Innovation for Good.

This episode of U of T’s What Now? AI podcast explores the impact of AI on drug discovery and healthcare. Dr. Christine Allen discusses how AI accelerates drug formulation, potentially reducing the high failure rates in clinical trials. She highlights the ability of AI to customize drug formulations for special populations, such as pediatric patients, ensuring better and faster therapeutic outcomes. Dr. Andrew Pinto emphasizes the role of AI in addressing social determinants of health, automating administrative tasks, and improving equity in healthcare delivery. The discussion underscores the importance of ethical considerations and the need for regulatory frameworks to ensure AI benefits all patient groups without exacerbating disparities.

Policies and Guidelines for Researchers

Because GenAI technologies are changing rapidly, policies and guidance are emerging and being updated regularly. Below we list several important policies and guidelines, but we urge readers to verify current requirements directly with the relevant bodies.

Last updated: December 6, 2024

The Canadian Institutes of Health Research (CIHR), the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences and Humanities Research Council of Canada (SSHRC), and the Canada Foundation for Innovation (CFI) released new policy guidance in November 2024. The guidance addresses the use of generative AI in preparation and evaluation of grant applications:

  • Applicants are responsible for ensuring that information included in their grant applications is true, accurate and complete and that all sources are appropriately acknowledged and referenced. Applicants should be aware that using generative AI may lead to the presentation of information without proper recognition of authorship or acknowledgement.
  • Applicants must state if and how generative AI has been used in the development of their application and are required to follow specific instructions, which will be provided for each funding opportunity as they become available.
  • Use of publicly available generative AI tools for evaluating grant applications is strictly prohibited.

The guidance is based on advice provided by a panel of external experts established in November 2023.

This policy statement clarifies CIHR’s position on the use of AI-based assistants and transcription technology in peer review meetings:

  • All meeting participants – whether attending in person or virtually – are prohibited from using AI-based software or applications that automatically transcribe and/or summarize spoken dialogue. This restriction extends to pre- and post-meeting discussions related to funding applications.

Since peer review meetings contain protected information and materials, using AI assistants and transcription services violates the Guide on Handling Documents Used in Peer Review.

Authorship and AI tools

Committee on Publication Ethics (COPE) position statement. Published February 2023

The position states that AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work. Authors who use AI tools must be transparent in disclosing which tools were used and how they were used. Authors are fully responsible for the content of their manuscript, including those parts produced by an AI tool.

Artificial Intelligence in Research: Policy Considerations and Guidance

National Institutes of Health. Updated August 2024

NIH’s robust system of policies and requirements helps guide the research community on using emerging technologies, including AI, responsibly. Researchers can find policies related to participant protection, data management and sharing, health information, intellectual property, peer review, and other important research considerations.
