
The Role of AI in Scholarly Communications: Ensuring Best Practices

Written by Karen Gutzman and Christina Gattone

Artificial Intelligence (AI) tools have rapidly gained prominence across scholarly communications. A 2023 Nature news article reported that over 30% of the 1,600 researchers surveyed relied on chatbots and large language models (LLMs) for tasks such as manuscript writing (1). Chatbots are conversational tools that interact with users, answering questions and streamlining repetitive tasks. Many chatbots incorporate LLMs to enhance their language capabilities and improve their responses. Well-known examples of chatbots include ChatGPT (OpenAI), Claude 2 (Anthropic), Gemini (Google), Copilot (Microsoft), and Llama 2 (Meta).


Usefulness of AI in Authorship

The potential of AI is far-reaching. Researchers can use AI technologies to inspire creativity, organize ideas, and speed up the writing process. AI-powered writing tools can suggest improvements to language usage, style, and tone, which can be particularly useful for non-native English speakers. AI algorithms are also available to help clinicians and researchers search for and apply information (2), and to analyze, summarize, or translate research articles (3).

Even scholarly publishers have embraced AI to improve the editorial workflow. Editors and their staff may use plagiarism detection tools (4) on submitted articles, or they may use chatbots to assist in writing summaries of peer review reports or text for editorial decisions. Additionally, journal staff can use AI tools to create audio summaries or visual abstracts for published content. 


Ethical Considerations and Implications

Concerns related to the use of AI continue to be debated in the scholarly communication sphere. The Committee on Publication Ethics (COPE) and the World Association of Medical Editors published a statement in February 2023 declaring that AI tools cannot be considered authors of scientific papers (5). The statement emphasized that AI tools lack the ability to take responsibility for research, manage conflicts of interest, or handle copyright and licensing agreements. The ethical landscape of employing AI throughout the research process involves a range of considerations that affect several core elements of scholarly communication.

  • Publishing

Scholarly publishers have developed varying approaches regarding the use of AI-generated content. For example, publishers such as AMA’s JAMA Network discourage the use of AI in written content. Others like Springer Nature allow AI tools for writing but require authors to be transparent about this in their submission. Most publishers allow for AI tools to be used as part of the formal research design but require authors to disclose this information in their methods or acknowledgements section and include details such as the specific tool and how it was used in their work (6, 7).

More broadly, publishers are concerned about the rise of paper mills generating fake research papers as the use of advanced LLMs capable of generating text makes it harder to detect fraudulent papers that were once identifiable by their poor content and grammar (8). Publishers are also concerned about the authenticity of digital objects, such as images, audio, and video, all of which can be generated or manipulated by AI tools. Various coalitions and initiatives hope to address this issue by developing standards for identifying digitally altered objects (9, 10).

  • Authorship

Authors should be aware that using AI for writing can introduce the risk of unintentional plagiarism, since AI tools may fail to properly source or cite the literature. For editors, the concern is also overt plagiarism, where authors take credit for content generated by AI. Another deep concern is that the LLMs used by chatbots rely on statistical relationships between words, which can perpetuate outdated or harmful biases and racism when generating new text (11).

  • Peer Review and Grant Review
[Image: “CC Icon Statue” by Creative Commons, generated in part by the DALL-E 2 AI platform.]
Many journal editors require peer reviewers to declare their use of AI tools for writing or summarizing their reviews, due to concerns about sensitive or proprietary information in manuscripts and the potential for bias or false information from AI tools (12). Taking a much stronger approach, the NIH has told grant reviewers that they are not allowed to use AI in their reviews. The NIH emphasized the importance of maintaining confidentiality and explained that using AI tools to analyze and critique these materials violates the confidentiality expectations and integrity of the review process (13).

  • Copyright

Creative Commons (CC) licenses enable creators to specify how others can use their work while retaining copyright ownership, but these licenses do not override existing copyright limitations and exceptions (14). This is especially relevant in the context of AI, where the training of AI models on copyrighted works may be protected under fair use (in the US) and text/data mining exceptions (in Europe), but the degree to which these protections apply (or apply at all) depends on the specific use case. Court cases currently being decided (15), as well as a large project by the U.S. Copyright Office on AI (16), will help clarify copyright law in relation to AI technologies.


AI and Responsible Use

Recognizing the need for responsible use of AI, the United States issued an Executive Order in October 2023. The order put federal agencies on a short timeline to create new standards for AI that promote safety, protect privacy, advance equity and civil rights, and encourage innovation and competition (17). A potentially helpful resource for researchers is the Decision Tree for the Responsible Application of Artificial Intelligence, developed by AAAS to help researchers integrate ethical principles into the development and implementation of AI technologies. The guide provides practical steps and questions to work through when deciding whether and how to use AI in one's work (18). Researchers should be aware of the limitations and biases of AI tools when using them in scholarly communications. Transparent and ethical use of AI by researchers, authors, peer reviewers, and publishers can enhance academic research while maintaining its integrity.


Updated: March 25, 2024