
Information Literacy & Generative AI: Supporting the responsible use of AI in biomedical research


OpenAI’s ChatGPT and other systems based on large language models (LLMs), including Elicit, SciNote, Writefull, and Galactica, have sparked major debates in academic circles. These systems are not only deft at writing text in response to prompts; some of them, such as Elicit, Med-PaLM, and Galactica, can also search the literature and suggest research questions or insights about available knowledge on a given topic or question.

Different scholarly domains are actively discussing the opportunities and implications of using LLMs in academia. The potential impact of LLMs on student essays, medical education, and MBA examinations, as well as their implications for law and mental healthcare, are just a few of these discussions. Furthermore, journal editors have deliberated on the repercussions of using LLMs to communicate research results, highlighting how they challenge existing publication standards, and have accordingly suggested guiding principles or drafted policies on how LLMs should be used.

Given how quickly LLMs have captured the world’s attention and seeped into daily academic workflows, institutions and administrators are challenged to prepare their communities and to reflect on how these tools could change existing norms and relationships. The absence of a clear understanding of the data and methods used to train LLMs, combined with a lack of transparency around how they produce results for a given query, heightens the suspicion that they are prone to various ethical issues. In academic contexts, where being unbiased is not only valued and encouraged but is a prerequisite for reliability and objectivity, these ethical issues could threaten the integrity of research as well as institutional reputations. For example, conflating LLM output with factual, unbiased information may lead to incorrect conclusions or recommendations based on false information, resulting in research waste and the exacerbation of misinformation.

Overreliance on LLM output can erode critical thinking and skepticism, which are hallmarks of academic work. This is especially dangerous in fields with major real-life implications for how societies function and organize themselves, including medicine and engineering, where mistakes can have significant consequences. This is not to say that using LLMs in other areas is risk-free, though. Because data-driven research tends to propagate and amplify existing biases and reinforce societal prejudices, employing LLMs in social science disciplines such as psychology or sociology could result in discrimination and heighten existing injustices and inequities.

As the adoption and use of AI and LLMs in different contexts (e.g., commerce, media) increase, it is reasonable to consider the impact of these technologies on universities’ missions of promoting and encouraging discovery, education, service to the community, and personal and intellectual growth. We believe that education is one of the most effective ways to keep up with new technologies and promote their responsible use, nurturing community and capacity building. Motivated by fruitful discussions in a recent webinar entitled “Let’s ChatGPT,” our team at Galter Health Sciences Library is partnering with the Institute for Augmented Intelligence in Medicine (I.AIM) to launch hands-on training for the biomedical research community on the responsible use of AI and LLMs. These sessions will be taught by Mohammad Hosseini, PhD, Preventive Medicine (Health and Biomedical Informatics), and will provide attendees with an in-depth understanding of these technologies, their applications, and strategies for responsible use. Industry partnerships will enable access to different technologies and support the collaborative, ongoing development of training materials. To motivate and support students toward interdisciplinary learning and responsible use of LLMs, we will offer research opportunities to investigate and enable the equitable use of generative AI and LLMs in research. Stay tuned, and reach out if you would like to ChatGPT!


Updated: June 11, 2023