AI for Grant Writing
Use with Caution
March 5, 2024 - by John Knox
The academic world is abuzz about the ability of artificial intelligence (AI) to help draft and revise scientific text. Its benefits, especially for non-native English writers, include summarizing entire articles, simplifying jargon-laden paragraphs, and improving the clarity and conciseness of drafts.
But with the good comes the bad, and three grant writing experts in the Division of Cardiovascular Medicine want researchers to understand common pitfalls to avoid as they use AI as a tool in preparing grant applications.
“While there is immense promise with this technology, we worry about researchers using AI without fully understanding its consequences,” says Elizabeth Seckel, director of Strategic Research Development for the Division of Cardiovascular Medicine.
“For example, there is the risk of having their grants administratively rejected for plagiarism – or unknowingly having their precious grant text incorporated into data training sets and suggested to their competitors asking similar prompts in the future,” she adds.
Seckel, Brandi Stephens, PhD, and Fatima Rodriguez, MD, offer many similar pieces of practical advice in “Ten Simple Rules to Leverage Large Language Models for Getting Grants,” which was published March 1, 2024, in PLOS Computational Biology.
Although other articles have discussed using AI for scientific writing, “Ten Simple Rules…” is the first peer-reviewed article specifically focused on using AI for grant writing.
“While AI has the power to augment and optimize our grant writing processes, one key takeaway from our study has to do with large language models. They are powerful tools that aid the efficiency of the grant writing process, yet they are not substitutes for the scientific process of asking important, impactful clinical questions and testing hypotheses,” says Rodriguez, associate professor of Cardiovascular Medicine.
As a highly active researcher in cardiology, Rodriguez works closely with Seckel and Stephens on developing grant proposals. Some of her work has focused on the use of large language models to learn from data about patient perspectives and barriers to adherence to cardiovascular therapies.
“Brandi and I work with a number of postdoctoral fellows, clinical fellows, and faculty like Fatima, and more and more of them are asking about using ChatGPT or other programs in their grants. So our intention in writing this paper was to let them know the main risks of using those AI tools, along with some of the benefits,” Seckel says.
“Although the integration of AI in academia may appear inevitable, it can be intimidating to those who are unfamiliar with its nuances. Our paper simply outlines how best to use AI to your advantage and have it serve as a valuable tool for submitting competitive grant proposals,” adds Stephens, research development strategist in the Division of Cardiovascular Medicine.
“As specialists assisting grant writers at every stage, we have observed first-hand the increased usage of AI in grant proposals. Even grant funding agencies are establishing guidelines regarding the permissible use of AI in grant submissions. That’s why we conceptualized the idea of writing a ‘10 simple rules’ paper that focuses on AI usage and grant development,” she says.
“This is incredibly exciting technology, and we’re only going to learn its full potential by using it. So as people move through their grant writing journey and put together these massive applications, I urge them to think about how they can start to incorporate large language model chatbots like ChatGPT into their grant writing process – trying different prompts, working with it in different sections of the application, reviewing its output for errors, and so on. Then, if a user sees that something is working, they should share it with other people so we can all learn together,” says Seckel.
To help in that effort, Seckel and Stephens created a GitHub repository to collate and curate resources for using AI to develop more competitive grant applications. Among other resources, the repository includes sample prompts that researchers can use when writing their applications. Seckel, Stephens, and Rodriguez invite researchers to browse and contribute to the repository in the course of their grant submissions.
Sidebar: A Large Language Model Primer
Large language models are the foundation for chatbots like ChatGPT (Generative Pre-trained Transformer). Chatbots are computer programs that simulate and process human conversation, either written or spoken. Users of large language models contribute data in the form of text that adds to an enormous database of words. Large language models apply complex statistical algorithms to this ever-growing database of language to generate grammatically and semantically correct text. The text is generated by repeatedly predicting the next word, often in response to prompts (natural-language instructions that describe the task to be performed, such as “Summarize this abstract in two sentences”). Concerns over large language models include ethics, privacy, and the models’ inability to estimate the uncertainty or truth of their predictions – resulting in erroneous facts and references, commonly called “hallucinations.”
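For readers who want to see what “repeatedly predicting the next word” looks like in practice, the toy Python sketch below generates text from a tiny hand-written table of next-word probabilities. The table and every probability in it are invented purely for illustration; a real large language model learns billions of such statistical relationships from its training text rather than from a hand-coded dictionary, but the generation loop follows the same basic idea.

```python
import random

# Invented toy "model": a hand-written table of next-word probabilities.
# A real large language model learns these statistics from enormous
# amounts of text; this dictionary exists only to illustrate the loop.
NEXT_WORD = {
    "grant":   {"writing": 0.7, "funding": 0.3},
    "writing": {"is": 0.6, "takes": 0.4},
    "is":      {"iterative": 0.5, "hard": 0.5},
    "takes":   {"time": 1.0},
}

def generate(prompt: str, max_words: int = 5) -> str:
    """Generate text by repeatedly predicting (sampling) the next word."""
    words = [prompt]
    for _ in range(max_words):
        choices = NEXT_WORD.get(words[-1])
        if not choices:  # no known continuation; stop generating
            break
        nxt = random.choices(list(choices), weights=list(choices.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("grant"))  # e.g., "grant writing is iterative"
```

The sketch also hints at why hallucinations occur: at each step the model picks a statistically plausible next word, with no notion of whether the resulting statement is true.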