A couple of months ago, I woke up to mayhem on social media. Angry posts flooded my feed about how a paper featuring an image of a rodent with grotesque, biologically inaccurate genitalia had made it through peer review. Over the following months, more such instances arose: in one paper’s introduction, the authors erroneously left in a ChatGPT prompt, and in another, an AI chat response found its way into an article summary. Although the authors of the rat-gate paper had declared their use of AI in accordance with journal guidelines, these examples have sparked a broader debate within the scientific community about the boundaries of AI use in scientific publishing.

There is no straightforward answer here. On the one hand, AI can help scientists communicate better in many ways. When ChatGPT was new, some open-minded scientists opened their conference talks with slides along the lines of “I asked ChatGPT about funding in my field.” It was a fun way to engage the audience and succinctly frame their problem statement. Similarly, for non-native English speakers who struggle with writing papers, borrowing the right words from an AI tool could help them convey their scientific findings effectively. On the flip side, easy access to these technologies raises the risk of more data manipulation and unreliable papers.

While the jury is still out on how AI will alter scientific publishing in the long run, I believe scientists need to move past the all-or-nothing mindset and see AI for what it is: a tool. Just as one cannot punch random numbers into a calculator and expect it to file their taxes, scientists should not use AI without due diligence. Even as the scientific community navigates this wobbly path today, it can find a happy medium by simply putting human intelligence first.

What do you think about using AI in scientific publishing? 