A growing wave of AI assistance, from drafting manuscripts to writing peer reviews, is reshaping scientific communication. Researchers are already using tools like ChatGPT to draft papers and respond to reviewers, promising faster turnaround and more time for the science itself.


Advocates highlight the potential equity boost, especially for non-native English speakers, who may benefit from AI tools that help refine language. Many early-career researchers report that language barriers contribute disproportionately to rejection, and AI could help bridge that gap.


However, this optimism is shadowed by serious concerns. AI-generated text isn't always accurate and can introduce fabricated claims or invented citations. Academic publishers fear that such tools could fuel “paper mills”, operations that churn out fake but realistic-looking research papers.


The limitations of AI detection tools add to the dilemma. Current systems struggle to reliably distinguish AI-written text from human-authored work, raising fears that distorted or fraudulent content may slip through peer review.


Publishers are responding in different ways. Some, like Science, have opted for outright bans on generative AI use, while others—including Nature—require full transparency and disclosure of AI contributions. Still, there is no industry-wide standard yet.


Notably, submission systems at some publishers now require verifiable institutional email addresses and may involve video calls with authors and referees, steps intended to authenticate identities and confirm that the authors actually carried out the research.


Meanwhile, researchers and editors are calling for uniform guidelines. A recent study found that only a minority of journals and publishers have formal AI-use policies, leaving the majority without any rules. Experts are now working to develop clear, standardized protocols to govern AI use in scientific publishing.