AI in Science: From Helper to Nonsense Generator

A scandal has erupted in the scientific world over the use of artificial intelligence in publications. Analysts are sounding the alarm: machines are not only assisting researchers but also creating serious problems for the academic community.

Recently, a number of papers containing obviously AI-generated images have appeared in scientific journals (source). Among them are an infographic of a rat with an impossibly large genital organ and an illustration of human legs with extra bones. Remarkably, these poor-quality materials passed peer review, raising concerns about the rigor of the review process at some publications.

The issue extends beyond illustrations. The ChatGPT chatbot, launched in November 2022, has transformed the way scientists approach their work. Controversy arose in March this year over an article published by Elsevier whose introduction included the telltale phrase “Certainly, here is a possible introduction for your topic” – a clear trace of ChatGPT.

Andrew Gray, a librarian at University College London, analyzed millions of scientific papers for signs of AI usage. He searched for texts with an excessive use of certain words that AI tends to favor, such as “meticulous,” “intricate,” or “commendable.” Gray found that in 2023 alone, at least 60,000 articles were generated with the help of chatbots – more than 1% of all publications that year.
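Gray’s actual study was a large-scale statistical comparison of word frequencies across publication years; the minimal Python sketch below only illustrates the core idea of flagging texts by the rate of suspected marker words. The word list, threshold, and function names here are illustrative assumptions, not his methodology.

```python
from collections import Counter
import re

# Illustrative marker words of the kind highlighted in such analyses;
# this list and the threshold below are assumptions, not Gray's actual values.
MARKER_WORDS = {"meticulous", "intricate", "commendable"}

def marker_rate(text: str) -> float:
    """Return the fraction of tokens that are suspected AI marker words."""
    tokens = re.findall(r"[a-z]+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    return sum(counts[w] for w in MARKER_WORDS) / len(tokens)

def flag_abstracts(abstracts: list[str], threshold: float = 0.005) -> list[int]:
    """Return indices of abstracts whose marker-word rate exceeds the threshold."""
    return [i for i, a in enumerate(abstracts) if marker_rate(a) > threshold]

if __name__ == "__main__":
    sample = [
        "We present a meticulous and commendable analysis of intricate dynamics.",
        "We measured the thermal conductivity of copper at room temperature.",
    ]
    print(flag_abstracts(sample))  # -> [0]
```

In practice, a study like this would compare marker-word rates against a pre-2023 baseline rather than apply a fixed per-text threshold, since individual authors can legitimately favor these words.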

At the same time, the number of retracted scientific papers is on the rise. According to the American group Retraction Watch, more than 13,000 articles were retracted last year – a record high.

Ivan Oransky, a co-founder of Retraction Watch, says AI has enabled unethical scientists to mass-produce low-quality material. These so-called “paper mills” churn out shoddy, plagiarized articles and sell authorship to anyone willing to pay, which is a major cause for concern.

Elisabeth Bik, a Dutch researcher who specializes in detecting manipulated scientific images, is also sounding the alarm. She estimates that roughly 2% of all studies are the product of paper mills, and that the figure is rising rapidly as AI opens new opportunities for abuse.

The problem is exacerbated by a culture that has taken root in scientific circles over many decades, pushing researchers to live by the “publish or perish” principle. Oransky criticizes the current system, stating: “Publishers have…
