Weird rat, AI Fake Science - trust the what?
Description
Once upon a time, we said, "trust the science," the method of finding answers.
Then came this new study, rooted in the scientific method, carefully and cleverly outlining new research. It was peer-reviewed and approved for publication.
It gave us this rat with giant balls. I mean, they're twice the size of its head. It probably needs a wheelbarrow to walk down the street.
But then somebody noticed in the journal that the words read a little weird, like Midjourney gibberish.
How does this kind of work get published and approved? Is science research being undermined and hurt by AI?
Episode 58 Playlist
1:38 AND EVERY DAY – We discover the fraud is increasing
4:17 UNTIL ONE DAY – What can we do about it?
7:28 BECAUSE OF THAT – Science publishing started looking at how to detect, and create, fake science articles
11:03 The biggest challenge to detecting fake AI scientific content
12:19 The risks of AI detection to good scientific research
13:30 Trying to stop AI-generated content...with AI?
17:12 Science starts developing AI tools to help create, not just stop
20:43 AI fake science put 19 journals out of business; what could it bring?
AI in Scientific Research: Promise and Peril
While fake videos and audio dominate the headlines, abuse of AI is popping up in unexpected places in the field of science.
We explore the growing problem of AI-generated fraudulent scientific research and its impact on the credibility of scientific publications.
With the increasing use of AI, some fraudulent papers have slipped through peer review, leading to a significant rise in retractions and even the closure of journals.
The episode concludes by discussing the potential benefits of using AI as a tool in scientific research, provided there is transparency and proper human oversight.
Ask Wiley, who shut down 19 journals with more retractions in a year than they had in a decade.
The answer is surprising. With AI-produced fraudulent studies forcing those 19 journals to close, the story of scientific publishing shows how we adapt to both AI's promise and its darker side.
Introduction to AI-produced Fake Science
* Trust in scientific research meets a challenge in AI-generated fraud.
* Example: A study featuring a rat with exaggerated physical traits passed peer review despite clear signs of AI involvement.
Impact on Scientific Publishing
* The rise in AI-generated fraudulent papers has led to the shutdown of journals.
* Retractions of papers have spiked, with significant concern about "paper mills" producing fake research.
Challenges with AI in Science
* The use of AI in generating scientific content poses risks, including false positives in AI detection tools.
* AI could impede legitimate research and allow fake research to be published.
Interviews and Insights
* Jason Rodgers discusses the challenges in detecting AI-generated scientific content and the implications for research integrity.
Jason Rodgers is a Producer — Audio Production Specialist with Bitesize Bio
Affiliation: Liverpool John Moores University (LJMU), Applied Forensic Technology Research Group (AFTR)
Qualifications: BSc Audio Engineering - University of the Highlands & Islands (2017), MSc Audio Forensics & Restoration - Liverpool John Moores University (2023)
Professional Bodies: Audio Engineering Society - Member, Chartered Society of Forensic Sciences - Associate Member (ACSFS)
LinkedIn: https://www.linkedin.com/in/ca33ag3/
AI as a Tool in Research
* AI has been used to assist in drug discovery and could benefit scientific research if used transparently.
Conclusion
* Transparency and human oversight in AI-assisted research will help ensure scientific integrity.
Let's find out why those 19 journals shut down, starting with an easy question:
Does AI have any benefit to scientific research from now on? Or is it a Pandora's box?
Every day, we discover that fraud is increasing, though it's still only about 2% of what's out there. So this is only part of the industry. But there's danger involved, and scientists are worried.
How we handle danger makes all the difference in AI.
The well-endowed rat showed that the peer-review process, in which two other scientists review a paper for accuracy and comment on it, didn't even know what Midjourney was, even though it was credited in the research itself.
All three images in the paper were AI-generated fakes, though the reviewers did check the science in the text and felt it was sound.
The journal Frontiers then retracted it, with a disclaimer essentially saying: it's not our fault, we're only the publisher; the problem lies with the people who submitted it.
Retractions took a massive leap in 2023, to levels never seen before. Andrew Gray, a librarian at University College London, went through millions of papers searching for the overuse of AI-favored words like "meticulous," "intricate," or "commendable."
Worse, these fake papers may become citations for other research. Gray says we will see the numbers increase in 2024. Thirteen thousand papers were retracted in 2023.
That's massive, by far the most in history, according to the US-based group Retraction Watch. AI has allowed bad actors in scientific publishing and academia to industrialize the flow of junk papers.
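Gray's approach, scanning text for words that AI models tend to overuse, can be sketched in a few lines. This is a toy illustration, not his actual methodology: the word list is drawn from the examples in the episode, and the function name and phrase list are my own assumptions. Legitimate authors use these words too, so counts only suggest where to look closer.

```python
import re
from collections import Counter

# Illustrative list of words flagged as overused in AI-written text
# (examples mentioned in the episode; a real study would use many more).
MARKER_WORDS = {"meticulous", "meticulously", "intricate", "commendable"}
# Verbatim chatbot boilerplate is a much stronger signal than word frequency.
AI_PHRASES = ["as an ai language model"]

def flag_suspicious(text: str) -> dict:
    """Return crude AI-marker counts for one paper's text (a toy sketch)."""
    lowered = text.lower()
    words = re.findall(r"[a-z]+", lowered)
    counts = Counter(w for w in words if w in MARKER_WORDS)
    phrase_hits = sum(lowered.count(p) for p in AI_PHRASES)
    return {"marker_words": dict(counts), "ai_phrases": phrase_hits}

sample = ("Please note that as an AI language model, I am unable to "
          "generate specific tables. The meticulous analysis is commendable.")
print(flag_suspicious(sample))
```

At scale, a screen like this only ranks papers for human review; it can't prove anything on its own, which is exactly the false-positive risk discussed later in the episode.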
Retraction Watch co-founder Ivan Oransky says such bad actors now include what are known as paper mills, which churn out fabricated research for a fee.
According to Elisabeth Bik, a Dutch researcher who detects scientific image manipulation, these scammers sell authorship to researchers and pump out vast amounts of low-quality plagiarized or fake papers.
By her estimate, paper mills account for 2% of all published papers, but the rate is exploding as AI opens the floodgates.
So now, all of a sudden, we understand that this isn't just a nuisance: AI-generated science is a threat to what we see in scientific journals. One day the industry woke up and asked, what can we do about it?
Legitimate scientists find their field awash in weird fake science. It hurts reputations and trust, and it even stops some legitimate research from moving forward.
In its August edition, Resources Policy, an academic journal under the Elsevier publishing umbrella, featured a peer-reviewed study about how e-commerce has affected fossil fuel efficiency in developing nations.
But buried in the report was a curious sentence:
"Please note that as an AI language model, I am unable to generate specific tables or conduct tests, so the actual results should be included in the table."
Two hundred of the research retractions involved no AI at all; this is about more than just ChatGPT ruining science.
But what's happening is the industry is trying to stop this.
Web of Science, which is run by Clarivate, sanctioned and removed more than 50 journals, saying they failed to meet its quality selection criteria. Thanks to technology, its ability to identify journals of concern has improved.
That sounds like AI, and it probably is. That's a good thing to do.
But it's after the fact, after the work has already been published. The key question is: how can we stop it before it happens, before it gets published?
And how can we encourage scientific researchers to use AI correctly? We need something that stops it before it happens.
Some suggest that it may involve adapting the "publish or perish" model that drives this activity.
If you're unfamiliar with it, "publish or perish" means that if you don't get published, your scientific career can be hurt.
The science and publication industry started looking deeply at what was right in front of them. But they didn't grasp it until our giant rat and other fake papers put them on red alert.
Wait, I had to look for myself. I went to Google Scholar and searched for the phrase "as an AI language model." Of the top ten results, eight contained "as an AI language model."
And if you've ever used ChatGPT, you know what that means. Not only did these authors not bother to remove the obvious AI language, but there must be a ton of AI-written scientific papers out there.
The people creating those know to remove the AI tells and run the text through a basic grammar checker.