AI‑Generated Papers Flood Conferences, Prompting Citation Crisis and Educational Backlash

AI Tools Accelerate Paper Production and Fabricated Citations

Rapid‑writing systems introduced in late 2024 now draft entire sections and suggest specific references, enabling researchers to submit manuscripts at unprecedented speed. A scan of 17,000 submissions to ACL, NAACL and EMNLP from 2024‑2025 uncovered 295 papers containing at least one invented citation, up from 20 in 2024 to 275 in 2025; still under 2% of the total, but a stark rise that overwhelms peer‑review capacity [2]. These "AI scientist" tools not only generate text but also insert non‑existent sources that later propagate through citation databases, creating a network of "ghost entries."

Peer Review Overload Allows Bogus References to Slip Through

Reviewers report handling dozens of papers within days, turning evaluation into a formalistic checklist rather than substantive scrutiny. Study co‑author Yusuke Sakai described completing ten reviews in a single week, noting that even flagged false references often remain uncorrected [2]. The surge in AI‑driven submissions leaves reviewers with limited time to verify each citation, especially as erroneous database entries are copied across multiple manuscripts, amplifying misinformation.

Academic Community Warns of Creativity and Critical Thinking Decline

Commentators argue that easy access to AI‑generated content encourages speed over deep thinking, eroding disciplined essay writing and reading habits among students and professionals [1]. The flood of AI‑produced papers, many with fabricated references, threatens the integrity of scholarly communication and fuels propaganda that undermines democratic discourse. Scholars call for safeguarding the humanities as a bulwark against this intellectual regression, emphasizing that genuine imagination cannot be replaced by algorithmic "hallucinations."

Institutions Respond with Device Restrictions and Review Reforms

In Denmark, schools have begun banning mobile phones, laptops and other digital tools to revive traditional, device‑free learning environments [1]. Researchers propose automated screening of manuscripts with three or more suspicious citations and a shift to continuous, yearly review models akin to mega‑journals ("megatidsskrifter") to improve reliability [2]. These measures aim to balance AI's complementary role with robust human oversight, preserving critical thought while curbing the spread of fabricated scholarship.
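The threshold rule the researchers propose is simple to state. As a purely illustrative sketch (the study [2] does not publish an implementation, and the membership check below is a hypothetical stand‑in for real lookups against bibliographic databases), it might look like:

```python
# Illustrative sketch of the proposed screening rule: flag any manuscript
# whose reference list contains three or more suspicious citations.
# A real system would verify each entry against bibliographic databases;
# here, "verified_titles" is a hypothetical pre-built index of known papers.

SUSPICIOUS_THRESHOLD = 3  # threshold proposed by the researchers [2]

def count_suspicious(references, verified_titles):
    """Count references whose titles are absent from the verified index."""
    return sum(1 for ref in references if ref not in verified_titles)

def needs_manual_review(references, verified_titles):
    """Flag a manuscript for manual verification at >= 3 suspicious refs."""
    return count_suspicious(references, verified_titles) >= SUSPICIOUS_THRESHOLD

# Toy example with made-up reference titles:
verified = {"Attention Is All You Need", "BERT"}
refs = ["Attention Is All You Need", "Ghost Paper A",
        "Ghost Paper B", "Ghost Paper C"]
# needs_manual_review(refs, verified) -> True (three unverifiable entries)
```

Such a rule would only route manuscripts to manual verification, not reject them; a missing database entry may simply mean the cited work is new or indexed under a different title.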

Timeline

Late 2024 – New “AI scientist” systems debut, automatically searching literature, suggesting specific citations and drafting review sections, which later become a primary source of fabricated references in scholarly papers [2].

2024 – Researchers identify only 20 conference papers containing at least one false citation among 17,000 submissions to ACL, NAACL and EMNLP, marking the first measurable appearance of “phantom citations” [2].

2025 – The count of papers with fabricated references surges to 275, bringing the 2024‑2025 total to 295, still under 2% of examined submissions, as AI‑generated writing tools accelerate manuscript preparation [2].

2025 – AI tools enable rapid production of scientific articles, many embedding fabricated or misattributed references, overwhelming peer‑review systems and spreading misinformation across the research ecosystem [1].

2025 – Denmark implements a nationwide policy restricting mobile phones, laptops and other digital devices in classrooms, aiming to revive device‑free, conventional learning methods [1].

2025 – The decline of essay writing, reading and critical discourse—exacerbated by AI‑driven propaganda and deepfakes—begins to erode democratic participation and the capacity for free thought [1].

2025‑2026 – Universities receive calls to safeguard the humanities as a bulwark for critical thinking, positioning AI as a complementary tool rather than a substitute for human creativity [1].

Feb 17, 2026 – A study published reveals 295 conference papers with fabricated citations, attributes the problem to “ghost entries” in major research databases that propagate errors, and notes reviewers cannot verify every citation under tight deadlines; co‑author Yusuke Sakai describes the review process as “more formalistic than substantive” [2].

Feb 18, 2026 – An op‑ed warns that AI encourages speed over deep thinking, risking intellectual regression as students and professionals outsource cognitive labour, and argues that mistaking LLM "hallucinations" for genuine imagination shrinks the definition of humanity [1].

Future (post‑2026) – Researchers propose automated screening that flags papers with three or more suspicious references for manual verification and a continuous‑yearly review model to replace the current conference‑centric evaluation, aiming to restore reliability in scholarly publishing [2].
