
GhostCite: A Large-Scale Analysis of Citation Validity in the Age of Large Language Models

#Large Language Models #GhostCite #CiteVerifier #citation validity #AI hallucinations #academic writing #scholarly research #arXiv

📌 Key Takeaways

  • Researchers have identified 'ghost citations' as a systemic threat to scientific integrity in the age of AI.
  • A new study on arXiv quantifies how Large Language Models frequently fabricate non-existent academic references.
  • The 'CiteVerifier' open-source framework was developed to automatically detect and mitigate these AI hallucinations.
  • The proliferation of invalid citations risks collapsing the trust required for scientific claims and peer reviews.

📖 Full Retelling

Researchers specializing in artificial intelligence and academic integrity published a comprehensive study titled 'GhostCite' on the arXiv preprint server in February 2026, detailing the systemic threat of fabricated scientific citations generated by Large Language Models (LLMs). The paper introduces a measurement framework that quantifies how frequently AI-assisted academic writing produces 'ghost citations': references that look legitimate but do not actually exist. The investigation addresses growing concern that reliance on LLMs for research and drafting is undermining the reliability of the global scientific record.

The study highlights a critical vulnerability in the current academic ecosystem: citations serve as the bedrock of scientific trust, yet the hallucination tendencies of AI tools are polluting the literature with non-existent sources. By analyzing citation validity at scale, the authors show that the problem is not an occasional error but a structural risk. As LLMs become part of the workflows of students and professional researchers alike, the line between verified scholarly work and AI-generated fabrication blurs, endangering the peer-review process and future literature reviews.

To combat this phenomenon, the research team developed and released 'CiteVerifier', an open-source framework that automates the detection of invalid citations. CiteVerifier cross-references AI-generated bibliographies against established scholarly databases to flag discrepancies and outright fictions, giving journals, reviewers, and universities a scalable way to verify the authenticity of references. The project stresses that while AI can assist in drafting, rigorous verification mechanisms are essential to prevent the erosion of academic rigor and to preserve trust in scientific discourse.
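
The article does not describe CiteVerifier's internals, but the verification idea it summarizes, checking each generated reference against an established scholarly database, can be sketched in a few lines. The snippet below is an illustrative approximation only, not the authors' code: it queries the public Crossref REST API and treats a reference as plausible when a closely matching title is indexed there; the function names and the similarity threshold are assumptions made for this example.

import requests
from difflib import SequenceMatcher

CROSSREF_API = "https://api.crossref.org/works"  # public Crossref REST endpoint

def title_similarity(a: str, b: str) -> float:
    # Rough lexical similarity between two titles, in [0.0, 1.0].
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def citation_exists(title: str, threshold: float = 0.9) -> bool:
    # Query Crossref for bibliographic matches and accept the reference
    # only if some indexed work has a near-identical title. A real
    # verifier would also compare authors, year, venue, and DOI.
    resp = requests.get(
        CROSSREF_API,
        params={"query.bibliographic": title, "rows": 5},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        candidate = " ".join(item.get("title", []))
        if candidate and title_similarity(title, candidate) >= threshold:
            return True
    return False

if __name__ == "__main__":
    # A real, widely cited paper should pass; a fabricated title should not.
    print(citation_exists("Attention Is All You Need"))
    print(citation_exists("Quantum Citation Graphs for Sentient Bibliographies"))

Matching on titles alone keeps the sketch short but would miss subtler fabrications, such as real titles attached to the wrong authors or venues, which is why tools in this space typically cross-check several bibliographic fields rather than one.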

🏷️ Themes

Artificial Intelligence, Academic Integrity, Science & Technology

📚 Related People & Topics

Large language model

Type of machine learning model

A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation. The largest and most capable LLMs are generative pre-trained transformers (GPTs) that provide the c...

Wikipedia →

Hallucination (artificial intelligence)

Erroneous AI-generated content

In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called bullshitting, confabulation, or delusion) is a response generated by AI that contains false or misleading information presented as fact. This term draws a loose analogy with human psychology, where...

Wikipedia →


📄 Original Source Content
arXiv:2602.06718v1 Announce Type: cross Abstract: Citations provide the basis for trusting scientific claims; when they are invalid or fabricated, this trust collapses. With the advent of Large Language Models (LLMs), this risk has intensified: LLMs are increasingly used for academic writing, yet their tendency to fabricate citations ("ghost citations") poses a systemic threat to citation validity. To quantify this threat and inform mitigation, we develop CiteVerifier, an open-source framew
