PreScience: A Benchmark for Forecasting Scientific Contributions
#PreScience #ScientificForecasting #AIBenchmark #ResearchPrediction #LACERScore #ScientificContributions #AIResearch #GenerativeTasks
📌 Key Takeaways
- PreScience decomposes scientific research into four interdependent generative tasks
- The benchmark uses a dataset of 98K AI-related papers, plus a companion graph of author publication histories and citations spanning 502K total papers
- Researchers developed LACERScore, a novel metric for evaluating contribution similarity
- Current AI models show moderate performance in scientific forecasting tasks
- AI-generated research is less diverse and novel than human-authored research
📖 Full Retelling
On February 24, 2026, researchers led by Anirudh Ajith and Amanpreet Singh released PreScience on arXiv: a scientific forecasting benchmark that decomposes the research process into four interdependent generative tasks, namely collaborator prediction, prior work selection, contribution generation, and impact prediction. The goal is to develop AI systems that can forecast scientific advances and help researchers identify collaborators and impactful research directions.

PreScience is built on a carefully curated dataset of 98,000 recent AI-related research papers, featuring disambiguated author identities, temporally aligned scholarly metadata, and a structured companion graph of author publication histories and citations spanning 502,000 total papers. The team developed baselines and evaluations for each task, including LACERScore, a novel LLM-based measure of contribution similarity that outperforms previous metrics and approximates inter-annotator agreement.

The study found that substantial headroom remains in every task: frontier LLMs achieve only moderate similarity to the ground truth in contribution generation, with GPT-5 averaging 5.6 on a 1-10 scale. When the tasks were composed into a 12-month end-to-end simulation of scientific production, the resulting synthetic corpus was systematically less diverse and less novel than human-authored research from the same period, highlighting how far AI remains from replicating the creative diversity of human scientific progress.
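The paper defines LACERScore's actual prompt and rubric; as a rough illustration of how an LLM-judge similarity metric of this kind can be wired up, the sketch below builds a 1-10 rating prompt, sends it to any judge callable (a stub stands in for a frontier LLM here), and parses the integer score from the reply. All names and prompt wording are hypothetical, not taken from the paper.

```python
import re

def build_prompt(predicted: str, reference: str) -> str:
    """Ask an LLM judge to rate contribution similarity on a 1-10 scale.
    Illustrative wording only; the real LACERScore rubric is in the paper."""
    return (
        "Rate how similar the predicted scientific contribution is to the "
        "reference contribution on a 1-10 scale, where 10 means the same "
        "core idea.\n\n"
        f"Predicted: {predicted}\n"
        f"Reference: {reference}\n"
        "Answer with a single integer."
    )

def parse_score(completion: str) -> int:
    """Extract the first integer in 1..10 from the judge's reply."""
    match = re.search(r"\b(10|[1-9])\b", completion)
    if match is None:
        raise ValueError(f"no score found in: {completion!r}")
    return int(match.group(1))

def judge_similarity(predicted: str, reference: str, judge) -> int:
    """`judge` is any callable mapping a prompt string to a text completion,
    e.g. a wrapper around an LLM API."""
    return parse_score(judge(build_prompt(predicted, reference)))

# Stub judge standing in for a real LLM call.
stub_judge = lambda prompt: "Similarity: 6 out of 10."
print(judge_similarity("We propose method X", "We introduce X", stub_judge))  # 6
```

In practice such judge scores are validated against human annotators; the paper reports that LACERScore approximates inter-annotator agreement, which is the key property a metric like this must demonstrate.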
🏷️ Themes
Artificial Intelligence, Scientific Research, Benchmarking, Forecasting
Original Source
Computer Science > Artificial Intelligence
arXiv:2602.20459 [Submitted on 24 Feb 2026]
Title: PreScience: A Benchmark for Forecasting Scientific Contributions
Authors: Anirudh Ajith, Amanpreet Singh, Jay DeYoung, Nadav Kunievsky, Austin C. Kozlowski, Oyvind Tafjord, James Evans, Daniel S. Weld, Tom Hope, Doug Downey
Abstract: Can AI systems trained on the scientific record up to a fixed point in time forecast the scientific advances that follow? Such a capability could help researchers identify collaborators and impactful research directions, and anticipate which problems and methods will become central next. We introduce PreScience -- a scientific forecasting benchmark that decomposes the research process into four interdependent generative tasks: collaborator prediction, prior work selection, contribution generation, and impact prediction. PreScience is a carefully curated dataset of 98K recent AI-related research papers, featuring disambiguated author identities, temporally aligned scholarly metadata, and a structured graph of companion author publication histories and citations spanning 502K total papers. We develop baselines and evaluations for each task, including LACERScore, a novel LLM-based measure of contribution similarity that outperforms previous metrics and approximates inter-annotator agreement. We find substantial headroom remains in each task -- e.g. in contribution generation, frontier LLMs achieve only moderate similarity to the ground-truth (GPT-5 averages 5.6 on a 1-10 scale). When composed into a 12-month end-to-end simulation of scientific production, the resulting synthetic corpus is systematically less diverse and less novel than human-authored research...