BravenNow
Exploring Human Behavior During Abstract Rule Inference and Problem Solving with the Cognitive Abstraction and Reasoning Corpus
| USA | technology | arxiv.org


#Cognitive Abstraction and Reasoning Corpus #Abstract Reasoning #Human Cognition #Problem Solving #Artificial Intelligence #Rule Inference #Cognitive Strategies

📌 Key Takeaways

  • Researchers introduced CogARC, a human-adapted subset of ARC for studying abstract reasoning
  • 260 participants solved 75 abstract visual reasoning problems across two experiments
  • Participants averaged roughly 90% accuracy in Experiment 1 and 80% in Experiment 2, but varied widely in performance and strategy
  • The research provides insights into how humans generalize, misgeneralize, and adapt strategies under uncertainty

📖 Full Retelling

In a paper submitted to arXiv on February 25, 2026, researchers led by Caroline Ahn, together with six co-authors, introduced the Cognitive Abstraction and Reasoning Corpus (CogARC) to investigate the cognitive strategies underlying human abstract reasoning and how people rapidly learn and apply rules from sparse examples. CogARC is a diverse, human-adapted subset of the original Abstraction and Reasoning Corpus (ARC), which was developed primarily to benchmark abstract reasoning capabilities in artificial intelligence systems.

Across two experiments, the research team administered CogARC to 260 human participants, who freely generated solutions to 75 abstract visual reasoning problems. Success required inferring an input-output rule from a small number of examples and applying it to transform a test input into the correct output. The study recorded participants' behavior at high temporal resolution, capturing example viewing patterns, edit sequences, and multiple submission attempts.

Participants were generally successful, with mean accuracy of approximately 90% in the first experiment (40 participants) and 80% in the second (220 participants), though performance varied widely across both problems and individuals. Harder problems elicited longer deliberation times and greater divergence in solution strategies. Over the course of the task, participants initiated responses more quickly but showed a slight decline in accuracy, suggesting growing familiarity with the task structure rather than improved rule-learning ability. Notably, even incorrect solutions were often highly convergent: some solution trajectories progressed directly toward an outcome, while others involved extended exploration or partial restarts before reaching a conclusion.
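To make the task format concrete, the following is a minimal sketch of an ARC-style problem in Python. The grids, the candidate rule (mirroring each row), and all names here are illustrative assumptions, not actual CogARC items: the solver sees a few input-output example pairs, infers the transformation rule, and applies it to a fresh test input.

```python
# Hypothetical ARC-style problem. Grids are lists of rows of color codes.
# The rule used here (reflect each row left-to-right) is invented for
# illustration; real ARC/CogARC problems use many different rules.

def reflect_rows(grid):
    """Candidate rule: mirror each row left-to-right."""
    return [row[::-1] for row in grid]

# A small number of training examples: (input grid, expected output grid).
examples = [
    ([[1, 0, 0],
      [0, 2, 0]],
     [[0, 0, 1],
      [0, 2, 0]]),
    ([[3, 3, 0]],
     [[0, 3, 3]]),
]

# Rule inference in miniature: the candidate must reproduce every example.
assert all(reflect_rows(inp) == out for inp, out in examples)

# Apply the inferred rule to a new test input, as participants did.
test_input = [[0, 5, 7]]
print(reflect_rows(test_input))  # -> [[7, 5, 0]]
```

In the actual experiments, participants performed this inference and application step freely, with their example viewing, grid edits, and submissions recorded along the way.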

🏷️ Themes

Cognitive Science, Artificial Intelligence, Human Reasoning

📚 Related People & Topics

Problem solving


Process of achieving a goal by overcoming obstacles

Problem solving is the process of achieving a goal by overcoming obstacles, a frequent part of most activities. Problems in need of solutions range from simple personal tasks (e.g. how to turn on an appliance) to complex issues in business and technical fields.

Artificial intelligence


Intelligence of machines

Artificial Intelligence (AI) is a field of computer science dedicated to the development and study of computational systems capable of performing tasks typically associated with human intelligence, such as learning, reasoning, and problem-solving.



Original Source
Computer Science > Artificial Intelligence
arXiv:2602.22408 [Submitted on 25 Feb 2026]

Title: Exploring Human Behavior During Abstract Rule Inference and Problem Solving with the Cognitive Abstraction and Reasoning Corpus
Authors: Caroline Ahn, Quan Do, Leah Bakst, Michael P. Pascale, Joseph T. McGuire, Michael E. Hasselmo, Chantal E. Stern

Abstract: Humans exhibit remarkable flexibility in abstract reasoning, and can rapidly learn and apply rules from sparse examples. To investigate the cognitive strategies underlying this ability, we introduce the Cognitive Abstraction and Reasoning Corpus (CogARC), a diverse human-adapted subset of the Abstraction and Reasoning Corpus, which was originally developed to benchmark abstract reasoning in artificial intelligence. Across two experiments, CogARC was administered to a total of 260 human participants who freely generated solutions to 75 abstract visual reasoning problems. Success required inferring input-output rules from a small number of examples to transform the test input into one correct test output. Participants' behavior was recorded at high temporal resolution, including example viewing, edit sequences, and multi-attempt submissions. Participants were generally successful (mean accuracy ~90% for Experiment 1, n = 40; ~80% for Experiment 2, n = 220), but performance varied widely across problems and participants. Harder problems elicited longer deliberation times and greater divergence in solution strategies. Over the course of the task, participants initiated responses more quickly but showed a slight decline in accuracy, suggesting increased familiarity with the task structure rather than improved rule-learning ability. Importantly, even incorrect solutions were often highly convergent, even when t...
Read full article at source

Source

arxiv.org
