BravenNow
Human Attribution of Causality to AI Across Agency, Misuse, and Misalignment


📖 Full Retelling

arXiv:2603.13236v1 Announce Type: new Abstract: AI-related incidents are becoming increasingly frequent and severe, ranging from safety failures to misuse by malicious actors. In such complex situations, identifying which elements caused an adverse outcome, the problem of cause selection, is a critical first step for establishing liability. This paper investigates folk perceptions of causal responsibility in causal chain structures when AI systems are involved in harmful outcomes. We conduct human experiments to examine judgments of causality, blame, foreseeability, and counterfactual reasoning.


Original Source
Computer Science > Artificial Intelligence
arXiv:2603.13236 [Submitted on 17 Feb 2026]
Title: Human Attribution of Causality to AI Across Agency, Misuse, and Misalignment
Authors: Maria Victoria Carro, David Lagnado

Abstract: AI-related incidents are becoming increasingly frequent and severe, ranging from safety failures to misuse by malicious actors. In such complex situations, identifying which elements caused an adverse outcome, the problem of cause selection, is a critical first step for establishing liability. This paper investigates folk perceptions of causal responsibility in causal chain structures when AI systems are involved in harmful outcomes. We conduct human experiments to examine judgments of causality, blame, foreseeability, and counterfactual reasoning. Our findings show that:

(1) When AI agency was moderate (the human sets the goal, the AI determines the means) or high (the AI sets both the goal and the means), participants attributed greater causal responsibility to the AI. However, under low AI agency (where the human sets both the goal and the means), participants assigned greater causal responsibility to the human, despite the human's temporal distance from the outcome and despite both agents intending it, suggesting an effect of autonomy.

(2) When we reversed the roles of human and AI, participants consistently judged the human as more causal, even when both agents performed the same action.

(3) The developer, despite being distant in the chain, was judged highly causal, reducing causal attributions to the human user but not to the AI.

(4) Decomposing the AI into a large language model and an agentic component showed that the agentic part was judged as more causal in the chain.
Overall, our research provides evidence on how people perceive the causal contribution of AI in both misuse and misalignment scenario...

Source

arxiv.org
