Human Attribution of Causality to AI Across Agency, Misuse, and Misalignment
| USA | technology | ✓ Verified - arxiv.org


#AI causality #human attribution #agency #misuse #misalignment #ethical AI #responsibility

📌 Key Takeaways

  • Humans attribute causality to AI differently based on perceived agency, misuse, and misalignment.
  • AI's perceived agency influences whether humans hold it responsible for outcomes.
  • Misuse of AI by humans shifts blame away from the technology itself.
  • Misalignment between AI goals and human values complicates causal attribution.
  • Understanding these factors is crucial for developing ethical AI governance frameworks.

📖 Full Retelling

arXiv:2603.13236v1 Announce Type: new Abstract: AI-related incidents are becoming increasingly frequent and severe, ranging from safety failures to misuse by malicious actors. In such complex situations, identifying which elements caused an adverse outcome, the problem of cause selection, is a critical first step for establishing liability. This paper investigates folk perceptions of causal responsibility in causal chain structures when AI systems are involved in harmful outcomes. We conduct human experiments to examine judgments of causality, blame, foreseeability, and counterfactual reasoning.

🏷️ Themes

AI Ethics, Human Perception

Entity Intersection Graph

No entity connections available yet for this article.

Deep Analysis

Why It Matters

This research matters because it examines how people assign responsibility when AI systems cause harm, which directly impacts legal frameworks, corporate liability, and public trust in AI technologies. It affects policymakers who must create regulations, companies developing AI systems, and consumers who interact with AI daily. Understanding these attribution patterns is crucial for developing fair accountability mechanisms as AI becomes more integrated into society.

Context & Background

  • Previous research shows humans often struggle with assigning causality in complex technological systems, tending to anthropomorphize machines
  • Legal systems worldwide are grappling with how to assign liability for AI-caused harms, with current frameworks often treating AI as tools rather than agents
  • High-profile AI incidents like autonomous vehicle accidents and algorithmic bias cases have raised public awareness of AI responsibility questions
  • Philosophical debates about AI agency and moral responsibility have intensified as AI capabilities advance

What Happens Next

This research will likely inform upcoming regulatory discussions about AI liability, particularly around EU AI Act implementation and US AI policy development. Expect follow-up studies examining how these attribution patterns affect jury decisions in AI-related court cases. Technology companies may adjust their AI development practices and liability disclaimers in light of these findings.

Frequently Asked Questions

What are the three main categories of AI causality mentioned in the research?

The research examines agency (AI acting autonomously), misuse (humans using AI improperly), and misalignment (AI behaving in unintended ways despite proper use). These categories help distinguish different types of AI-related incidents for attribution purposes.

Why do attribution patterns matter for AI development?

How people assign causality affects legal liability, public acceptance, and regulatory approaches. If people consistently blame humans for AI failures, developers might face less pressure for safety measures. Conversely, if AI is seen as autonomous agents, different accountability structures would be needed.

How might this research affect AI regulations?

The findings could influence whether regulations treat AI as tools (with human operators responsible) or as potential agents (with some autonomous responsibility). This distinction affects everything from product liability laws to insurance requirements for AI systems.

What practical implications does this have for companies using AI?

Companies may need to reconsider their risk management strategies, insurance coverage, and user education based on how people attribute causality. Understanding these patterns can help design better human-AI interfaces and clearer responsibility frameworks.

How does this relate to existing research on technology responsibility?

This builds on previous work examining responsibility attribution for other technologies, but addresses unique aspects of AI including learning capabilities, autonomy, and opacity. The research likely compares AI attribution patterns to those for other complex systems like automated machinery or software.

Original Source
Computer Science > Artificial Intelligence
arXiv:2603.13236 [Submitted on 17 Feb 2026]
Title: Human Attribution of Causality to AI Across Agency, Misuse, and Misalignment
Authors: Maria Victoria Carro, David Lagnado

Abstract: AI-related incidents are becoming increasingly frequent and severe, ranging from safety failures to misuse by malicious actors. In such complex situations, identifying which elements caused an adverse outcome, the problem of cause selection, is a critical first step for establishing liability. This paper investigates folk perceptions of causal responsibility in causal chain structures when AI systems are involved in harmful outcomes. We conduct human experiments to examine judgments of causality, blame, foreseeability, and counterfactual reasoning. Our findings show that: (1) when AI agency was moderate (the human sets the goal, the AI determines the means) or high (the AI sets both the goal and the means), participants attributed greater causal responsibility to the AI; however, under low AI agency (where a human sets both the goal and the means), participants assigned greater causal responsibility to the human despite their temporal distance from the outcome and despite both agents intending it, suggesting an effect of autonomy; (2) when we reversed the roles of human and AI, participants consistently judged the human as more causal, even when both agents performed the same action; (3) the developer, despite being distant in the chain, was judged highly causal, reducing causal attributions to the human user but not to the AI; (4) decomposing the AI into a large language model and an agentic component showed that the agentic part was judged as more causal in the chain.
Overall, our research provides evidence on how people perceive the causal contribution of AI in both misuse and misalignment scenario...

Source

arxiv.org
