When Should an AI Act? A Human-Centered Model of Scene, Context, and Behavior for Agentic AI Design
| USA | technology | ✓ Verified - arxiv.org


#Agentic AI #Human-Centered AI #AI Intervention #Contextual Sensitivity #AI Ethics #Scene-Context-Behavior Model #AI Design Principles

📌 Key Takeaways

  • Researchers propose a new model for AI decision-making about when to act
  • The model integrates Scene, Context, and Human Behavior Factors
  • Five design principles guide AI intervention depth, timing, and intensity
  • The approach aims to create AI systems that act with contextual sensitivity

📖 Full Retelling

Soyoung Jung and five fellow researchers published a paper on arXiv on February 26, 2026, proposing a human-centered model for deciding when AI should intervene in human activities. Current agentic AI systems, the authors argue, often lack principled judgment about when, why, and whether to act. The paper introduces a conceptual framework that reframes AI behavior as an interpretive outcome integrating three components: Scene (the observable situation), Context (user-constructed meaning), and Human Behavior Factors (determinants shaping behavioral likelihood). Drawing on the humanities, social sciences, HCI, and engineering, this multidisciplinary approach distinguishes what is merely observable from what is meaningful to the user, explaining how the same scene can yield different interpretations and outcomes. To translate the model into practical design guidance, the researchers derive five agent design principles (behavioral alignment, contextual sensitivity, temporal appropriateness, motivational calibration, and agency preservation) that address intervention depth, timing, intensity, and restraint.
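To make the framework concrete, the three components and the decision they feed could be sketched as simple data structures. This is a minimal illustrative sketch, not the authors' implementation: the class names mirror the paper's terminology, but the specific fields (`motivation`, `ability`, `receptivity`) and the numeric decision rule are hypothetical assumptions chosen to show how the principles of restraint, calibration, and agency preservation might combine.

```python
from dataclasses import dataclass
from enum import Enum

class Intervention(Enum):
    """Possible agent responses, ordered by intrusiveness."""
    REFRAIN = 0   # agency preservation: do not act
    SUGGEST = 1   # low-intensity nudge
    ACT = 2       # deeper autonomous intervention

@dataclass
class Scene:
    """Observable situation as sensed by the agent (paper's 'Scene')."""
    signals: dict  # e.g. {"location": "kitchen", "time": "08:00"}

@dataclass
class Context:
    """User-constructed meaning layered over the scene (paper's 'Context')."""
    user_goal: str
    meaning: str

@dataclass
class BehaviorFactors:
    """Determinants shaping behavioral likelihood (hypothetical fields)."""
    motivation: float   # 0..1: how much the user wants to act
    ability: float      # 0..1: how able the user is to act
    receptivity: float  # 0..1: openness to being interrupted

def decide(scene: Scene, context: Context, factors: BehaviorFactors,
           threshold: float = 0.5) -> Intervention:
    """Toy decision rule: intervene only when the user is unlikely to act
    on their own AND is receptive, calibrating intervention intensity."""
    likelihood = factors.motivation * factors.ability
    if factors.receptivity < 0.2:
        return Intervention.REFRAIN  # restraint: user does not want interruption
    if likelihood >= threshold:
        return Intervention.REFRAIN  # user will likely act without help
    if factors.receptivity > 0.7:
        return Intervention.ACT      # receptive user, low likelihood: act deeply
    return Intervention.SUGGEST      # otherwise, a calibrated low-intensity nudge
```

In this sketch, the same Scene produces different outcomes depending on the behavioral factors, echoing the paper's claim that identical scenes can carry different meanings: a highly motivated, able user is left alone, while an unreceptive user is never interrupted regardless of likelihood.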

🏷️ Themes

AI Ethics, Human-Computer Interaction, AI Decision Making

📚 Related People & Topics

Ethics of artificial intelligence

The ethics of artificial intelligence covers a broad range of topics within AI that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, accountability, transparency, privacy, and regulation, particularly where systems influence or automate human decision-mak...


AI agent

Systems that perform tasks without human intervention

In the context of generative artificial intelligence, AI agents (also referred to as compound AI systems or agentic AI) are a class of intelligent agents distinguished by their ability to operate autonomously in complex environments. Agentic AI tools prioritize decision-making over content creation ...


Entity Intersection Graph

Connections for Ethics of artificial intelligence:

🏢 Anthropic 10 shared
🌐 Pentagon 10 shared
🏢 OpenAI 7 shared
👤 Dario Amodei 4 shared
🌐 National security 3 shared
Original Source
Computer Science > Artificial Intelligence

arXiv:2602.22814 [cs.AI], submitted on 26 Feb 2026

Title: When Should an AI Act? A Human-Centered Model of Scene, Context, and Behavior for Agentic AI Design

Authors: Soyoung Jung, Daehoo Yoon, Sung Gyu Koh, Young Hwan Kim, Yehan Ahn, Sung Park

Abstract: Agentic AI increasingly intervenes proactively by inferring users' situations from contextual data, yet often fails for lack of principled judgment about when, why, and whether to act. We address this gap by proposing a conceptual model that reframes behavior as an interpretive outcome integrating Scene (observable situation), Context (user-constructed meaning), and Human Behavior Factors (determinants shaping behavioral likelihood). Grounded in multidisciplinary perspectives across the humanities, social sciences, HCI, and engineering, the model separates what is observable from what is meaningful to the user and explains how the same scene can yield different behavioral meanings and outcomes. To translate this lens into design action, we derive five agent design principles (behavioral alignment, contextual sensitivity, temporal appropriateness, motivational calibration, and agency preservation) that guide intervention depth, timing, intensity, and restraint. Together, the model and principles provide a foundation for designing agentic AI systems that act with contextual sensitivity and judgment in interactions.

Subjects: Artificial Intelligence (cs.AI); Human-Computer Interaction (cs.HC)
Cite as: arXiv:2602.22814 [cs.AI] (or arXiv:2602.22814v1 [cs.AI] for this version)
DOI: https://doi.org/10.48550/arXiv.2602.22814 (arXiv-issued DOI via DataCite, registration pending)
Submission history: from Sung Park, [v1] Thu, 26 Feb 2026 09:56:37 UTC (474 KB)

Source

arxiv.org
