Not All Trust is the Same: Effects of Decision Workflow and Explanations in Human-AI Decision Making


#trust #decision workflow #explanations #human-AI collaboration #reliance #AI systems #user interaction

📌 Key Takeaways

  • Human-AI trust varies based on decision workflow and explanation types
  • Different workflows (e.g., sequential vs. joint) impact user reliance on AI
  • Explanations can enhance trust but effectiveness depends on workflow context
  • Study highlights need for tailored AI systems to optimize human-AI collaboration

📖 Full Retelling

arXiv:2603.05229v1 Announce Type: cross

Abstract: A central challenge in AI-assisted decision making is achieving warranted, well-calibrated trust. Both overtrust (accepting incorrect AI recommendations) and undertrust (rejecting correct advice) should be prevented. Prior studies differ in the design of the decision workflow (whether users see the AI suggestion immediately, a 1-step setup, or must submit a first decision beforehand, a 2-step setup) and in how trust is measured: through self-reports or as behavioral trust, that is, reliance. The authors examined the effects and interactions of decision workflow type, the presence of explanations, and users' domain knowledge and prior AI experience, comparing reported trust, reliance (agreement rate and switch rate), and overreliance. Results showed no evidence that a 2-step setup reduces overreliance. The decision workflow also did not directly affect self-reported trust, but a crossover interaction with domain knowledge and explanations suggests that the effects of explanations alone may not generalize across workflow setups. Finally, the findings confirm that reported trust and reliance behavior are distinct constructs that should be evaluated separately in AI-assisted decision making.
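The reliance measures the abstract names can be made concrete. Below is a minimal sketch of how agreement rate, switch rate, and overreliance might be computed from per-trial decision logs in a 2-step setup. The `Trial` record format and the exact operationalizations are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): computing reliance metrics
# from decision logs in a 2-step human-AI decision workflow.
from dataclasses import dataclass


@dataclass
class Trial:
    initial: str  # user's first decision, before seeing the AI (2-step setup)
    ai: str       # the AI's recommendation
    final: str    # user's final decision, after seeing the AI
    correct: str  # ground-truth answer


def agreement_rate(trials):
    """Fraction of trials where the final decision matches the AI advice."""
    return sum(t.final == t.ai for t in trials) / len(trials)


def switch_rate(trials):
    """Among trials where the user initially disagreed with the AI,
    the fraction where they switched to the AI's recommendation."""
    disagreed = [t for t in trials if t.initial != t.ai]
    if not disagreed:
        return 0.0
    return sum(t.final == t.ai for t in disagreed) / len(disagreed)


def overreliance_rate(trials):
    """Among trials where the AI was wrong, the fraction where the user
    nevertheless followed it (accepted incorrect advice)."""
    ai_wrong = [t for t in trials if t.ai != t.correct]
    if not ai_wrong:
        return 0.0
    return sum(t.final == t.ai for t in ai_wrong) / len(ai_wrong)
```

Under these assumed definitions, a well-calibrated user would show a high switch rate when the AI is correct but a low overreliance rate, which is why the paper treats reliance behavior separately from self-reported trust.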

🏷️ Themes

Human-AI Interaction, Trust Dynamics

📚 Related People & Topics

Decision-making

Process to choose a course of action

In psychology, decision-making (also spelled decision making and decisionmaking) is regarded as the cognitive process resulting in the selection of a belief or a course of action among several possible alternative options. It could be either rational or irrational. The decision-making process is a r...


Original Source

Computer Science > Human-Computer Interaction
arXiv:2603.05229 [Submitted on 5 Mar 2026]
Title: Not All Trust is the Same: Effects of Decision Workflow and Explanations in Human-AI Decision Making
Authors: Laura Spillner, Rachel Ringe, Robert Porzel, Rainer Malaka

Abstract: A central challenge in AI-assisted decision making is achieving warranted, well-calibrated trust. Both overtrust (accepting incorrect AI recommendations) and undertrust (rejecting correct advice) should be prevented. Prior studies differ in the design of the decision workflow (whether users see the AI suggestion immediately, a 1-step setup, or have to submit a first decision beforehand, a 2-step setup) and in how trust is measured: through self-reports or as behavioral trust, that is, reliance. We examined the effects and interactions of the type of decision workflow, the presence of explanations, and users' domain knowledge and prior AI experience. We compared reported trust, reliance (agreement rate and switch rate), and overreliance. Results showed no evidence that a 2-step setup reduces overreliance. The decision workflow also did not directly affect self-reported trust, but there was a crossover interaction effect with domain knowledge and explanations, suggesting that the effects of explanations alone may not generalize across workflow setups. Finally, our findings confirm that reported trust and reliance behavior are distinct constructs that should be evaluated separately in AI-assisted decision making.

Comments: Accepted at Conversations 2025 Symposium
Subjects: Human-Computer Interaction (cs.HC); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.05229 [cs.HC] (or arXiv:2603.05229v1 [cs.HC] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.05229

Source

arxiv.org
