Through the Looking-Glass: AI-Mediated Video Communication Reduces Interpersonal Trust and Confidence in Judgments
| USA | technology | ✓ Verified - arxiv.org


#AI #video communication #interpersonal trust #judgment confidence #social impact #technology ethics #mediated communication

📌 Key Takeaways

  • AI-mediated video communication reduces interpersonal trust between users.
  • It also decreases confidence in judgments made during such interactions.
  • The study highlights potential negative social impacts of AI in communication tools.
  • Findings suggest a need for caution in adopting AI for sensitive communications.

📖 Full Retelling

arXiv:2603.18868v1 Abstract: AI-based tools that mediate, enhance or generate parts of video communication may interfere with how people evaluate trustworthiness and credibility. In two preregistered online experiments (N = 2,000), we examined whether AI-mediated video retouching, background replacement and avatars affect interpersonal trust, people's ability to detect lies and confidence in their judgments. Participants watched short videos of speakers making truthful or deceptive statements across three conditions with varying levels of AI mediation. We observed that perceived trust and confidence in judgments declined in AI-mediated videos, particularly in settings in which some participants used avatars while others did not. However, participants' actual judgment accuracy remained unchanged, and they were no more inclined to suspect those using AI tools of lying. Our findings provide evidence against concerns that AI mediation undermines people's ability to distinguish truth from lies, and against cue-based accounts of lie detection more generally. They highlight the importance of trustworthy AI mediation tools in contexts where not only truth, but also trust and confidence matter.

🏷️ Themes

AI Ethics, Communication Technology

📚 Related People & Topics

Artificial intelligence



Mentioned Entities

Artificial intelligence


Deep Analysis

Why It Matters

This research matters because AI-mediated video communication is becoming increasingly common in professional, educational, and personal contexts, potentially undermining the foundation of human relationships. It affects anyone who uses video calls for job interviews, remote work, healthcare consultations, or legal proceedings where trust and accurate judgment are critical. The findings suggest that widespread adoption of AI-enhanced video could fundamentally alter how we perceive and interact with others in digital spaces, with implications for everything from business negotiations to mental health therapy conducted via telehealth platforms.

Context & Background

  • Video conferencing usage exploded during the COVID-19 pandemic, with platforms like Zoom reporting 300 million daily meeting participants in 2020
  • AI video enhancement tools (like background blur, eye contact correction, and appearance filters) have become standard features in platforms like Microsoft Teams, Google Meet, and Apple FaceTime
  • Previous research has shown that even subtle digital manipulations can affect perceptions, such as studies finding that virtual backgrounds influence credibility judgments in professional settings
  • The 'uncanny valley' phenomenon in robotics and animation suggests humans feel discomfort when human-like representations are almost but not perfectly realistic
  • Trust in digital communication has been a growing concern, with surveys showing declining confidence in online interactions compared to face-to-face communication

What Happens Next

Expect increased scrutiny of AI video features by organizations that rely on remote communication, potentially leading to corporate policies restricting certain enhancements during important meetings. Regulatory bodies may begin examining whether AI-mediated video requires disclosure requirements, similar to truth-in-advertising laws. Technology companies will likely invest in research to develop 'trust-preserving' AI features that minimize the negative effects identified in this study. Academic conferences and journals will see follow-up studies examining specific AI manipulations (like voice modulation or expression smoothing) and their individual impacts on different types of relationships.

Frequently Asked Questions

What specific AI features were tested in this research?

According to the abstract, the study tested three forms of AI mediation: video retouching, background replacement, and avatars, compared across three conditions with varying levels of AI mediation. These features are designed to create more 'polished' communication but may inadvertently reduce the authenticity perceptions that are crucial for trust building.

Does this mean we should avoid all AI-enhanced video communication?

Not necessarily—the research suggests being mindful about when and how to use these tools. For casual social interactions, AI enhancements might be harmless, but for situations requiring genuine connection and accurate judgment (like job interviews or medical consultations), minimizing artificial manipulation may be advisable until more research is available.

How was trust measured in this study?

In this study, participants watched short videos of speakers making truthful or deceptive statements, judged whether each speaker was lying, and rated their trust in the speaker and their confidence in their own judgment. Comparing these measures across conditions with different levels of AI mediation showed that trust and confidence declined under AI mediation, while actual lie-detection accuracy remained unchanged.
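To make the comparison concrete, here is a minimal sketch of the kind of per-condition summary such a design produces. The condition names, rating scale, and all data values below are invented for illustration; they do not reproduce the paper's preregistered materials or results.

```python
# Hypothetical sketch: mean trust rating and lie-detection accuracy per condition.
# All records are fabricated example data, not the study's actual responses.
from statistics import mean

# Each record: (condition, trust rating on a 1-7 scale, judged_lie, was_lie)
responses = [
    ("unmodified", 5.8, False, False),
    ("unmodified", 5.2, True,  True),
    ("retouched",  4.9, False, False),
    ("retouched",  4.6, True,  False),
    ("avatar",     4.1, False, True),
    ("avatar",     4.3, True,  True),
]

def summarize(records):
    """Return mean trust and lie-detection accuracy for each condition."""
    out = {}
    for cond in {r[0] for r in records}:
        rows = [r for r in records if r[0] == cond]
        out[cond] = {
            "mean_trust": round(mean(r[1] for r in rows), 2),
            # A judgment is accurate when judged_lie matches was_lie.
            "accuracy": mean(1.0 if r[2] == r[3] else 0.0 for r in rows),
        }
    return out

stats = summarize(responses)
```

The study's headline pattern would appear here as lower `mean_trust` in the mediated conditions with `accuracy` roughly flat across all of them.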

Are some people more affected than others by AI-mediated communication?

Individual differences likely exist—people who are generally more skeptical of technology or who rely heavily on nonverbal cues might be disproportionately affected. Cultural factors may also play a role, as different societies place varying emphasis on specific communication norms that AI might inadvertently disrupt.

Could AI video features eventually be designed to increase rather than decrease trust?

Yes, future AI systems could potentially be designed with 'trust-aware' algorithms that preserve authentic cues while removing only distracting elements. This would require careful research into which modifications preserve versus undermine genuine human connection, representing an important direction for ethical AI development.

Original Source

Computer Science > Human-Computer Interaction
arXiv:2603.18868 [cs.HC] (arXiv:2603.18868v1 for this version), submitted on 19 Mar 2026
Title: Through the Looking-Glass: AI-Mediated Video Communication Reduces Interpersonal Trust and Confidence in Judgments
Authors: Nelson Navajas Fernández, Jeffrey T. Hancock, Maurice Jakesch
Subjects: Human-Computer Interaction (cs.HC); Artificial Intelligence (cs.AI); Multimedia (cs.MM)
DOI: https://doi.org/10.48550/arXiv.2603.18868 (arXiv-issued DOI via DataCite, pending registration)
Abstract: AI-based tools that mediate, enhance or generate parts of video communication may interfere with how people evaluate trustworthiness and credibility. In two preregistered online experiments (N = 2,000), we examined whether AI-mediated video retouching, background replacement and avatars affect interpersonal trust, people's ability to detect lies and confidence in their judgments. Participants watched short videos of speakers making truthful or deceptive statements across three conditions with varying levels of AI mediation. We observed that perceived trust and confidence in judgments declined in AI-mediated videos, particularly in settings in which some participants used avatars while others did not. However, participants' actual judgment accuracy remained unchanged, and they were no more inclined to suspect those using AI tools of lying. Our findings provide evidence against concerns that AI mediation undermines people's ability to distinguish truth from lies, and against cue-based accounts of lie detection more generally. They highlight the importance of trustworthy AI mediation tools in contexts where not only truth, but also trust and confidence matter.
Read full article at source

Source

arxiv.org
