Точка Синхронізації (Synchronization Point)

AI Archive of Human History

An Information-Theoretic Framework for Comparing Voice and Text Explainability


#XAI #Information Theory #Machine Learning #Voice Interface #Trust Calibration #arXiv #Explainable AI

📌 Key Takeaways

  • Researchers have developed a new mathematical framework based on information theory to compare voice and text AI explanations.
  • The study addresses the need for better trust calibration and user comprehension in Explainable AI (XAI) systems.
  • The framework treats the delivery of an explanation as a communication channel to measure information transfer efficiency.
  • Findings suggest that the modality of an explanation significantly impacts how users perceive and trust machine learning models.

📖 Full Retelling

Researchers specializing in artificial intelligence published a new study on the arXiv preprint server this week, introducing an information-theoretic framework for evaluating the effectiveness of voice versus text-based explanations in AI systems. The study addresses a critical gap in Explainable Artificial Intelligence (XAI), where most transparency tools have historically relied on visual or written output. By modeling explanation delivery as a communication channel, the research team aims to quantify how different sensory modalities influence a human user's ability to comprehend complex machine learning logic and to calibrate their trust in automated decisions.

The core of the research shifts the focus from the content of an explanation to the medium through which it is delivered. While text-based explanations allow rapid scanning and re-reading, voice-based interactions introduce variables such as tone, pacing, and cognitive load. The authors argue that viewing these interactions through the lens of information theory enables a rigorous, mathematical assessment of "mutual information": the measure of how much information is successfully transferred from the AI's internal model to the user's mental model.

This framework is particularly relevant as AI integration expands into hands-free or eyes-free environments, such as autonomous driving or smart home assistants, where textual interfaces are impractical. By analyzing trust calibration, the researchers explore whether users are more likely to over-rely on a system simply because it "speaks" to them, or whether the ephemeral nature of audio leads to lower comprehension than persistent text. The work sets the stage for more personalized XAI systems that can adapt their communication style to the complexity of the task and the specific needs of the human operator.
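The mutual-information measure described above can be sketched numerically. The snippet below is an illustrative toy model, not the paper's actual method: the joint distributions are hypothetical, standing in for how often a user's interpretation matches the model's true rationale under a text channel versus a noisier, ephemeral voice channel.

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits from a joint probability table joint[x][y]."""
    px = [sum(row) for row in joint]              # marginal P(X)
    py = [sum(col) for col in zip(*joint)]        # marginal P(Y)
    mi = 0.0
    for x, row in enumerate(joint):
        for y, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (px[x] * py[y]))
    return mi

# Hypothetical joint distributions P(model rationale, user interpretation).
# Text channel: persistent, re-readable, so interpretations align more often.
text_joint = [[0.45, 0.05],
              [0.05, 0.45]]
# Voice channel: ephemeral audio introduces more "noise" (misrecall).
voice_joint = [[0.35, 0.15],
               [0.15, 0.35]]

print(mutual_information(text_joint))   # more information transferred
print(mutual_information(voice_joint))  # less information transferred
```

Under these made-up numbers the text channel transfers more bits per explanation than the voice channel; the framework's point is that such differences can be quantified rather than asserted.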

🏷️ Themes

Artificial Intelligence, Human-Computer Interaction, Communication Theory

📚 Related People & Topics

Machine learning

Study of algorithms that improve automatically through experience

Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalize to unseen data, and thus perform tasks without explicit instructions. Within a subdiscipline in machine learning, advances i...

Wikipedia →


Information theory

Scientific study of digital information

Information theory is the mathematical study of the quantification, storage, and communication of a particular type of mathematically defined information. The field was established and formalized by Claude Shannon in the 1940s, though early contributions were made in the 1920s through the works of H...

Wikipedia →


📄 Original Source Content
arXiv:2602.07179v1 Announce Type: cross Abstract: Explainable Artificial Intelligence (XAI) aims to make machine learning models transparent and trustworthy, yet most current approaches communicate explanations visually or through text. This paper introduces an information-theoretic framework for analyzing how explanation modality (specifically, voice versus text) affects user comprehension and trust calibration in AI systems. The proposed model treats explanation delivery as a communication channel…

