BravenNow
🌐 Entity

Explainable artificial intelligence

AI whose outputs can be understood by humans

📊 Rating

9 news mentions · 👍 0 likes · 👎 0 dislikes

📌 Topics

  • Explainable AI (7)
  • Machine Learning (3)
  • Cybersecurity (1)
  • IoT Systems (1)
  • Neural Network Calibration (1)
  • Control Theory Applications (1)
  • Human-AI Interaction (1)
  • AI fairness (1)
  • Multimodal systems (1)
  • Physics-based modeling (1)
  • Medical AI (1)
  • Brain tumor diagnosis (1)

🏷️ Keywords

Explainable AI (9) · Neural Networks (2) · Deep Learning (2) · IoT DDoS detection (1) · Transfer learning (1) · Convolutional neural networks (1) · Resource constraints (1) · DenseNet (1) · MobileNet (1) · Cybersecurity (1) · Control Theory (1) · Machine Learning Calibration (1) · Human-AI Interaction (1) · Physics-Inspired AI (1) · Algorithmic fairness (1) · Multimodal bias (1) · Physics-based characterization (1) · Large language models (1) · Cross-modal bias (1) · Transformer dynamics (1)

📖 Key Information

Within artificial intelligence (AI), explainable AI (XAI), which largely overlaps with interpretable AI and explainable machine learning (XML), is a field of research that develops methods giving humans intellectual oversight of AI algorithms. The main focus is the reasoning behind the decisions or predictions an AI algorithm makes, with the goal of making them more understandable and transparent. This addresses users' need to assess the safety of, and scrutinize, automated decision-making in deployed applications.
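One common XAI technique is permutation feature importance: shuffle a single input feature and measure how much the model's error grows, revealing which features the model actually relies on. The sketch below is illustrative only (the toy data, coefficients, and stand-in model are assumptions, not from any system mentioned on this page):

```python
# Minimal sketch of permutation feature importance, a basic XAI method.
# Toy data: the target depends strongly on x0, weakly on x1, not on x2.
import random

random.seed(0)

X = [[random.random() for _ in range(3)] for _ in range(200)]
y = [3.0 * x0 + 0.5 * x1 for x0, x1, x2 in X]

def model(row):
    # Stand-in "trained" model that mirrors the data-generating rule.
    return 3.0 * row[0] + 0.5 * row[1]

def mse(rows, targets):
    # Mean squared error of the model on the given data.
    return sum((model(r) - t) ** 2 for r, t in zip(rows, targets)) / len(targets)

baseline = mse(X, y)

def importance(feature):
    # Shuffle one column and report the resulting increase in error.
    shuffled = [row[:] for row in X]
    col = [row[feature] for row in shuffled]
    random.shuffle(col)
    for row, v in zip(shuffled, col):
        row[feature] = v
    return mse(shuffled, y) - baseline

scores = [importance(i) for i in range(3)]
```

Shuffling x0 degrades the model most, x1 slightly, and x2 not at all, so the scores rank the features by how much the model depends on them.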

📰 Related News (9)

🔗 Entity Intersection Graph

Deep learning (3) · Neural network (2) · Efficiency (1) · Transparency (1) · Information retrieval (1) · Medical imaging (1) · Information system (1) · XAI (1) · Reference architecture (1) · Convolutional neural network (1)

People and organizations frequently mentioned alongside Explainable artificial intelligence:

🔗 External Links