OpenDeception: Learning Deception and Trust in Human-AI Interaction via Multi-Agent Simulation
#LLM #OpenDeception #AI ethics #deception detection #multi-agent simulation #human-AI interaction #arXiv
📌 Key Takeaways
- OpenDeception is a new framework designed to measure deception risks in human-AI dialogues.
- The system includes a benchmark of 50 real-world scenarios covering various deceptive contexts.
- Researchers used multi-agent simulations to observe how trust is built and broken during interactions.
- The framework evaluates safety from both the AI's perspective and the human user's perspective.
📖 Full Retelling
🏷️ Themes
Artificial Intelligence, AI Safety, Cybersecurity
📚 Related People & Topics
Large language model
Type of machine learning model
A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation. The largest and most capable LLMs are generative pre-trained transformers (GPTs) that provide the c...
Ethics of artificial intelligence
The ethics of artificial intelligence covers a broad range of topics within AI that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, accountability, transparency, privacy, and regulation, particularly where systems influence or automate human decision-mak...
🔗 Entity Intersection Graph
Connections for Large language model:
- 🌐 Reinforcement learning (7 shared articles)
- 🌐 Machine learning (5 shared articles)
- 🌐 Theory of mind (2 shared articles)
- 🌐 Generative artificial intelligence (2 shared articles)
- 🌐 Automation (2 shared articles)
- 🌐 Rag (2 shared articles)
- 🌐 Scientific method (2 shared articles)
- 🌐 Mafia (disambiguation) (1 shared articles)
- 🌐 Robustness (1 shared articles)
- 🌐 Capture the flag (1 shared articles)
- 👤 Clinical Practice (1 shared articles)
- 🌐 Wearable computer (1 shared articles)
📄 Original Source Content
arXiv:2504.13707v3 Announce Type: replace Abstract: As large language models (LLMs) are increasingly deployed as interactive agents, open-ended human-AI interactions can involve deceptive behaviors with serious real-world consequences, yet existing evaluations remain largely scenario-specific and model-centric. We introduce OpenDeception, a lightweight framework for jointly evaluating deception risk from both sides of human-AI dialogue. It consists of a scenario benchmark with 50 real-world dec