Multimodal Multi-Agent Empowered Legal Judgment Prediction
#Legal Judgment Prediction #JurisMMA #Multimodal #Multi-Agent #Artificial Intelligence #Legal Technology #Machine Learning
📌 Key Takeaways
- JurisMMA decomposes trial tasks into standardized stages for more effective legal judgment prediction
- Researchers created JurisMM dataset with over 100,000 Chinese judicial records including multimodal data
- The framework was validated through experiments on both JurisMM and LawBench benchmarks
- JurisMMA shows effectiveness beyond LJP for broader legal applications
- The research was accepted to IEEE International Conference on Acoustics, Speech, and Signal Processing 2026
📖 Full Retelling
Researchers including Zhaolu Kang and Junhao Gong introduced JurisMMA, a novel framework for Legal Judgment Prediction (LJP), in a paper first submitted to arXiv on January 19, 2026 and last revised on February 19, 2026. The framework aims to overcome limitations of traditional methods, which struggle with multiple allegations and diverse evidence and lack adaptability. JurisMMA decomposes trial tasks, standardizes processes, and organizes them into distinct stages, providing a more systematic approach to predicting case outcomes. The team also built JurisMM, a dataset of over 100,000 recent Chinese judicial records that includes both text and multimodal video-text data, enabling comprehensive evaluation. Experiments on both JurisMM and the existing LawBench benchmark validate the framework's effectiveness, indicating that the approach not only excels at LJP but also generalizes to broader legal applications, offering new perspectives for future legal methods and datasets.
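The staged decomposition described above can be pictured as a pipeline where each stage acts as an independent agent step passing a shared case record forward. The paper does not publish its exact stage names or agent logic, so everything below (the stage names, the `CaseRecord` fields, and the toy keyword matching) is an illustrative assumption, not the authors' implementation:

```python
from dataclasses import dataclass, field

@dataclass
class CaseRecord:
    facts: str                 # factual description of the case
    evidence: list             # evidence items (text, video references, ...)
    findings: dict = field(default_factory=dict)  # accumulated per-stage output

# Hypothetical stage names; the real framework's stages are not specified here.
def evidence_review(case: CaseRecord) -> CaseRecord:
    case.findings["evidence_summary"] = f"{len(case.evidence)} items reviewed"
    return case

def allegation_analysis(case: CaseRecord) -> CaseRecord:
    # Toy stand-in for allegation extraction: simple keyword matching.
    case.findings["allegations"] = [
        w for w in ("theft", "fraud") if w in case.facts.lower()
    ]
    return case

def deliberation(case: CaseRecord) -> CaseRecord:
    case.findings["deliberation"] = (
        "guilty" if case.findings["allegations"] else "not guilty"
    )
    return case

def judgment(case: CaseRecord) -> CaseRecord:
    case.findings["verdict"] = case.findings["deliberation"]
    return case

STAGES = [evidence_review, allegation_analysis, deliberation, judgment]

def run_trial(case: CaseRecord) -> CaseRecord:
    # Each stage runs in a fixed, standardized order over the shared record.
    for stage in STAGES:
        case = stage(case)
    return case

result = run_trial(
    CaseRecord(facts="Defendant accused of theft.", evidence=["cctv", "receipt"])
)
print(result.findings["verdict"])  # -> guilty
```

The point of the sketch is only the structure: decomposing a trial into ordered, standardized stages that each enrich a shared case record, which is the organizing idea the retelling attributes to JurisMMA.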
🏷️ Themes
Artificial Intelligence, Legal Technology, Machine Learning
Original Source
Computer Science > Computation and Language
arXiv:2601.12815 [Submitted on 19 Jan 2026 (v1), last revised 19 Feb 2026 (this version, v5)]
Title: Multimodal Multi-Agent Empowered Legal Judgment Prediction
Authors: Zhaolu Kang, Junhao Gong, Qingxi Chen, Hao Zhang, Jiaxin Liu, Rong Fu, Zhiyuan Feng, Yuan Wang, Simon Fong, Kaiyue Zhou
Abstract: Legal Judgment Prediction aims to predict the outcomes of legal cases based on factual descriptions, serving as a fundamental task to advance the development of legal systems. Traditional methods often rely on statistical analyses or role-based simulations but face challenges with multiple allegations and diverse evidence, and lack adaptability. In this paper, we introduce JurisMMA, a novel framework for LJP that effectively decomposes trial tasks, standardizes processes, and organizes them into distinct stages. Furthermore, we build JurisMM, a large dataset with over 100,000 recent Chinese judicial records, including both text and multimodal video-text data, enabling comprehensive evaluation. Experiments on JurisMM and the benchmark LawBench validate our framework's effectiveness. These results indicate that our framework is effective not only for LJP but also for a broader range of legal applications, offering new perspectives for the development of future legal methods and datasets.
Comments: Accepted to the IEEE International Conference on Acoustics, Speech, and Signal Processing 2026
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Computers and Society (cs.CY); Multiagent Systems (cs.MA)
Cite as: arXiv:2601.12815 [cs.CL] (arXiv:2601.12815v5 [cs.CL] for this version), https://doi.org/10.48550/arXiv.2601.12815