AI-Assisted Moot Courts: Simulating Justice-Specific Questioning in Oral Arguments
| USA | technology | ✓ Verified - arxiv.org


#AI-assisted moot courts #justice-specific questioning #oral arguments #legal training #simulation technology

📌 Key Takeaways

  • AI tools are being developed to simulate questioning by specific justices in moot court preparation.
  • These tools aim to help lawyers anticipate and practice responses to individual justices' styles and tendencies.
  • The technology uses historical data from oral arguments to model justices' questioning patterns.
  • This innovation could enhance legal training and improve advocacy in real court settings.
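
One way to picture the idea in the takeaways above is a few-shot prompt built from historical transcript excerpts. This is a minimal illustrative sketch, not the paper's actual pipeline: the prompt wording, the `build_justice_prompt` helper, and the example exchange are all hypothetical placeholders.

```python
# Hypothetical sketch: assemble a justice-specific few-shot prompt from
# historical oral-argument excerpts, so a language model can be asked to
# question an advocate in that justice's style.

def build_justice_prompt(justice, excerpts, advocate_statement):
    """Build a prompt from past (advocate, question) exchanges for one justice."""
    shots = "\n\n".join(
        f"Advocate: {e['advocate']}\nJustice {justice}: {e['question']}"
        for e in excerpts
    )
    return (
        f"You are Justice {justice} at a U.S. Supreme Court oral argument.\n"
        f"Past exchanges illustrating your questioning style:\n\n{shots}\n\n"
        f"Advocate: {advocate_statement}\n"
        f"Justice {justice}:"
    )

# Illustrative (invented) excerpt and advocate statement.
excerpts = [
    {"advocate": "The statute's plain text controls here.",
     "question": "But doesn't that reading render subsection (b) superfluous?"},
]
prompt = build_justice_prompt("Example", excerpts, "Our position rests on precedent.")
```

The resulting string would be sent to a language model; the paper also evaluates agentic simulators, which this sketch does not attempt to capture.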

📖 Full Retelling

Abstract: In oral arguments, judges probe attorneys with questions about the factual record, legal claims, and the strength of their arguments. To prepare for this questioning, both law schools and practicing attorneys rely on moot courts: practice simulations of appellate hearings. Leveraging a dataset of U.S. Supreme Court oral argument transcripts, we examine whether AI models can effectively simulate justice-specific questioning for moot court-style training.

🏷️ Themes

Legal Technology, AI Simulation

Original Source
Computer Science > Computation and Language

arXiv:2603.04718 [Submitted on 5 Mar 2026]

Title: AI-Assisted Moot Courts: Simulating Justice-Specific Questioning in Oral Arguments
Authors: Kylie Zhang, Nimra Nadeem, Lucia Zheng, Dominik Stammbach, Peter Henderson

Abstract: In oral arguments, judges probe attorneys with questions about the factual record, legal claims, and the strength of their arguments. To prepare for this questioning, both law schools and practicing attorneys rely on moot courts: practice simulations of appellate hearings. Leveraging a dataset of U.S. Supreme Court oral argument transcripts, we examine whether AI models can effectively simulate justice-specific questioning for moot court-style training. Evaluating oral argument simulation is challenging because there is no single correct question for any given turn. Instead, effective questioning should reflect a combination of desirable qualities, such as anticipating substantive legal issues, detecting logical weaknesses, and maintaining an appropriately adversarial tone. We introduce a two-layer evaluation framework that assesses both the realism and pedagogical usefulness of simulated questions using complementary proxy metrics. We construct and evaluate both prompt-based and agentic oral argument simulators. We find that simulated questions are often perceived as realistic by human annotators and achieve high recall of ground truth substantive legal issues. However, models still face substantial shortcomings, including low diversity in question types and sycophancy. Importantly, these shortcomings would remain undetected under naive evaluation approaches.

Comments: Accepted at CS & Law 2026
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.04718 [cs.CL] (or arXiv:2603.04718v1 [cs.CL] for this version)
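The abstract mentions "high recall of ground truth substantive legal issues" as one proxy metric. A minimal sketch of that idea follows: what fraction of the issues raised in the real hearing are touched by at least one simulated question. The naive substring matching here is an assumption for illustration; the paper's complementary proxy metrics are more sophisticated.

```python
# Illustrative sketch of an "issue recall" metric: the share of
# ground-truth legal issues mentioned in at least one simulated
# question. Matching is naive case-insensitive substring overlap.

def issue_recall(simulated_questions, ground_truth_issues):
    """Fraction of ground-truth issues covered by the simulated questions."""
    if not ground_truth_issues:
        return 0.0
    joined = " ".join(q.lower() for q in simulated_questions)
    hit = sum(1 for issue in ground_truth_issues if issue.lower() in joined)
    return hit / len(ground_truth_issues)

# Invented example: two simulated questions against three issues.
questions = [
    "Does the statute of limitations bar this claim?",
    "How do you square that with standing doctrine?",
]
issues = ["statute of limitations", "standing", "mootness"]
recall = issue_recall(questions, issues)  # 2 of 3 issues covered
```

High recall on such a metric would not by itself rule out the failure modes the authors report, such as low question-type diversity and sycophancy, which is why they pair it with a realism layer judged by human annotators.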

Source

arxiv.org
