BravenNow
MedClarify: An information-seeking AI agent for medical diagnosis with case-specific follow-up questions


#MedClarify #information‑seeking AI #large language model #diagnostic uncertainty #expected information gain #clinical decision support #iterative reasoning #follow‑up questions #medical diagnosis

📌 Key Takeaways

  • MedClarify is an information‑seeking AI agent that computes a list of candidate diagnoses and selects follow‑up questions with the highest expected information gain.
  • The agent operates within a single‑session dialogue, mirroring the systematic history‑taking process used by clinicians.
  • Experiments show that MedClarify reduces diagnostic errors by roughly 27 percentage points compared with a standard single‑shot large language model baseline.
  • The study highlights fundamental limitations of current medical LLMs, which often produce multiple equally likely diagnoses from incomplete case data.
  • MedClarify’s information‑theoretic approach demonstrates that targeted, uncertainty‑aware questioning can meaningfully enhance AI support for medical decision‑making.
  • This research opens a path toward more effective human‑AI dialogues in clinical settings by embedding iterative reasoning and uncertainty management into conversational agents.
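The question-selection criterion described above can be sketched in a toy model. Everything here is illustrative: the diagnoses, candidate questions, and probabilities are invented for the example, and this is not the authors' implementation. The core idea is that expected information gain (EIG) is the entropy of the current diagnosis distribution minus the expected entropy after hearing the answer:

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a probability distribution given as a dict."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def expected_information_gain(prior, question):
    """EIG of asking `question`, given a prior over candidate diagnoses.

    `question` maps each possible answer to a pair
    (answer_probability, posterior_over_diagnoses).
    EIG = H(prior) - E_answer[H(posterior | answer)].
    """
    expected_posterior_entropy = sum(
        p_ans * entropy(post) for p_ans, post in question.values()
    )
    return entropy(prior) - expected_posterior_entropy

# Toy differential: two diagnoses, equally likely (maximum uncertainty).
prior = {"migraine": 0.5, "tension headache": 0.5}

questions = {
    # A discriminative yes/no question: its answer nearly resolves the case.
    "photophobia?": {
        "yes": (0.5, {"migraine": 0.9, "tension headache": 0.1}),
        "no":  (0.5, {"migraine": 0.1, "tension headache": 0.9}),
    },
    # An uninformative question: the posterior barely moves either way.
    "age over 30?": {
        "yes": (0.5, {"migraine": 0.55, "tension headache": 0.45}),
        "no":  (0.5, {"migraine": 0.45, "tension headache": 0.55}),
    },
}

# Greedy selection: ask the question with the highest expected information gain.
best = max(questions, key=lambda q: expected_information_gain(prior, questions[q]))
# `best` is "photophobia?", the question that most reduces diagnostic uncertainty.
```

In a real system the candidate diagnoses, questions, and answer-conditioned posteriors would come from the LLM itself rather than a hand-written table; the greedy argmax over EIG is the part the paper's description pins down.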

📖 Full Retelling

A team of researchers – Hui Min Wong, Philip Heesen, Pascal Janetzky, Martin Bendszus, and Stefan Feuerriegel – presented MedClarify, an AI agent that generates case‑specific follow‑up questions to aid medical diagnosis. The work was published as a preprint on arXiv on 19 February 2026. The project addresses the gap between initial, often incomplete patient histories and the iterative, uncertainty‑driven questioning that clinicians actually use, with the goal of improving the diagnostic accuracy of medical large language models.

🏷️ Themes

Artificial Intelligence, Medical Informatics, Diagnostic Reasoning, Human‑AI Interaction, Information Theory, Clinical Decision Support


Deep Analysis

Why It Matters

MedClarify shows that an AI agent can ask targeted follow‑up questions to reduce diagnostic uncertainty, cutting diagnostic errors by roughly 27 percentage points compared with single‑shot models. This advances AI as a practical decision‑support tool for real‑world clinical settings.

Context & Background

  • Large language models are used for medical diagnosis but struggle with incomplete patient data.
  • MedClarify generates case‑specific follow‑up questions based on expected information gain.
  • The study reports a 27pp reduction in diagnostic errors versus a standard baseline.

What Happens Next

Likely next steps include integrating MedClarify into electronic health record systems and running prospective clinical trials to validate its safety and effectiveness. Regulatory approval and user‑interface design would also need to be addressed before clinical deployment.

Frequently Asked Questions

What is MedClarify?

An AI agent that asks follow‑up questions to clarify patient information and narrow down differential diagnoses.

How does it reduce errors?

By selecting questions with the highest expected information gain, it gathers missing data that most reduces uncertainty.
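To make the uncertainty-reduction step concrete, here is a minimal Bayesian-update sketch. The symptom likelihoods are hypothetical placeholders (in the actual system such signals would come from an LLM or a clinical knowledge base), so treat this as an illustration of the mechanism, not the paper's method:

```python
# Hypothetical likelihoods P(symptom present | diagnosis) for a toy two-way
# differential; these numbers are invented for illustration.
LIKELIHOODS = {
    "photophobia":    {"migraine": 0.8, "tension headache": 0.1},
    "neck tightness": {"migraine": 0.2, "tension headache": 0.7},
}

def bayes_update(prior, symptom, present):
    """Posterior over diagnoses after hearing a yes/no answer about a symptom."""
    post = {}
    for dx, p in prior.items():
        lik = LIKELIHOODS[symptom][dx]
        post[dx] = p * (lik if present else 1 - lik)
    z = sum(post.values())  # normalize so the posterior sums to 1
    return {dx: p / z for dx, p in post.items()}

prior = {"migraine": 0.5, "tension headache": 0.5}
posterior = bayes_update(prior, "photophobia", present=True)
# A "yes" to photophobia shifts most probability mass onto migraine,
# shrinking the differential — the missing data that "most reduces uncertainty".
```

Iterating this update after each answer, then re-scoring the remaining questions, is the single-session loop the article describes.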

Will it replace clinicians?

No, it is intended as a decision‑support tool that augments clinician judgment rather than replacing it.

How can researchers access the model?

The preprint is available on arXiv (arXiv:2602.17308); the paper itself indicates whether code or data are released alongside it.

Original Source
Computer Science > Artificial Intelligence
arXiv:2602.17308 [Submitted on 19 Feb 2026]

Title: MedClarify: An information-seeking AI agent for medical diagnosis with case-specific follow-up questions
Authors: Hui Min Wong, Philip Heesen, Pascal Janetzky, Martin Bendszus, Stefan Feuerriegel

Abstract: Large language models are increasingly used for diagnostic tasks in medicine. In clinical practice, the correct diagnosis can rarely be immediately inferred from the initial patient presentation alone. Rather, reaching a diagnosis often involves systematic history taking, during which clinicians reason over multiple potential conditions through iterative questioning to resolve uncertainty. This process requires considering differential diagnoses and actively excluding emergencies that demand immediate intervention. Yet, the ability of medical LLMs to generate informative follow-up questions and thus reason over differential diagnoses remains underexplored. Here, we introduce MedClarify, an AI agent for information-seeking that can generate follow-up questions for iterative reasoning to support diagnostic decision-making. Specifically, MedClarify computes a list of candidate diagnoses analogous to a differential diagnosis, and then proactively generates follow-up questions aimed at reducing diagnostic uncertainty. By selecting the question with the highest expected information gain, MedClarify enables targeted, uncertainty-aware reasoning to improve diagnostic performance. In our experiments, we first demonstrate the limitations of current LLMs in medical reasoning, which often yield multiple, similarly likely diagnoses, especially when patient cases are incomplete or relevant information for diagnosis is missing. We then show...

Source

arxiv.org
