SparkMe: Adaptive Semi-Structured Interviewing for Qualitative Insight Discovery
Tags: SparkMe, Large Language Models, Semi-structured interviewing, Qualitative research, AI automation, Adaptive interviewing, Human-computer interaction, arXiv
📌 Key Takeaways
SparkMe is a multi-agent LLM interviewer designed for adaptive semi-structured interviews
It balances systematic coverage of predefined topics with adaptive exploration
SparkMe improves topic guide coverage by 4.7% over the best baseline
It elicits richer emergent insights while using fewer conversational turns
Domain experts rated SparkMe highly for producing adaptive interviews
📖 Full Retelling
Researchers David Anugraha, Vishakh Padmakumar, and Diyi Yang introduced SparkMe, a multi-agent large language model (LLM) interviewer system, in a paper submitted to arXiv on February 24, 2026. The system addresses a long-standing bottleneck in qualitative research: collecting insights at scale is constrained by the time and availability of experts to conduct semi-structured interviews. SparkMe balances systematic coverage of predefined topics with adaptive exploration, pursuing follow-ups, deep dives, and emergent themes that arise organically during conversation.

The researchers formulate adaptive semi-structured interviewing as an optimization problem over the interviewer's behavior. Interview utility is defined as a trade-off among three terms: coverage of a predefined interview topic guide, discovery of relevant emergent themes, and interview cost measured by length. Based on this formulation, SparkMe performs deliberative planning via simulated conversation rollouts, selecting the questions with the highest expected utility.

The system was evaluated through controlled experiments with LLM-based interviewees and a user study with 70 participants across 7 professions on the impact of AI on their workflows. It outperformed prior LLM interviewing approaches, improving topic guide coverage by 4.7% over the best baseline and eliciting richer emergent insights in fewer conversational turns, and domain experts rated its adaptive interviews as high quality and as surfacing profession-specific insights.
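The utility trade-off and rollout-based question selection described above can be sketched in code. This is a minimal illustrative sketch, not the paper's implementation: the weight values, function names, and the `simulate_rollout` interface are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of a SparkMe-style interview utility and
# rollout-based question selector. Names and weights are illustrative
# assumptions, not taken from the paper.

def interview_utility(covered_topics, total_topics, emergent_themes, n_turns,
                      w_cov=1.0, w_emerge=0.5, w_cost=0.05):
    """Utility = topic-guide coverage reward + emergent-theme reward
    - a cost proportional to interview length (in turns)."""
    coverage = len(covered_topics) / total_topics
    return w_cov * coverage + w_emerge * len(emergent_themes) - w_cost * n_turns


def select_question(candidates, state, simulate_rollout, n_rollouts=3):
    """Pick the candidate question whose simulated continuations have the
    highest mean utility. `simulate_rollout(state, q)` is assumed to return
    the keyword arguments of `interview_utility` for one simulated rollout."""
    best_q, best_u = None, float("-inf")
    for q in candidates:
        utils = [interview_utility(**simulate_rollout(state, q))
                 for _ in range(n_rollouts)]
        mean_u = sum(utils) / len(utils)
        if mean_u > best_u:
            best_q, best_u = q, mean_u
    return best_q
```

In the actual system, the rollouts would be produced by LLM agents simulating interviewee responses; here the rollout function is left abstract so the planning loop itself is visible.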
🏷️ Themes
Artificial Intelligence, Human-Computer Interaction, Research Methodology
Qualitative research is a type of research that aims to gather and analyse non-numerical (descriptive) data in order to gain an understanding of individuals' social reality, including understanding their attitudes, beliefs, and motivation.
A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation.
Original Source
Computer Science > Human-Computer Interaction
arXiv:2602.21136 [Submitted on 24 Feb 2026]
Title: SparkMe: Adaptive Semi-Structured Interviewing for Qualitative Insight Discovery
Authors: David Anugraha, Vishakh Padmakumar, Diyi Yang
Abstract: Qualitative insights from user experiences are critical for informing product and policy decisions, but collecting such data at scale is constrained by the time and availability of experts to conduct semi-structured interviews. Recent work has explored using large language models to automate interviewing, yet existing systems lack a principled mechanism for balancing systematic coverage of predefined topics with adaptive exploration, or the ability to pursue follow-ups, deep dives, and emergent themes that arise organically during conversation. In this work, we formulate adaptive semi-structured interviewing as an optimization problem over the interviewer's behavior. We define interview utility as a trade-off between coverage of a predefined interview topic guide, discovery of relevant emergent themes, and interview cost measured by length. Based on this formulation, we introduce SparkMe, a multi-agent LLM interviewer that performs deliberative planning via simulated conversation rollouts to select questions with high expected utility. We evaluate SparkMe through controlled experiments with LLM-based interviewees, showing that it achieves higher interview utility, improving topic guide coverage (+4.7% over the best baseline) and eliciting richer emergent insights while using fewer conversational turns than prior LLM interviewing approaches. We further validate SparkMe in a user study with 70 participants across 7 professions on the impact of AI on their workflows. Domain experts rate SparkMe as producing high-quality adaptive interviews that surface helpful profession-specific insights.