BravenNow
Hybrid LLM-Embedded Dialogue Agents for Learner Reflection: Designing Responsive and Theory-Driven Interactions
| USA | technology | ✓ Verified - arxiv.org


#Hybrid Dialogue Systems #Large Language Models #Learner Reflection #Pedagogical Theory #Educational Technology #Self-Regulated Learning #Robotics Education #AI Responsiveness

📌 Key Takeaways

  • Researchers developed a hybrid dialogue system combining LLMs with rule-based frameworks for educational settings
  • The system was tested in a culturally responsive robotics summer camp environment
  • Results showed richer learner reflections but also challenges with repetitiveness and misalignment
  • The hybrid approach aims to bridge pedagogical theory with AI responsiveness

📖 Full Retelling

Researchers led by Paras Sharma, with eight collaborators from several institutions, have developed a hybrid dialogue system that embeds Large Language Models (LLMs) within a theory-aligned, rule-based framework to support learner reflection in educational settings, as detailed in their paper submitted to arXiv on February 24, 2026. The system addresses the limitations of both traditional rule-based dialogue systems, which offer structured scaffolding but struggle to respond to shifts in learner engagement, and LLMs, which can generate context-sensitive responses but lack grounding in established pedagogical theory.

Deployed in a culturally responsive robotics summer camp, the hybrid approach combines the theoretical grounding of self-regulated learning (SRL) theory with the adaptive responsiveness of LLMs to create more effective learning interactions. The system operates on a dual-layer architecture: a rule-based component structures the dialogue according to SRL theory, while an LLM component decides when and how to prompt deeper reflection based on the evolving conversation context. This design provides consistent educational scaffolding alongside adaptive responses to individual learner needs and engagement levels.

The research represents a notable step in educational technology, attempting to bridge the gap between theoretically sound pedagogical approaches and the flexible responsiveness of modern AI systems.
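The dual-layer architecture described above can be pictured as a fixed scaffold with an adaptive hook. The following is a minimal, hypothetical sketch, not the authors' implementation: the SRL phases, prompt wording, and the `toy_deepen` stand-in for the LLM decision layer are all invented for illustration.

```python
from dataclasses import dataclass, field

# Rule-based layer: a fixed scaffold ordered by SRL phases
# (forethought -> performance -> self-reflection). Prompts are invented.
SRL_PHASES = [
    ("forethought", "What is your goal for today's robotics activity?"),
    ("performance", "How is your build going so far? What have you tried?"),
    ("self-reflection", "Looking back, what worked and what would you change?"),
]

@dataclass
class HybridAgent:
    deepen: callable            # LLM stand-in: (history) -> follow-up prompt or None
    history: list = field(default_factory=list)
    phase: int = 0

    def next_prompt(self, learner_reply=None):
        """Return the next prompt: a deeper LLM-driven follow-up if the
        hook fires, otherwise advance through the rule-based scaffold."""
        if learner_reply is not None:
            self.history.append(learner_reply)
            follow_up = self.deepen(self.history)
            if follow_up:                # adaptive layer takes over
                return follow_up
        if self.phase < len(SRL_PHASES):
            _, prompt = SRL_PHASES[self.phase]
            self.phase += 1
            return prompt                # structured layer continues
        return None                      # scaffold exhausted; dialogue ends

# Toy stand-in for the LLM decision: ask for elaboration on terse replies.
def toy_deepen(history):
    return "Can you say more about that?" if len(history[-1].split()) < 4 else None

agent = HybridAgent(deepen=toy_deepen)
print(agent.next_prompt())                 # opening scaffold prompt
print(agent.next_prompt("Finish it"))      # terse reply triggers a deeper follow-up
print(agent.next_prompt("I want the robot to follow the line reliably"))
```

In a real system the `deepen` hook would call an LLM with the conversation history; the point of the design is that the LLM can only interject follow-ups, while the overall dialogue trajectory stays anchored to the theory-derived scaffold.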

🏷️ Themes

Educational Technology, AI in Education, Human-Computer Interaction

📚 Related People & Topics

Educational technology

Educational technology (commonly abbreviated as edutech or edtech) refers to the use of computer hardware, software, and educational theory and practice to facilitate learning and teaching. The abbreviation "EdTech" often refers to the industry of companies that create educational technology.

Large language model

A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation. The largest and most capable LLMs are generative pre-trained transformers (GPTs).

Original Source

Computer Science > Human-Computer Interaction
arXiv:2602.20486 [Submitted on 24 Feb 2026]

Title: Hybrid LLM-Embedded Dialogue Agents for Learner Reflection: Designing Responsive and Theory-Driven Interactions

Authors: Paras Sharma, YuePing Sha, Janet Shufor Bih Epse Fofang, Brayden Yan, Jess A. Turner, Nicole Balay, Hubert O. Asare, Angela E.B. Stewart, Erin Walker

Abstract: Dialogue systems have long supported learner reflections, with theoretically grounded, rule-based designs offering structured scaffolding but often struggling to respond to shifts in engagement. Large Language Models, in contrast, can generate context-sensitive responses but are not informed by decades of research on how learning interactions should be structured, raising questions about their alignment with pedagogical theories. This paper presents a hybrid dialogue system that embeds LLM responsiveness within a theory-aligned, rule-based framework to support learner reflections in a culturally responsive robotics summer camp. The rule-based structure grounds dialogue in self-regulated learning theory, while the LLM decides when and how to prompt deeper reflections, responding to evolving conversation context. We analyze themes across dialogues to explore how our hybrid system shaped learner reflections. Our findings indicate that LLM-embedded dialogues supported richer learner reflections on goals and activities, but also introduced challenges due to repetitiveness and misalignment in prompts, reducing engagement.

Subjects: Human-Computer Interaction (cs.HC); Artificial Intelligence (cs.AI)
Cite as: arXiv:2602.20486 [cs.HC] (or arXiv:2602.20486v1 [cs.HC] for this version)
DOI: https://doi.org/10.48550/arXiv.2602.20486 (arXiv-issued DOI via DataCite, pending registration)

Source

arxiv.org
