"Don't Do That!": Guiding Embodied Systems through Large Language Model-based Constraint Generation
#Large Language Models #Constraint Generation #Robotic Navigation #Embodied Systems #STPR Framework #Python Code Generation #AI Safety #Human-Robot Interaction
📌 Key Takeaways
- Researchers developed the STPR framework, which uses LLMs to translate natural language constraints into executable code
- The approach transforms informal constraints framed as 'what not to do' into structured Python functions
- Experiments showed full compliance across multiple constraints and scenarios while maintaining short runtimes
- The framework also works with smaller, code-specific LLMs, making it accessible for various applications at low inference cost
📖 Full Retelling
In a paper published on arXiv (first submitted June 4, 2025, and revised February 24, 2026), researchers Amin Seffo, Aladin Djuhera, Masataro Asai, and Holger Boche present STPR, a constraint generation framework that uses Large Language Models to translate complex natural language constraints into executable Python code, addressing a significant challenge in robotic navigation planning. The research tackles a growing problem in robotics: complex spatial, mathematical, and conditional constraints expressed in natural language are difficult to translate into formal descriptions that planning algorithms can understand and execute. STPR sidesteps this difficulty by leveraging the strong coding capabilities of LLMs to transform language-based instructions, particularly those framed as 'what not to do', into structured, transparent Python functions that can be plugged directly into planning algorithms. Shifting the problem description from language into code circumvents complex reasoning and avoids the hallucinations that can plague LLM applications. The generated functions accurately capture even complex mathematical constraints, and experiments in a simulated Gazebo environment confirmed full compliance across multiple constraints and scenarios while maintaining short runtimes.
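The summary above does not include the paper's actual generated code, but the core idea, turning a "what not to do" instruction into a checkable Python predicate that a planner applies to candidate states, can be sketched as follows. The instruction, function name, coordinates, and threshold here are illustrative assumptions, not output from the authors' system:

```python
import math

# Hypothetical sketch of the kind of function an LLM might generate for the
# instruction "don't come within 1.5 meters of the charging station at (2, 3)".
# The station position and distance threshold are assumed for illustration.
def violates_constraint(x: float, y: float) -> bool:
    """Return True if position (x, y) violates the keep-out constraint."""
    station_x, station_y = 2.0, 3.0
    min_distance = 1.5
    return math.hypot(x - station_x, y - station_y) < min_distance

# A planner can filter candidate waypoints (e.g., points projected from a
# point cloud to 2D) before searching, so forbidden states are never expanded.
candidates = [(0.0, 0.0), (2.5, 3.0), (5.0, 5.0), (1.0, 2.5)]
allowed = [p for p in candidates if not violates_constraint(*p)]
print(allowed)  # the points too close to (2, 3) are dropped
```

Because the constraint lives in plain code rather than in the LLM's reasoning at plan time, it can be inspected, unit-tested, and reused across scenarios, which is the transparency the paper emphasizes.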
🏷️ Themes
Artificial Intelligence, Robotics, Natural Language Processing
📚 Related People & Topics
Large language model
Type of machine learning model
A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation.
Original Source
Computer Science > Artificial Intelligence
arXiv:2506.04500 [Submitted on 4 Jun 2025 (v1), last revised 24 Feb 2026 (this version, v2)]
Title: "Don't Do That!": Guiding Embodied Systems through Large Language Model-based Constraint Generation
Authors: Amin Seffo, Aladin Djuhera, Masataro Asai, Holger Boche
Abstract: Recent advancements in large language models have spurred interest in robotic navigation that incorporates complex spatial, mathematical, and conditional constraints from natural language into the planning problem. Such constraints can be informal yet highly complex, making it challenging to translate into a formal description that can be passed on to a planning algorithm. In this paper, we propose STPR, a constraint generation framework that uses LLMs to translate constraints (expressed as instructions on "what not to do") into executable Python functions. STPR leverages the LLM's strong coding capabilities to shift the problem description from language into structured and transparent code, thus circumventing complex reasoning and avoiding potential hallucinations. We show that these LLM-generated functions accurately describe even complex mathematical constraints, and apply them to point cloud representations with traditional search algorithms. Experiments in a simulated Gazebo environment show that STPR ensures full compliance across several constraints and scenarios, while having short runtimes. We also verify that STPR can be used with smaller, code-specific LLMs, making it applicable to a wide range of compact models at low inference cost.
Comments: Preprint; under review
Subjects: Artificial Intelligence (cs.AI); Robotics (cs.RO)
Cite as: arXiv:2506.04500 [cs.AI] (or arXiv:2506.04500v2 [cs.AI] for this version)
https://doi.org/10.48550/arXiv.2506.04500
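The abstract's pairing of LLM-generated constraint functions with traditional search algorithms can be illustrated with a minimal grid search that simply never expands cells violating the constraint. The grid, the hard-coded `blocked` predicate, and the breadth-first search are assumptions for illustration; the paper operates on point cloud representations, not this toy grid:

```python
from collections import deque

def blocked(x: int, y: int) -> bool:
    # Stand-in for an LLM-generated constraint, e.g. for the instruction
    # "don't enter the area where x >= 3 and y <= 2". Assumed for illustration.
    return x >= 3 and y <= 2

def bfs(start, goal, width=6, height=6):
    """Breadth-first search on a grid that skips constraint-violating cells."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            # Reconstruct the path by walking parent links back to the start.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        x, y = current
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            in_bounds = 0 <= nx < width and 0 <= ny < height
            if in_bounds and (nx, ny) not in came_from and not blocked(nx, ny):
                came_from[(nx, ny)] = current
                frontier.append((nx, ny))
    return None  # no constraint-respecting path exists

path = bfs((0, 0), (5, 5))
```

Because violating cells are pruned at expansion time, any path the search returns satisfies the constraint by construction, which is how "full compliance" can be guaranteed without further LLM involvement at plan time.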