BravenNow
Learning to Rewrite Tool Descriptions for Reliable LLM-Agent Tool Use


#LLM-based agents #Tool interfaces #Curriculum learning #Trace-Free+ #AI research #Machine learning optimization #Natural language processing #AI scalability

📌 Key Takeaways

  • LLM agent performance depends on both the agent and tool interface quality
  • Current tool interfaces are human-oriented and create bottlenecks in large tool sets
  • Trace-Free+ uses curriculum learning to transfer supervision from trace-rich to trace-free settings
  • The approach shows strong cross-domain generalization and scales to over 100 candidate tools
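The first two takeaways can be made concrete with a toy experiment. The sketch below is purely illustrative (the tool names, descriptions, and bag-of-words scoring are invented here, not taken from the paper): an agent that ranks candidate tools by query-description overlap picks the wrong tool when a description is terse and human-oriented, and the right one once the description is rewritten to state the tool's capability explicitly.

```python
# Hypothetical sketch of why tool-description quality matters for selection.
# All tool names and descriptions are invented for illustration.

def score(query: str, description: str) -> float:
    """Fraction of query tokens that also appear in the description."""
    q = set(query.lower().split())
    d = set(description.lower().split())
    return len(q & d) / len(q) if q else 0.0

def select_tool(query: str, tools: dict) -> str:
    """Return the name of the best-matching candidate tool."""
    return max(tools, key=lambda name: score(query, tools[name]))

# Same weather tool, two descriptions: a terse human-oriented one and a
# rewritten, agent-oriented one with explicit capability keywords.
tools_original = {
    "get_weather": "Weather endpoint.",
    "get_stock": "Returns the latest stock price for a ticker symbol.",
}
tools_rewritten = {
    "get_weather": "Return the current weather forecast for a given city.",
    "get_stock": "Returns the latest stock price for a ticker symbol.",
}

query = "what is the weather forecast for Paris"
print(select_tool(query, tools_original))   # get_stock (misled by terse text)
print(select_tool(query, tools_rewritten))  # get_weather
```

The failure mode scales with the candidate pool: the more tools compete for a query, the more a vague description costs.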

📖 Full Retelling

In a paper submitted to arXiv on February 23, 2026, researchers Ruocheng Guo, Kaiwen Dong, Xiang Gao, and Kamalika Das introduced Trace-Free+, a curriculum learning framework for improving the tool interfaces that LLM-based agents consume. The work targets a specific bottleneck: human-oriented tool interfaces that limit agent performance when the agent must select from large candidate tool sets.

The researchers observed that while prior work has focused heavily on agent fine-tuning, tool interfaces, including natural language descriptions and parameter schemas, remain largely human-oriented and often become the limiting factor. Existing approaches to improving these interfaces rely on execution traces, which are frequently unavailable in cold-start or privacy-constrained settings, and typically optimize each tool independently, limiting scalability and generalization to unseen tools.

To overcome these limitations, the team designed Trace-Free+ to progressively transfer supervision from trace-rich settings to trace-free deployment, encouraging the model to abstract reusable interface-usage patterns and tool-usage outcomes. To support this approach, the researchers constructed a large-scale dataset of high-quality tool interfaces using a structured workflow over a diverse collection of tools.

Experiments on StableToolBench and RestBench showed consistent gains on unseen tools, strong cross-domain generalization, and robustness as the number of candidate tools scales to over 100, demonstrating that tool interface optimization is a practical and deployable complement to agent fine-tuning.
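The retelling centers on rewriting a tool interface, meaning its natural language description plus its parameter schema, while leaving the callable surface unchanged. Below is a hypothetical before/after pair (invented for illustration; it is not drawn from the paper's dataset), along with a check that the rewrite preserves the schema shape the agent actually calls:

```python
# Hypothetical tool interface before and after rewriting. The rewritten
# version illustrates the kind of output an interface-optimization model
# might produce; it is not taken from the paper.

original = {
    "name": "flight_search",
    "description": "Search flights.",
    "parameters": {
        "type": "object",
        "properties": {
            "src": {"type": "string", "description": "src"},
            "dst": {"type": "string", "description": "dst"},
            "date": {"type": "string", "description": "date"},
        },
        "required": ["src", "dst"],
    },
}

rewritten = {
    "name": "flight_search",
    "description": (
        "Search for available flights between two airports. "
        "Use when the user asks about flight options, schedules, or prices."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "src": {"type": "string",
                    "description": "Departure airport IATA code, e.g. 'JFK'."},
            "dst": {"type": "string",
                    "description": "Arrival airport IATA code, e.g. 'LHR'."},
            "date": {"type": "string",
                     "description": "Departure date in YYYY-MM-DD format."},
        },
        "required": ["src", "dst"],
    },
}

def same_schema_shape(a: dict, b: dict) -> bool:
    """Rewriting must not change the callable surface: same tool name,
    same parameter names, same required list."""
    return (a["name"] == b["name"]
            and a["parameters"]["properties"].keys()
                == b["parameters"]["properties"].keys()
            and a["parameters"]["required"] == b["parameters"]["required"])

print(same_schema_shape(original, rewritten))  # True
```

Keeping the schema shape fixed is what makes such rewrites safe to deploy: only the text the agent reads changes, never the API contract behind it.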

🏷️ Themes

Artificial Intelligence, Machine Learning, Natural Language Processing

📚 Related People & Topics

Curriculum learning

Technique in machine learning

Curriculum learning is a technique in machine learning in which a model is trained on examples of increasing difficulty, where the definition of "difficulty" may be provided externally or discovered as part of the training process. This is intended to attain good performance more quickly.
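As a concrete illustration of the staged-training idea (a generic curriculum sketch with an invented difficulty proxy, not Trace-Free+'s specific curriculum), each training stage widens the pool so easy examples are seen before hard ones:

```python
# Minimal curriculum-learning sketch: sort examples by a supplied
# difficulty score, then yield progressively larger training pools.

def curriculum_stages(examples, difficulty, n_stages=3):
    """Yield n_stages training pools, easiest examples first."""
    ordered = sorted(examples, key=difficulty)
    for stage in range(1, n_stages + 1):
        cutoff = len(ordered) * stage // n_stages
        yield ordered[:cutoff]  # stage 1: easiest slice; final stage: all

examples = ["cat", "curriculum", "interface", "ai", "tooling", "description"]
difficulty = len  # toy proxy: longer word = harder example

for i, pool in enumerate(curriculum_stages(examples, difficulty), start=1):
    print(f"stage {i}: {pool}")
```

In Trace-Free+'s setting, the analogous progression runs from trace-rich supervision toward trace-free deployment rather than from short words to long ones.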


Natural language processing

Processing of natural language by a computer

Natural language processing (NLP) is the processing of natural language information by a computer. NLP is a subfield of computer science and is closely associated with artificial intelligence. NLP is also related to information retrieval, knowledge representation, computational linguistics, and linguistics.

View Profile → Wikipedia ↗

Artificial intelligence

Intelligence of machines

Artificial Intelligence (AI) is a specialized field of computer science dedicated to the development and study of computational systems capable of performing tasks typically associated with human intelligence. These tasks include learning, reasoning, and problem-solving.



Original Source
Computer Science > Artificial Intelligence
arXiv:2602.20426 [cs.AI] (submitted on 23 Feb 2026)
Title: Learning to Rewrite Tool Descriptions for Reliable LLM-Agent Tool Use
Authors: Ruocheng Guo, Kaiwen Dong, Xiang Gao, Kamalika Das
Comments: Preprint
DOI: https://doi.org/10.48550/arXiv.2602.20426
Read full article at source

Source

arxiv.org
