BravenNow

A Survey on the Optimization of Large Language Model-based Agents

#Large Language Models #LLM optimization #AI agents #parameter-driven methods #reinforcement learning #prompt engineering #ACM Computing Surveys #arXiv

📌 Key Takeaways

  • Researchers published a comprehensive survey on LLM-based agent optimization
  • Current optimization methods often perform suboptimally in complex environments
  • The paper categorizes approaches into parameter-driven and parameter-free methods
  • The research addresses gaps in specialized optimization for agent functionalities

📖 Full Retelling

A research team led by Shangheng Du, with co-authors Jiabao Zhao, Jinxin Shi, Zhentao Xie, Xin Jiang, Yanhong Bai, and Liang He, published a comprehensive survey on optimizing Large Language Model-based agents in ACM Computing Surveys in July 2026; the paper was first submitted to arXiv in March 2025 and revised in February 2026. The survey addresses the growing adoption of LLM-based agents across many fields for autonomous decision-making and interactive tasks, observing that current approaches relying on prompt design or fine-tuning of vanilla LLMs often perform poorly in complex environments. The authors identify a critical gap in the existing literature: while general LLM optimization techniques exist, they lack specialized support for essential agent functionalities such as long-term planning, dynamic environmental interaction, and complex decision-making. In their analysis, the authors categorize optimization approaches into parameter-driven methods, including fine-tuning, reinforcement learning, and hybrid strategies, and parameter-free strategies built on prompt engineering and external knowledge retrieval. The survey also examines the datasets and benchmarks used for evaluation, reviews key applications of LLM-based agents across industries, and discusses major challenges and promising future research directions, making it a valuable resource for both researchers and practitioners working with advanced AI systems.
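To make the paper's taxonomy concrete, here is a minimal sketch of the "parameter-free" branch: rather than updating model weights, the agent's prompt is iteratively refined using feedback from the environment. Everything here is illustrative and not from the paper itself; `toy_llm`, `evaluate`, and `optimize_prompt` are hypothetical stand-ins (a real system would call an actual LLM and a real task evaluator).

```python
# Parameter-free agent optimization sketch: a feedback loop that rewrites
# the prompt instead of fine-tuning the model. `toy_llm` is a deterministic
# stand-in for a real LLM call (hypothetical, for illustration only).

def toy_llm(prompt: str, task: str) -> str:
    """Pretend LLM: answers correctly only when the prompt carries a hint."""
    if "show your work" in prompt:
        return str(eval(task))  # toy shortcut; a real agent would reason
    return "unsure"

def evaluate(answer: str, task: str) -> bool:
    """Environment feedback: did the agent solve the task?"""
    return answer == str(eval(task))

def optimize_prompt(task: str, max_rounds: int = 3) -> str:
    """Refine the prompt until the agent succeeds or rounds run out."""
    prompt = "Solve the task."
    for _ in range(max_rounds):
        answer = toy_llm(prompt, task)
        if evaluate(answer, task):
            return prompt  # the feedback loop converged on a working prompt
        # Critique step: fold environment feedback back into the prompt.
        prompt += " Previous attempt failed; show your work step by step."
    return prompt

print(optimize_prompt("2 + 3"))
```

The parameter-driven branch surveyed in the paper would instead use such success/failure signals as training data or rewards, updating the model's weights via fine-tuning or reinforcement learning.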

🏷️ Themes

Artificial Intelligence, Machine Learning, Language Models

📚 Related People & Topics

ACM Computing Surveys

Academic journal

ACM Computing Surveys is a peer-reviewed quarterly scientific journal published by the Association for Computing Machinery. It publishes survey articles and tutorials related to computer science and computing. The journal was established in 1969 with William S. Dorn as founding editor-in-chief.


AI agent

Systems that perform tasks without human intervention

In the context of generative artificial intelligence, AI agents (also referred to as compound AI systems or agentic AI) are a class of intelligent agents distinguished by their ability to operate autonomously in complex environments. Agentic AI tools prioritize decision-making over content creation ...


Large language model

Type of machine learning model

A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation. The largest and most capable LLMs are generative pre-trained transformers (GPTs) that provide the c...



Original Source
Computer Science > Artificial Intelligence

arXiv:2503.12434 [Submitted on 16 Mar 2025 (v1), last revised 24 Feb 2026 (this version, v2)]

Title: A Survey on the Optimization of Large Language Model-based Agents

Authors: Shangheng Du, Jiabao Zhao, Jinxin Shi, Zhentao Xie, Xin Jiang, Yanhong Bai, Liang He

Abstract: With the rapid development of Large Language Models, LLM-based agents have been widely adopted in various fields, becoming essential for autonomous decision-making and interactive tasks. However, current work typically relies on prompt design or fine-tuning strategies applied to vanilla LLMs, which often leads to limited effectiveness or suboptimal performance in complex agent-related environments. Although LLM optimization techniques can improve model performance across many general tasks, they lack specialized optimization towards critical agent functionalities such as long-term planning, dynamic environmental interaction, and complex decision-making. Although numerous recent studies have explored various strategies to optimize LLM-based agents for complex agent tasks, a systematic review summarizing and comparing these methods from a holistic perspective is still lacking. In this survey, we provide a comprehensive review of LLM-based agent optimization approaches, categorizing them into parameter-driven and parameter-free methods. We first focus on parameter-driven optimization, covering fine-tuning-based optimization, reinforcement learning-based optimization, and hybrid strategies, analyzing key aspects such as trajectory data construction, fine-tuning techniques, reward function design, and optimization algorithms. Additionally, we briefly discuss parameter-free strategies that optimize agent behavior through prompt engineering and external knowledge retrieval.
Finally, we summarize the dataset...

Source

arxiv.org
