Toward Expert Investment Teams: A Multi-Agent LLM System with Fine-Grained Trading Tasks
#Large Language Models #Multi-Agent Systems #Financial Trading #Task Decomposition #Risk-Adjusted Returns #Portfolio Optimization #Japanese Stock Data
📌 Key Takeaways
- Fine-grained task decomposition significantly improves risk-adjusted returns in trading systems
- Alignment between analytical outputs and decision preferences is critical for system performance
- The framework was validated using comprehensive Japanese stock market data
- Standard portfolio optimization, exploiting the systems' low correlation with the stock index and the variance of their outputs, achieved superior performance
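The portfolio-optimization idea in the takeaways can be sketched as a minimal minimum-variance combination of a trading system's return stream with the index. The return series, correlation level, and weighting scheme below are illustrative assumptions, not the paper's actual data or method.

```python
import numpy as np

# Hypothetical daily returns for one LLM trading system and a stock index.
# All numbers are synthetic and illustrative, not from the paper.
rng = np.random.default_rng(0)
index_ret = rng.normal(0.0003, 0.01, 250)
system_ret = 0.1 * index_ret + rng.normal(0.0005, 0.008, 250)  # deliberately low correlation

returns = np.column_stack([system_ret, index_ret])
cov = np.cov(returns, rowvar=False)  # 2x2 covariance matrix

# Minimum-variance weights: w proportional to inv(Sigma) @ 1, normalized to sum to 1.
ones = np.ones(2)
w = np.linalg.solve(cov, ones)
w /= w.sum()

combined = returns @ w
sharpe = combined.mean() / combined.std() * np.sqrt(252)
print("weights:", np.round(w, 3))
print("annualized Sharpe:", round(float(sharpe), 2))
```

Because the system's returns correlate weakly with the index, the covariance matrix is near-diagonal and the optimizer tilts toward the lower-variance, higher-mean stream, which is exactly the diversification effect the paper exploits.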
📖 Full Retelling
Researchers Kunihiro Miyazaki, Takanobu Kawahara, Stephen Roberts, and Stefan Zohren introduced a multi-agent large language model (LLM) framework for financial trading in a paper posted to arXiv on February 26, 2026. The work, titled 'Toward Expert Investment Teams: A Multi-Agent LLM System with Fine-Grained Trading Tasks,' addresses a limitation of current autonomous trading systems: mainstream approaches rely on abstract instructions that overlook the intricacies of real-world trading workflows, which degrades inference performance and makes decision-making less transparent.

Instead of issuing coarse-grained instructions, the proposed framework explicitly decomposes investment analysis into fine-grained tasks. The researchers evaluated the system on comprehensive Japanese stock data, including prices, financial statements, news, and macroeconomic information, under a leakage-controlled backtesting setting designed to keep future information out of past decisions.

The experiments show that fine-grained task decomposition significantly improves risk-adjusted returns over conventional coarse-grained designs. Analysis of intermediate agent outputs further suggests that alignment between analytical outputs and downstream decision preferences is a critical driver of system performance. Finally, the researchers conducted standard portfolio optimization, exploiting each system's low correlation with the stock index and the variance of its output, and achieved superior performance.
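The fine-grained decomposition described above can be illustrated with a toy pipeline in which small, explicit analysis tasks each produce a finding that a decision step then aggregates. The task names, rules, and data below are hypothetical stand-ins for the paper's LLM agents, not its actual design.

```python
from dataclasses import dataclass, field

@dataclass
class StockContext:
    """Shared state that fine-grained tasks read from and write findings to."""
    prices: list[float]
    news: list[str]
    findings: dict[str, str] = field(default_factory=dict)

def momentum_task(ctx: StockContext) -> None:
    # Fine-grained task 1: classify the price trend.
    ctx.findings["momentum"] = "up" if ctx.prices[-1] > ctx.prices[0] else "down"

def news_sentiment_task(ctx: StockContext) -> None:
    # Fine-grained task 2: score headline tone with a crude keyword rule.
    positive = sum(("beats" in h or "growth" in h) for h in ctx.news)
    ctx.findings["sentiment"] = "positive" if positive else "neutral"

def decision_agent(ctx: StockContext) -> str:
    # Manager step: aggregate the task findings into a trading decision.
    if ctx.findings["momentum"] == "up" and ctx.findings["sentiment"] == "positive":
        return "buy"
    return "hold"

ctx = StockContext(prices=[100.0, 103.5], news=["Q3 profit beats forecast"])
for task in (momentum_task, news_sentiment_task):
    task(ctx)
print(decision_agent(ctx))  # -> buy
```

The point of the structure is the paper's alignment finding: because each task emits a named finding that the decision step explicitly consumes, analytical outputs and downstream decision preferences stay in lockstep, unlike a single coarse "analyze this stock" instruction.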
🏷️ Themes
Artificial Intelligence, Financial Trading, Multi-Agent Systems
📚 Related People & Topics
Large language model
Type of machine learning model
A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation. The largest and most capable LLMs are generative pre-trained transformers (GPTs) that provide the c...
Entity Intersection Graph
Connections for Large language model: Educational technology (4 shared), Reinforcement learning (3 shared), Machine learning (2 shared), Artificial intelligence (2 shared), Benchmark (2 shared)
Original Source
Computer Science > Artificial Intelligence
arXiv:2602.23330 [Submitted on 26 Feb 2026]
Title: Toward Expert Investment Teams: A Multi-Agent LLM System with Fine-Grained Trading Tasks
Authors: Kunihiro Miyazaki, Takanobu Kawahara, Stephen Roberts, Stefan Zohren
Abstract: The advancement of large language models has accelerated the development of autonomous financial trading systems. While mainstream approaches deploy multi-agent systems mimicking analyst and manager roles, they often rely on abstract instructions that overlook the intricacies of real-world workflows, which can lead to degraded inference performance and less transparent decision-making. Therefore, we propose a multi-agent LLM trading framework that explicitly decomposes investment analysis into fine-grained tasks, rather than providing coarse-grained instructions. We evaluate the proposed framework using Japanese stock data, including prices, financial statements, news, and macro information, under a leakage-controlled backtesting setting. Experimental results show that fine-grained task decomposition significantly improves risk-adjusted returns compared to conventional coarse-grained designs. Crucially, further analysis of intermediate agent outputs suggests that alignment between analytical outputs and downstream decision preferences is a critical driver of system performance. Moreover, we conduct standard portfolio optimization, exploiting low correlation with the stock index and the variance of each system's output. This approach achieves superior performance. These findings contribute to the design of agent structure and task configuration when applying LLM agents to trading systems in practical settings.
Comments: 14 pages, 3 figures. Subjects: Artificial Intelligence (cs.AI); Trading and Market Microstructure (q-fin....
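The leakage-controlled backtesting setting mentioned in the abstract can be sketched as a point-in-time data filter: each record carries the timestamp at which it became publicly available, and a decision dated d may only see records available strictly before d. The record fields and dates below are illustrative assumptions.

```python
import datetime as dt

# Synthetic records, each tagged with its public-availability date.
# Field names and contents are illustrative, not from the paper.
records = [
    {"available": dt.date(2025, 1, 10), "item": "Q3 financial statement"},
    {"available": dt.date(2025, 2, 1),  "item": "earnings revision news"},
    {"available": dt.date(2025, 3, 15), "item": "macro report"},
]

def visible_at(decision_date: dt.date, data: list[dict]) -> list[dict]:
    """Return only the records an agent could legitimately have seen."""
    return [r for r in data if r["available"] < decision_date]

print([r["item"] for r in visible_at(dt.date(2025, 2, 15), records)])
# -> ['Q3 financial statement', 'earnings revision news']
```

Applying this filter before every agent call guarantees that a backtested decision never conditions on information published after the decision date, which is the core of leakage control.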