TTSR: Test-Time Self-Reflection for Continual Reasoning Improvement
arXiv:2603.03297v1 Announce Type: cross
Original Source
Computer Science > Computation and Language
arXiv:2603.03297 [Submitted on 6 Feb 2026]
Title: TTSR: Test-Time Self-Reflection for Continual Reasoning Improvement
Authors: Haoyang He, Zihua Rong, Liangjie Zhao, Yunjia Zhao, Lan Yang, Honggang Zhang

Abstract: Test-time training enables model adaptation using only test questions and offers a promising paradigm for improving the reasoning ability of large language models (LLMs). However, it faces two major challenges: test questions are often highly difficult, making self-generated pseudo-labels unreliable, and existing methods lack effective mechanisms to adapt to a model's specific reasoning weaknesses, leading to inefficient learning. To address these issues, we propose TTSR, a self-reflective test-time self-evolving training framework. TTSR employs a single pretrained language model that alternates between the roles of a Student and a Teacher at test time. The Student focuses on solving problems and learning from synthesized variant questions, while the Teacher analyzes the Student's failed reasoning trajectories, summarizes recurring reasoning weaknesses, and synthesizes targeted variant questions accordingly. This process guides the model to improve within a learnable regime through a continual self-evolving loop. Experimental results on multiple challenging mathematical reasoning benchmarks show that TTSR consistently improves reasoning performance and generalizes well across different model backbones and general-domain reasoning tasks. These findings suggest that teacher-mediated self-reflection provides an effective pathway for stable and continual reasoning improvement at test time.
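The abstract describes a loop in which one model alternates between a Student role (solving and learning from variants) and a Teacher role (diagnosing failures and synthesizing targeted variants). A minimal sketch of that control flow, with toy stand-ins for the model, might look as follows; all function names and the skill-set representation of "knowledge" are illustrative assumptions, not the paper's actual method:

```python
# Hypothetical sketch of the TTSR-style self-evolving loop. "Knowledge" is
# modeled as a set of mastered skills; a real system would use an LLM and
# fine-tuning in place of these stubs.

def student_solve(model_state, question):
    # Student role: attempt the question; success depends on mastered skills.
    skill = question["skill"]
    return {"question": question, "correct": skill in model_state["mastered"]}

def teacher_reflect(failures):
    # Teacher role: summarize recurring weaknesses from failed trajectories.
    weaknesses = {}
    for f in failures:
        skill = f["question"]["skill"]
        weaknesses[skill] = weaknesses.get(skill, 0) + 1
    return weaknesses

def teacher_synthesize_variants(weaknesses, n_per_skill=2):
    # Teacher role: generate targeted variant questions for weak skills.
    variants = []
    for skill, _count in sorted(weaknesses.items(), key=lambda kv: -kv[1]):
        for i in range(n_per_skill):
            variants.append({"skill": skill, "id": f"variant-{skill}-{i}"})
    return variants

def student_learn(model_state, variants):
    # Toy stand-in for training: practicing a skill on variants masters it.
    for v in variants:
        model_state["mastered"].add(v["skill"])

def ttsr_loop(model_state, test_questions, rounds=3):
    # Continual self-evolving loop: solve, reflect, synthesize, learn.
    for _ in range(rounds):
        results = [student_solve(model_state, q) for q in test_questions]
        failures = [r for r in results if not r["correct"]]
        if not failures:
            break
        weaknesses = teacher_reflect(failures)
        variants = teacher_synthesize_variants(weaknesses)
        student_learn(model_state, variants)
    return model_state

state = {"mastered": {"algebra"}}
tests = [{"skill": "algebra", "id": "q1"}, {"skill": "geometry", "id": "q2"}]
final = ttsr_loop(state, tests)
print(sorted(final["mastered"]))  # -> ['algebra', 'geometry']
```

The key design point the abstract emphasizes is that the Teacher keeps the Student in a learnable regime: rather than training directly on the hardest test questions (where pseudo-labels are unreliable), it synthesizes variants aimed at the diagnosed weaknesses.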
Comments: work in progress. Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning