
Fast and Effective On-policy Distillation from Reasoning Prefixes

#on‑policy distillation #student policy sampling #reasoning prefixes #token‑level supervision #training cost #generalization #arXiv preprint

📌 Key Takeaways

  • On‑policy distillation (OPD) samples trajectories from the student model and supervises them with a teacher at the token level (see the sketch after this list).
  • OPD can yield better generalization than off‑policy distillation because it does not rely solely on verifiable terminal rewards.
  • OPD's on‑the‑fly sampling of the student policy substantially increases training cost, especially for long responses.
  • The paper proposes a fast and effective approach that uses reasoning prefixes to mitigate this cost.
  • An initial analysis in the paper examines where the sampling cost arises, motivating the prefix‑based approach.
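
The recipe in the bullets above can be made concrete with a short sketch. The following is a minimal, generic PyTorch rendering of on‑policy distillation with a reverse‑KL token loss, assuming Hugging Face‑style `generate`/forward APIs; it is not the paper's prefix‑based method, and `opd_step` is an illustrative name.

```python
import torch
import torch.nn.functional as F

def opd_step(student, teacher, prompt_ids, max_new_tokens=256):
    """One generic on-policy distillation step: sample a trajectory from
    the current student, then supervise every sampled token with the
    teacher's next-token distribution (reverse KL)."""
    # 1. The expensive part: autoregressive sampling from the student,
    #    one forward pass per generated token.
    with torch.no_grad():
        traj = student.generate(prompt_ids, max_new_tokens=max_new_tokens,
                                do_sample=True)

    # 2. Score the full trajectory under both models in single forward passes.
    student_logits = student(traj).logits[:, :-1]   # predicts traj[:, 1:]
    with torch.no_grad():
        teacher_logits = teacher(traj).logits[:, :-1]

    # 3. Token-level supervision: KL(student || teacher) at each position.
    #    A full implementation would also mask prompt and padding positions.
    s_logp = F.log_softmax(student_logits, dim=-1)
    t_logp = F.log_softmax(teacher_logits, dim=-1)
    return F.kl_div(t_logp, s_logp, log_target=True, reduction="batchmean")
```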

📖 Full Retelling

WHO: The authors of a new research paper. WHAT: They present *Fast and Effective On‑policy Distillation from Reasoning Prefixes*, a technique that builds on on‑policy distillation (OPD) for language models. WHERE: The paper is posted as a preprint on arXiv under the identifier arXiv:2602.15260v1. WHEN: It belongs to the early‑2026 batch of preprints (the numbering scheme indicates February 2026). WHY: OPD improves generalization by supervising the student model at the token level rather than relying solely on terminal rewards, but its on‑the‑fly sampling of student trajectories is costly, especially for long outputs. The authors aim to show that reasoning prefixes can accelerate this process and reduce training overhead.

🏷️ Themes

Machine Learning, Natural Language Processing, Reinforcement Learning, Model Distillation


Deep Analysis

Why It Matters

On-policy distillation improves model generalization by supervising student-sampled trajectories at the token level, but that sampling is expensive; this work proposes a faster method that retains the benefits, making on-policy distillation more practical for long-response training of large models.

Context & Background

  • Distillation transfers knowledge from a teacher model to a smaller student model
  • On-policy distillation samples trajectories from the student during training, whereas off-policy distillation trains on pre-recorded data (contrasted in the sketch after this list)
  • Generating long responses raises per-step sampling cost, limiting scalability
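
To make the on‑policy/off‑policy distinction above concrete, here is a schematic comparison. The batch fields are illustrative, and `token_level_distill_loss` is a hypothetical helper standing in for the teacher‑supervised loss sketched earlier; neither name comes from the paper.

```python
def distillation_step(student, teacher, batch, on_policy: bool):
    """One training step, contrasting the two data-collection regimes.
    Schematic only: `batch` is assumed to carry prompt ids and, in the
    off-policy case, pre-recorded response ids."""
    if on_policy:
        # On-policy: sample a fresh response from the *current* student.
        # One forward pass per generated token, so long responses make
        # this decode dominate wall-clock training cost.
        response_ids = student.generate(batch["prompt_ids"],
                                        do_sample=True, max_new_tokens=1024)
    else:
        # Off-policy: reuse stored (often teacher-generated) responses.
        # No decoding at training time, but the data drifts away from the
        # student's own distribution as training progresses.
        response_ids = batch["stored_response_ids"]
    # Hypothetical helper: the token-level teacher-supervised loss
    # (see the opd_step sketch earlier).
    return token_level_distill_loss(student, teacher, response_ids)
```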

What Happens Next

The technique is likely to be evaluated on larger benchmarks and integrated into training pipelines, and it may inspire further cost‑saving distillation strategies.

Frequently Asked Questions

What is on-policy distillation?

It is a training method in which the student model generates sequences during training and a teacher model supervises each token of those sequences, rather than relying only on a terminal reward.
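
In symbols, a common way to write this objective (a standard form from the on‑policy distillation literature; the paper's exact choice of divergence D and notation may differ):

```latex
% Student \pi_\theta samples y given prompt x; teacher \pi_T supervises
% each sampled token with a per-position divergence D (often a KL).
\mathcal{L}_{\mathrm{OPD}}(\theta)
  = \mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x)}
    \Bigg[ \frac{1}{|y|} \sum_{t=1}^{|y|}
      D\big( \pi_\theta(\cdot \mid x, y_{<t}) \,\big\|\, \pi_T(\cdot \mid x, y_{<t}) \big)
    \Bigg]
```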

How does the new method reduce training cost?

The abstract is truncated before the mechanism is spelled out, but the title indicates that generation is seeded with reasoning prefixes, so the student completes responses from a prefix rather than sampling full trajectories from scratch, cutting the amount of on‑the‑fly decoding needed for token‑level supervision.
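
Since the mechanism is not described in the available text, the following is only a hypothetical illustration of the idea named in the title: seeding generation with a reasoning prefix so that the student decodes just the remainder on‑policy. The prefix source and all names here are assumptions, not the paper's method.

```python
import torch

def sample_from_prefix(student, prompt_ids, prefix_ids, max_new_tokens=256):
    """Hypothetical prefix-seeded sampling: condition the student on
    prompt + an existing reasoning prefix (e.g. from a teacher rollout
    or a cache) so only the suffix is decoded on-policy."""
    seed = torch.cat([prompt_ids, prefix_ids], dim=-1)
    # Fewer freshly sampled tokens per trajectory -> cheaper OPD step.
    traj = student.generate(seed, max_new_tokens=max_new_tokens,
                            do_sample=True)
    # The teacher can then supervise the sampled tokens at the token
    # level exactly as in standard OPD.
    return traj
```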

Original Source
arXiv:2602.15260v1 (Announce Type: cross). Abstract: On-policy distillation (OPD), which samples trajectories from the student model and supervises them with a teacher at the token level, avoids relying solely on verifiable terminal rewards and can yield better generalization than off-policy distillation. However, OPD requires expensive on-the-fly sampling of the student policy during training, which substantially increases training cost, especially for long responses. Our initial analysis shows t…

Source

arxiv.org
