BravenNow
Boosting ASR Robustness via Test-Time Reinforcement Learning with Audio-Text Semantic Rewards


#ASR #robustness #reinforcement learning #audio-text semantic #test-time adaptation #speech recognition #acoustic environments

📌 Key Takeaways

  • Researchers propose a test-time reinforcement learning method to improve ASR robustness.
  • The approach uses audio-text semantic rewards to guide model adaptation.
  • It aims to enhance performance in noisy or challenging acoustic environments.
  • The method operates during inference without requiring retraining on new data.
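The takeaways hinge on a label-free learning signal: a reward measuring how well a candidate transcription semantically matches the audio. A minimal sketch of such a reward, using toy embedding vectors and cosine similarity (the embeddings, the vector dimensions, and the mapping to [0, 1] are illustrative assumptions, not the paper's actual reward model):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def semantic_reward(audio_emb, text_emb):
    """Map cosine similarity from [-1, 1] to a reward in [0, 1]."""
    return 0.5 * (cosine_similarity(audio_emb, text_emb) + 1.0)

# Toy embeddings (illustrative, not from any real encoder): a candidate
# transcription whose embedding points the same way as the audio scores higher.
audio = [0.9, 0.1, 0.3]
good_text = [0.8, 0.2, 0.3]
bad_text = [-0.5, 0.9, -0.1]
assert semantic_reward(audio, good_text) > semantic_reward(audio, bad_text)
```

In a real system the two embeddings would come from pretrained audio and text encoders; the point here is only that the reward is computed from the signal itself, so no ground-truth transcript is needed at inference time.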

📖 Full Retelling

arXiv:2603.05231v1 Announce Type: cross

Abstract: Recently, Automatic Speech Recognition (ASR) systems (e.g., Whisper) have achieved remarkable accuracy improvements but remain highly sensitive to unseen real-world data (data with large distribution shifts), including noisy environments and diverse accents. To address this issue, test-time adaptation (TTA) has shown great potential for improving model adaptability at inference time without ground-truth labels, and existing TTA methods often rely on pseudo-labeling or entropy minimization.
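One family of TTA baselines the paper positions itself against minimizes the entropy of the model's own output distribution. A toy sketch on a single token's logits (the three-way logit vector is an illustrative assumption; the softmax-entropy gradient is the standard derivation) shows the failure mode: descending on entropy only sharpens the distribution around whatever the model already believed, so a confident error is reinforced rather than corrected:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def entropy_min_step(logits, lr=0.5):
    """One gradient-descent step on H(softmax(logits)).
    Via the softmax Jacobian, dH/dz_i = -p_i * (log p_i + H)."""
    p = softmax(logits)
    h = entropy(p)
    return [z - lr * (-pi * (math.log(pi) + h)) for z, pi in zip(logits, p)]

# A model that is confidently wrong: suppose token 0 is the argmax but is an
# incorrect transcription (an illustrative setup, not real ASR logits).
logits = [2.0, 1.0, 0.5]
for _ in range(20):
    logits = entropy_min_step(logits)
# The distribution only sharpens around the existing argmax: the
# high-confidence error is reinforced, never revisited.
probs = softmax(logits)
```

Because the objective depends only on model confidence, there is no external signal that could flip the argmax; this is the confirmation bias that an audio-grounded reward is meant to break.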

🏷️ Themes

ASR Robustness, Reinforcement Learning

📚 Related People & Topics

ASR

Original Source
Computer Science > Sound
arXiv:2603.05231 [Submitted on 5 Mar 2026]

Title: Boosting ASR Robustness via Test-Time Reinforcement Learning with Audio-Text Semantic Rewards
Authors: Linghan Fang, Tianxin Xie, Li Liu

Abstract: Recently, Automatic Speech Recognition (ASR) systems (e.g., Whisper) have achieved remarkable accuracy improvements but remain highly sensitive to unseen real-world data (data with large distribution shifts), including noisy environments and diverse accents. To address this issue, test-time adaptation (TTA) has shown great potential for improving model adaptability at inference time without ground-truth labels, and existing TTA methods often rely on pseudo-labeling or entropy minimization. However, by treating model confidence as a learning signal, these methods may reinforce high-confidence errors, leading to confirmation bias that undermines adaptation. To overcome these limitations, we present ASR-TRA, a novel Test-time Reinforcement Adaptation framework inspired by causal intervention. More precisely, our method introduces a learnable decoder prompt and uses temperature-controlled stochastic decoding to generate diverse transcription candidates. These candidates are scored by a reward model that measures audio-text semantic alignment, and the resulting feedback is used to update both model and prompt parameters via reinforcement learning. Comprehensive experiments on LibriSpeech with synthetic noise and the L2-ARCTIC accented-English dataset demonstrate that our method achieves higher accuracy while maintaining lower latency than existing TTA baselines. Ablation studies further confirm the effectiveness of combining audio- and language-based rewards, highlighting our method's enhanced stability and interpretability.
Overall, our approach provides a practical and robust solution for dep...
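The sample-score-update loop described in the abstract can be approximated at toy scale with REINFORCE over a single categorical distribution. Everything here is an illustrative stand-in: the three "candidates", the reward table, the temperature, learning rate, and step count are assumptions, and the real method operates on a Whisper-style decoder with a learnable prompt rather than bare logits:

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Temperature-controlled softmax; higher temperature samples more diversely."""
    m = max(logits)
    exps = [math.exp((z - m) / temperature) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sample(probs, rng):
    """Draw one index from a categorical distribution."""
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

def reinforce_adapt(logits, reward_fn, steps=300, temperature=1.5, lr=0.3, seed=0):
    """REINFORCE with an exact expected-reward baseline on one categorical 'decoder'.
    d(log softmax(z/T)_i)/dz_j = (delta_ij - p_j) / T."""
    rng = random.Random(seed)
    logits = list(logits)
    for _ in range(steps):
        probs = softmax(logits, temperature)   # diverse candidates via temperature
        i = sample(probs, rng)                 # one stochastic transcription candidate
        baseline = sum(reward_fn(k) * pk for k, pk in enumerate(probs))
        adv = reward_fn(i) - baseline          # advantage of the sampled candidate
        for j in range(len(logits)):
            grad = ((1.0 if j == i else 0.0) - probs[j]) / temperature
            logits[j] += lr * adv * grad       # gradient ascent on expected reward
    return logits

# Hypothetical semantic rewards: candidate 2 aligns best with the audio.
rewards = [0.2, 0.4, 0.9]
# Start from a policy that initially prefers candidate 0.
adapted = reinforce_adapt([1.0, 0.0, 0.0], lambda i: rewards[i])
```

Unlike the entropy-minimization sketch, the external reward here can overturn the model's initial preference, which is the mechanism the abstract credits for avoiding confirmation bias.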
Read full article at source

Source

arxiv.org
