
When Sensors Fail: Temporal Sequence Models for Robust PPO under Sensor Drift

#PPO #SensorDrift #TemporalSequenceModels #Robustness #ReinforcementLearning

📌 Key Takeaways

  • Researchers propose augmenting the PPO algorithm with temporal sequence models to make it robust to sensor drift.
  • Sensor drift, a common issue in real-world deployments, degrades reinforcement learning performance.
  • The approach leverages historical sensor data to infer missing information and maintain policy stability (see the policy sketch after this list).
  • Experiments on continuous control tasks show improved resilience under simulated drift conditions.

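The architectural idea behind these takeaways is a policy that conditions on a window of recent observations instead of a single frame, so temporal context can stand in for sensors that have gone dark. Below is a minimal sketch of a Transformer-based sequence policy in PyTorch; the class name, dimensions, and window length are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a Transformer sequence policy for PPO: encode a
# short history of observations and act from the most recent token.
import torch
import torch.nn as nn

class SequencePolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, d_model=64,
                 n_heads=4, n_layers=2, window=16):
        super().__init__()
        self.window = window
        self.embed = nn.Linear(obs_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.mu = nn.Linear(d_model, act_dim)              # action mean
        self.log_std = nn.Parameter(torch.zeros(act_dim))  # learned std

    def forward(self, obs_history):
        # obs_history: (batch, window, obs_dim); entries from failed
        # sensors may be zeroed or stale -- the encoder sees the history.
        h = self.encoder(self.embed(obs_history))
        mu = self.mu(h[:, -1])            # act from the latest time step
        return torch.distributions.Normal(mu, self.log_std.exp())
```

Within PPO the action is sampled from the returned distribution and its log-probability feeds the clipped surrogate objective; acting only from the final token of a fixed sliding window keeps per-step inference cost bounded.
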
📖 Full Retelling

The paper (arXiv:2603.04648v1) observes that real-world reinforcement learning systems must operate under distributional drift in their observation streams, yet most policy architectures implicitly assume fully observed, noise-free states. The authors study the robustness of Proximal Policy Optimization (PPO) under temporally persistent sensor failures, which induce partial observability and representation shift, and respond by augmenting PPO with temporal sequence models, including Transformers and State Space Models (SSMs), so that policies can infer missing information from history. They prove a high-probability bound on infinite-horizon reward degradation that quantifies how robustness depends on policy smoothness and failure persistence, and they show on MuJoCo continuous-control benchmarks with severe sensor dropout that Transformer-based sequence policies substantially outperform MLP, RNN, and SSM baselines, maintaining high returns even when large fractions of sensors are unavailable.

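A hedged illustration of the failure process described above: temporal persistence is commonly modeled as a per-sensor two-state Markov chain (working ↔ failed). The Gymnasium wrapper below sketches that idea; the fail/recover rates and the zero-fill convention are illustrative assumptions, not the paper's protocol.

```python
# Illustrative sketch: temporally persistent sensor dropout modeled as a
# per-sensor two-state Markov chain (working <-> failed). The rates below
# are assumptions for demonstration, not the paper's settings.
import numpy as np
import gymnasium as gym

class PersistentSensorDropout(gym.ObservationWrapper):
    def __init__(self, env, p_fail=0.05, p_recover=0.10, seed=None):
        super().__init__(env)
        self.p_fail, self.p_recover = p_fail, p_recover
        self.rng = np.random.default_rng(seed)
        self.failed = np.zeros(env.observation_space.shape[0], dtype=bool)

    def observation(self, obs):
        # Working sensors fail with prob. p_fail; failed sensors recover
        # with prob. p_recover, so outages persist across many steps.
        flip = self.rng.random(self.failed.shape)
        self.failed = np.where(self.failed,
                               flip >= self.p_recover,   # stay failed
                               flip < self.p_fail)       # newly fail
        out = obs.copy()
        out[self.failed] = 0.0        # a failed sensor reads as zero
        return out

env = PersistentSensorDropout(gym.make("HalfCheetah-v4"), seed=0)
```

In this chain each sensor is failed a fraction p_fail / (p_fail + p_recover) of the time at stationarity (one third with the defaults above), so the two rates give independent control over outage severity and outage persistence.
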
🏷️ Themes

Reinforcement Learning, Sensor Reliability

Original Source

Computer Science > Machine Learning
arXiv:2603.04648 [cs.LG] (Submitted on 4 Mar 2026)

Title: When Sensors Fail: Temporal Sequence Models for Robust PPO under Sensor Drift
Authors: Kevin Vogt-Lowell, Theodoros Tsiligkaridis, Rodney Lafuente-Mercado, Surabhi Ghatti, Shanghua Gao, Marinka Zitnik, Daniela Rus

Abstract: Real-world reinforcement learning systems must operate under distributional drift in their observation streams, yet most policy architectures implicitly assume fully observed and noise-free states. We study robustness of Proximal Policy Optimization under temporally persistent sensor failures that induce partial observability and representation shift. To respond to this drift, we augment PPO with temporal sequence models, including Transformers and State Space Models, to enable policies to infer missing information from history and maintain performance. Under a stochastic sensor failure process, we prove a high-probability bound on infinite-horizon reward degradation that quantifies how robustness depends on policy smoothness and failure persistence. Empirically, on MuJoCo continuous-control benchmarks with severe sensor dropout, we show Transformer-based sequence policies substantially outperform MLP, RNN, and SSM baselines in robustness, maintaining high returns even when large fractions of sensors are unavailable. These results demonstrate that temporal sequence reasoning provides a principled and practical mechanism for reliable operation under observation drift caused by sensor unreliability.

Comments: Accepted at ICLR 2026 CAO Workshop
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.04648 (arXiv:2603.04648v1 for this version)
DOI: https://doi.org/10.48550/arXiv.2603.04648
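
The reported experiments sweep how much of the observation vector is unavailable. A minimal harness for that kind of robustness sweep might look like the sketch below; the random stand-in policy, the window padding, and the zero-fill convention are assumptions for illustration, not the authors' evaluation code.

```python
# Hypothetical robustness sweep: average return as the fraction of
# permanently failed (zeroed) sensors grows. The random action function
# is a stand-in for a trained sequence policy.
import numpy as np
import gymnasium as gym

def evaluate(env, act_fn, dead, window=16, episodes=5):
    returns = []
    for _ in range(episodes):
        obs, _ = env.reset()
        obs = np.where(dead, 0.0, obs)      # zero out failed sensors
        history = [obs] * window            # pad window with first obs
        total, done = 0.0, False
        while not done:
            action = act_fn(np.stack(history[-window:]))
            obs, reward, terminated, truncated, _ = env.step(action)
            obs = np.where(dead, 0.0, obs)
            history.append(obs)
            total += reward
            done = terminated or truncated
        returns.append(total)
    return float(np.mean(returns))

env = gym.make("HalfCheetah-v4")
rng = np.random.default_rng(0)
act_fn = lambda hist: env.action_space.sample()   # stand-in policy
for frac in (0.0, 0.25, 0.5, 0.75):
    dead = rng.random(env.observation_space.shape[0]) < frac
    print(f"~{frac:.0%} sensors failed: mean return "
          f"{evaluate(env, act_fn, dead):.1f}")
```

Swapping the stand-in for a trained MLP policy and a trained sequence policy would reproduce the comparison the abstract describes, with the flat policy expected to degrade sharply as the failed fraction grows while the sequence policy holds up more gracefully.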
Read full article at source

Source

arxiv.org
