Causally Robust Reward Learning from Reason-Augmented Preference Feedback


#reward learning #preference feedback #causal robustness #AI alignment #human values #reason-augmented #reward hacking

📌 Key Takeaways

  • Researchers introduce ReCouPLe, a lightweight framework that learns reward functions from human preferences augmented with natural-language rationales.
  • Each rationale acts as a guiding projection axis in an embedding space, steering the reward toward causally relevant trajectory features.
  • By de-emphasizing spurious correlations, the approach reduces causal confusion and reward hacking, improving robustness under distribution shift.
  • Because rationales recur across tasks, preference knowledge transfers to novel tasks without extra data or language-model fine-tuning.

📖 Full Retelling

arXiv:2603.04861v1 Abstract: Preference-based reward learning is widely used for shaping agent behavior to match a user's preference, yet its sparse binary feedback makes it especially vulnerable to causal confusion. The learned reward often latches onto spurious features that merely co-occur with preferred trajectories during training, collapsing when those correlations disappear or reverse at test time. We introduce ReCouPLe, a lightweight framework that uses natural language rationales to provide the missing causal signal. Each rationale is treated as a guiding projection axis in an embedding space, training the model to score trajectories based on features aligned with that axis while de-emphasizing context that is unrelated to the stated reason. Because the same rationales (e.g., "avoids collisions", "completes the task faster") can appear across multiple tasks, ReCouPLe naturally reuses the same causal direction whenever tasks share semantics, and transfers preference knowledge to novel tasks without extra data or language-model fine-tuning.
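The abstract's core mechanism, treating a rationale as a guiding projection axis and scoring trajectories by their component along it, can be sketched roughly as follows. This is a minimal toy illustration, not the paper's actual implementation; the function name, embedding dimensionality, and vectors are all hypothetical.

```python
import numpy as np

def reward_score(traj_emb, rationale_emb):
    # Normalize the rationale embedding into a unit "causal axis".
    axis = rationale_emb / np.linalg.norm(rationale_emb)
    # Score the trajectory by its projection onto that axis, so features
    # unrelated to the stated reason contribute nothing to the reward.
    return float(traj_emb @ axis)

# Hypothetical 4-d embeddings for illustration.
rationale = np.array([1.0, 0.0, 0.0, 0.0])   # e.g. "avoids collisions"
preferred = np.array([0.9, 0.2, 0.1, 0.0])   # aligned with the rationale
spurious  = np.array([0.1, 0.9, 0.3, 0.2])   # co-occurring but causally unrelated

assert reward_score(preferred, rationale) > reward_score(spurious, rationale)
```

Under this sketch, a trajectory that merely co-occurs with preferred ones but points along an unrelated embedding direction scores low, which is the causal-robustness property the abstract describes.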

🏷️ Themes

AI Alignment, Causal Inference

📚 Related People & Topics

AI alignment

Conformance of AI to intended objectives

In the field of artificial intelligence (AI), alignment aims to steer AI systems toward a person's or group's intended goals, preferences, or ethical principles. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues unintended objectives.


Entity Intersection Graph

Connections for AI alignment:

🌐 Large language model 7 shared
🌐 AI safety 3 shared
🌐 Reinforcement learning from human feedback 2 shared
🌐 Cultural bias 1 shared
🏢 OpenAI 1 shared

Original Source
Computer Science > Artificial Intelligence
arXiv:2603.04861 [Submitted on 5 Mar 2026]
Title: Causally Robust Reward Learning from Reason-Augmented Preference Feedback
Authors: Minjune Hwang, Yigit Korkmaz, Daniel Seita, Erdem Bıyık
Abstract: Preference-based reward learning is widely used for shaping agent behavior to match a user's preference, yet its sparse binary feedback makes it especially vulnerable to causal confusion. The learned reward often latches onto spurious features that merely co-occur with preferred trajectories during training, collapsing when those correlations disappear or reverse at test time. We introduce ReCouPLe, a lightweight framework that uses natural language rationales to provide the missing causal signal. Each rationale is treated as a guiding projection axis in an embedding space, training the model to score trajectories based on features aligned with that axis while de-emphasizing context that is unrelated to the stated reason. Because the same rationales (e.g., "avoids collisions", "completes the task faster") can appear across multiple tasks, ReCouPLe naturally reuses the same causal direction whenever tasks share semantics, and transfers preference knowledge to novel tasks without extra data or language-model fine-tuning. Our learned reward model can ground preferences on the articulated reason, aligning better with user intent and generalizing beyond spurious features. ReCouPLe outperforms baselines by up to 1.5x in reward accuracy under distribution shifts, and 2x in downstream policy performance in novel tasks.
We have released our code at this https URL
Comments: Published in International Conference on Learning Representations 2026
Subjects: Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Robotics (cs.RO)
Cite as: arXiv:2603.04861 [cs.AI] (or arXiv:2603.04861v1 [cs....
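The abstract's starting point is fitting a reward model to sparse binary preference feedback. A standard objective for such pairwise labels is the Bradley-Terry negative log-likelihood; the sketch below illustrates that generic objective and is not necessarily the exact loss ReCouPLe uses.

```python
import math

def bt_preference_loss(score_preferred, score_rejected):
    """Bradley-Terry negative log-likelihood for one pairwise preference.

    The model's probability that the preferred trajectory wins is
    sigmoid(score_preferred - score_rejected); the loss is its negative log.
    """
    margin = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A larger score margin in favor of the preferred trajectory lowers the loss;
# a tie gives -log(0.5) = ln 2.
print(bt_preference_loss(0.5, 0.5))
```

Because this loss only sees which trajectory won, any feature correlated with winning can drive the scores, which is exactly the causal-confusion failure mode the paper's rationale-projection mechanism is meant to counteract.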

Source

arxiv.org
