SafeCRS: Personalized Safety Alignment for LLM-Based Conversational Recommender Systems
arXiv:2603.03536v1 Announce Type: cross
Original Source
Computer Science > Computation and Language
arXiv:2603.03536 [Submitted on 3 Mar 2026]
Title: SafeCRS: Personalized Safety Alignment for LLM-Based Conversational Recommender Systems
Authors: Haochang Hao, Yifan Xu, Xinzhuo Li, Yingqiang Ge, Lu Cheng
Abstract: Current LLM-based conversational recommender systems (CRS) primarily optimize recommendation accuracy and user satisfaction. We identify an underexplored vulnerability in which recommendation outputs may negatively impact users by violating personalized safety constraints, when individualized safety sensitivities -- such as trauma triggers, self-harm history, or phobias -- are implicitly inferred from the conversation but not respected during recommendation. We formalize this challenge as personalized CRS safety and introduce SafeRec, a new benchmark dataset designed to systematically evaluate safety risks in LLM-based CRS under user-specific constraints. To further address this problem, we propose SafeCRS, a safety-aware training framework that integrates Safe Supervised Fine-Tuning (Safe-SFT) with Safe Group reward-Decoupled Normalization Policy Optimization (Safe-GDPO) to jointly optimize recommendation quality and personalized safety alignment. Extensive experiments on SafeRec demonstrate that SafeCRS reduces safety violation rates by up to 96.5% relative to the strongest recommendation-quality baseline while maintaining competitive recommendation quality.
Warning: This paper contains potentially harmful and offensive content.
Comments: 14 pages, 4 figures
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Information Retrieval (cs.IR)
Cite as: arXiv:2603.03536 [cs.CL] (or arXiv:2603.03536v1 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.03536
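The abstract does not spell out the Safe-GDPO objective, but the name suggests a group-based policy-optimization scheme (in the spirit of GRPO) in which the recommendation-quality reward and the personalized-safety reward are normalized separately within each sampled group before being combined. The sketch below illustrates that reading only; the function name, the per-channel z-score normalization, and the mixing weight are assumptions, not the paper's actual formulation.

```python
import statistics


def decoupled_group_advantages(quality_rewards, safety_rewards, weight=0.5):
    """Illustrative sketch: normalize two reward channels independently
    within one sampled group, then mix them into a single advantage.

    This is a guess at what 'group reward-decoupled normalization' could
    mean; `weight` trading off quality vs. safety is an assumption.
    """
    def z_normalize(rewards):
        mean = statistics.fmean(rewards)
        std = statistics.pstdev(rewards) or 1.0  # guard degenerate groups
        return [(r - mean) / std for r in rewards]

    q = z_normalize(quality_rewards)   # quality channel, normalized alone
    s = z_normalize(safety_rewards)    # safety channel, normalized alone
    return [weight * qi + (1 - weight) * si for qi, si in zip(q, s)]
```

Normalizing each channel separately keeps a high-variance quality signal from drowning out a sparse safety signal inside the same group, which is one plausible motivation for decoupling the two before policy optimization.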