Preventing Safety Drift in Large Language Models via Coupled Weight and Activation Constraints
Full Retelling
arXiv:2604.12384v1 Announce Type: new
Abstract: Safety alignment in Large Language Models (LLMs) remains highly fragile during fine-tuning, where even benign adaptation can degrade pre-trained refusal behaviors and enable harmful responses. Existing defenses typically constrain either weights or activations in isolation, without considering their coupled effects on safety. In this paper, we first theoretically demonstrate that constraining either weights or activations alone is insufficient for …
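The abstract cuts off before describing the authors' construction, but the general idea of coupling the two constraint families can be illustrated with a toy regularizer. The sketch below is an assumption-laden illustration, not the paper's method: it combines a weight-space penalty (L2 drift from a frozen, safety-aligned reference model) with an activation-space penalty (matching hidden responses on safety-critical prompts) in a single objective. The names coupled_safety_penalty, lam_w, and lam_a, the L2 form of both terms, and the toy model are all hypothetical choices made here for the sketch.

    import copy
    import torch
    import torch.nn as nn

    def coupled_safety_penalty(model, ref_model, safety_inputs,
                               lam_w=0.1, lam_a=0.1):
        # Weight-space term: L2 drift of the fine-tuned weights from the
        # frozen, safety-aligned reference weights.
        w_term = sum(
            (p - p_ref).pow(2).sum()
            for p, p_ref in zip(model.parameters(), ref_model.parameters())
        )
        # Activation-space term: keep the model's responses on
        # safety-critical prompts close to the reference model's responses.
        with torch.no_grad():
            ref_acts = ref_model(safety_inputs)
        acts = model(safety_inputs)
        a_term = (acts - ref_acts).pow(2).mean()
        # Coupling: both terms enter one objective, so neither weights nor
        # activations can drift freely while only the other is constrained.
        return lam_w * w_term + lam_a * a_term

    # Toy usage: a small stand-in network instead of an LLM.
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))
    ref_model = copy.deepcopy(model).eval()
    for p in ref_model.parameters():
        p.requires_grad_(False)

    x = torch.randn(4, 16)                # stand-in for safety prompts
    task_loss = model(x).pow(2).mean()    # stand-in for the fine-tuning loss
    loss = task_loss + coupled_safety_penalty(model, ref_model, x)
    loss.backward()

The point of folding both penalties into one loss, rather than applying them separately, is to reflect the abstract's claim that constraining weights or activations in isolation is insufficient; how the paper actually couples the two terms is not recoverable from the truncated abstract.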
Related People & Topics
Large language model
Type of machine learning model
A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation. The largest and most capable LLMs are generative pre-trained transformers (GPTs) that provide the c...
Entity Intersection Graph
Connections for Large language model:
- Artificial intelligence (3 shared)
- Reinforcement learning (3 shared)
- Educational technology (2 shared)
- Benchmark (2 shared)
- OpenAI (2 shared)
Original Source
Read full article at source