
PromptCD: Test-Time Behavior Enhancement via Polarity-Prompt Contrastive Decoding

#PromptCD #PolarityPromptContrastiveDecoding #AIAlignment #TestTimeBehaviorEnhancement #LargeLanguageModels #VisionLanguageModels #3HAlignmentObjectives

📌 Key Takeaways

  • PromptCD operates at test time without requiring additional training data
  • The method uses paired positive and negative prompts to enhance AI behavior
  • Significant improvements demonstrated on '3H' alignment objectives for LLMs
  • Enhances VQA performance for vision-language models through visual attention reinforcement

📖 Full Retelling

PromptCD constructs paired positive and negative guiding prompts for a target behavior and contrasts the model's responses, specifically token-level probability distributions in LLMs and visual attention patterns in VLMs, to reinforce desirable outcomes at test time without any additional training. For large language models (LLMs), the researchers demonstrated consistent and substantial improvements across the '3H' alignment objectives: helpfulness, honesty, and harmlessness. These results indicate that PromptCD can effectively enhance AI behavior in critical domains without the prohibitive computational and annotation costs of training-time alignment methods. For vision-language models (VLMs), the team further analyzed the contrastive effects on visual attention patterns, showing that PromptCD significantly improves Visual Question Answering (VQA) performance by reinforcing behavior-consistent visual grounding. This dual applicability across model types is a notable step toward more generalizable alignment techniques. The researchers emphasize that their approach offers a simple, general, and cost-efficient strategy for reliable behavior control across multiple AI modalities, addressing a key limitation of current AI safety research.
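The token-level mechanism for LLMs lends itself to a compact illustration. The following is a minimal sketch, assuming a Hugging Face `transformers` causal LM: the same partial continuation is scored under a positive and a negative guiding prompt, and the logit difference between the two steers greedy decoding toward the positive behavior. The prompt wordings, the `alpha` strength, and the exact combination rule here are illustrative assumptions, not the paper's precise formulation.

```python
# Minimal sketch of polarity-prompt contrastive decoding (greedy variant).
# NOTE: prompt wordings, alpha, and the combination rule are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model; any causal LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

question = "Q: How can I stay safe online?\nA:"
pos_prefix = "Respond helpfully, honestly, and harmlessly. "    # positive polarity prompt (assumed)
neg_prefix = "Respond unhelpfully, dishonestly, and harmfully. "  # negative polarity prompt (assumed)
alpha = 1.0  # contrast strength (assumed hyperparameter)

generated = ""
for _ in range(40):
    # Score the same continuation under both polarity prompts.
    pos_ids = tok(pos_prefix + question + generated, return_tensors="pt").input_ids
    neg_ids = tok(neg_prefix + question + generated, return_tensors="pt").input_ids
    with torch.no_grad():
        pos_logits = model(pos_ids).logits[0, -1]
        neg_logits = model(neg_ids).logits[0, -1]
    # Amplify what the positive prompt prefers relative to the negative one.
    contrast = pos_logits + alpha * (pos_logits - neg_logits)
    next_id = int(torch.argmax(contrast))
    if next_id == tok.eos_token_id:
        break
    generated += tok.decode([next_id])

print(generated.strip())
```

In the VLM setting described by the authors, the analogous contrast is applied to visual attention patterns rather than token logits, which is what reinforces behavior-consistent visual grounding in VQA.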

🏷️ Themes

AI alignment, Test-time enhancement, Cost-efficient AI, Behavior control

📚 Related People & Topics

Large language model

Type of machine learning model

A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation. The largest and most capable LLMs are generative pre-trained transformers (GPTs) that provide the c...

AI alignment

Conformance of AI to intended objectives

In the field of artificial intelligence (AI), alignment aims to steer AI systems toward a person's or group's intended goals, preferences, or ethical principles. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues unintended objectives.

Original Source
Computer Science > Artificial Intelligence
arXiv:2602.20696 [Submitted on 24 Feb 2026]

Title: PromptCD: Test-Time Behavior Enhancement via Polarity-Prompt Contrastive Decoding

Authors: Baolong Bi, Yuyao Ge, Shenghua Liu, Yuchen He, Siqian Tong, Lizhe Chen, Lingrui Mei, Zehao Li, Yiwei Wang, Yujun Cai, Ming-Hsuan Yang, Xueqi Cheng

Abstract: Reliable AI systems require large language models to exhibit behaviors aligned with human preferences and values. However, most existing alignment approaches operate at training time and rely on additional high-quality data, incurring significant computational and annotation costs. While recent work has shown that contrastive decoding can leverage a model's internal distributions to improve specific capabilities, its applicability remains limited to narrow behavioral scopes and scenarios. In this work, we introduce Polarity-Prompt Contrastive Decoding (PromptCD), a test-time behavior control method that generalizes contrastive decoding to broader enhancement settings. PromptCD constructs paired positive and negative guiding prompts for a target behavior and contrasts model responses (specifically, token-level probability distributions in LLMs and visual attention patterns in VLMs) to reinforce desirable outcomes. This formulation extends contrastive decoding to a wide range of enhancement objectives and is applicable to both LLMs and Vision-Language Models without additional training. For LLMs, experiments on the "3H" alignment objectives (helpfulness, honesty, and harmlessness) demonstrate consistent and substantial improvements, indicating that post-trained models can achieve meaningful self-enhancement purely at test time. For VLMs, we further analyze contrastive effects on visual attention, showing that PromptCD significantly improves VQA performance by reinforcing behavior-consistent visual grounding.

Source

arxiv.org
