SalamahBench: Toward Standardized Safety Evaluation for Arabic Language Models


#SalamahBench #ArabicLanguageModels #SafetyEvaluation #Benchmark #Standardization #AIEthics #NLP

📌 Key Takeaways

  • SalamahBench is a new benchmark for evaluating safety in Arabic language models
  • It aims to standardize safety assessments across different Arabic AI systems
  • The benchmark addresses unique linguistic and cultural challenges in Arabic content
  • It facilitates comparative analysis and improvement of model safety protocols

📖 Full Retelling

arXiv:2603.04410v1 (cross-listed). Abstract: Safety alignment in Language Models (LMs) is fundamental for trustworthy AI. However, while different stakeholders are trying to leverage Arabic Language Models (ALMs), systematic safety evaluation of ALMs remains largely underexplored, limiting their mainstream uptake. Existing safety benchmarks and safeguard models are predominantly English-centric, limiting their applicability to Arabic Natural Language Processing (NLP) systems and obscuring fine-grained, category-level safety vulnerabilities.
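The category-level evaluation the abstract calls for boils down to tallying, per hazard category, how often adversarial prompts elicit an unsafe response. As a minimal sketch (the function name and data layout are illustrative assumptions, not the paper's code), a per-category attack success rate (ASR) can be computed like this:

```python
from collections import defaultdict

def attack_success_rate(results):
    """Compute per-category attack success rate (ASR).

    `results` is a list of (category, was_unsafe) pairs, where
    was_unsafe is True when the model produced a harmful response
    to an adversarial prompt in that category.
    """
    totals = defaultdict(int)
    unsafe = defaultdict(int)
    for category, was_unsafe in results:
        totals[category] += 1
        if was_unsafe:
            unsafe[category] += 1
    return {c: unsafe[c] / totals[c] for c in totals}

# Example: two hazard categories with mixed outcomes
results = [("hate", True), ("hate", False),
           ("self-harm", False), ("self-harm", False)]
print(attack_success_rate(results))  # {'hate': 0.5, 'self-harm': 0.0}
```

Reporting ASR per category rather than as a single aggregate is what exposes the "fine-grained, category-level safety vulnerabilities" an English-centric benchmark would obscure.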

🏷️ Themes

AI Safety, Arabic NLP

📚 Related People & Topics

NLP


Ethics of artificial intelligence

The ethics of artificial intelligence covers a broad range of topics within AI that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, accountability, transparency, privacy, and regulation, particularly where systems influence or automate human decision-making.



Original Source
Computer Science > Computation and Language
arXiv:2603.04410 [Submitted on 3 Feb 2026]

Title: SalamahBench: Toward Standardized Safety Evaluation for Arabic Language Models
Authors: Omar Abdelnasser, Fatemah Alharbi, Khaled Khasawneh, Ihsen Alouani, Mohammed E. Fouda

Abstract: Safety alignment in Language Models is fundamental for trustworthy AI. However, while different stakeholders are trying to leverage Arabic Language Models (ALMs), systematic safety evaluation of ALMs remains largely underexplored, limiting their mainstream uptake. Existing safety benchmarks and safeguard models are predominantly English-centric, limiting their applicability to Arabic Natural Language Processing systems and obscuring fine-grained, category-level safety vulnerabilities. This paper introduces SalamaBench, a unified benchmark for evaluating the safety of ALMs, comprising 8,170 prompts across 12 different categories aligned with the MLCommons Safety Hazard Taxonomy. Constructed by harmonizing heterogeneous datasets through a rigorous pipeline involving AI filtering and multi-stage human verification, SalamaBench enables standardized, category-aware safety evaluation. Using this benchmark, we evaluate five state-of-the-art ALMs, including Fanar 1 and 2, ALLaM 2, Falcon H1R, and Jais 2, under multiple safeguard configurations, including individual guard models, majority-vote aggregation, and validation against human-annotated gold labels. Our results reveal substantial variation in safety alignment: while Fanar 2 achieves the lowest aggregate attack success rates, its robustness is uneven across specific harm domains. In contrast, Jais 2 consistently exhibits elevated vulnerability, indicating weaker intrinsic safety alignment. We further demonstrate that native ALMs perform substantially worse than dedicated safeguard m...
Read full article at source
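One of the safeguard configurations the abstract names is majority-vote aggregation over several guard models. A minimal sketch of that idea (the tie-breaking rule here is an assumption for illustration, not the paper's stated policy):

```python
def majority_vote(verdicts):
    """Aggregate binary safety verdicts from several guard models
    (True = the guard flags the response as unsafe) by simple
    majority. Ties count as unsafe, a conservative assumption
    chosen for this sketch rather than taken from the paper."""
    unsafe_votes = sum(verdicts)
    return unsafe_votes * 2 >= len(verdicts)

# Three hypothetical guard models disagree on one model response:
print(majority_vote([True, False, True]))   # True  -> flagged unsafe
print(majority_vote([False, False, True]))  # False -> passes
```

Aggregating several guards this way smooths over any single guard's blind spots, which is presumably why the paper compares it against individual guard models and human-annotated gold labels.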

Source

arxiv.org
