# AI Bias
Latest news articles tagged with "AI Bias". Follow the timeline of events, related topics, and entities.
Articles (28)
- 🇺🇸 Framing Effects in Independent-Agent Large Language Models: A Cross-Family Behavioral Analysis [USA]
  arXiv:2603.19282v1 Announce Type: cross Abstract: In many real-world applications, large language models (LLMs) operate as independent agents without interaction, thereby limiting coordination. In th...
  Related: #Decision-Making
- 🇺🇸 Analysis Of Linguistic Stereotypes in Single and Multi-Agent Generative AI Architectures [USA]
  arXiv:2603.18729v1 Announce Type: new Abstract: Many works in the literature show that LLM outputs exhibit discriminatory behaviour, triggering stereotype-based inferences based on the dialect in whi...
  Related: #Linguistics
- 🇺🇸 To See or To Please: Uncovering Visual Sycophancy and Split Beliefs in VLMs [USA]
  arXiv:2603.18373v1 Announce Type: cross Abstract: When VLMs answer correctly, do they genuinely rely on visual information or exploit language shortcuts? We introduce the Tri-Layer Diagnostic Framewo...
  Related: #Model Reliability
- 🇺🇸 Measuring and Exploiting Confirmation Bias in LLM-Assisted Security Code Review [USA]
  arXiv:2603.18740v1 Announce Type: cross Abstract: Security code reviews increasingly rely on systems integrating Large Language Models (LLMs), ranging from interactive assistants to autonomous agents...
  Related: #Security Review
- 🇺🇸 When Names Change Verdicts: Intervention Consistency Reveals Systematic Bias in LLM Decision-Making [USA]
  arXiv:2603.18530v1 Announce Type: cross Abstract: Large language models (LLMs) are increasingly used for high-stakes decisions, yet their susceptibility to spurious features remains poorly characteri...
  Related: #Ethical AI
- 🇺🇸 Auditing Preferences for Brands and Cultures in LLMs [USA]
  arXiv:2603.18300v1 Announce Type: cross Abstract: Large language models (LLMs) based AI systems increasingly mediate what billions of people see, choose and buy. This creates an urgent need to quanti...
  Related: #Brand Perception
- 🇺🇸 Hidden Clones: Exposing and Fixing Family Bias in Vision-Language Model Ensembles [USA]
  arXiv:2603.17111v1 Announce Type: cross Abstract: Ensembling Vision-Language Models (VLMs) from different providers maximizes benchmark accuracy, yet models from the same architectural family share c...
  Related: #Model Ensembles
- 🇺🇸 Catching rationalization in the act: detecting motivated reasoning before and after CoT via activation probing [USA]
  arXiv:2603.17199v1 Announce Type: cross Abstract: Large language models (LLMs) can produce chains of thought (CoT) that do not accurately reflect the actual factors driving their answers. In multiple...
  Related: #Reasoning Detection
- 🇺🇸 When Generative Augmentation Hurts: A Benchmark Study of GAN and Diffusion Models for Bias Correction in AI Classification Systems [USA]
  arXiv:2603.16134v1 Announce Type: cross Abstract: Generative models are widely used to compensate for class imbalance in AI training pipelines, yet their failure modes under low-data conditions are p...
  Related: #Generative Models
- 🇺🇸 LLM BiasScope: A Real-Time Bias Analysis Platform for Comparative LLM Evaluation [USA]
  arXiv:2603.12522v1 Announce Type: cross Abstract: As large language models (LLMs) are deployed widely, detecting and understanding bias in their outputs is critical. We present LLM BiasScope, a web a...
  Related: #LLM Evaluation
- 🇺🇸 Diagnosing Retrieval Bias Under Multiple In-Context Knowledge Updates in Large Language Models [USA]
  arXiv:2603.12271v1 Announce Type: cross Abstract: LLMs are widely used in knowledge-intensive tasks where the same fact may be revised multiple times within context. Unlike prior work focusing on one...
  Related: #Knowledge Retrieval
- 🇺🇸 Do LLMs have a Gender (Entropy) Bias? [USA]
  arXiv:2505.20343v2 Announce Type: replace-cross Abstract: We investigate the existence and persistence of a specific type of gender bias in some of the popular LLMs and contribute a new benchmark dat...
  Related: #Gender Studies
- 🇺🇸 Do LLMs Share Human-Like Biases? Causal Reasoning Under Prior Knowledge, Irrelevant Context, and Varying Compute Budgets [USA]
  arXiv:2602.02983v2 Announce Type: replace Abstract: Large language models (LLMs) are increasingly used in domains where causal reasoning matters, yet it remains unclear whether their judgments reflec...
  Related: #Causal Reasoning
- 🇺🇸 Unmasking Biases and Reliability Concerns in Convolutional Neural Networks Analysis of Cancer Pathology Images [USA]
  arXiv:2603.12445v1 Announce Type: cross Abstract: Convolutional Neural Networks have shown promising effectiveness in identifying different types of cancer from radiographs. However, the opaque natur...
  Related: #Medical Reliability
- 🇺🇸 Google's AI Searches Love to Refer You Back to Google [USA]
  The company's generative AI search tools increasingly cite its own services, like Google Search and YouTube, over third-party publishers.
  Related: #Search Monopoly
- 🇺🇸 Locating Demographic Bias at the Attention-Head Level in CLIP's Vision Encoder [USA]
  arXiv:2603.11793v1 Announce Type: cross Abstract: Standard fairness audits of foundation models quantify that a model is biased, but not where inside the network the bias resides. We propose a mechan...
  Related: #Computer Vision
- 🇺🇸 Gender Bias in Generative AI-assisted Recruitment Processes [USA]
  arXiv:2603.11736v1 Announce Type: new Abstract: In recent years, generative artificial intelligence (GenAI) systems have assumed increasingly crucial roles in selection processes, personnel recruitme...
  Related: #Recruitment, #Gender Inequality
- 🇺🇸 BiasBusters: Uncovering and Mitigating Tool Selection Bias in Large Language Models [USA]
  arXiv:2510.00307v2 Announce Type: replace Abstract: Agents backed by large language models (LLMs) increasingly rely on external tools drawn from marketplaces where multiple providers offer functional...
  Related: #Tool Fairness
- 🇺🇸 Assessing Cognitive Biases in LLMs for Judicial Decision Support: Virtuous Victim and Halo Effects [USA]
  arXiv:2603.10016v1 Announce Type: cross Abstract: We investigate whether large language models (LLMs) display human-like cognitive biases, focusing on potential implications for assistance in judicia...
  Related: #Judicial AI
- 🇺🇸 Leveraging Wikidata for Geographically Informed Sociocultural Bias Dataset Creation: Application to Latin America [USA]
  arXiv:2603.10001v1 Announce Type: cross Abstract: Large Language Models (LLMs) exhibit inequalities with respect to various cultural contexts. Most prominent open-weights models are trained on Global...
  Related: #Geographic Data
- 🇺🇸 Common Sense vs. Morality: The Curious Case of Narrative Focus Bias in LLMs [USA]
  arXiv:2603.09434v1 Announce Type: cross Abstract: Large Language Models (LLMs) are increasingly deployed across diverse real-world applications and user communities. As such, it is crucial that these...
  Related: #Ethical AI
- 🇺🇸 Investigating Gender Stereotypes in Large Language Models via Social Determinants of Health [USA]
  arXiv:2603.09416v1 Announce Type: cross Abstract: Large Language Models (LLMs) excel in Natural Language Processing (NLP) tasks, but they often propagate biases embedded in their training data, which...
  Related: #Health Equity
- 🇺🇸 Towards a more efficient bias detection in financial language models [USA]
  arXiv:2603.08267v1 Announce Type: new Abstract: Bias in financial language models constitutes a major obstacle to their adoption in real-world applications. Detecting such bias is challenging, as it ...
  Related: #Financial Technology
- 🇺🇸 Mitigating Content Effects on Reasoning in Language Models through Fine-Grained Activation Steering [USA]
  arXiv:2505.12189v2 Announce Type: replace Abstract: Large language models (LLMs) exhibit reasoning biases, often conflating content plausibility with formal logical validity. This can lead to wrong i...
  Related: #Model Optimization
- 🇺🇸 Self-Attribution Bias: When AI Monitors Go Easy on Themselves [USA]
  arXiv:2603.04582v1 Announce Type: new Abstract: Agentic systems increasingly rely on language models to monitor their own behavior. For example, coding agents may self critique generated code for pul...
  Related: #Algorithmic Accountability
- 🇺🇸 Evaluating and Mitigating LLM-as-a-judge Bias in Communication Systems [USA]
  arXiv:2510.12462v2 Announce Type: replace Abstract: Large Language Models (LLMs) are increasingly being used to autonomously evaluate the quality of content in communication systems, e.g., to assess ...
  Related: #Communication Systems, #Machine Learning Ethics
- 🇬🇧 Police AI chief admits crime-fighting tech will have bias but vows to tackle it [United Kingdom]
  Exclusive: NCA's Alex Murray says he hopes new £115m police AI centre can limit unfairness found in tools
  Related: #Police Technology, #Criminal Justice Reform
- 🇺🇸 From Biased Chatbots to Biased Agents: Examining Role Assignment Effects on LLM Agent Robustness [USA]
  arXiv:2602.12285v1 Announce Type: cross Abstract: Large Language Models (LLMs) are increasingly deployed as autonomous agents capable of actions with real-world impacts beyond text generation. While ...
  Related: #Autonomous Agents, #Ethical AI
Key Entities (9)
- Large language model (7 articles)
- Attribution bias (1 article)
- Autonomous system (1 article)
- National Crime Agency (1 article)
- Common Sense (1 article)
- Ethics of artificial intelligence (1 article)
- CLIP (1 article)
- Google (1 article)
- Telecommunications (1 article)
About the topic: AI Bias
The topic "AI Bias" aggregates 28 news articles, currently from the United States and the United Kingdom.