What does the US military’s feud with Anthropic mean for AI used in war?


#US military #Anthropic #AI ethics #warfare #military AI #defense technology #artificial intelligence

📌 Key Takeaways

  • The US military is in a dispute with Anthropic over AI ethics and military applications.
  • The conflict highlights tensions between AI developers and defense sector demands.
  • Ethical concerns about AI in warfare are central to the disagreement.
  • The outcome could influence future military AI procurement and development policies.

📖 Full Retelling

Tech policy professor who served in the US air force explains how a feud between an AI startup and the US military illuminates ethical fault lines.

Anthropic’s ongoing fight with the Department of Defense over what safety restrictions (https://www.theguardian.com/us-news/2026/feb/26/anthropic-pentagon-claude) it can put on its artificial intelligence models has captivated the tech industry, acting as a test of how AI may be used in war and the government’s power to coerce companies to meet its demands.

🏷️ Themes

AI Ethics, Military Technology

📚 Related People & Topics

Ethics of artificial intelligence

The ethics of artificial intelligence covers a broad range of topics within AI that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, accountability, transparency, privacy, and regulation, particularly where systems influence or automate human decision-mak...


United States Armed Forces

Combined military forces of the United States

The United States Armed Forces are the military forces of the United States. U.S. federal law names six armed forces: the Army, Marine Corps, Navy, Air Force, Space Force, and Coast Guard, each assigned their role and domain. From their inception during the American Revolutionary War, the Army and...

Anthropic

American artificial intelligence research company

Anthropic PBC is an American artificial intelligence (AI) safety and research company headquartered in San Francisco, California. Established as a public-benefit corporation, the organization focuses on the development of frontier artificial intelligence systems with a primary e...


Entity Intersection Graph

Connections for Ethics of artificial intelligence:

🌐 Pentagon 15 shared
🏢 Anthropic 15 shared
🏢 OpenAI 13 shared
👤 Dario Amodei 6 shared
🌐 National security 4 shared


Deep Analysis

Why It Matters

This news matters because it highlights the growing tension between AI ethics and military applications, potentially affecting how AI is developed and deployed in warfare. It impacts AI companies facing ethical dilemmas about military contracts, defense contractors seeking advanced AI capabilities, and policymakers balancing national security with ethical AI governance. The outcome could set precedents for how commercial AI firms engage with military clients globally.

Context & Background

  • Anthropic is an AI safety company founded by former OpenAI researchers with a focus on developing safe and ethical AI systems.
  • The US military has been increasingly investing in AI for applications like autonomous weapons, intelligence analysis, and decision support systems.
  • There is ongoing global debate about ethical AI use in warfare, including concerns about autonomous weapons systems and AI-driven targeting.
  • Previous AI companies like Google have faced internal protests over military contracts, such as Project Maven in 2018.
  • The US Department of Defense has established the Joint Artificial Intelligence Center (JAIC) to accelerate AI adoption across military branches.

What Happens Next

Anthropic will likely face increased scrutiny from both military partners and ethical AI advocates, potentially leading to policy clarifications about their military engagement. The Department of Defense may seek alternative AI partners if Anthropic restricts military access. Congressional hearings on AI ethics in defense could be scheduled within the next 6-12 months, and NATO may continue to refine its guidelines on AI use in warfare.

Frequently Asked Questions

What specific military applications might be affected by this feud?

Applications like autonomous drone targeting, battlefield decision support systems, and intelligence analysis tools could be affected if Anthropic restricts access to its AI models. This could delay military AI adoption or force development of alternative systems.

How does this compare to previous AI-military controversies?

This resembles Google's 2018 Project Maven controversy but involves a company specifically founded on AI safety principles. Unlike Google, Anthropic's core mission emphasizes ethical AI, making military partnerships more fundamentally contradictory to their stated values.

What are the potential national security implications?

If leading AI companies refuse military work, the US could fall behind adversaries who face fewer ethical constraints. However, ethical guardrails might prevent dangerous AI escalation and maintain international norms around autonomous weapons.

How might this affect other AI companies?

Other AI firms will watch this closely as it sets precedents for military engagement. Companies may face pressure to clarify their military policies, and investors might reconsider funding companies with restrictive military policies.

What international dimensions are involved?

This feeds into ongoing UN discussions about lethal autonomous weapons systems. US decisions influence global norms, and allies like the UK and Australia are developing similar military AI capabilities while facing ethical questions.

Original Source
Interview: What does the US military’s feud with Anthropic mean for AI used in war?
Nick Robins-Early

Tech policy professor who served in US air force explains how a feud between an AI startup and the US military illuminates ethical fault lines.

Anthropic’s ongoing fight with the Department of Defense over what safety restrictions it can put on its artificial intelligence models has captivated the tech industry, acting as a test of how AI may be used in war and the government’s power to coerce companies to meet its demands. The negotiations have revolved around Anthropic’s refusal to allow the federal government to use its Claude AI for domestic mass surveillance or autonomous weapons systems, but the dispute also reflects the messy nature of what happens when tech companies have their products integrated into conflict. The Pentagon this week declared Anthropic a supply chain risk for its refusal to agree to the government’s terms, while Anthropic has vowed to challenge the designation in court.

The Guardian spoke with Sarah Kreps, a professor and director of the Tech Policy Institute at Cornell University who previously served in the United States air force, about how the feud has played out.

You’ve worked for a while on problems around “dual use technology”. What happens when there’s a consumer technology that also gets used for classified or military purposes?

I’ve thought about this a lot because I was in the military and I was on the side of the military that was developing and acquiring new technologies. We were always getting criticism about why it was taking so long, and now watching what’s happening I realize why it takes so long. What you would develop for classified and military contexts is very different from what Anthropic has developed for when I use Claude. The challenge for the military is that these technologies are so useful they can’t wait until a military grade version is available.
They need to act quickly because of how valuable these tools are...
Read full article at source

Source

theguardian.com
