BravenNow
The Pentagon formally labels Anthropic a supply-chain risk
| USA | technology | ✓ Verified - theverge.com

#Pentagon #Anthropic #AISupplyChain #ClaudeAI #DefenseSecretaryHegseth #AutonomousWeapons #MassSurveillance #SupplyChainRisk

📌 Key Takeaways

  • Pentagon designates Anthropic as 'supply-chain risk' over AI usage policies
  • First American company to receive a designation typically reserved for foreign entities
  • Dispute centers on Anthropic's refusal to allow Claude for autonomous weapons and mass surveillance
  • Defense contractors now barred from using Claude in government products
  • Six-month deadline set for removal of Claude from government systems

📖 Full Retelling

On March 5, 2026, US Defense Secretary Pete Hegseth and the Pentagon formally labeled Anthropic, an American AI company, a 'supply-chain risk', escalating their ongoing dispute over the company's refusal to allow its AI model Claude to be used for autonomous lethal weapons and mass surveillance. The unprecedented designation, first reported by The Wall Street Journal, bars defense contractors from working with the government if they use Claude in their products. It marks the first time an American company has publicly received a label typically reserved for foreign entities with ties to adversarial governments.

The conflict follows weeks of failed negotiations, public ultimatums, and lawsuit threats. The Pentagon argued that Anthropic's demands for control over government usage would place too much power in private hands, while Anthropic remained unconvinced that the government would respect its ethical boundaries. After Anthropic announced last Thursday that it would not comply with the Pentagon's demands, Secretary Hegseth made good on his earlier threats, setting a six-month deadline for the removal of Claude from government systems. Meeting that deadline is particularly challenging given reports that Claude-powered intelligence tools played a major role in the recent US military operation that killed Iran's Supreme Leader, Ayatollah Ali Khamenei.

🏷️ Themes

AI Ethics, Military Technology, Government Regulation

📚 Related People & Topics

Anthropic

American artificial intelligence research company

# Anthropic PBC **Anthropic PBC** is an American artificial intelligence (AI) safety and research company headquartered in San Francisco, California. Established as a public-benefit corporation, the organization focuses on the development of frontier artificial intelligence systems with a primary e...

View Profile → Wikipedia ↗

Claude (language model)

Large language model developed by Anthropic

Claude is a series of large language models developed by Anthropic. The first model was released in March 2023, and the latest, Claude Opus 4.6, in February 2026.

View Profile → Wikipedia ↗

Pentagon

Headquarters of the United States Department of Defense

The Pentagon is the headquarters building of the United States Department of Defense, located in Arlington County, Virginia. The term is also used metonymically to refer to the Department of Defense and its leadership.

View Profile → Wikipedia ↗

Entity Intersection Graph

Connections for Anthropic:

🌐 Pentagon 32 shared
🌐 Artificial intelligence 9 shared
🌐 Military applications of artificial intelligence 7 shared
🌐 Ethics of artificial intelligence 7 shared
🌐 Claude (language model) 6 shared
View full profile

Mentioned Entities

Anthropic

American artificial intelligence research company

Claude (language model)

Large language model developed by Anthropic

Pentagon

Headquarters of the United States Department of Defense

Original Source
AI Policy | Politics

The Pentagon formally labels Anthropic a supply-chain risk

Pete Hegseth had been threatening to punish the AI company for not loosening its acceptable use policy. Now, it's official.

by Tina Nguyen | Mar 5, 2026, 11:02 PM UTC

After weeks of failed negotiations, public ultimatums, and lawsuit threats, the Defense Department has formally labeled Anthropic a "supply-chain risk", escalating its fight with the AI company over its acceptable use policies and potentially bringing the dispute to court. The decision, first reported by The Wall Street Journal on Thursday, citing one source familiar with the matter, will bar defense contractors from working with the government if they use Claude, Anthropic's AI program, in their products. Though the designation is typically applied to foreign companies with ties to adversarial governments, this is the first time that an American company has publicly received the label.

At the heart of the conflict is Anthropic's refusal to allow the Pentagon to use Claude for two purposes: autonomous lethal weapons without human oversight, and mass surveillance. The Pentagon has argued that Anthropic's demands for control over government usage would place too much power in the hands of a private company, while Anthropic was not reassured that the government would respect its red lines.
The negotiations grew ugly, however, as the Pentagon increasingly thr...
Read full article at source

Source

theverge.com
