BravenNow
It’s official: The Pentagon has labeled Anthropic a supply chain risk
| USA | technology | ✓ Verified - techcrunch.com

#Anthropic #Pentagon #SupplyChainRisk #AIEthics #MilitaryAI #DarioAmodei #AutonomousWeapons #ClaudeAI

📌 Key Takeaways

  • Pentagon officially labeled Anthropic a supply chain risk, typically reserved for foreign adversaries
  • Anthropic CEO refused military use of AI for mass surveillance or fully autonomous weapons
  • Pentagon continues using Anthropic's Claude AI in Iran campaign despite the designation
  • The move has drawn criticism from tech industry employees and former government advisors

📖 Full Retelling

The Department of Defense has officially labeled Anthropic a supply chain risk following weeks of conflict after the AI firm's CEO, Dario Amodei, refused to allow military use of its systems for mass surveillance of Americans or for fully autonomous weapons. The designation, typically reserved for foreign adversaries, requires any company or agency working with the Pentagon to certify that it does not use Anthropic's models. The unprecedented move threatens to disrupt both Anthropic's business and the Pentagon's own operations: Anthropic has been the only frontier AI lab with classified-ready systems, making it particularly valuable for military applications. The Department has argued that its use of AI should not be limited by a private contractor, while Amodei has called the Pentagon's actions "retaliatory and punitive," reportedly stating that his refusal to praise or donate to President Trump contributed to the dispute.

In the midst of the controversy, the U.S. military continues to rely on Anthropic's Claude AI in its Iran campaign, where American forces use AI tools to quickly manage operational data. Claude is one of the main tools installed in Palantir's Maven Smart System, which military operators in the Middle East depend on. The result is a contradictory situation in which the Pentagon has labeled Anthropic a risk while continuing to use its technology in critical operations.

The designation has sparked significant backlash. Former Trump White House AI advisor Dean Ball called it a "death rattle" of the American republic, arguing that the government has abandoned strategic clarity in favor of "thuggish" tribalism. Hundreds of employees from OpenAI and Google have urged the DOD to withdraw the designation and called on Congress to push back, while also urging their own leaders to keep refusing the DOD's demands for domestic mass surveillance and autonomous weapons.

Notably, OpenAI has struck its own deal with the Department allowing military use of its AI systems for "all lawful purposes," though some of its employees have expressed concern about the ambiguous phrasing.

🏷️ Themes

Government regulation, AI ethics, Military technology

📚 Related People & Topics

Ethics of artificial intelligence

The ethics of artificial intelligence covers a broad range of topics within AI that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, accountability, transparency, privacy, and regulation, particularly where systems influence or automate human decision-mak...

View Profile → Wikipedia ↗
Anthropic

American artificial intelligence research company

# Anthropic PBC **Anthropic PBC** is an American artificial intelligence (AI) safety and research company headquartered in San Francisco, California. Established as a public-benefit corporation, the organization focuses on the development of frontier artificial intelligence systems with a primary e...

View Profile → Wikipedia ↗
Dario Amodei

American entrepreneur (born 1983)

Dario Amodei (born 1983) is an American artificial intelligence researcher and entrepreneur. He is the co-founder and CEO of Anthropic, the company behind the large language model series Claude. He was previously the vice president of research at OpenAI. In his capacity as Anthropic's CEO, he often ...

View Profile → Wikipedia ↗

Claude (language model)

Large language model developed by Anthropic

Claude is a series of large language models developed by Anthropic. The first model was released in March 2023, and the latest, Claude Opus 4.6, in February 2026.

View Profile → Wikipedia ↗
Pentagon

Headquarters of the United States Department of Defense

The Pentagon is the headquarters building of the United States Department of Defense, located in Arlington County, Virginia, across the Potomac River from Washington, D.C. The name is commonly used as a metonym for the Department of Defense and its leadership.

View Profile → Wikipedia ↗

Entity Intersection Graph

Connections for Ethics of artificial intelligence:

🏢 Anthropic 15 shared
🌐 Pentagon 14 shared
🏢 OpenAI 13 shared
👤 Dario Amodei 5 shared
🌐 National security 4 shared
View full profile


Original Source
The Department of Defense has officially notified Anthropic leadership that the company and its products have been designated a supply chain risk, Bloomberg reports, citing a senior department official. The designation comes after weeks of conflict between the AI lab and the DOD. Anthropic CEO Dario Amodei has refused to allow the military to use its AI systems for mass surveillance of Americans or to power fully autonomous weapons with no humans assisting in the targeting or firing decisions. The Department has argued that its use of AI should not be limited by a private contractor.

Supply chain risk designations are typically reserved for foreign adversaries. The label requires any company or agency that does work with the Pentagon to certify that it doesn't use Anthropic's models. The Pentagon's finding threatens to disrupt both the company and its own operations. Anthropic has been the only frontier AI lab with classified-ready systems.

The U.S. military is currently relying on Claude in its Iran campaign, where American forces are using AI tools to quickly manage the data for their operations. Claude is one of the main tools installed in Palantir's Maven Smart System, which military operators in the Middle East rely on, according to Bloomberg.

Labeling Anthropic a supply chain risk over this disagreement is an unprecedented move from the Department, several critics say. Dean Ball, a former Trump White House AI advisor, has referred to the designation as a "death rattle" of the American republic, arguing government has abandoned strategic clarity and respect in favor of "thuggish" tribalism that treats domestic innovators worse than foreign adversaries. Hundreds of employees from OpenAI and Google have urged the DOD to withdraw its designation and called on Congress to push back on what could be perceived as an inappropriate use of authority against an American technology company.
They have also urged their leaders to stand together to continue to refuse the DOD...

Source

techcrunch.com
