Anthropic Supply-Chain Risk Label Should Stay In Place, Appeals Court Says
| USA | technology | βœ“ Verified - wired.com


#Anthropic #Claude AI #supply-chain risk #military procurement #Ninth Circuit Court #national security #AI regulation #conflicting rulings

πŸ“Œ Key Takeaways

  • Federal appeals court upholds requirement for Anthropic to label Claude AI as military supply-chain risk
  • Ruling creates conflicting legal directives with another court's decision on the same issue
  • Case centers on whether software-based AI should face hardware-focused security restrictions
  • Decision impacts Pentagon's ability to procure and utilize commercial AI technology

πŸ“– Full Retelling

A federal appeals court ruled on Tuesday that Anthropic must continue labeling its Claude AI system as a potential supply-chain risk to the U.S. military, creating conflicting legal directives for the artificial intelligence company. The decision by the Ninth Circuit Court of Appeals upholds a lower court's injunction requiring Anthropic to maintain the designation while litigation continues, directly affecting how the Department of Defense can procure and use the company's technology.

The legal battle centers on whether Anthropic's Claude AI, which can process and generate text based on vast datasets, poses national security risks if integrated into military systems. Government attorneys argued that because Anthropic uses some hardware components manufactured overseas, its AI systems could contain vulnerabilities exploitable by foreign adversaries. The supply-chain risk label, typically applied to telecommunications equipment from companies like Huawei, represents a significant barrier to military contracts and has sparked intense debate about applying traditional security frameworks to cutting-edge software-based AI.

This appellate ruling directly conflicts with a separate federal court decision from earlier this year that found the labeling requirement overly broad when applied to pure software systems. Anthropic now faces the unprecedented situation of operating under contradictory court orders in different jurisdictions, complicating both its business operations and the Pentagon's AI procurement strategy. The company has maintained that its AI systems undergo rigorous security testing and that the hardware-based supply-chain framework doesn't appropriately address software vulnerabilities.

The case highlights growing tensions between rapid AI innovation and established national security protocols, with implications for how the U.S. government regulates and adopts emerging technologies. Legal experts suggest the conflicting rulings may eventually require Supreme Court intervention to establish consistent standards for AI security classification, while defense officials continue grappling with how to safely integrate commercial AI capabilities into military operations without compromising security.

🏷️ Themes

Artificial Intelligence, National Security, Legal Regulation

πŸ“š Related People & Topics

United States Court of Appeals for the Ninth Circuit

Federal appellate court for the western U.S.

The United States Court of Appeals for the Ninth Circuit (in case citations, 9th Cir.) is the U.S. federal court of appeals headquartered in San Francisco, California, and has appellate jurisdiction over the U.S. district courts for the following federal judicial districts: District of Alaska Distr...
Anthropic

American artificial intelligence research company

Anthropic PBC is an American artificial intelligence (AI) safety and research company headquartered in San Francisco, California. Established as a public-benefit corporation, the organization focuses on the development of frontier artificial intelligence systems with a primary e...

Regulation of artificial intelligence

Guidelines and laws to regulate AI

Regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI). The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, including for international organizations without direct ...

Claude (language model)

Large language model developed by Anthropic

Claude is a series of large language models developed by Anthropic. The first model was released in March 2023, and the latest, Claude Opus 4.6, in February 2026.


Original Source
The AI company now faces conflicting rulings in its fight over how Claude can be used by the US military.

Source

wired.com
