Federal appeals court upholds requirement for Anthropic to label Claude AI as military supply-chain risk
Ruling creates conflicting legal directives with another court's decision on the same issue
Case centers on whether software-based AI should face hardware-focused security restrictions
Decision impacts Pentagon's ability to procure and utilize commercial AI technology
📰 Full Retelling
A federal appeals court ruled on Tuesday that Anthropic must continue labeling its Claude AI system as a potential supply-chain risk to the U.S. military, creating conflicting legal directives for the artificial intelligence company. The decision by the Ninth Circuit Court of Appeals upholds a lower court's injunction requiring Anthropic to maintain the designation while litigation continues, directly impacting how the Department of Defense can procure and utilize the company's technology.
The legal battle centers on whether Anthropic's Claude AI, which can process and generate text based on vast datasets, poses national security risks if integrated into military systems. Government attorneys argued that because Anthropic uses some hardware components manufactured overseas, its AI systems could contain vulnerabilities exploitable by foreign adversaries. The supply-chain risk label, typically applied to telecommunications equipment from companies like Huawei, represents a significant barrier to military contracts and has sparked intense debate about applying traditional security frameworks to cutting-edge software-based AI.
This appellate ruling directly conflicts with a separate federal court decision from earlier this year, which found the labeling requirement overly broad when applied to purely software-based systems. Anthropic now faces the unprecedented situation of operating under contradictory court orders in different jurisdictions, complicating both its business operations and the Pentagon's AI procurement strategy. The company has maintained that its AI systems undergo rigorous security testing and that the hardware-oriented supply-chain framework does not appropriately address software vulnerabilities.
The case highlights growing tension between rapid AI innovation and established national security protocols, with implications for how the U.S. government regulates and adopts emerging technologies. Legal experts suggest the conflicting rulings may ultimately require Supreme Court intervention to establish consistent standards for AI security classification. Meanwhile, defense officials continue grappling with how to integrate commercial AI capabilities into military operations without compromising security.
🏷️ Themes
Artificial Intelligence, National Security, Legal Regulation