
Anthropic loses appeals court bid to temporarily block Pentagon blacklisting

#Anthropic #DepartmentOfDefense #Blacklisting #SupplyChainRisk #ClaudeAI #FederalAppealsCourt #PreliminaryInjunction #Lawsuit

📌 Key Takeaways

  • A federal appeals court denied Anthropic's request to temporarily pause its Pentagon blacklisting while its lawsuit proceeds.
  • The Pentagon designated Anthropic a supply chain risk in March, restricting defense contractors from using its Claude AI models.
  • The court ruled the government's interest in securing AI technology during military conflict outweighed Anthropic's primarily financial harms.
  • A separate federal court in San Francisco granted Anthropic a preliminary injunction, calling the government's action 'illegal First Amendment retaliation.'

📖 Full Retelling

A federal appeals court in Washington, D.C., on Wednesday denied artificial intelligence startup Anthropic's request for a temporary stay in its lawsuit against the U.S. Department of Defense, refusing to pause the Pentagon's blacklisting of the company while the legal case proceeds. The Pentagon had declared Anthropic a supply chain risk in early March, alleging that use of the company's Claude AI models threatens national security and requiring defense contractors to certify they do not use the technology. Anthropic sought the stay to prevent further financial and reputational damage as it challenges the designation.

The court's ruling acknowledged that Anthropic would likely suffer "some degree of irreparable harm" without a stay but concluded that the company's interests were "primarily financial in nature." In its decision, the court stated that the equitable balance favored the government, weighing "a relatively contained risk of financial harm to a single private company" against "judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict." Anthropic had argued that the blacklisting was unconstitutional retaliation, arbitrary, and capricious, but the court noted the company failed to show its speech had been "chilled" during the litigation.

This legal battle is unfolding across two separate court jurisdictions due to the Pentagon's use of two distinct legal designations. The denial from the D.C. appeals court pertains to the designation under 41 U.S.C. § 4713. Meanwhile, in a separate but related case in San Francisco federal court concerning the 10 U.S.C. § 3252 designation, a judge late last month granted Anthropic a preliminary injunction, barring the Trump administration from enforcing a ban on Claude. In that ruling, the judge characterized the government's position as "classic illegal First Amendment retaliation," creating a complex and contradictory legal landscape for the high-stakes dispute over AI, national security, and corporate rights.

🏷️ Themes

National Security, Artificial Intelligence, Legal Dispute, Government Regulation

📚 Related People & Topics

Anthropic

American artificial intelligence research company

Anthropic PBC is an American artificial intelligence (AI) safety and research company headquartered in San Francisco, California. Established as a public-benefit corporation, the organization focuses on the development of frontier artificial intelligence systems with a primary e...


Claude (language model)

Large language model developed by Anthropic

Claude is a series of large language models developed by Anthropic. The first model was released in March 2023, and the latest, Claude Opus 4.6, in February 2026.

United States Department of Defense

Executive department of the US federal government

The United States Department of Defense (DoD), also referred to as the Department of War (DOW), is an executive department of the U.S. federal government charged with coordinating and supervising the U.S. Armed Forces—the Army, Navy, Marines, Air Force, Space Force, and, for some purposes, the Coast...


Entity Intersection Graph

Connections for Anthropic:

🌐 Pentagon 32 shared
🌐 Artificial intelligence 9 shared
🌐 Military applications of artificial intelligence 7 shared
🌐 Ethics of artificial intelligence 7 shared
🌐 Claude (language model) 6 shared

Deep Analysis

Why It Matters

This legal dispute highlights the growing tension between government national security measures and the rights of private AI companies. The conflicting rulings create significant uncertainty for defense contractors who must navigate contradictory judicial orders regarding the use of specific technologies. The outcome of this case will likely define the limits of executive power in regulating emerging AI technologies during times of military conflict.

Context & Background

  • Anthropic is a major AI startup known for its Claude AI models and its focus on AI safety.
  • The Pentagon blacklisted Anthropic in early March under two distinct legal designations, alleging the company poses a supply chain risk to national security.
  • The legal battle is split across two jurisdictions: the D.C. case involves 41 U.S.C. § 4713, while the San Francisco case involves 10 U.S.C. § 3252.
  • Anthropic argues the blacklisting is unconstitutional retaliation and arbitrary, while the government maintains it is a necessary security measure.
  • A San Francisco judge recently characterized the government's position in the parallel case as 'classic illegal First Amendment retaliation.'
  • The D.C. court's opinion referenced the 'Department of War' and an 'active military conflict' when justifying the priority of government interests.

What Happens Next

Anthropic will likely continue to pursue the merits of its lawsuit in the D.C. Circuit to overturn the blacklisting, while the preliminary injunction remains active in the San Francisco jurisdiction. The split between the appellate and district courts increases the probability that the case will eventually escalate to the Supreme Court to resolve the conflicting interpretations of federal law. Defense contractors will face immediate compliance challenges as they attempt to adhere to the conflicting court orders.

Frequently Asked Questions

Why did the D.C. appeals court deny the stay?

The court concluded that the government's interest in securing vital AI technology during an active military conflict outweighed the financial harm to Anthropic.

How does the San Francisco ruling differ from the D.C. ruling?

The D.C. court refused to pause the blacklist, whereas a San Francisco judge granted a preliminary injunction, characterizing the ban as illegal First Amendment retaliation.

What specific legal statutes are involved in this case?

The D.C. case involves 41 U.S.C. § 4713, while the San Francisco case involves 10 U.S.C. § 3252, creating a complex dual-jurisdiction battle.

What is the Pentagon's justification for blacklisting Anthropic?

The Pentagon declared Anthropic a supply chain risk, alleging that the use of its Claude AI models poses a threat to national security.

Original Source
A federal appeals court in Washington, D.C., on Wednesday denied Anthropic's request for a stay in its lawsuit against the Department of Defense. The artificial intelligence startup sought the action to pause its blacklisting by the Pentagon and prevent further monetary and reputational harm as the case unfolds. The ruling comes after a judge in San Francisco federal court late last month, in a separate case, granted Anthropic a preliminary injunction that bars the Trump administration from enforcing a ban on the use of Claude. The DOD declared Anthropic a supply chain risk in early March, meaning that use of the company's technology purportedly threatens U.S. national security. The label requires defense contractors to certify that they don't use Anthropic's Claude artificial intelligence models in their work with the military. "In our view, the equitable balance here cuts in favor of the government," the appeals court said in its decision. "On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict." Anthropic had asked the appeals court to review the Pentagon's determination and argued that it's a form of retaliation that is unconstitutional, arbitrary, capricious and not in accord with procedures required by law, according to a filing. In the ruling on Wednesday, the court acknowledged that Anthropic "will likely suffer some degree of irreparable harm absent a stay," but that the company's interests "seem primarily financial in nature." While the company claimed the DOD was standing in the way of its right to free speech, "Anthropic does not show that its speech has been chilled during the pendency of this litigation," the order said. The DOD relied on two distinct designations – 10 U.S.C. § 3252 and 41 U.S.C. § 4713 – to justify the supply chain risk action, and they have to be ch...

Source

cnbc.com
