The Pentagon-Anthropic dispute is a test of control



Should private companies be able to set boundaries around the AI systems we integrate into our lives?


Dean Ball
Published March 29 2026

The writer is a senior fellow at the Foundation for American Innovation and was lead staff writer of the Trump administration’s AI Action Plan.

On March 4, the US Department of Defense took an unprecedented step against an American company: designating the frontier AI start-up Anthropic a “supply chain risk”. Typically, this designation is applied to technology from foreign-adversary countries. In this instance, it was invoked over a contract dispute.

The conflict, which was largely blocked by a judge in California last week, centred on the question of where control over AI should rest. Neither side had the answer quite right.

Trump administration officials sought to renegotiate the terms of the Pentagon’s contract to use Anthropic’s Claude — the only large language model certified for use in classified US military contexts — not because they intended to violate the company’s red line on lethal autonomous weapons and mass surveillance, they say, but because they believe only US law should limit the military’s use of technology. The principle is reasonable enough.
But, as Judge Rita Li...
Read the full article at the source: ft.com
