Trump administration blacklisted Anthropic from Pentagon contracts over refusal to use AI for surveillance or autonomous weapons
Anthropic stands to lose a Pentagon contract worth up to $200 million and will challenge the decision in court
AI companies' resistance to binding regulation has left them vulnerable to government intervention
Anthropic recently abandoned its core safety pledge not to release powerful AI systems without ensuring they won't cause harm
📖 Full Retelling
On Friday afternoon, the Trump administration severed ties with Anthropic, the San Francisco AI company founded in 2021 by Dario Amodei and other former OpenAI researchers. Defense Secretary Pete Hegseth invoked a national security law, one designed to counter foreign supply chain threats, to blacklist the company from doing business with the Pentagon after Amodei refused to allow its technology to be used for mass surveillance of U.S. citizens or for autonomous armed drones that could select and kill targets without human input. The unprecedented action means Anthropic stands to lose a contract worth up to $200 million and to be barred from working with other defense contractors, after President Trump posted on Truth Social directing every federal agency to "immediately cease all use of Anthropic technology." The company has announced it will challenge the Pentagon in court, calling the supply-chain-risk designation legally unsound and "never before publicly applied to an American company."

This development comes amid growing concern about AI governance and the relationship between AI companies and national security interests. Max Tegmark, the Swedish-American physicist and MIT professor who founded the Future of Life Institute in 2014, views the Anthropic crisis as the result of the company's own choices. Tegmark, who helped organize a 2023 open letter signed by more than 33,000 people, including Elon Musk, calling for a pause in advanced AI development, argues that Anthropic, like its rivals, has sown the seeds of its own predicament by resisting binding regulation. Earlier this week, Anthropic dropped the central tenet of its own safety pledge: the promise not to release increasingly powerful AI systems until confident they wouldn't cause harm. Tegmark contends that, in the absence of regulatory frameworks, there is little to protect these AI companies from government intervention.
🏷️ Themes
AI governance, National security, Corporate responsibility, Regulatory policy
# Anthropic PBC
**Anthropic PBC** is an American artificial intelligence (AI) safety and research company headquartered in San Francisco, California. Established as a public-benefit corporation, the organization focuses on the development of frontier artificial intelligence systems with a primary e...
# Regulation of artificial intelligence
**Regulation of artificial intelligence** is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI). The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, including for international organizations without direct ...
# AI safety
**AI safety** is an interdisciplinary field focused on preventing accidents, misuse, or other harmful consequences arising from artificial intelligence (AI) systems. It encompasses AI alignment (which aims to ensure AI systems behave as intended), monitoring AI systems for risks, and enhancing their rob...
# National security
**National security**, or national defence (national defense in American English), is the security and defence of a sovereign state, including its citizens, economy, and institutions, which is regarded as a duty of government. Originally conceived as protection against military attack, national security...
Friday afternoon, just as this interview was getting underway, a news alert flashed across my computer screen: the Trump administration was severing ties with Anthropic, the San Francisco AI company founded in 2021 by Dario Amodei and other former OpenAI researchers who left over safety concerns. Defense Secretary Pete Hegseth had invoked a national security law — one designed to counter foreign supply chain threats — to blacklist the company from doing business with the Pentagon after Amodei refused to allow Anthropic’s tech to be used for mass surveillance of U.S. citizens or for autonomous armed drones that could select and kill targets without human input. It was a jaw-dropping sequence.

Anthropic is now set to lose a contract worth up to $200 million, as well as be barred from working with other defense contractors after President Trump posted on Truth Social directing every federal agency to “immediately cease all use of Anthropic technology.” (Anthropic has since said it will challenge the Pentagon in court, calling the supply-chain-risk designation legally unsound and “never before publicly applied to an American company.”)

Max Tegmark has spent the better part of a decade warning that the race to build ever-more-powerful AI systems is outpacing the world’s ability to govern them. The Swedish-American physicist and professor at MIT founded the Future of Life Institute in 2014. In 2023, he famously helped organize an open letter — ultimately signed by more than 33,000 people, including Elon Musk — calling for a pause in advanced AI development.

His view of the Anthropic crisis is unsparing: the company, like its rivals, has sown the seeds of its own predicament. Tegmark’s argument doesn’t begin with the Pentagon but with a decision made years earlier — a choice, shared across the industry, to resist binding regulation. Anthropic, OpenAI, Google DeepMind and others have long promised to govern themselves responsibly.
Earlier this week, Anthropic even dropped the central tenet of its own safety pledge — the promise not to release increasingly powerful AI systems until confident they wouldn’t cause harm ...