BravenNow
The trap Anthropic built for itself
| USA | technology | ✓ Verified - techcrunch.com


#Anthropic #AI regulation #National security #Pentagon contract #AI safety #Trump administration #Dario Amodei #Max Tegmark

📌 Key Takeaways

  • Trump administration blacklisted Anthropic from Pentagon contracts over refusal to use AI for surveillance or autonomous weapons
  • Anthropic faces losing up to $200 million in contracts and will challenge the decision in court
  • AI companies' resistance to binding regulation has left them vulnerable to government intervention
  • Anthropic recently abandoned its core safety pledge not to release powerful AI systems without ensuring they won't cause harm

📖 Full Retelling

On Friday afternoon, the Trump administration severed ties with Anthropic, the San Francisco AI company founded in 2021 by Dario Amodei and other former OpenAI researchers, invoking a national security law to blacklist it from doing business with the Pentagon after Amodei refused to allow the company's technology to be used for mass surveillance of U.S. citizens or for autonomous armed drones that could select and kill targets without human input.

The unprecedented action means Anthropic is set to lose a contract worth up to $200 million and be barred from working with other defense contractors after President Trump posted on Truth Social directing every federal agency to "immediately cease all use of Anthropic technology." The company has announced it will challenge the Pentagon in court, calling the supply-chain-risk designation legally unsound and "never before publicly applied to an American company."

This dramatic development comes amid growing concerns about AI governance and the relationship between AI companies and national security interests. Max Tegmark, the Swedish-American physicist and MIT professor who founded the Future of Life Institute in 2014, views the Anthropic crisis as the result of the company's own choices. Tegmark, who helped organize a 2023 open letter signed by over 33,000 people including Elon Musk calling for a pause in advanced AI development, argues that Anthropic, like its rivals, has sown the seeds of its own predicament by resisting binding regulation. Earlier this week, Anthropic dropped the central tenet of its own safety pledge: the promise not to release increasingly powerful AI systems until confident they wouldn't cause harm. Tegmark contends that in the absence of regulatory frameworks, there is little to protect these AI companies from government intervention.

🏷️ Themes

AI governance, National security, Corporate responsibility, Regulatory policy

📚 Related People & Topics

Anthropic

American artificial intelligence research company

Anthropic PBC is an American artificial intelligence (AI) safety and research company headquartered in San Francisco, California. Established as a public-benefit corporation, the organization focuses on the development of frontier artificial intelligence systems with a primary e...


Regulation of artificial intelligence

Guidelines and laws to regulate AI

Regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI). The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, including for international organizations without direct ...


AI safety

Artificial intelligence field of study

AI safety is an interdisciplinary field focused on preventing accidents, misuse, or other harmful consequences arising from artificial intelligence (AI) systems. It encompasses AI alignment (which aims to ensure AI systems behave as intended), monitoring AI systems for risks, and enhancing their rob...


National security

Security and defence of a nation state

National security, or national defence (national defense in American English), is the security and defence of a sovereign state, including its citizens, economy, and institutions, which is regarded as a duty of government. Originally conceived as protection against military attack, national security...


Entity Intersection Graph

Connections for Anthropic:

🌐 Pentagon 32 shared
🌐 Artificial intelligence 9 shared
🌐 Military applications of artificial intelligence 7 shared
🌐 Ethics of artificial intelligence 7 shared
🌐 Claude (language model) 6 shared


Deep Analysis

Why It Matters

The Anthropic situation highlights the risks of relying on self-regulation in the rapidly advancing AI field and the consequences companies face when voluntary commitments collide with government demands. It also underscores the need for binding oversight frameworks that can both prevent misuse of powerful AI technologies and protect companies from arbitrary government intervention.

Context & Background

  • Anthropic is an AI company founded by former OpenAI researchers.
  • There's a growing debate about the safety and regulation of advanced AI.
  • Concerns exist regarding the potential for AI misuse, including mass surveillance and autonomous weapons.

What Happens Next

Anthropic is expected to challenge the Pentagon's decision in court. The incident may prompt further scrutiny and potential regulation of AI companies' safety practices by the government. It could also accelerate discussions about international cooperation on AI safety standards.

Frequently Asked Questions

Why did the Trump administration blacklist Anthropic?

The Trump administration blacklisted Anthropic for refusing to allow its technology to be used for mass surveillance and autonomous weapons, invoking a national security law.

What is the significance of Anthropic dropping its safety pledge?

Anthropic's decision to drop its safety pledge is significant because it demonstrates a shift away from prioritizing safety and towards commercial interests, contributing to a regulatory vacuum in the AI industry.

Does the race with China justify unrestricted AI development?

The argument that competition with China necessitates unfettered AI development is countered by the fact that China is itself actively banning certain types of AI, suggesting that Beijing shares concerns about control and safety.

Original Source
Friday afternoon, just as this interview was getting underway, a news alert flashed across my computer screen: the Trump administration was severing ties with Anthropic, the San Francisco AI company founded in 2021 by Dario Amodei and other former OpenAI researchers who left over safety concerns. Defense Secretary Pete Hegseth had invoked a national security law — one designed to counter foreign supply chain threats — to blacklist the company from doing business with the Pentagon after Amodei refused to allow Anthropic’s tech to be used for mass surveillance of U.S. citizens or for autonomous armed drones that could select and kill targets without human input. It was a jaw-dropping sequence. Anthropic is now set to lose a contract worth up to $200 million, as well as be barred from working with other defense contractors after President Trump posted on Truth Social directing every federal agency to “immediately cease all use of Anthropic technology.” (Anthropic has since said it will challenge the Pentagon in court, calling the supply-chain-risk designation legally unsound and “never before publicly applied to an American company.”)

Max Tegmark has spent the better part of a decade warning that the race to build ever-more-powerful AI systems is outpacing the world’s ability to govern them. The Swedish-American physicist and professor at MIT founded the Future of Life Institute in 2014. In 2023, he famously helped organize an open letter — ultimately signed by more than 33,000 people, including Elon Musk — calling for a pause in advanced AI development. His view of the Anthropic crisis is unsparing: the company, like its rivals, has sown the seeds of its own predicament. Tegmark’s argument doesn’t begin with the Pentagon but with a decision made years earlier — a choice, shared across the industry, to resist binding regulation. Anthropic, OpenAI, Google DeepMind and others have long promised to govern themselves responsibly.
Earlier this week, Anthropic even dropped ...

Source

techcrunch.com
