AI vs. the Pentagon: killer robots, mass surveillance, and red lines
| USA | technology | ✓ Verified - theverge.com
#Anthropic #Pentagon #Lethal autonomous weapons #Mass surveillance #Artificial intelligence ethics #Pete Hegseth #Defense contracts

📌 Key Takeaways

  • Anthropic is refusing Pentagon demands for "any lawful use" of AI, specifically blocking mass surveillance and autonomous weapons.
  • The Pentagon threatens to label Anthropic a "supply chain risk" if it does not comply by the deadline.
  • Rivals OpenAI and xAI have reportedly agreed to the military's terms, unlike Anthropic.
  • CEO Dario Amodei maintains the company's ethical red lines despite the financial and political pressure.
  • Tech workers are expressing disillusionment regarding their companies' involvement in military applications.

📖 Full Retelling

AI firm Anthropic is engaged in a high-stakes standoff with the Pentagon in Washington as of February 27, 2026, refusing to sign military contracts that demand "any lawful use" of its technology, specifically rejecting applications for mass surveillance and fully autonomous lethal weapons. The Department of Defense, led by Secretary Pete Hegseth and CTO Emil Michael, is threatening to designate the $380 billion startup as a "supply chain risk"—a label typically reserved for national security threats—if it does not remove its ethical guardrails by a looming deadline. This conflict highlights a widening rift in the tech industry, as rivals OpenAI and xAI have reportedly already agreed to the Pentagon’s terms, while Anthropic holds firm against utilizing its models for offensive capabilities that lack human oversight.

The core of the dispute revolves around the military's insistence on broad contract language that would grant the government carte blanche to deploy AI in warfare and domestic intelligence. Pentagon officials have specifically requested the capability for systems to identify and eliminate targets without human intervention, a demand Anthropic CEO Dario Amodei has publicly rejected, stating that "threats do not change our position." Despite a high-level meeting at the White House where an ultimatum was issued, the company maintains that it cannot in good conscience accede to requests that violate its red lines.

In retaliation, the Pentagon is pushing to classify Anthropic as a security risk, a move that could sever its access to lucrative government contracts worth hundreds of billions of dollars. While Anthropic resists, its competitors appear to have capitulated to government pressure, with OpenAI and xAI reportedly agreeing to the new terms, though OpenAI may be seeking to renegotiate specific clauses.
This divergence has sparked anxiety and disillusionment among tech workers, many of whom entered the industry expecting to improve lives rather than facilitate warfare or intrusive surveillance. Employees at companies with defense contracts have expressed feeling betrayed, noting a shift in corporate culture from innovation to enabling state-sponsored violence. The situation underscores the growing tension between financial incentives, national security demands, and the ethical responsibilities of artificial intelligence developers.

🏷️ Themes

Artificial Intelligence, Military Technology, Corporate Ethics, Government Contracts

📚 Related People & Topics

Ethics of artificial intelligence

The ethics of artificial intelligence covers a broad range of topics within AI that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, accountability, transparency, privacy, and regulation, particularly where systems influence or automate human decision-mak...

Mass surveillance

Intricate surveillance of an entire population or a substantial fraction of one

Mass surveillance is the intricate surveillance of an entire or a substantial fraction of a population in order to monitor that group of citizens. The surveillance is often carried out by local and federal governments or governmental organizations, but it may also be carried out by corporations (eit...

Anthropic

American artificial intelligence research company

Anthropic PBC is an American artificial intelligence (AI) safety and research company headquartered in San Francisco, California. Established as a public-benefit corporation, the organization focuses on the development of frontier artificial intelligence systems with a primary e...

Lethal autonomous weapon

Autonomous military technology system

Lethal autonomous weapons (LAWs) are a type of military drone or military robot which are autonomous in that they can independently search for and engage targets based on programmed constraints and descriptions. As of 2025, most military drones and military robots are not truly autonomous. LAWs are ...

Pentagon

Headquarters of the United States Department of Defense

The Pentagon is the headquarters building of the United States Department of Defense, located in Arlington County, Virginia, across the Potomac River from Washington, D.C. The name is also commonly used as a metonym for the department itself and for US military leadership.


Entity Intersection Graph

Connections for Ethics of artificial intelligence:

🏢 Anthropic 9 shared
🌐 Pentagon 9 shared
🏢 OpenAI 7 shared
👤 Dario Amodei 4 shared
🌐 National security 3 shared

Deep Analysis

Why It Matters

This standoff represents a critical juncture for the integration of artificial intelligence into national defense, setting a precedent for how tech companies balance ethical obligations against immense government pressure. It directly affects the future development and deployment of autonomous weapons systems and mass surveillance capabilities, potentially defining the legal and moral boundaries of AI in warfare. The outcome will significantly influence the financial stability and market positioning of major AI firms, as being labeled a supply chain risk could cripple Anthropic's growth while rewarding compliant competitors. Furthermore, this conflict highlights a deepening cultural crisis within the tech workforce regarding the moral implications of their labor.

Context & Background

  • Anthropic was founded by former OpenAI members with a specific focus on AI safety and 'constitutional AI' to ensure systems remain helpful, harmless, and honest.
  • The Department of Defense has historically sought to integrate AI into operations through initiatives like Project Maven, which sparked internal protests at Google regarding drone strike analysis.
  • The concept of 'Lethal Autonomous Weapons Systems' (LAWS) has been a subject of international debate at the United Nations, with humanitarian organizations calling for a preemptive ban.
  • The 'supply chain risk' designation is a powerful regulatory tool typically reserved for foreign adversaries like Huawei, making its potential use against a domestic startup highly unusual and aggressive.
  • OpenAI recently modified its usage policies to remove explicit bans on military and warfare applications, signaling a broader industry shift toward accepting defense contracts.

What Happens Next

If Anthropic does not comply by the looming deadline, the Pentagon is expected to formally designate the company as a 'supply chain risk,' effectively banning its models from federal use and cementing the dominance of OpenAI and xAI in the defense sector. This move will likely trigger legal challenges from Anthropic regarding government overreach, while simultaneously galvanizing employee activism across the tech industry. Additionally, Congress may intervene to hold hearings on the regulation of AI in warfare, potentially leading to legislation that establishes clearer boundaries for autonomous lethal weapons.

Frequently Asked Questions

What specific contract terms is Anthropic refusing to accept?

Anthropic is rejecting the Pentagon's demand for 'any lawful use' language, which would allow the military to deploy AI for mass surveillance and fully autonomous lethal weapons without human intervention.

What consequences does the Pentagon threaten if Anthropic does not comply?

The Department of Defense threatens to designate Anthropic as a 'supply chain risk,' a label that would sever the company's access to lucrative government contracts worth hundreds of billions of dollars.

How are Anthropic's competitors reacting to the Pentagon's demands?

Rivals OpenAI and xAI have reportedly agreed to the Pentagon's terms, though OpenAI may be seeking to renegotiate specific clauses, creating a significant competitive divide in the industry.

Why are tech workers expressing anxiety regarding these contracts?

Many employees feel betrayed by their companies' shift from innovation to facilitating state-sponsored violence, having entered the industry with the expectation of improving lives rather than enabling warfare.

Original Source
AI vs. the Pentagon: killer robots, mass surveillance, and red lines
By Stevie Bonifield | Updated Feb 27, 2026, 4:18 PM UTC

Can AI firms set limits on how and where the military uses their models? Anthropic is in heated negotiations with the Pentagon after refusing to comply with new military contract terms that would require it to loosen the guardrails on its AI models, allowing for “any lawful use,” even mass surveillance of Americans and fully autonomous lethal weapons. Pentagon CTO Emil Michael is pushing for Anthropic to be designated a “supply chain risk” if it doesn’t comply, a label usually only given to national security threats. Anthropic’s rivals OpenAI and xAI have reportedly agreed to the new terms, but even after a White House meeting with Defense Secretary Pete Hegseth, Anthropic CEO Dario Amodei is still refusing to cross his company’s red line, stating that “threats do not change our position: we cannot in good conscience accede to their request.” Follow along here for the latest updates on the clash between AI companies and the Pentagon.

Today, 59 minutes ago | Hayden Field
We don’t have to have unsupervised killer robots
Image: Cath Virginia / The Verge

It’s the day of the Pentagon’s looming ultimatum for Anthropic: allow the US military unchecked access to its technology, including for mass surveillance and fully autonomous lethal weapons, or be designated a “supply chain risk” and potentially lose hundreds of billions of dollars in contracts. Amid the intensifying public statements and threats, tech workers across the industry are looking at their own companies’ government and military contracts, wondering what kind of future they’re helping to build.

While the Department of Defense has spent weeks negotiating with Anthropic over removing its guardrails, including allowing the US military to use Anthropic’s AI to kill targets with no human oversight, OpenAI and xAI had reportedly already agreed to such terms, although OpenAI is re...
Read full article at source

Source

theverge.com
