BravenNow
OpenAI strikes deal with Pentagon to use tech in ‘classified network’

#OpenAI #Pentagon #ArtificialIntelligence #MilitaryTechnology #Anthropic #AutonomousWeapons #MassSurveillance #DonaldTrump

📌 Key Takeaways

  • OpenAI secured Pentagon deal for AI use in classified networks after Anthropic refused military demands
  • Agreement prohibits use of OpenAI technology for domestic mass surveillance and autonomous weapons
  • President Trump ordered agencies to stop using Anthropic, calling them 'left-wing nut jobs'
  • Human rights advocates raised concerns about unregulated military AI use in conflict zones
  • Deal represents significant shift in AI company-military relations

📖 Full Retelling

OpenAI CEO Sam Altman announced on February 28, 2026, that his company had reached a deal with the United States Department of Defense to use artificial intelligence technology in a ‘classified network’, following ethical concerns raised by previous contractor Anthropic and amid tensions with the Trump administration. Altman made the announcement in a statement on social media platform X late on Friday, emphasising that the Pentagon had demonstrated ‘deep respect for safety’ in negotiations.

The agreement affirms that OpenAI's technology will not be used for ‘domestic mass surveillance’ or to develop ‘autonomous weapon systems’, with humans retaining ‘responsibility for the use of force’. This stance contrasts with that of Anthropic, whose CEO Dario Amodei had said his company could not ‘in good conscience accede’ to certain Pentagon demands, prompting President Trump to order federal agencies to stop using Anthropic technology immediately. Trump referred to Anthropic as ‘left-wing nut jobs’ on his Truth Social platform and gave the Pentagon six months to phase out Anthropic technology embedded in military systems.

The developments come amid growing concern about the military use of AI, particularly after reports emerged that Anthropic's Claude AI software had been used by the US military in the abduction of Venezuelan President Nicolas Maduro in January. Human rights advocates have increasingly voiced worries about unregulated AI deployment by militaries, citing examples such as the Israeli army's reported use of AI systems including ‘Lavender’, ‘The Gospel’, and ‘Where's Daddy?’ in its operations in Gaza.

🏷️ Themes

AI Ethics, Military Technology, Corporate-Military Relations, Human Rights

📚 Related People & Topics

OpenAI

Artificial intelligence research organization

**OpenAI** is an American artificial intelligence (AI) research organization headquartered in San Francisco, California. The organization operates under a unique hybrid structure, comprising the non-profit **OpenAI, Inc.** and its controlled for-profit subsidiary, **OpenAI Global, LLC** (a...
Anthropic

American artificial intelligence research company

**Anthropic PBC** is an American artificial intelligence (AI) safety and research company headquartered in San Francisco, California. Established as a public-benefit corporation, the organization focuses on the development of frontier artificial intelligence systems with a primary e...
Artificial intelligence

Intelligence of machines

**Artificial Intelligence (AI)** is a specialized field of computer science dedicated to the development and study of computational systems capable of performing tasks typically associated with human intelligence. These tasks include learning, reasoning, problem-solvi...
Pentagon

Headquarters of the United States Department of Defense

The Pentagon is the headquarters building of the United States Department of Defense, located in Arlington, Virginia. The name is commonly used metonymically to refer to the Department of Defense and US military leadership.

Entity Intersection Graph

Connections for OpenAI:

🌐 Artificial intelligence 9 shared
🌐 ChatGPT 8 shared
👤 Wall Street 4 shared
🏢 Nvidia 4 shared
🏢 Anthropic 3 shared

Deep Analysis

Why It Matters

This deal signifies a major development in the integration of AI technology within the US military, particularly concerning ethical considerations and safeguards. It highlights the ongoing debate surrounding the responsible development and deployment of AI, especially in sensitive areas like national security and surveillance. The agreement addresses concerns about misuse of AI for domestic surveillance and autonomous weapons.

Context & Background

  • Anthropic, an AI safety company, had previously raised ethical concerns about the Pentagon's demands for its AI technology.
  • US President Trump ordered federal agencies to stop using Anthropic's technology.
  • Concerns persist about the unregulated use of AI by militaries globally, including in the context of the Israeli-Palestinian conflict.

What Happens Next

OpenAI will now provide its technology for use within a classified Pentagon network, subject to the agreed-upon safety principles. The long-term implications involve monitoring how this AI is applied and ensuring adherence to the stated restrictions against mass surveillance and autonomous weapons systems. Further scrutiny from human rights advocates is expected.

Frequently Asked Questions

What are the key safeguards in place?

OpenAI has assured that its technology will not be used for domestic mass surveillance or autonomous weapon systems, with humans retaining responsibility for the use of force.

Why did Anthropic refuse to work with the Pentagon?

Anthropic refused due to concerns about potential misuse of their AI technology for domestic surveillance and autonomous weapons, citing ethical considerations.

What is the significance of President Trump's order?

President Trump's order reflects a broader political debate about the safety and ethical implications of AI in military applications. It also indicates a possible shift in policy regarding AI contractors.

What are the ethical concerns surrounding AI in warfare?

Ethical concerns include the potential for AI to be used for mass surveillance, autonomous killing, and the lack of accountability when AI systems make errors or cause harm.

Original Source
News | Weapons

OpenAI strikes deal with Pentagon to use tech in ‘classified network’

Sam Altman claims his technology will not be used by US military for ‘domestic mass surveillance’ or ‘autonomous weapons’.

By Lyndal Rowlands | Published On 28 Feb 2026

OpenAI CEO Sam Altman said his company has reached a deal with the United States Department of Defense after its previous contractor Anthropic voiced ethical concerns about the military's use of its artificial intelligence technology. In a statement shared on X late on Friday, Altman said OpenAI made the deal after the Defense Department demonstrated its “deep respect for safety”.

Altman said the Pentagon agreed with his company's principles that OpenAI's technology would not be used for “domestic mass surveillance” or for “autonomous weapon systems”, affirming that humans would take “responsibility for the use of force”. “We remain committed to serve all of humanity as best we can,” Altman said, adding that “the world is a complicated, messy, and sometimes dangerous place.”

The announcement came hours after US President Donald Trump ordered federal agencies to stop using Anthropic, whose CEO Dario Amodei had said his company could not “in good conscience accede” to certain demands from the Pentagon. It has also been reported that Anthropic's Claude AI software was used by the US military to aid in its abduction of Venezuelan President Nicolas Maduro in January.
Anthropic said it was refusing to remove safeguards that prev...
Read full article at source

Source

aljazeera.com
