BravenNow
OpenAI shares more details about its agreement with the Pentagon

#OpenAI #Pentagon #AISafeguards #SamAltman #Anthropic #ExecutiveOrder12333 #MilitaryAI #ClassifiedEnvironments

📌 Key Takeaways

  • OpenAI reached a rushed Pentagon deal after Anthropic's negotiations failed
  • OpenAI published safeguards against mass surveillance, autonomous weapons, and high-stakes automated decisions
  • Critics question whether OpenAI's agreement truly prevents problematic AI uses
  • Altman admitted the deal was rushed but defended it as a de-escalation effort

📖 Full Retelling

OpenAI CEO Sam Altman admitted that the company's agreement with the Pentagon was rushed. The deal came together after negotiations between Anthropic and the Department of Defense fell through, President Trump directed federal agencies to stop using Anthropic's technology, and Secretary of Defense Pete Hegseth designated the AI company as a supply-chain risk.

The agreement, which allows OpenAI's models to be deployed in classified environments, came under immediate scrutiny. Anthropic had previously drawn red lines around the use of its technology in fully autonomous weapons or mass domestic surveillance, and Altman claimed OpenAI had the same restrictions, which raised an obvious question: why could OpenAI reach an agreement that Anthropic could not, particularly given the controversial nature of military AI applications?

In response to the backlash, OpenAI published a blog post outlining its approach and specifying three areas where its models cannot be used: mass domestic surveillance, autonomous weapon systems, and high-stakes automated decisions such as 'social credit' systems. The company emphasized its 'more expansive, multi-layered approach' to safety, claiming it retains full discretion over its safety stack, deploys via cloud with cleared personnel in the loop, and has strong contractual protections in addition to existing U.S. law.

However, critics such as Techdirt's Mike Masnick challenged these assurances, noting that the deal's reference to Executive Order 12333 could still leave room for domestic surveillance, since the NSA has used that order to collect communications by tapping international lines. Altman later acknowledged on social media that the deal had been rushed and had drawn significant backlash, including Anthropic's Claude overtaking ChatGPT in Apple's App Store, but defended it as an attempt to de-escalate tensions between the military and AI companies.

🏷️ Themes

AI Ethics, Military Technology, Corporate Policy

📚 Related People & Topics

OpenAI

Artificial intelligence research organization

# OpenAI **OpenAI** is an American artificial intelligence (AI) research organization headquartered in San Francisco, California. The organization operates under a unique hybrid structure, comprising the non-profit **OpenAI, Inc.** and its controlled for-profit subsidiary, **OpenAI Global, LLC** (a...

View Profile → Wikipedia ↗
Anthropic

American artificial intelligence research company

# Anthropic PBC **Anthropic PBC** is an American artificial intelligence (AI) safety and research company headquartered in San Francisco, California. Established as a public-benefit corporation, the organization focuses on the development of frontier artificial intelligence systems with a primary e...

View Profile → Wikipedia ↗
Sam Altman

American entrepreneur and investor (born 1985)

Samuel Harris Altman (born April 22, 1985) is an American businessman and entrepreneur who has served as the chief executive officer (CEO) of the artificial intelligence research organization OpenAI since 2019. Having overseen the successful launch of ChatGPT in 2022, he is widely considered to be o...

View Profile → Wikipedia ↗
Pentagon

Headquarters of the United States Department of Defense

The Pentagon is the headquarters building of the United States Department of Defense, located in Arlington County, Virginia, across the Potomac River from Washington, D.C. As a metonym, "the Pentagon" is commonly used to refer to the Department of Defense itself.

View Profile → Wikipedia ↗

Entity Intersection Graph

Connections for OpenAI:

🌐 ChatGPT 9 shared
🌐 Artificial intelligence 5 shared
🌐 AI safety 5 shared
🌐 Regulation of artificial intelligence 4 shared
🌐 OpenClaw 4 shared
View full profile

Mentioned Entities

OpenAI

Artificial intelligence research organization

Anthropic

American artificial intelligence research company

Sam Altman

American entrepreneur and investor (born 1985)

Pentagon

Headquarters of the United States Department of Defense

Original Source
By CEO Sam Altman’s own admission, OpenAI’s deal with the Department of Defense was “definitely rushed,” and “the optics don’t look good.” After negotiations between Anthropic and the Pentagon fell through on Friday, President Donald Trump directed federal agencies to stop using Anthropic’s technology after a six-month transition period, and Secretary of Defense Pete Hegseth said he was designating the AI company as a supply-chain risk. Then, OpenAI quickly announced that it had reached a deal of its own for models to be deployed in classified environments. With Anthropic saying it was drawing red lines around the use of its technology in fully autonomous weapons or mass domestic surveillance, and Altman saying OpenAI had the same red lines, there were some obvious questions: Was OpenAI being honest about its safeguards? Why was it able to reach a deal while Anthropic was not? So as OpenAI executives defended the agreement on social media, the company also published a blog post outlining its approach. In fact, the post pointed to three areas where it said OpenAI’s models cannot be used: mass domestic surveillance, autonomous weapon systems, and “high-stakes automated decisions (e.g. systems such as ‘social credit’).” The company said that in contrast to other AI companies that have “reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments,” OpenAI’s agreement protects its red lines “through a more expansive, multi-layered approach.” “We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections,” the blog said. “This is all in addition to the strong existing protections in U.S. law.”
Read full article at source

Source

techcrunch.com
