BravenNow
OpenAI Had Banned Military Use. The Pentagon Tested Its Models Through Microsoft Anyway
| USA | technology | ✓ Verified - wired.com


#OpenAI #Pentagon #Microsoft #MilitaryAI #AzureOpenAI #NationalSecurity #AIEthics #DefenseDepartment

📌 Key Takeaways

  • Pentagon tested OpenAI models through Microsoft before lifting military ban
  • OpenAI employees confused about policy applicability to Microsoft's implementation
  • OpenAI updated policy in January 2024 to remove military ban
  • Company's growing involvement with national security divided employees
  • CEO expressed interest in selling AI models to NATO

📖 Full Retelling

In 2023, the US Department of Defense began experimenting with Microsoft's Azure OpenAI service, a version of OpenAI's technology offered by the tech giant, before the ChatGPT-maker lifted its explicit prohibition on military applications, according to sources familiar with the matter. At the time, OpenAI's usage policy explicitly banned military access to its AI models, yet employees saw Pentagon officials visiting the company's San Francisco offices that same year. Microsoft, OpenAI's largest investor with broad rights to commercialize the startup's technology, had already been contracting with the Department of Defense for decades, creating a pathway for military access to OpenAI's models even as the startup maintained its anti-military stance.

The apparent contradiction between OpenAI's public policy and the Pentagon's access to its technology through Microsoft created confusion and concern within the company. Sources say many OpenAI employees were uncertain whether the company's usage policies applied to Microsoft's implementation of its models; some were wary of associating with military applications, while others were simply unsure where the boundaries lay. Microsoft has clarified that the Azure OpenAI Service is governed by Microsoft's terms of service, not OpenAI's policies. The service was made available to the US government in 2023, though it was not approved for 'top secret' government workloads until 2025.

In January 2024, OpenAI updated its policies to remove the blanket ban on military use, with some employees reportedly learning of the change through media reports rather than internal communications. The policy shift marked the beginning of OpenAI's growing involvement with national security applications, culminating in a December 2024 partnership with Anduril to develop AI systems for 'national security missions.'

The move divided employees. Some worried that the company's models were too unreliable for battlefield applications, while others believed the partnership demonstrated responsible handling of military relationships. The controversy intensified when OpenAI signed a deal with the Pentagon that appeared to contradict CEO Sam Altman's stated support for 'red lines' against mass surveillance and lethal autonomous weapons development.

🏷️ Themes

AI Ethics, Military Technology, Corporate Policy, National Security

📚 Related People & Topics

OpenAI


Artificial intelligence research organization

# OpenAI **OpenAI** is an American artificial intelligence (AI) research organization headquartered in San Francisco, California. The organization operates under a unique hybrid structure, comprising the non-profit **OpenAI, Inc.** and its controlled for-profit subsidiary, **OpenAI Global, LLC** (a...

View Profile → Wikipedia ↗

Military applications of artificial intelligence

Artificial intelligence (AI) has many applications in warfare, including in communications, intelligence, and munitions control. Warfare which is algorithmic or controlled by artificial intelligence, with little to no human decision-making, is called hyperwar, a term coined by Amir Husain and John R...

View Profile → Wikipedia ↗

Microsoft

American multinational technology megacorporation

Microsoft Corporation is an American multinational technology conglomerate headquartered in Redmond, Washington. Founded in 1975, the company became influential in the rise of personal computers through software like Windows, and has since expanded to Internet services, cloud computing, artificial i...

View Profile → Wikipedia ↗

Entity Intersection Graph

Connections for OpenAI:

🌐 ChatGPT 9 shared
🌐 Artificial intelligence 5 shared
🌐 AI safety 5 shared
🌐 Regulation of artificial intelligence 4 shared
🌐 OpenClaw 4 shared
View full profile

Original Source
Maxwell Zeff | Business | Mar 5, 2026 5:00 PM

OpenAI Had Banned Military Use. The Pentagon Tested Its Models Through Microsoft Anyway

Sources allege the Defense Department experimented with Microsoft's version of OpenAI technology before the ChatGPT-maker lifted its prohibition on military applications.

OpenAI CEO Sam Altman is still in the hot seat this week after his company signed a deal with the US military. OpenAI employees have criticized the move, which came after Anthropic's roughly $200 million contract with the Pentagon imploded, and asked Altman to release more information about the agreement. Altman admitted it looked "sloppy" in a social media post. While this incident has become a major news story, it may just be the latest and most public example of OpenAI creating vague policies around how the US military can access its AI.

In 2023, OpenAI's usage policy explicitly banned the military from accessing its AI models. But some OpenAI employees discovered the Pentagon had already started experimenting with Azure OpenAI, a version of OpenAI's models offered by Microsoft, two sources familiar with the matter said. At the time, Microsoft had been contracting with the Department of Defense for decades. It was also OpenAI's largest investor, and had broad license to commercialize the startup's technology.

That same year, OpenAI employees saw Pentagon officials walking through the company's San Francisco offices, the sources said. They spoke on the condition of anonymity as they aren't licensed to comment on private company matters. Some OpenAI employees were wary about associating with the Pentagon, while others were simply confused about what OpenAI's usage policies meant. Did the policy apply to Microsoft?
While sources tell WIRED it was not clear to most employees at the time, spokespeople from OpenAI and Microsoft say Azure OpenAI products are not, and were not, subject to OpenAI’s...
Read full article at source

Source

wired.com
