Sam Altman admits OpenAI can’t control Pentagon’s use of AI


CEO’s claims come amid increased scrutiny of the US military’s use of the technology and ethics concerns from AI workers.

Original Source
OpenAI’s CEO, Sam Altman, told employees on Tuesday that his company does not control how the Pentagon uses its artificial intelligence products in military operations.

Altman’s claims about OpenAI’s lack of input come amid increased scrutiny of how the military uses AI in war, and ethics concerns from AI workers over how their technology will be deployed.

“You do not get to make operational decisions,” Altman told employees, according to reports by Bloomberg and CNBC. “So maybe you think the Iran strike was good and the Venezuela invasion was bad. You don’t get to weigh in on that,” Altman reportedly said.

The AI industry has been mired in heated discussions and acrimonious negotiations in recent weeks as the Pentagon has demanded AI companies remove safety guardrails on their models to allow a broader range of military applications. AI-enabled systems have reportedly already been used in the US military’s operation to seize Venezuelan leader Nicolás Maduro and in targeting decisions in its war against Iran.

Anthropic, OpenAI’s rival and maker of the Claude chatbot, last week refused a deal with the Pentagon over concerns its model could be used for domestic mass surveillance or fully autonomous weapons. Pete Hegseth, the US defense secretary, declared the company a “supply-chain risk” as a result, a designation never before used against a US company and one that could cause significant financial harm if formally enacted.

On the same day that Hegseth vowed punitive measures against Anthropic, the Pentagon also announced a deal with OpenAI that was seemingly intended to replace the use of Claude in military applications.
The timing of the deal and concerns that OpenAI had agreed to cross ethical lines that Anthropic ...

Source

theguardian.com
