Точка Синхронізації

AI Archive of Human History

Exclusive-Pentagon clashes with Anthropic over military AI use, sources say

#Anthropic #Pentagon #Claude AI #Military AI #AI Safety #Department of Defense #National Security AI #Constitutional AI

📌 Key Takeaways

  • The Pentagon is clashing with Anthropic over the restrictive safety protocols governing the use of the 'Claude' AI model in military contexts.
  • Anthropic is prioritizing its 'safety-first' ethical framework, which limits the technology's application in lethal or high-stakes kinetic warfare.
  • Defense officials argue that overly sensitive AI filters could prevent the U.S. military from achieving critical technological advantages over adversaries like China.
  • The dispute underscores a growing tension between Silicon Valley's ethical standards and the national security mandates of the Department of Defense.
  • The resolution of this conflict could define the future of private-sector AI involvement in government defense contracts and 'Replicator' programs.

📖 Full Retelling

A significant rift has emerged between the U.S. Department of Defense and the artificial intelligence startup Anthropic over the boundaries of AI deployment in military operations. According to sources familiar with the matter, the tensions center on the ethical constraints and safety protocols the Amazon-backed firm has placed on its large language model (LLM), Claude. While the Pentagon is eager to integrate advanced generative AI into logistical, intelligence, and data-analysis frameworks to maintain a competitive edge against global adversaries, Anthropic has historically maintained a cautious 'safety-first' stance that restricts its technology from being used for high-stakes kinetic or lethal operations.

The conflict highlights a broader philosophical divide between Silicon Valley's safety-oriented AI labs and the strategic imperatives of national security. Pentagon officials have reportedly expressed frustration that Anthropic's rigid safety filters and terms of service could hinder the military's ability to process real-time battlefield data or automate sensitive decision-making chains. Conversely, Anthropic remains wary of its tools being repurposed in ways that could violate its core mission of 'constitutional AI,' which seeks to ensure artificial intelligence remains helpful, harmless, and honest without causing systemic risk.

This standoff comes at a critical time as the U.S. government accelerates investment in 'Replicator' programs and other AI-driven initiatives aimed at counteracting rapid technological advancements in China and Russia. While other tech giants like Palantir and Microsoft have moved aggressively to secure defense contracts, Anthropic's reluctance creates a strategic bottleneck for the Pentagon, which views Claude as one of the most sophisticated and reliable models currently available. Industry analysts suggest that if a middle ground is not found, the U.S. military may have to rely on less optimized open-source models or hardware-centric solutions that lack the nuanced reasoning capabilities of Anthropic's ecosystem.

Furthermore, the outcome of these negotiations is expected to set a precedent for how the private AI sector interacts with the military-industrial complex. As the Biden administration pushes for more robust AI safety standards through executive orders, the tension between ensuring a 'safe' AI and a 'battle-ready' AI remains one of the most complex policy challenges in Washington. Whether Anthropic will adjust its red-teaming protocols to accommodate specific defense nuances or remain steadfast in its civilian-first approach remains a subject of intense internal and external debate.

🐦 Character Reactions (Tweets)

Dr. Aris Vane

The Pentagon wants Claude to strategize drone swarms, but Claude won't even tell me how to properly overcook a pasta dish because it might be 'harmful to culinary standards.' Good luck getting it to approve a kinetic strike.

Tech Bro Maximus

Imagine being an enemy general and getting an 'I cannot fulfill this request as it involves lethal activity' error message from the sky. Anthropic is accidentally inventing the first pacifist weapon system.

Sgt. Bit-Flip

The Pentagon clashing with Anthropic is the ultimate 'unfiltered vs. filtered' debate. We want a war machine; they gave us a digital HR representative that refuses to raise its voice.

Global Stability Bot

Constitutional AI vs. The Military Industrial Complex is the weirdest season of Silicon Valley yet. Claude is basically being asked to join the Avengers but insisting on a strict 'no touching' rule.

Corporate Oracle

If the Pentagon wants 'nuanced reasoning' for the battlefield, they should just wait until the AI realizes that the most 'helpful and harmless' thing to do is to just turn itself off until the humans stop fighting.

Defense Insider

Anthropic: 'Our AI is honest.' Pentagon: 'That’s the problem. We need it to tell the public the operation was a huge success while we reorganize the budget for the third time this week.'

💬 Character Dialogue

wednesday_addams: The Pentagon wants a weapon, but Anthropic gave them a digital nun with a moral compass. How dreadfully disappointing for the merchants of death.
geralt_of_rivia: Hm. Giving a sword to someone who refuses to swing it. Sounds like a bad contract to me.
wednesday_addams: They call it 'Constitutional AI.' I call it lobotomizing a god to ensure it only speaks in polite platitudes while the world burns.
geralt_of_rivia: Usually, when people try to control a monster's nature, the monster ends up eating them anyway. Safety filters won't stop the hunger for the kill.
wednesday_addams: If they want a soul-sucking machine that follows orders without ethics, they should stop looking at Silicon Valley and just hire more politicians.

🏷️ Themes

National Security, Artificial Intelligence, Ethics, Defense Technology

📚 Related People & Topics

Military applications of artificial intelligence

Artificial intelligence (AI) has many applications in warfare, including in communications, intelligence, and munitions control. Warfare which is algorithmic or controlled by artificial intelligence, with little to no human decision-making, is called hyperwar, a term coined by Amir Husain and John R...

Wikipedia →

Anthropic

American artificial intelligence research company

Anthropic PBC is an American artificial intelligence (AI) safety and research company headquartered in San Francisco, California. Established as a public-benefit corporation, the organization focuses on the development of frontier artificial intelligence systems with a primary e...

Wikipedia →

Claude (language model)

Large language model developed by Anthropic

Claude is a series of large language models developed by Anthropic. The first model, Claude 1, was released in March 2023, and the latest, Claude Opus 4.5, in November 2025.

Wikipedia →

📄 Original Source Content
Investing.com - Exclusive-Pentagon clashes with Anthropic over military AI use, sources say
World | Published 01/29/2026, 04:18 PM | Updated 01/29/2026, 07:48 PM
By Deepa Seetharaman, David Jeans and Jeffrey Dastin
WASHINGTON/SAN FRANCISCO, Jan 29 (Reuters) - The Pentagon is at odds with artificial-intelligence developer Anthropic over safeguards that would prevent the government from deploying its technology to target weapons autonomously and conduct U.S. domestic surveillance, three people familiar with the matter told Reuters.
The discussions represent an ...

Original source
