Defense tech companies are dropping Claude after Pentagon's Anthropic blacklist
#Anthropic blacklist #Claude AI #Pentagon restrictions #Defense tech #AI government contracts #Autonomous weapons #National security AI #Palantir defense partnerships
Key Takeaways
- Trump administration blacklisted Anthropic, causing defense contractors to abandon Claude
- Anthropic refused to comply with government demands on AI use restrictions
- Defense tech companies are preemptively switching to other AI models
- The move could disrupt Palantir's operations, which rely heavily on government contracts
- Anthropic's technology remains in use for U.S. military operations in Iran despite the announcement
Full Retelling
Following the Trump administration's decision on Friday to blacklist Anthropic and designate its technology a supply chain risk, defense tech companies are telling employees to stop using Claude and switch to other artificial intelligence models. The blacklisting came after Anthropic executives refused to comply with government demands over how its models may be used. Alexander Harstrick, managing partner at J2 Ventures, confirmed that 10 of his portfolio companies working with the Department of Defense have already backed off Claude for defense use cases and are actively seeking replacements, while defense contractors like Lockheed Martin are expected to remove Anthropic's technology from their supply chains.

The reversal is abrupt: Anthropic derives about 80% of its revenue from enterprise customers, including a $200 million contract with the DoD that made Claude the first major model deployed in government classified networks through a partnership with Palantir. Defense Secretary Pete Hegseth declared that any contractor or supplier doing business with the U.S. military is barred from commercial activity with Anthropic after the company refused to provide assurances that its AI would not be used for fully autonomous weapons or mass domestic surveillance of Americans. Anthropic maintains that Hegseth lacks the authority to restrict such business relationships and argues that any official supply chain risk designation would apply only to defense contracts, not to other uses of the technology.
Themes
Defense Technology, AI Regulation, Government Contracts, National Security
Related People & Topics
Claude (language model)
Large language model developed by Anthropic
Claude is a series of large language models developed by Anthropic. The first model was released in March 2023, and the latest, Claude Opus 4.6, in February 2026.
Original Source
Following the Trump administration's decision on Friday to blacklist Anthropic and designate its technology a supply chain risk, defense tech companies are telling employees to stop using Claude, and to switch to other artificial intelligence models and assistants.

"Most of our companies are actively involved in large defense contracts and so are very strict in their interpretation of the requirements," said Alexander Harstrick, managing partner at J2 Ventures, which backs startups in the space. Harstrick told CNBC in an email that 10 of his firm's portfolio companies that work with the Department of Defense "have backed off of their use of Claude for defense use cases and are in active processes to replace the service with another one."

Meanwhile, defense contractors like Lockheed Martin are expected to remove Anthropic's technology from their supply chains, Reuters reported late Tuesday.

It's a sudden reversal for Anthropic, which gets about 80% of its revenue from enterprise customers, CEO Dario Amodei told CNBC in January. The company entered the DoD's ecosystem in late 2024 through a partnership with software and services provider Palantir. Months after that agreement, Claude became the first major model deployed in the government's classified networks through a $200 million contract with the DoD. The model's popularity continued to soar across the business world, particularly in the area of coding assistants.

Defense Secretary Pete Hegseth declared on X that any contractor or supplier doing business with the U.S. military is barred from commercial activity with Anthropic. The announcement came after Anthropic executives refused to comply with the government's demands over its model use. They wanted assurances that their AI would not be tapped for fully autonomous weapons or mass domestic surveillance of Americans.

Anthropic's models are still being used to support the U.S. military operations in Iran, even after the announcement from the Trump adminis...