BravenNow
Amazon says customers can keep using Anthropic's Claude on its cloud for non-defense workloads
| USA | general | βœ“ Verified - cnbc.com


#Amazon #Anthropic #ClaudeAI #CloudComputing #NonDefenseWorkloads #AIEthics #Compliance

πŸ“Œ Key Takeaways

  • Amazon allows continued use of Anthropic's Claude AI on its cloud platform.
  • Usage is restricted to non-defense related workloads.
  • The decision follows the Pentagon's move to label Anthropic a "supply chain risk."
  • Amazon aims to balance customer access with compliance and ethical considerations.

πŸ“– Full Retelling

Amazon joined Microsoft and Google in continuing to offer Anthropic's Claude AI to cloud customers after the Pentagon moved to label Anthropic a "supply chain risk." Amazon said Friday that AWS customers and partners can keep using Claude for all workloads not associated with the Department of Defense, and that it will help customers transition defense workloads to alternatives running on AWS. Anthropic has said it has "no choice" but to challenge the designation in court.

🏷️ Themes

AI Regulation, Cloud Services

πŸ“š Related People & Topics

Ethics of artificial intelligence

The ethics of artificial intelligence covers a broad range of topics within AI that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, accountability, transparency, privacy, and regulation, particularly where systems influence or automate human decision-making.

Anthropic

American artificial intelligence research company

Anthropic PBC is an American artificial intelligence (AI) safety and research company headquartered in San Francisco, California. Established as a public-benefit corporation, the organization focuses on the development of frontier artificial intelligence systems.

Amazon

American multinational technology company and operator of Amazon Web Services (AWS), the world's largest cloud computing provider.


Claude (language model)

Large language model developed by Anthropic

Claude is a series of large language models developed by Anthropic. The first model was released in March 2023, and the latest, Claude Opus 4.6, in February 2026.


Entity Intersection Graph

Connections for Ethics of artificial intelligence:

🏒 Anthropic 14 shared
🌐 Pentagon 13 shared
🏒 OpenAI 11 shared
πŸ‘€ Dario Amodei 6 shared
🌐 National security 4 shared


Deep Analysis

Why It Matters

This announcement matters because it clarifies the operational boundaries for AI cloud services amid increasing government scrutiny of AI technologies. It affects enterprise customers who rely on Anthropic's Claude AI models for business applications but cannot use them for defense-related work. The distinction helps companies navigate compliance requirements while maintaining access to advanced AI capabilities. This also impacts Amazon's competitive positioning against other cloud providers offering AI services.

Context & Background

  • Anthropic is an AI safety startup founded by former OpenAI researchers, known for developing Claude as a competitor to ChatGPT.
  • Amazon Web Services (AWS) is the world's largest cloud computing provider and has invested significantly in Anthropic through a strategic partnership.
  • The U.S. government has increased scrutiny of AI technologies, particularly regarding national security concerns and potential military applications.
  • Cloud providers increasingly offer AI models as managed services, creating complex compliance landscapes for enterprise customers.

What Happens Next

Enterprise customers will need to implement usage monitoring to ensure compliance with the non-defense restriction. Amazon may introduce additional verification tools or certifications for Claude workloads. Regulatory bodies might establish clearer guidelines for AI usage in sensitive sectors, potentially affecting similar AI services across cloud platforms.

Frequently Asked Questions

What types of workloads are considered 'defense' workloads?

Defense workloads typically include any applications related to military operations, national security, weapons systems, or intelligence activities. This encompasses both direct military use and supporting infrastructure for defense organizations.

Can customers use Claude for government work that isn't defense-related?

Yes, the restriction specifically applies to defense workloads, so non-defense government applications like civilian agency operations, public services, or administrative functions would generally be permitted under Amazon's current policy.

Why would Amazon restrict Claude for defense workloads?

Amazon likely restricts defense usage due to Anthropic's own policies, regulatory compliance requirements, or ethical considerations around AI in military applications. Such restrictions help manage legal liability and align with responsible AI principles.

How will Amazon enforce this restriction?

Amazon will likely rely on customer self-certification, contractual agreements, and potentially technical monitoring solutions. They may implement usage audits or require specific compliance documentation for sensitive applications.

Does this affect other AI models on AWS?

Different AI models on AWS may have varying restrictions based on their developers' policies and Amazon's agreements. Customers should review terms for each specific AI service, as restrictions aren't necessarily uniform across all offerings.

Original Source
Amazon CEO Andy Jassy speaks during a keynote address at AWS re:Invent 2024, a conference hosted by Amazon Web Services, at The Venetian Las Vegas on December 3, 2024. (Photo: Noah Berger | Getty Images)

Amazon said Friday it will continue offering Anthropic's artificial intelligence technology to its cloud customers, excluding work involving the Department of Defense. The announcement comes after the federal agency informed Anthropic on Thursday that it would label the company a "supply chain risk." Anthropic responded by saying it has "no choice" but to challenge the designation in court.

"AWS customers and partners can continue to use Claude for all their workloads not associated with the Department of War," an Amazon Web Services spokesperson said in a statement. "For all DoW workloads which use Anthropic technologies, we are supporting customers and partners as they transition to alternatives running on AWS."

Source

cnbc.com
