AI executive Dario Amodei on the red lines Anthropic would not cross


📖 Full Retelling

The CEO of Anthropic says his company refused to let the Trump Administration use its technology without certain guardrails, such as a prohibition on using its AI to power fully autonomous weapons without any human involvement.


Original Source
Sunday Morning | CBS News
By Jo Ling Kent, Senior Business and Technology Correspondent
Updated on: March 1, 2026 / 10:29 AM EST

"It's about the principle of standing up for what's right," said Dario Amodei, CEO of the artificial intelligence firm Anthropic, who has found himself at the center of a new kind of firestorm. What's wrong, in his view, is why the AI company he co-founded has been banned from the federal government. "It feels very punitive and inappropriate, given the amount that we've done for U.S. national security," he said.

Anthropic created Claude, an AI chatbot you might use at work or school. Since last summer, its government version has been deeply embedded in military intelligence and classified operations at the Pentagon. This past week, in the lead-up to the attack on Iran, the Defense Department demanded Anthropic hand over its AI without restrictions for lawful military use. The company refused.

"We have these two red lines," said Amodei. "We've had them from Day One. We are still advocating for those red lines. We're not gonna move on those red lines."

Those red lines? Not allowing Anthropic's AI to perform mass surveillance of Americans, and prohibiting its AI from powering fully autonomous weapons without any human involvement. Amodei said, "It doesn't show the judgment that a human soldier would show – friendly fire or shooting a civilian, or just the wrong kind of thing.
We don't want to sell something that we don't think is reliable, and we don't want to sell something that could get our own people killed, or that could ge...

Source

cbsnews.com
