BravenNow
Is the Pentagon allowed to surveil Americans with AI?
| USA | technology | ✓ Verified - technologyreview.com


#Pentagon #AI surveillance #Anthropic #OpenAI #domestic surveillance #bulk data #national security

📌 Key Takeaways

  • The Pentagon's attempt to use Anthropic's AI for analyzing bulk commercial data sparked debate over domestic surveillance legality.
  • Anthropic refused to allow its AI for mass surveillance or autonomous weapons, leading to a supply chain risk designation.
  • OpenAI initially allowed Pentagon use for 'all lawful purposes', prompting public backlash and user protests.
  • OpenAI revised its deal to explicitly prohibit domestic surveillance and intelligence agency use, citing existing legal restrictions.

📖 Full Retelling

The ongoing public feud between the Department of Defense and AI company Anthropic over its technology has raised a deep open question: does the law actually allow the US government to conduct mass surveillance on Americans? Surprisingly, the answer is not straightforward. More than a decade after Edward Snowden exposed the NSA's collection of bulk metadata from Americans' phones, the US is still navigating a gap between what ordinary people think of as surveillance and what the law allows.

The flashpoint in the standoff between Anthropic and the government was the Pentagon's desire to use Anthropic's AI, Claude, to analyze bulk commercial data collected from Americans. Anthropic demanded its AI not be used for mass domestic surveillance (or for autonomous weapons, machines that can kill targets without human oversight). A week after negotiations broke down, the Pentagon designated Anthropic a "supply chain risk," a label typically reserved for foreign companies that pose a threat to national security.

In the wake of the fallout, OpenAI, the rival AI company behind ChatGPT, sealed a deal with the Pentagon that allowed its AI to be used for "all lawful purposes"—language that critics say left the door open to domestic surveillance. Over the following weekend, users uninstalled ChatGPT in droves. Protestors chalked messages around OpenAI's headquarters in San Francisco: "What are your redlines?"

OpenAI announced on Monday that it had reworked its deal to ensure that the company's AI will not be used for domestic surveillance. The company added that its services will not be used by intelligence agencies, such as the NSA. OpenAI CEO Sam Altman suggested that existing law already prohibits domestic surveillance by the Department of Defense (now sometimes called the Department of War) and that OpenAI's contract simply needed to reference it: "The DoW agrees with these principles, reflects them in

🏷️ Themes

AI Ethics, Government Surveillance

📚 Related People & Topics

OpenAI


Artificial intelligence research organization

**OpenAI** is an American artificial intelligence (AI) research organization headquartered in San Francisco, California. The organization operates under a unique hybrid structure, comprising the non-profit **OpenAI, Inc.** and its controlled for-profit subsidiary, **OpenAI Global, LLC** (a...

View Profile → Wikipedia ↗
Anthropic


American artificial intelligence research company

**Anthropic PBC** is an American artificial intelligence (AI) safety and research company headquartered in San Francisco, California. Established as a public-benefit corporation, the organization focuses on the development of frontier artificial intelligence systems with a primary e...

View Profile → Wikipedia ↗
Artificial intelligence for video surveillance


Overview of artificial intelligence for surveillance

Artificial intelligence for video surveillance utilizes computer software programs that analyze the audio and images from video surveillance cameras in order to recognize humans, vehicles, objects, attributes, and events. Security contractors program the software to define restricted areas within th...

View Profile → Wikipedia ↗
Pentagon

Headquarters of the United States Department of Defense

The Pentagon is the headquarters building of the United States Department of Defense, located in Arlington County, Virginia. The name is commonly used metonymically to refer to the Department of Defense itself.

View Profile → Wikipedia ↗

Entity Intersection Graph

Connections for OpenAI:

🌐 ChatGPT 9 shared
🌐 Artificial intelligence 5 shared
🌐 AI safety 5 shared
🌐 Regulation of artificial intelligence 4 shared
🌐 OpenClaw 4 shared
View full profile


Deep Analysis

Why It Matters

This news matters because it reveals ongoing tensions between AI companies and the U.S. government over the boundaries of domestic surveillance using advanced technology. It affects American citizens' privacy rights, AI companies' ethical stances, and national security operations. The controversy highlights how existing laws may not adequately address AI-powered surveillance capabilities, potentially creating legal gray areas. This debate could shape future regulations governing AI use by government agencies and impact public trust in both technology companies and government institutions.

Context & Background

  • Edward Snowden's 2013 revelations exposed the NSA's bulk metadata collection from Americans' phones, sparking national debate about surveillance and privacy
  • The Fourth Amendment protects against unreasonable searches and seizures, but its application to digital surveillance has been contested in courts for decades
  • The Posse Comitatus Act restricts the military's role in domestic law enforcement, and separate rules limit Defense Department intelligence activities involving US persons; domestic intelligence primarily falls to the FBI and other agencies
  • AI companies like Anthropic and OpenAI have established ethical guidelines restricting certain military and surveillance applications of their technology
  • The 'supply chain risk' designation mentioned in the article is typically used for foreign companies under the Defense Federal Acquisition Regulation Supplement (DFARS)

What Happens Next

Congress will likely hold hearings on AI surveillance capabilities and consider updating surveillance laws to address AI technology specifically. The Department of Defense may develop clearer guidelines for AI use in intelligence operations. Other AI companies will face pressure to establish clear policies on government contracts. Legal challenges may emerge if surveillance using AI technology is deployed domestically. The incident may accelerate legislative efforts like the proposed AI Bill of Rights or specific AI surveillance regulations.

Frequently Asked Questions

What laws currently govern domestic surveillance by the U.S. government?

The Fourth Amendment provides constitutional protection against unreasonable searches. The Foreign Intelligence Surveillance Act (FISA) governs surveillance for foreign intelligence purposes, while the Posse Comitatus Act generally restricts military involvement in domestic law enforcement. Specific agencies have different legal authorities, with the NSA focused on foreign intelligence and the FBI handling domestic investigations.

Why are AI companies concerned about government use of their technology?

AI companies worry about ethical implications, potential misuse of their technology for mass surveillance or autonomous weapons, and damage to their public reputation. Many have established ethical guidelines to prevent harmful applications and maintain user trust. The controversy also reflects broader debates about responsible AI development and corporate social responsibility in the tech industry.

What is the difference between 'bulk metadata' collection and content surveillance?

Bulk metadata collection involves gathering information about communications (who called whom, when, duration) without accessing the actual content. Content surveillance involves accessing the substance of communications (what was said or written). Courts have sometimes treated metadata as having less privacy protection, though this distinction has been challenged as technology evolves and metadata reveals more about individuals.

How does the Pentagon's 'supply chain risk' designation affect companies?

The supply chain risk designation can prevent companies from receiving Defense Department contracts and may affect their ability to work with other government agencies. It's typically used for foreign companies posing national security risks, so applying it to a domestic AI company represents an unusual escalation in government-corporate disputes over technology use.

What are the main arguments for and against AI-powered domestic surveillance?

Proponents argue AI could enhance national security by identifying threats more efficiently and processing vast amounts of data that humans cannot. Opponents warn about privacy violations, potential for abuse, algorithmic bias, and the creation of a surveillance state. There are also concerns about mission creep, where tools developed for foreign intelligence might be repurposed for domestic monitoring.

Read full article at source

Source

technologyreview.com
