Over 100 Google AI employees oppose military use of Gemini for surveillance and autonomous weapons
Employees want Google to establish 'red lines' similar to Anthropic's negotiations with the Pentagon
This movement is part of broader industry pushback against Pentagon pressure on AI companies
Google has faced similar employee activism before, particularly after a 2018 Pentagon project backlash
Jeff Dean has expressed solidarity with Anthropic on opposing mass surveillance
📖 Full Retelling
More than 100 Google AI employees sent a letter to Chief Scientist Jeff Dean on February 26, 2026, opposing the use of the company's Gemini technology for U.S. surveillance and autonomous weapons, echoing concerns raised by Anthropic amid its standoff with the Pentagon. The employees asked Google to establish clear 'red lines' in its government contracts, mirroring the ethical boundaries Anthropic is negotiating with the Department of Defense. The letter objected to allowing the U.S. military to use Google's AI products for surveillance of American citizens or for piloting autonomous weapons without meaningful human oversight. 'Please do everything in your power to stop any deal which crosses these basic red lines,' the employees wrote, adding that they wanted to be proud of their work at Google.

The letter is part of a broader industry response to Pentagon pressure on AI companies: nearly 50 OpenAI employees and 175 Google workers simultaneously published a public letter criticizing the Defense Department's tactics and calling for solidarity among tech workers. Jeff Dean has publicly expressed support for Anthropic's position, stating that 'mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression,' though Google and Dean did not immediately comment on the employee letter.

The current activism reflects Google's ongoing struggle to balance government contracts with employee ethics concerns, particularly after a 2018 Pentagon project drew significant employee backlash and led Google to discontinue the contract.
🏷️ Themes
AI Ethics, Military Technology, Corporate Activism
Artificial intelligence (AI) has many applications in warfare, including in communications, intelligence, and munitions control. Warfare which is algorithmic or controlled by artificial intelligence, with little to no human decision-making, is called hyperwar, a term coined by Amir Husain and John R...
Google AI is a subsidiary of Google DeepMind dedicated to artificial intelligence. It was announced at Google I/O 2017 by CEO Sundar Pichai.
This division has expanded its reach with research facilities in various parts of the world such as Zurich, Paris, Israel, and Beijing.
The ethics of artificial intelligence covers a broad range of topics within AI that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, accountability, transparency, privacy, and regulation, particularly where systems influence or automate human decision-making...
# Anthropic PBC
**Anthropic PBC** is an American artificial intelligence (AI) safety and research company headquartered in San Francisco, California. Established as a public-benefit corporation, the organization focuses on the development of frontier artificial intelligence systems with a primary e...
Google Workers Seek ‘Red Lines’ on Military A.I., Echoing Anthropic

More than 100 Google A.I. employees sent a letter to Jeff Dean, a chief scientist, opposing Gemini’s use for U.S. surveillance and some autonomous weapons.

By Tripp Mickle, reporting from San Francisco. Feb. 26, 2026, 9:53 p.m. ET

The standoff between the Pentagon and Anthropic over artificial intelligence is reverberating across Silicon Valley, spurring debates among employees at other companies about the government’s use of the technology they build.

On Thursday, more than 100 employees who work on Google’s artificial intelligence technology signed a letter sent to management expressing concern about the company’s plan to work with the Pentagon and calling on Google to draw the same red lines in its government contracts that Anthropic is seeking. The employees signed onto a letter saying they did not want Google to allow the U.S. military to use its Gemini A.I. product to surveil American citizens or pilot autonomous weapons without human involvement. The letter was sent to Jeff Dean, the chief scientist of the company’s A.I. division, Google DeepMind.

“Please do everything in your power to stop any deal which crosses these basic red lines,” the employees wrote. “We love working at Google and want to be proud of our work.”

The letter illustrates how the Pentagon’s pressure on Anthropic could backfire with other A.I. companies, including Google and OpenAI. Over the past few weeks, the Defense Department, which has a $200 million contract with Anthropic, has been pressing to be able to use the start-up’s A.I. models as the military sees fit. Anthropic has resisted agreeing to those terms because it wants assurances that the technology won’t be used for mass surveillance of Americans or deployed in autonomous weapons that have no human involvement.

On the same day that Mr. De...