BravenNow
Defense official reveals how AI chatbots could be used for targeting decisions
| USA | technology | ✓ Verified - technologyreview.com


#generative AI #target ranking #Pentagon #ChatGPT #Grok #human oversight #classified settings #Project Maven

📌 Key Takeaways

  • The US military may use generative AI chatbots to rank and prioritize targets for strikes, with human oversight.
  • AI systems like ChatGPT and Grok could be deployed in classified settings for target analysis and recommendation.
  • Human operators would vet AI-generated recommendations before any decisions are finalized.
  • The disclosure comes amid Pentagon scrutiny over a recent strike on an Iranian school, which is under investigation.
  • The military is integrating both generative AI and older AI technologies like Project Maven for distinct operational roles.

📖 Full Retelling

The US military might use generative AI systems to rank lists of targets and make recommendations about which to strike first, which would then be vetted by humans, according to a Defense official with knowledge of the matter. The disclosure about how the military may use AI chatbots comes as the Pentagon faces scrutiny over a strike on an Iranian school, which it is still investigating.

A list of possible targets might be fed into a generative AI system that the Pentagon is fielding for classified settings. Then, said the official, who requested to speak on background with MIT Technology Review to discuss sensitive topics, humans might ask the system to analyze the information and rank which targets are a priority, while accounting for factors like where aircraft are currently located. Humans would then be responsible for checking and evaluating the results and recommendations.

OpenAI’s ChatGPT and xAI’s Grok could, in theory, be the models used for this type of scenario in the future, as both companies recently reached agreements for their models to be used by the Pentagon in classified settings. The official described this as an example use case of how things might work, but would not confirm or deny whether it represents how AI systems are currently being used.

Other outlets have reported that Anthropic’s Claude has been integrated into existing military AI systems and used in operations in Iran and Venezuela, but the official’s comments add insight into the specific role chatbots may play, particularly in accelerating the search for targets. They also shed light on the way the military is deploying two different AI technologies, each with distinct limitations.

Since at least 2017, the US military has been working on a “big data” initiative called Maven. It uses older types of AI, particularly computer vision, to analyze the oceans of data and imagery collected by the Pentagon. Maven might take thousands of hours of aerial drone footage...
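The workflow the official sketches follows a familiar "model proposes, human disposes" pattern: a system generates a ranked list with rationales, and nothing proceeds without explicit human approval of each recommendation. A minimal illustrative sketch of that review gate is below; every name is invented for illustration, and a deterministic stub stands in for any real model call. This reflects nothing about any actual Pentagon system.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    item: str
    rank: int
    rationale: str

def model_rank(items, context):
    """Stand-in for a generative-model call that returns a proposed ranking.

    A real system would prompt an LLM with the list and situational context;
    here we just sort by a numeric priority score to keep the sketch runnable.
    """
    ordered = sorted(items, key=lambda it: context.get(it, 0), reverse=True)
    return [Recommendation(item=it, rank=i + 1, rationale="stub rationale")
            for i, it in enumerate(ordered)]

def human_review(recs, approve):
    """Gate: only recommendations a human explicitly approves pass through."""
    return [r for r in recs if approve(r)]

# Usage: the model proposes a ranking, the human filter decides what survives.
proposed = model_rank(["A", "B", "C"], context={"A": 1, "B": 3, "C": 2})
approved = human_review(proposed, approve=lambda r: r.rank <= 2)
```

The design point is that the approval step is a separate, mandatory function in the pipeline rather than an optional check, which is how the official frames human responsibility for the final decision.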

🏷️ Themes

Military AI, Targeting Systems

📚 Related People & Topics

Pentagon

Headquarters of the US Department of Defense

The Pentagon is the headquarters building of the United States Department of Defense, located in Arlington County, Virginia. Its distinctive five-sided design gives the building its name, and "the Pentagon" is widely used as a metonym for the Department of Defense itself.

ChatGPT

Generative AI chatbot by OpenAI

ChatGPT is a generative artificial intelligence chatbot developed by OpenAI. It was released in November 2022. It uses generative pre-trained transformers (GPTs), such as GPT-5.2, to generate text, speech, and images in response to user prompts. It is credited with accelerating the AI boom, an ongoing period of rapid progress in and attention to artificial intelligence.


Grok

Generative AI chatbot by xAI

Grok is a generative artificial intelligence chatbot developed by xAI, first released in November 2023. Its name comes from the neologism coined by the American writer Robert A. Heinlein in his 1961 science fiction novel Stranger in a Strange Land, which the Oxford English Dictionary summarizes as "to understand intuitively or by empathy, to establish rapport with".


Project Maven

AI military intelligence program

Project Maven (officially the Algorithmic Warfare Cross-Functional Team) is a Department of Defense initiative launched in April 2017 to accelerate the adoption of machine learning and data integration across U.S. military intelligence workflows, initially focused on applying computer vision to process drone surveillance footage.


Entity Intersection Graph

Connections for Pentagon:

🏢 Anthropic 34 shared
🌐 Presidency of Donald Trump 8 shared
🌐 Artificial intelligence 8 shared
🌐 Ethics of artificial intelligence 7 shared
👤 Donald Trump 7 shared

Mentioned Entities

Pentagon

Headquarters of the US Department of Defense

ChatGPT

Generative AI chatbot by OpenAI

Grok

Generative AI chatbot by xAI

Project Maven

AI military intelligence program

Deep Analysis

Why It Matters

This news matters because it reveals how the US military is exploring using generative AI chatbots for critical targeting decisions, potentially accelerating warfare processes while raising profound ethical questions about autonomous weapons. This affects military personnel who would operate these systems, civilians in conflict zones who could face faster targeting cycles, AI companies entering defense contracts, and policymakers grappling with AI warfare governance. The disclosure comes amid Pentagon scrutiny over recent strikes, highlighting tensions between technological advancement and accountability in lethal operations.

Context & Background

  • The Pentagon's Project Maven began in 2017 as a 'big data' initiative using computer vision AI to analyze drone footage and surveillance imagery
  • Multiple AI companies including OpenAI, xAI, and Anthropic have recently secured agreements allowing their models to be used in classified military settings
  • The US military has faced ongoing criticism and investigations regarding civilian casualties from drone strikes, including recent incidents in Iran
  • International debates about lethal autonomous weapons systems (LAWS) have been ongoing for years at the United Nations Convention on Certain Conventional Weapons

What Happens Next

The Pentagon will likely face increased congressional scrutiny and public debate about AI targeting systems following this disclosure. Defense contractors and AI companies will probably continue to expand classified AI development for military applications. International bodies may accelerate discussions about regulating autonomous weapons, potentially leading to new proposed treaties or guidelines within the next year or two.

Frequently Asked Questions

Are AI systems currently making targeting decisions without human oversight?

According to the official, humans would still vet and evaluate all AI recommendations before any strike decisions. The system described would rank targets but not authorize strikes autonomously, maintaining human responsibility for final decisions.

Which AI companies are involved with Pentagon contracts?

OpenAI's ChatGPT and xAI's Grok have agreements for Pentagon use in classified settings, while Anthropic's Claude has reportedly been integrated into existing military systems. Multiple AI firms are now working with defense agencies on various applications.

How does this differ from existing military AI systems?

This represents a shift from computer vision systems like Project Maven that analyze imagery to generative AI chatbots that can process text-based intelligence and make recommendations. The new systems could accelerate target identification by analyzing diverse data sources simultaneously.

What are the main ethical concerns about AI targeting systems?

Primary concerns include algorithmic bias leading to mistaken targets, reduced human deliberation time in life-or-death decisions, accountability gaps when errors occur, and potential escalation toward fully autonomous weapons that bypass human judgment entirely.

How might this affect international military balance?

Advanced AI targeting could give early adopters significant tactical advantages, potentially triggering arms races as other nations develop similar capabilities. This technological gap might increase instability as capabilities outpace established norms and regulations.

Status: Verified
Confidence: 85%
Source: Defense official with knowledge of the matter (Anonymous)

Source Scoring

85 Overall

Detailed Metrics

Reliability 85/100
Importance 90/100
Corroboration 70/100
Scope Clarity 90/100
Volatility Risk (Low is better) 10/100

Key Claims Verified

The US military might use generative AI systems to rank lists of targets and make recommendations about which to strike first. Confirmed

Supported by the Defense official's statement and the context of the recent OpenAI and xAI agreements.

OpenAI’s ChatGPT and xAI’s Grok have reached agreements for their models to be used by the Pentagon in classified settings. Confirmed

Confirmed by public records regarding OpenAI and xAI contracts with the US government.

Anthropic’s Claude has been integrated into existing military AI systems and used in operations in Iran and Venezuela. Partial

Attributed to 'other outlets', requires verification of those specific reports.

Supporting Evidence

  • Primary: MIT Technology Review
  • Primary: OpenAI & xAI public announcements
  • Medium: Other outlets (referenced in text)

Caveats / Notes

  • The date 2026 suggests this may describe a projected or near-future scenario rather than confirmed current practice.
  • The primary source is an anonymous official speaking on background.
  • The claim regarding Anthropic's use in specific countries relies on secondary reporting.
Original Source
Read full article at source

Source

technologyreview.com

More from USA

News from Other Countries

🇬🇧 United Kingdom

🇺🇦 Ukraine