BravenNow
Sam Altman faced 'serious questions' in meeting with lawmakers about OpenAI's defense work

#Sam Altman #OpenAI #defense work #lawmakers #AI ethics #government regulation #military AI

📌 Key Takeaways

  • Sam Altman was questioned by lawmakers about OpenAI's defense-related projects.
  • The meeting focused on ethical and security concerns of AI in military applications.
  • Lawmakers expressed concern about potential misuse of AI technology in defense.
  • The discussion highlights growing regulatory interest in AI governance and accountability.

📖 Full Retelling

OpenAI CEO Sam Altman met with a handful of lawmakers in Washington, D.C., to discuss the company's work with the Defense Department.

🏷️ Themes

AI Ethics, Government Regulation

📚 Related People & Topics

OpenAI

Artificial intelligence research organization

OpenAI is an American artificial intelligence (AI) research organization headquartered in San Francisco, California. The organization operates under a unique hybrid structure, comprising the non-profit OpenAI, Inc. and its controlled for-profit subsidiary, OpenAI Global, LLC (a...

Ethics of artificial intelligence

The ethics of artificial intelligence covers a broad range of topics within AI that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, accountability, transparency, privacy, and regulation, particularly where systems influence or automate human decision-mak...

Sam Altman

American entrepreneur and investor (born 1985)

Samuel Harris Altman (born April 22, 1985) is an American businessman and entrepreneur who has served as the chief executive officer (CEO) of the artificial intelligence research organization OpenAI since 2019. Having overseen the successful launch of ChatGPT in 2022, he is widely considered to be o...


Entity Intersection Graph

Connections for OpenAI:

🌐 ChatGPT 9 shared
🌐 Artificial intelligence 5 shared
🌐 AI safety 5 shared
🌐 Regulation of artificial intelligence 4 shared
🌐 OpenClaw 4 shared


Deep Analysis

Why It Matters

This meeting matters because it represents growing government scrutiny of AI companies' military and defense contracts, which raises ethical concerns about weaponization of AI. It affects OpenAI's business operations, defense contractors seeking AI partnerships, and policymakers crafting AI governance frameworks. The outcome could influence regulations on dual-use AI technologies and set precedents for how AI firms engage with national security sectors.

Context & Background

  • OpenAI initially had a policy restricting military applications of its technology, but revised it in January 2024 to allow some defense work while prohibiting weapons development
  • The U.S. Department of Defense has been actively seeking AI partnerships through initiatives like the Defense Innovation Unit and Joint AI Center
  • Previous controversies include Google employees protesting Project Maven in 2018 and Microsoft's work with the Pentagon's JEDI cloud contract

What Happens Next

Congress will likely draft legislation addressing AI in defense applications within 6-12 months. The Department of Defense may issue new guidelines for AI procurement by Q3 2024. OpenAI will probably face continued scrutiny from both lawmakers and employee activists regarding its defense partnerships.

Frequently Asked Questions

Why are lawmakers concerned about OpenAI's defense work?

Lawmakers worry about ethical implications of AI in warfare, potential autonomous weapons systems, and maintaining human control over lethal decisions. They're also concerned about China's AI military advancements creating competitive pressure.

What types of defense work might OpenAI be involved in?

OpenAI's revised policy allows cybersecurity, logistics optimization, and intelligence analysis applications. The company explicitly prohibits using its technology to develop weapons, injure people, or destroy property.

How does this affect other AI companies?

Other AI firms like Anthropic and Google will face similar scrutiny for defense contracts. The congressional questioning sets precedents that will shape how all tech companies approach government and military partnerships.

What are the main ethical concerns about AI in defense?

Primary concerns include autonomous weapons making kill decisions without human oversight, algorithmic bias in targeting systems, and escalation risks in conflict situations. There are also worries about AI lowering barriers to warfare.

Original Source
OpenAI CEO Sam Altman met with a handful of lawmakers in Washington, D.C., where Sen. Mark Kelly, D-Ariz., said he raised some "serious questions" about the company's approach to warfare and its recent deal with the Department of Defense. In an interview with CNBC's Emily Wilkins, Kelly said the group talked "in detail" about surveillance and how artificial intelligence systems could be used within a kill chain. He called it a "good discussion."

"There's got to be guardrails in place, and we've got to make sure that we're always thinking about the Constitution and making sure that we comply with it," Kelly said.

OpenAI formed a deal with the DOD late last month just hours after rival Anthropic had been blacklisted by Defense Secretary Pete Hegseth, who declared the company a "Supply-Chain Risk to National Security."

Anthropic had been trying to renegotiate its contract with the DOD, but the talks stalled over a disagreement about how the technology could be used. The DOD wanted Anthropic to grant the military unfettered access to its models for all lawful purposes, while Anthropic sought assurance that its models would not be used for fully autonomous weapons or domestic mass surveillance.

Altman said in a post on X the day the deal fell apart that prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems, are two of the company's "most important safety principles." He said the DOD agreed and put them into the arrangement.
OpenAI published an excerpt of its contract with the DOD, which says that the ag...

Source

cnbc.com
