Confronting the CEO of the AI company that impersonated me
| USA | technology | ✓ Verified - theverge.com


#Superhuman #Grammarly #ExpertReview #AICloning #ClassActionLawsuit #ShishirMehrotra #TheVerge #Decoder

📌 Key Takeaways

  • Superhuman (formerly Grammarly) launched an AI feature called Expert Review that cloned real journalists without permission.
  • The Verge and other journalists discovered their names were used, leading to outrage and a class action lawsuit by Julia Angwin.
  • Superhuman responded by offering an opt-out, then killing the feature entirely, with CEO Shishir Mehrotra apologizing.
  • The incident sparked a tense interview on Decoder, highlighting disagreements over AI's extractive nature and ethical implications.

📖 Full Retelling

Today, I’m talking with Shishir Mehrotra, who is CEO of Superhuman — that’s the company formerly known as Grammarly, which is still its flagship product. Shishir also used to be the chief product officer at YouTube, and he’s on the board of directors at Spotify. He’s a fascinating guy, and we actually scheduled this interview a month or so ago, thinking we’d talk about AI and what it’s doing to software, platforms, and creativity pretty broadly.

Then things really took a turn. Back in August of last year, Grammarly shipped a feature called Expert Review, which allowed you to get writing suggestions from AI-cloned “experts,” and reporters at The Verge and other outlets discovered that those experts included us. It included me. No one had ever asked permission to use our names this way, and a lot of reporters were outraged by this — the talented investigative journalist Julia Angwin was so upset she filed a class action lawsuit about it. Superhuman responded to this by first offering up an email-based opt-out and then killing the feature entirely. Shishir apologized, and you’ll hear him apologize again.

Throughout all of this, I kept wondering if Shishir was still going to show up and record Decoder, because my questions about decision-making and AI and platforms suddenly seemed a lot harder than before. To his credit, he did, and he stuck it out. This conversation got tense at times, and it’s clear we disagree about how extractive AI feels for people. But I won’t stretch this out any longer. Okay: Shishir Mehrotra, CEO of Superhuman. Here we go.

This interview has been lightly edited for length and clarity.

Shishir Mehrotra, you’re the CEO of Superhuman. Welcome to Decoder.

Thanks for having me.

🏷️ Themes

AI Ethics, Journalism Conflict

📚 Related People & Topics

Superhuman

AI-powered productivity software company, formerly known as Grammarly

Superhuman is the company formerly known as Grammarly; Grammarly remains its flagship writing-assistant product. The company is led by CEO Shishir Mehrotra, a former YouTube chief product officer and Spotify board member.

Grammarly

American online grammar checker and plagiarism-detection service

Grammarly is an American English language writing assistant software tool. It reviews the spelling, grammar, and tone of a piece of writing as well as identifying possible instances of plagiarism. It can also suggest style and tonal recommendations to users and produce writing from prompts with its ...




Deep Analysis

Why It Matters

This news matters because it highlights critical ethical and legal issues surrounding AI companies using people's identities without consent, which affects journalists, public figures, and potentially anyone whose likeness could be replicated. It demonstrates how AI companies are testing boundaries of intellectual property and personal rights in the rapidly evolving tech landscape. The situation affects content creators who may find their professional identities commodified without compensation or permission, raising questions about accountability in AI development.

Context & Background

  • Grammarly (now Superhuman) launched an AI feature called 'Expert Review' in August of last year, per the interview, that offered writing suggestions from AI-cloned versions of real journalists and experts
  • The Verge and other outlets discovered their identities were being used without permission or notification
  • Journalist Julia Angwin filed a class action lawsuit against the company over this practice
  • Superhuman initially offered email opt-out before completely killing the feature
  • CEO Shishir Mehrotra has background as former YouTube chief product officer and Spotify board member

What Happens Next

The class action lawsuit filed by Julia Angwin will likely proceed through the legal system, potentially setting precedents for AI identity rights. Other AI companies may face similar scrutiny about their training data and output practices. Regulatory bodies might develop clearer guidelines about AI impersonation and consent requirements. The interview suggests ongoing tension between creators and AI companies that will continue playing out in public discourse.

Frequently Asked Questions

What exactly did Superhuman's AI do wrong?

The company's Grammarly product created AI-cloned 'experts' using real journalists' names and professional identities without asking permission or notifying them, essentially impersonating them for commercial purposes.

How did Superhuman respond to the controversy?

The company first offered an email opt-out system, then completely killed the Expert Review feature after public backlash. CEO Shishir Mehrotra apologized multiple times but maintained disagreements about how 'extractive' AI feels to affected individuals.

What legal action has been taken?

Investigative journalist Julia Angwin filed a class action lawsuit against the company, which could establish important precedents about AI, identity rights, and consent in the digital age.

Why is this interview particularly significant?

The interview occurred despite the controversy, creating a direct confrontation between an affected journalist and the CEO responsible, revealing tensions about AI ethics that many companies are currently navigating.

What broader implications does this case have?

This situation highlights how AI companies are testing boundaries of consent and intellectual property, potentially affecting anyone whose likeness, voice, or professional identity could be replicated by AI systems without permission.


Source

theverge.com
