Confronting the CEO of the AI company that impersonated me
#Superhuman #Grammarly #ExpertReview #AICloning #ClassActionLawsuit #ShishirMehrotra #TheVerge #Decoder
📌 Key Takeaways
- Superhuman (formerly Grammarly) launched an AI feature called Expert Review that cloned real journalists without permission.
- Journalists at The Verge and other outlets discovered their names had been used without consent, prompting public outrage and a class action lawsuit filed by Julia Angwin.
- Superhuman responded by offering an opt-out, then killing the feature entirely, with CEO Shishir Mehrotra apologizing.
- The incident sparked a tense interview on Decoder, highlighting disagreements over AI's extractive nature and ethical implications.
🏷️ Themes
AI Ethics, Journalism Conflict
📚 Related People & Topics
Superhuman
AI-powered writing and productivity company, formerly known as Grammarly
Grammarly
American online grammar checker and plagiarism-detection service
Grammarly is an American English-language writing assistant. It reviews the spelling, grammar, and tone of a piece of writing, identifies possible instances of plagiarism, suggests style and tonal improvements, and can generate writing from prompts. The company has since rebranded as Superhuman.
Deep Analysis
Why It Matters
The incident raises critical ethical and legal questions about AI companies using people's identities without consent, an issue that affects journalists, public figures, and potentially anyone whose likeness can be replicated. It shows AI companies testing the boundaries of intellectual property and personal rights in a rapidly evolving tech landscape. Content creators may find their professional identities commodified without compensation or permission, raising questions about accountability in AI development.
Context & Background
- Grammarly (now rebranded as Superhuman) launched an 'Expert Review' feature that used AI-cloned versions of real journalists and experts
- The Verge and other outlets discovered their identities were being used without permission or notification
- Journalist Julia Angwin filed a class action lawsuit against the company over this practice
- Superhuman initially offered email opt-out before completely killing the feature
- CEO Shishir Mehrotra has a background as YouTube's former chief product officer, a co-founder of Coda, and a Spotify board member
What Happens Next
The class action lawsuit filed by Julia Angwin will likely proceed through the legal system, potentially setting precedents for AI identity rights. Other AI companies may face similar scrutiny about their training data and output practices. Regulatory bodies might develop clearer guidelines about AI impersonation and consent requirements. The interview suggests ongoing tension between creators and AI companies that will continue playing out in public discourse.
Frequently Asked Questions
What exactly did the company do?
The company's Grammarly product created AI-cloned 'experts' using real journalists' names and professional identities without asking permission or notifying them, essentially impersonating them for commercial purposes.
How did Superhuman respond to the backlash?
The company first offered an email opt-out system, then killed the Expert Review feature entirely after public backlash. CEO Shishir Mehrotra apologized multiple times but maintained disagreements about how 'extractive' AI feels to affected individuals.
What legal action followed?
Investigative journalist Julia Angwin filed a class action lawsuit against the company, which could establish important precedents about AI, identity rights, and consent in the digital age.
Why was the Decoder interview notable?
The interview occurred despite the controversy, creating a direct confrontation between an affected journalist and the CEO responsible, revealing tensions about AI ethics that many companies are currently navigating.
Why does this matter beyond the journalists involved?
The situation highlights how AI companies are testing the boundaries of consent and intellectual property, potentially affecting anyone whose likeness, voice, or professional identity could be replicated by AI systems without permission.