Grammarly says it will stop using AI to clone experts without permission
#Grammarly #AI cloning #expert review #Superhuman #consent #data privacy #The Verge
📌 Key Takeaways
- Grammarly's 'expert review' AI feature offered edit suggestions that cloned real writers' styles without their permission, including Verge staff.
- Superhuman, the company's name following Grammarly's 2025 rebrand, disabled the feature after criticism, acknowledging it 'missed the mark'.
- The company plans to redesign the feature to give experts control over their representation.
- The incident highlights ethical concerns in AI training using personal or professional data without consent.
🏷️ Themes
AI Ethics, Privacy
📚 Related People & Topics
Superhuman
AI productivity company; the corporate identity Grammarly adopted in 2025
Superhuman is the name the company behind Grammarly took on after acquiring the Superhuman email client and rebranding in 2025. It develops AI-powered productivity tools, including the writing assistant previously marketed under the Grammarly name.
Grammarly
American online grammar checker and plagiarism-detection service
Grammarly is an American English language writing assistant software tool. It reviews the spelling, grammar, and tone of a piece of writing as well as identifying possible instances of plagiarism. It can also suggest style and tonal recommendations to users and produce writing from prompts.
The Verge
American technology news and media website
The Verge is an online American technology news publication headquartered in Lower Manhattan, New York City and operated by Vox Media. The website publishes news, feature stories, guidebooks, product reviews, consumer electronics news, and podcasts. It was launched on November 1, 2011.
Deep Analysis
Why It Matters
This news is important because it highlights growing ethical concerns in AI development, particularly around consent and intellectual property. It affects writers, journalists, and other professionals whose work may be used without permission to train AI models, potentially undermining their livelihoods and creative control. The decision also impacts AI companies and users who rely on these tools, as it may lead to more transparent and ethical AI practices in the industry.
Context & Background
- AI models like those used by Grammarly are often trained on large datasets that may include copyrighted or personal content without explicit permission.
- There is an ongoing global debate about AI ethics, including issues of consent, attribution, and compensation for creators whose work is used in AI training.
- Previous incidents, such as lawsuits against AI companies for using copyrighted material, have set precedents for increased scrutiny of AI data sourcing practices.
What Happens Next
Grammarly will likely revise its 'expert review' feature to include opt-in mechanisms or compensation models for experts. Other AI companies may face similar scrutiny, leading to industry-wide changes in how AI tools are developed and marketed. Regulatory bodies might introduce guidelines or laws to address AI ethics, particularly around consent and attribution.
Frequently Asked Questions
What was Grammarly's 'expert review' feature?
It was an AI feature that provided edit suggestions 'inspired by' real writers, including Verge staff, without their permission. This involved cloning their writing styles or expertise to enhance Grammarly's recommendations.

Why was the feature disabled?
Grammarly disabled it amid backlash over ethical concerns, chiefly the lack of consent from the experts involved. The company acknowledged it had missed the mark and plans to redesign the feature to give experts control over their representation.

Who is affected by this incident?
Writers and experts whose work was used without permission are directly affected, as are Grammarly users who relied on the feature. The broader AI industry may also feel the effects of increased ethical scrutiny.

What are the main ethical concerns?
They include lack of consent, potential misrepresentation, and intellectual property violations. Using someone's style or expertise without permission can undermine their autonomy and deprive them of compensation.

How might the AI industry respond?
AI companies may adopt more transparent practices, such as opt-in systems and fair compensation for contributors. This could slow feature releases but improve trust and compliance with ethical standards.

Are there legal implications?
While not explicitly stated in the article, there could be legal risks if experts pursue claims for unauthorized use of their work. The incident may also prompt stricter regulation or lawsuits in the AI sector.