Grammarly's AI Feature Under Fire
In a significant backlash against artificial intelligence in writing tools, Grammarly, whose popularity has surged on the strength of its editing capabilities, recently disabled its 'Expert Review' feature. The feature, which presented editing suggestions as if they came from renowned authors and scholars, has triggered a class action lawsuit. The suit alleges that Grammarly exploited the identities of these figures without consent to enhance its tool.
The Legal Challenge
The lawsuit, spearheaded by acclaimed journalist Julia Angwin, brings claims against Grammarly's parent company, Superhuman. While the suit does not specify damages, it estimates that claims could exceed $5 million, underscoring the legal exposure tech companies may face over their use of AI.
The Engine Behind the Feature
Grammarly's 'Expert Review' utilized a large language model to approximate the style of various established professionals, allowing users to receive critiques attributed to these individuals. A disclaimer attempted to clarify that none of the cited experts had endorsed the tool, yet many asserted that their work was misrepresented.
Public Backlash
The public response to Grammarly's feature was swift and critical. Following revelations about the way the AI tool utilized author names—from Stephen King to Neil deGrasse Tyson—authors took to social media to express their outrage.
“I was surprised to learn I was cloned, so to speak. Deepfakes seem reserved for celebrities, not regular journalists,” Angwin remarked.
Ethical Considerations in AI
This controversy raises broader ethical concerns within the realm of AI and intellectual property. Companies employing AI must navigate a complex landscape that respects creators' rights while harnessing technology's potential. Angwin's attorney argues that existing laws prohibit commercial uses of an individual's likeness without consent, presenting a straightforward case that underscores the need for better oversight.
Misrepresentation and Accuracy
Angwin raised particular concerns about the accuracy of suggestions generated by the AI. In instances where the AI suggested alterations that complicated the writing unnecessarily, Angwin noted, “It felt very scattershot to me. I was surprised at how bad it was.” These remarks illuminate the risks associated with relying on AI for tasks traditionally fulfilled by skilled human professionals.
Industry Response and the Future of AI Tools
In light of the controversy, Superhuman announced they would discontinue the feature, acknowledging failures to adequately represent the voices of the authors utilized. Shishir Mehrotra, CEO of Superhuman, remarked,
“We received valid critical feedback from experts who are concerned that the agent misrepresented their voices.” This move illustrates a growing need for transparency and ethical practices in AI development.
Conclusion: A Cautionary Tale
As the design and functionality of AI tools evolve, they must also include frameworks for respecting the rights and perspectives of those they represent. The Grammarly situation serves as a cautionary tale not only for others in the tech space but also for users who increasingly lean on such platforms to enhance their work. Ultimately, the future will depend on balancing innovation with ethical responsibility.
Source reference: https://www.wired.com/story/grammarly-is-facing-a-class-action-lawsuit-over-its-ai-expert-review-feature/



