Commentary: Why an AI firm known for fighting plagiarism has real authors in a fury

The online service Grammarly originated in 2009 as a suite of tools to help ferret out plagiarism in schoolwork or help students hone their grammar and spelling. Eventually it incorporated artificial intelligence bots as sources of its writing assistance.

In August 2025, however, the firm stepped way over the line of what is — or should be — permissible as an AI-generated service.

This was its “expert review” service, available to those willing to fork over up to $30 a month. The pitch was that subscribers could get their writing samples reviewed by established writers, including such household names as Stephen King and Neil deGrasse Tyson, and receive feedback from them about how to improve their prose.

A few problems have surfaced about this.

First, it appears that many, if not all, of the cited “experts” hadn’t granted Grammarly permission to use their names or work in connection with the service. Second, none of them actually reviewed the submitted writing samples — the samples were screened by AI bots, which generated suggestions based on the authors’ published works.

Third, Grammarly didn’t make the truth clear to its users — the suggestions seemed on first impression to come directly from the cited “experts”; it was only when a user clicked through for more detail that Grammarly disclosed that its suggestions were “inspired” by the experts’ published works.

Last week, Grammarly suspended the “expert review” function. That happened the same day that Julia Angwin, a veteran technology and investigative journalist who has worked at the Wall Street Journal and ProPublica, filed a federal class-action lawsuit alleging that Grammarly had in effect stolen the real authors’ identities and attributed to them advice that the authors might disagree with, or that might even undermine their reputations for sound writing.

This isn’t the first time that someone has tried to use AI as a shortcut, with parlous consequences. Over the last couple of years, AI-generated material has appeared in legal briefs and medical diagnoses. Not a few news organizations have been caught publishing AI-generated articles without adequately disclosing that they weren’t written by humans.

Often, the shortcuts have been exposed because the AI bot outputs were riddled with errors — citations to nonexistent legal precedents, proposed medical treatments that were actually life-threatening, factual mistakes that even novice human journalists would know to avoid.

“Expert review” appeared at a time when many authors and artists are taking AI companies to court for allegedly violating copyright law by “training” their bots on published work without acknowledgment or payment.

Numerous lawsuits are making their way through the courts, although the judiciary hasn’t settled on a single conclusion about where the line lies between “fair use” and copyright infringement.

Yet one doesn’t need an AI bot to explain why Grammarly’s stunt has to rank among the sleaziest misuses of AI technology yet to appear.

San Francisco-based Grammarly didn’t make things any better with a mea culpa posted on LinkedIn by its chief executive, Shishir Mehrotra. Grammarly’s AI agent, he wrote, “was designed to help users discover influential perspectives and scholarship relevant to their work, while also providing meaningful ways for experts to build deeper relationships with their fans.”

In other words, he asserted that “expert review” was designed as a boon not only for Grammarly’s users, but for the experts whose names and works had been exploited for the firm’s profit and without their say-so. He stated that Grammarly will “reimagine” its service to give the experts “real control over how they want to be represented — or not represented at all.”

In an email, Mehrotra responded to my request for comments by acknowledging that “we believe this feature missed the mark on what both experts and users expect out of us.” He added, however, that Grammarly considers the claims in Angwin’s lawsuit to be “without merit and will strongly defend against them.”

Grammarly hasn’t been shy about pushing AI-powered services to users. In November, it changed its corporate name to Superhuman, reflecting what it called its “mission … to unlock the superhuman potential in everyone.”

By then, “expert review” already had been launched. From the outset, the company was a little vague about what the service actually entailed. According to the web page originally posted to pitch the service (the page has since been removed but survives in a web archive), users could improve their writing by “drawing on insights from subject-matter experts and trusted publications.”

Users were instructed to upload their document to the system. The bot then “cross-referenced your writing with relevant experts” and offered “specific … expert-informed feedback.” Users could then choose from a list of a few such experts, each offering a couple of lines of feedback.

Buried in the pitch were subtle disclaimers.

Grammarly slipped a warning onto its web page noting that its feedback was merely “inspired by real experts” and a further notification that its references to “experts” were “for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals.”

The roster of experts was impressive indeed. They included novelist King, astrophysicist Tyson and numerous book and magazine writers of varied eminence. I couldn’t reach King, and Tyson didn’t respond to my request for comment, but some other writers have made their reactions known via other routes.

The tech journalist Kara Swisher, for instance, answered a query from a fellow journalist by labeling the Grammarly folks “rapacious information and identity thieves.”

It might have become obvious to some users that the likelihood was remote that their work was being personally vetted by the cited experts. I might have asked the respected grammarian William Strunk Jr., author of that indispensable primer “The Elements of Style,” what he thought about having been offered up by Grammarly as an expert writing coach, except that he died in 1946. Other deceased writers also have appeared on the roster, such as astronomer Carl Sagan (d. 1996).

“Expert review” coasted under the radar for months, until a few tech journalists caught its scent. The first may have been Miles Klee of Wired, whose report appeared on March 3. Within days, similar reports appeared on The Verge and Defector.

It was a post by Casey Newton of Platformer, which listed several of Grammarly’s “experts,” that alerted Angwin that the company was exploiting her name and work. “They were attempting to take my livelihood and automate it,” she told me. “They were literally selling a service that claims that Julia Angwin will edit your piece. Obviously, that’s a direct threat to me and my ability to earn a living.”

Moreover, Angwin says, the edits that Grammarly proposed under her name to a user were “terrible — so they weren’t just stealing my livelihood but ruining my reputation.”

In its initial response to the burgeoning controversy, Grammarly offered to allow writers to opt out of “expert review” by sending the company an email. The problem there is that the “experts” have no way of knowing that there’s anything to opt out from, since Grammarly hasn’t published a comprehensive roster.

As the author of eight books and years of newspaper columns, I was interested to know if my own name or works were offered. Grammarly told me only that its “data on experts was sourced from third-party LLMs [that is, AI bots]. … Experts were surfaced based on their expertise with the topic.” It added that it “won’t be providing additional comment at this time.”

The extent of Superhuman’s legal exposure for this program is hard to gauge. Angwin’s lawsuit, which seeks to empower a class of authors whose names were used by the company without their consent, cites California and New York laws barring the use of anyone’s name or likeness for commercial purposes without their consent.

As for how many people have been affected, Angwin’s attorney, Peter Romer-Friedman, told me that obtaining the full roster would be his first task under discovery if the case heads to trial. (Superhuman hasn’t yet responded to the lawsuit in court.) But he says more than 100 writers have reached out to say they want to be part of the case since it was filed, and speculates that the total number could be in the thousands.

“This is an area I cover,” Angwin says, “and there have been a lot of lows. But I still feel like this is a new low.”
