Decoder with Nilay Patel
Episode: Confronting the CEO of the AI company that impersonated me
Date: March 23, 2026
Overview
In this episode, Nilay Patel, editor-in-chief of The Verge, confronts Shashir Malhotra, CEO of Superhuman (formerly Grammarly), about a recent controversial AI feature called "Expert Review." This feature used the names of real experts—including Nilay and other notable journalists—without their permission to generate AI writing suggestions attributed to them. The feature's launch sparked outrage among media professionals and led to a class action lawsuit. Despite the controversy, Malhotra agreed to a candid and, at times, heated conversation about AI ethics, decision-making inside Superhuman, the broader implications for the creator economy, and the industry’s uncertain regulatory and business future.
Key Discussion Points & Insights
1. Superhuman’s Company Structure and AI Philosophy
- Superhuman Rebrand: Malhotra explains Superhuman now encompasses multiple productivity tools, including the flagship Grammarly, Coda (documents), their own email client, and the new "Go" platform—a system for building custom AI assistants to integrate wherever users work.
"Superhuman is the AI native productivity suite. We bring AI to wherever people work." (04:19 - Malhotra)
- Central Value: The promise is consistent, high-quality AI assistance, integrating deeply at the point where users interact with text or data across platforms.
"We see about a million different apps and agents every day. We seamlessly blend AI right into your experience so you don't have to think about AI." (04:19 - Malhotra)
- Differentiation: Consistency across platforms and high grammar accuracy are core; new platforms allow others to build similar agents.
2. The "Expert Review" Controversy
- Feature Description: The "Expert Review" feature in Grammarly generated text suggestions allegedly based on advice from named experts (including journalists and authors), often without explicit consent—some users saw this as impersonation, others as misattribution.
- Malhotra’s Defense & Apology:
"It deeply pained me to feel that we underdelivered for them, and I really like to apologize for that. That was not our intention." (08:52 - Malhotra)
- Claims the feature was low-use, buried, and removed quickly after negative feedback—even before the lawsuit.
- Admits it was "not a good feature. It wasn't good for experts, it wasn't good for users." (08:52 - Malhotra)
Decision-Making at Superhuman
- Development Process: A small team (a PM and a couple of engineers) shipped the feature. Their process is supposed to avoid groupthink but failed to anticipate backlash.
- User vs. Expert Needs: The intent was to let users get feedback from "experts they admire," but the execution fell short for both users and experts.
"We think of Grammarly...like having your grammar teacher right next to you everywhere you work...For some people, the people they want feedback from are experts." (10:25 - Malhotra)
3. Ethics: Attribution, Impersonation, and Payment
- Distinguishing Attribution vs. Impersonation:
- Malhotra argues attribution is normal and desirable on the internet but impersonation requires a higher bar.
- He admits the line is blurry; Nilay contends the feature wasn’t actual attribution, since "nothing I would ever say" was attached to his name (see below).
- Should Creators Be Compensated for Likeness?
"We should not be able to impersonate you, period. We did not. If we use your work...they should attribute you and they should link back to you." (15:00 - Malhotra)
- Future model: platform where experts can build agents and monetize via a 70/30 revenue split. Experts would need to do the work to create these agents.
Notable Exchange
Nilay: "How much do you think you should pay me to use my name?"
Malhotra: "We should not be able to impersonate you, period...If we use your work, they should attribute you and link back to you."
...
Nilay: "If you use my likeness, how much should you have to pay me?"
Malhotra: "[If] you bring an agent, craft it, put it on our platform, then you should get paid for it—just like how platforms like YouTube work."
(13:27-15:53)
4. Legal and Regulatory Tensions
- Chronology Dispute:
- Nilay notes that Superhuman's first response was an opt-out email to affected creators, and that the feature itself was removed only after the lawsuit was filed (16:25).
- Legal Defense:
- Malhotra disputes that the feature qualifies as using name and likeness for commercial purposes.
"We believe the claims are without merit...It's just not impersonation...It was inspired by a specific work...Far from that test." (13:30 - Malhotra)
- YouTube Precedent:
- As a former YouTube executive, Malhotra references building Content ID and an open creator program, arguing that the law provides a floor, not a ceiling, for creator fairness.
"We won [the Viacom copyright suit] on summary judgment...But that's not the standard we held ourselves to." (20:05 - Malhotra)
Notable Quote
Nilay: "I just want to...again, this wasn't an attribution. You just made something up and put my name on it. There's no attribution here."
Malhotra: "The feature was, here's a suggestion generated by a specific work from a specific person."
(37:15 - 37:54)
5. Broader AI, Copyright, and Platform Impacts
- Public Perception of AI:
- AI polls very negatively, with many people feeling it's extractive and threatens jobs—ranking worse than ICE, per an NBC News poll.
"AI is polling behind ICE and only slightly above the Democratic Party." (41:19 - Nilay)
- Malhotra attributes this mostly to general job anxiety across the population, not only to creator concerns.
- The Extractive Nature of AI:
- Nilay links AI’s perception problem to its perceived extraction—using the totality of others’ efforts to derive value and potentially replace them without compensation.
"You’ve taken the sum total of everyone’s work on the Internet and now you’re going to use it to replace human beings in their jobs without any economic recompense." (43:23 - Nilay)
- Legal Precedents:
- The episode covers possible seismic regulatory shifts if copyright laws are interpreted against AI companies regarding input (training data) or output (AI-generated content).
- Malhotra claims Superhuman sits atop the model providers (like OpenAI, Anthropic); if model costs rise due to copyright litigation, Superhuman's business model may need to adapt but won’t be directly hit first.
- Suggests new business models are needed for creators and experts involving direct connections, subscriptions, and building their own agents.
6. Monetization, Creator Economy, and Platform Dynamics
- The "SaaSpocalypse":
- Nilay probes the existential risk to SaaS companies from direct LLM competition—could Claude, ChatGPT, or Gemini just undercut all platforms?
- Malhotra argues users stick with specialized, networked tools for integrated experience; network effects and continued innovation will preserve their value against commoditization.
- Creator Monetization & The Funnel Problem:
- Many creators must sell physical products (atoms) as ad revenue from content (bits) declines.
- Malhotra counters it’s about building deeper connections, via subscriptions or direct services—enabled in part by agents.
Notable Exchange
Nilay: "My actual body of work has been reduced to zero value."
Malhotra: "That’s a pretty hard sell...I would hope we look at it the other way and say some of these platforms are going to give you a way to participate...give you a way to take your expertise and put it in front of people in a way that actually helps them in a different way than you could connect in the past."
(64:21 - 66:43)
7. Future of AI Agents and the Superhuman Platform
- Product Vision:
- Superhuman’s new push is to let experts build their own AI agents—such as a "Nilay Assistant"—and monetize them. Users could get real-time advice from the actual expert’s “agent,” with the expert having to specify rules and supervise quality.
- Malhotra admits creativity is harder to automate than rules-based editing (like grammar), but insists some experts (e.g., teachers) will be able to successfully productize their expertise.
"You have to write down that viewpoint of what is your editing like…You need to get feedback and you need to be able to come through and say, that was a shitty suggestion, don't do that again." (75:39 - Malhotra)
Notable Quotes & Memorable Moments
- On launching "Expert Review" without permission (apology):
"It deeply pained me to feel that we underdelivered for them, and I really like to apologize for that. That was not our intention." (08:52 - Malhotra)
- On feature quality and creator consent:
"I don't think it's a good feature. I don't really want to. I'm not trying to be close to this line." (15:00 - Malhotra)
- On new creator business models:
"What they told me is actually, I don't really want to be fishing for pennies. Whenever my work gets used, I want to build connection with people." (59:12 - Malhotra)
- On AI's extractive reputation:
"AI is polling behind ICE and only slightly above the Democratic Party. This is a tough spot to be in." (41:19 - Nilay)
- On the new opportunity for creators:
"If I can have my high school English teacher with me everywhere I work, it makes me better. It makes my trust and judgment shine through. I would like your agent for the people that matter." (61:59 - Malhotra)
Timestamps for Key Segments
- 01:54 – Introduction to the episode and controversy
- 04:19 – 07:29 – What Superhuman is and how its platform works
- 08:17 – 19:02 – The "Expert Review" feature: intent, backlash, and removal
- 13:27 – 18:04 – Attribution vs. impersonation, payment, and opt-out controversy
- 20:05 – 21:40 – Legal defense and YouTube’s Content ID precedent
- 26:33 – 29:01 – Resumption after ad break, discussion of legal standards
- 37:15 – 39:56 – Disagreement over attribution vs. hallucination
- 41:19 – 45:51 – AI’s negative reputation & extractive perception
- 54:33 – 58:52 – Copyright law, input/output cases, platform risk
- 60:11 – 66:43 – Creators' declining traffic and new business models
- 67:15 – 69:50 – SaaS/LLM competition and platform risk
- 74:26 – 81:31 – Can AI agents replicate creativity and taste?
- 81:48 – 81:57 – Closing: what’s next for Superhuman
Tone and Takeaways
- Tone: Serious, pointed, often tense and personal; Nilay is persistent and confrontational, Malhotra apologetic but defensive, eager to move to future opportunities.
- Key Takeaway: The episode is a case study in emergent AI ethics, the gap between legal and social acceptability, and the struggle of creators to adapt as platforms ingest, remix, and monetize their work in new ways. The conversation foreshadows intensifying battles over attribution, compensation, and value in the AI era.
For Further Listening: The Verge episode show notes include backstory links and recommend tracking coming Superhuman announcements and upcoming legal developments in AI, copyright, and attribution.
