The Neurodivergent Experience
Episode: Hot Topic: AI Imposters – When Disability Becomes Content
Hosts: Jordan James & Simon Scott
Date: August 28, 2025
Episode Overview
This episode dives into the rising trend of social media accounts and content generated by artificial intelligence (AI) that impersonate disabled and neurodivergent people, often for profit and clout. Jordan and Simon unpack the ethical complexities, emotional toll, and real-world consequences for genuine advocates when disability and neurodivergence are co-opted or faked using advanced AI tools. The hosts share personal stories and broader reflections while calling out the danger of eroding trust in online communities.
Key Discussion Points & Insights
1. AI-Generated “Disability” Profiles
- [03:00] Simon introduces a CBS News article about AI-generated accounts on social media, especially Instagram, TikTok, and YouTube, impersonating people with Down syndrome and other disabilities. These accounts post “feel-good” messages and motivational captions and mimic the language of real advocates.
"Many of these artificial intelligence backed profiles are gaining followers faster than real disability advocates and they're making money from it." — Simon, 03:16
- [03:34] Simon: Many of these fake profiles do not disclose their use of AI and sometimes repackage real advocates’ words.
- [04:07] Jordan: Expresses concern and personal hurt, noting it’s possible their own articles and advocacy words are being stolen and used by AI imposters.
2. Stealing Advocacy for Profit
- [05:21] Jordan: Laments that AI-generated (and sometimes real) accounts are monetizing advocacy purely for profit, calling out the "pity pound":
"It's stealing our advocacy...I never have done [advocacy] for money. And the fact that they're doing it for money and getting money for it makes me even sicker because that's not what advocacy should be about." — Jordan, 05:23
- [06:07] Jordan: Distinguishes between earning money ethically (books, talks, real mentoring) and becoming greedy, especially with GoFundMe schemes exploiting disabled children’s images.
- [08:04] Jordan: Criticizes “greed” in monetizing disability, whether by AI or real people, calling fully faking a disability "fucked up."
3. Sympathy as Currency and Online Exploitation
- [09:59] Simon: Notes that sympathy is a powerful driver of online revenue and attracts both genuine and exploitative actors.
"Sympathy is a currency online." — Simon, 10:04
- [10:59] Jordan: Discusses personal experience with online harassment and eugenics rhetoric directed at disabled people.
- [12:11] Jordan: Explains that negative comments feel overwhelming even though measurable engagement (likes, hearts) skews far more positive, a disparity between how online sentiment feels and how it is actually counted.
4. Navigating Monetization in Advocacy
- [15:36] Jordan: Supports ethical ways of earning (e.g., selling affordable photography), but critiques those who ask for donations purely based on their disability, unless they’re offering real value.
"If you want to support my page...buy one of my pictures...anyone can afford it, and it also supports me." — Jordan, 16:07
5. AI and OnlyFans – Grim New Frontiers
- [08:57] Simon: Reports from the CBS article that some AI “disability” profiles monetize further through OnlyFans, making the scenario even more disturbing.
- [17:21] Jordan: Draws a firm ethical boundary: if a real disabled person chooses to sell content (even on OnlyFans), that’s their right, but pretending to have a disability to make money is unequivocally wrong.
6. Identity Theft, Catfishing, and Content Theft
- [19:03] Jordan: Shares experiences of people stealing their photography, using AI or simply copying images and pretending to be them online.
"I've had accounts pretend to be me. They literally have pretended to be me. And it's really, really messed up..." — Jordan, 21:25
- [22:07] Simon: Cites a podcast survey about potential “AI-only” shows, raising concerns about trust: if AI-generated imposters proliferate, how can listeners know whether the advocates they follow are real?
"You're now listening to podcasts and things like that where...you don't even know if the people that you're listening to once a week are even real." — Simon, 23:19
7. The Erosion of Trust
- [23:27] Jordan: Points to falling engagement with their photography as an example: people assume the images are AI-generated and disengage, undermining genuine artistic work.
"I've seen a huge drop off of...engagement with my photography because...somebody might go, 'oh, that's just AI'...I don't blame them. I blame the people who are using AI to make pictures." — Jordan, 25:51
- [25:51] Simon: Says “fake news” and “fake content” are now so prevalent that denial has become a catch-all excuse, including for public figures trying to escape accountability.
8. Nuanced Benefits of AI
- [26:17] Jordan: Acknowledges ways AI can empower disabled creators, e.g., non-speaking or deaf individuals using AI-generated voice or sign language to make podcasts. The problem is not the technology itself but concealment and exploitative intent.
"A real non-speaking individual...could make their own podcast using AI...That’s beautiful. It’s when it’s misused and...when people hide it. That’s the problem." — Jordan, 26:33
9. Rampant AI-Generated Fraud and Social Media Gullibility
- [28:00] Jordan: Describes viral AI-generated “poverty posts” (e.g., unlikely child “success” stories or inspiring images), often fooling less tech-savvy users.
- [29:30] Simon: Points to exponential advancements in AI-generated images and video, mentioning false news reporting based on such content.
Notable Quotes & Memorable Moments
- On the emotional toll:
"Every article that I write, every piece of writing is personal...it takes hours for me to get it right and put in my feelings and experiences...That they can make a fake podcast as well [is] stealing our advocacy." — Jordan, 04:07
- On online hate:
"I've been called all sorts of names under the sun...someone told me that I should never have had kids because I'm damaging the world by...putting autism into the world." — Jordan, 11:07
- On ethical advocacy:
"It's not when people make a living...it’s when people get greedy." — Jordan, 07:08
- On the dangers of AI impersonation:
"Isn't that half the battle that we're already fighting?...now people are going to start questioning whether they're even disabled at all." — Simon, 22:59
- Finding a silver lining:
"AI can be good...somebody is non-speaking, but they'd like to do a podcast. They could use AI to put their words...and that's not fake." — Jordan, 26:17
Selected Timestamps for Key Segments
- [03:00] — Introduction of CBS News article on AI-generated disability imposters
- [04:07] — Jordan’s personal perspective on content theft
- [05:21] — Stealing advocacy for profit and ethical issues
- [08:57] — AI “imposter” OnlyFans accounts discussed
- [11:07] — Impact of negative online comments and hate
- [16:07] — Jordan explains pricing for photography and ethical monetization
- [19:03] — Content/identity theft using stolen images
- [22:07] — AI podcasts, voice cloning, and trust issues
- [25:51] — Engagement drops due to AI skepticism
- [26:17] — Positive applications for AI in disability empowerment
- [28:00] — The rise of viral, dubious “poverty” and “inspiration” AI images
- [29:30] — AI-generated videos go viral, even fooling news outlets
Tone & Takeaways
The episode is reflective, frank, and at times biting, but ultimately constructive. Jordan and Simon voice deep frustration at unethical exploitation while advocating for understanding and ethical use of technology. Listeners come away warned about the practical and social hazards unfolding as AI evolves, especially its potential to harm real disabled and neurodivergent communities by muddying authenticity and trust.
Closing Thought:
With AI increasingly able to fake disability, real stories and advocates face suspicion, undermining crucial awareness work and exposing new vulnerabilities. Transparency, ethics, and solidarity are needed more than ever in both tech and advocacy.
