WSJ Tech News Briefing: Who Owns Your Face in the Age of AI?
Date: October 17, 2025
Host: Julie Chang (WSJ)
Guests: Jin Jo Lee (Heard on the Street columnist), Nicole Nguyen (Personal Tech columnist)
Episode Overview
This episode tackles two timely tech topics:
- The speculative boom in zero-revenue energy companies, fueled by AI-driven demand for power.
- The rising risks and ethical quandaries of AI-generated digital likeness—who owns your face as generative video technology advances?
WSJ reporters provide context, analysis, and firsthand experiences with cutting-edge tools, flagging both the investor frenzy around next-gen energy startups and the privacy implications of powerful video AI apps like OpenAI's Sora.
Key Discussion Points & Insights
1. The AI Gold Rush—In Energy, Not Just Tech
(00:24–03:41)
Unprecedented Valuations for Zero-Revenue Startups
- Oklo, a nuclear-energy startup backed by Sam Altman, is valued at roughly $25 billion despite having no revenue, trading on unproven technology and ambitious goals.
- Fermi plans to build out capacity equal to all of New Mexico’s power generation, yet has secured only a small fraction of that.
- “It's the largest market cap for a listed US company with zero revenue.” —Jin Jo Lee (01:23)
Investor Motivation
- Sky-high valuations in profitable energy firms push investors toward early-stage, unproven bets with links to AI growth.
- The “AI boom” is seen as a catalyst that could legitimize and commercialize expensive, advanced energy solutions.
- “If there was ever a time for a new energy technology or an expensive energy technology to take off, then it would be now.” —Jin Jo Lee (02:50)
Risks of an AI Bubble Burst
- Established, profitable companies might weather a downturn—zero-revenue newcomers risk catastrophic share price drops.
- “Their share prices probably have the furthest to come down if the bubble bursts.” —Jin Jo Lee (03:28)
2. Who Owns Your Face? Dilemmas of AI-Generated Digital Avatars
(04:25–09:29)
The Rise of Sora and Other AI Video Tools
OpenAI Sora: How It Works
- An invite-only iOS app that generates short, realistic videos, including avatars of the user, from brief text prompts.
- Nicole Nguyen trialed it, noting she could create a “digital avatar” after “look[ing] at a camera, speak[ing] three numbers aloud, mov[ing] my head around.” (05:32)
- “It did look and sound a lot like me that I could put in any scenario that I could dream up.” —Nicole Nguyen (05:38)
Believability—The Good, the Bad, and the Uncanny
- Likeness varies: sometimes eerily accurate in detail, other times off-putting or inconsistent (e.g., odd teeth, unnatural comments).
- “In some, my facial expressions...the texture to my skin was perfect… In others, my teeth look really off. And it said things that I would never say.” —Nicole Nguyen (06:27)
Dangerous Flexibility and Potential for Abuse
- Generative AI is unpredictable: identical prompts yield different results, making output difficult to control.
- The capacity to put anyone’s likeness in fabricated, contextless, or damaging scenarios raises deep concerns.
Company-Provided Privacy Protections
Cameo Feature Limits
- Users can see and delete any video drafts where their avatar is used—even before publishing.
- Content featuring you cannot, in theory, be downloaded or screen-recorded.
- “You will always have visibility into their drafts, even if they don't publish those videos. And you can delete them at any time.” —Nicole Nguyen (07:19)
- Nevertheless, workarounds exist.
Who Is Fair Game? Issues of Consent and Copyright
- Living celebrities (e.g., Taylor Swift) and major IP (like Star Wars) are blocked; historical figures such as Martin Luther King Jr. are not.
- Nicole saw “dead celebrities” used in questionable contexts:
- “In one of the creepiest videos that I saw, [...] [MLK Jr.] said, ‘I have a dream that Sora will change its content violation policy,’ which is something that Martin Luther King Jr. never said.” —Nicole Nguyen (07:56)
Risks for Individuals
Loss of Control Over Likeness
- Once images are uploaded, users often forfeit significant control, and AI tools amplify risks of manipulation and viral spread far beyond old-school Photoshop.
- Most tools’ Terms of Service require users to have rights to any uploaded photos—but enforcement is lax and open to abuse.
- “Once you hand your likeness over to these AI apps, you lose control of them.” —Nicole Nguyen (08:24)
Training Data Concerns and Controls
- Major tools let users opt out of content being used for model training, but settings require user action.
- “OpenAI says it takes steps to protect user privacy...you can prevent the company from using your content to train its models.” —Nicole Nguyen (09:18)
Memorable Quotes & Notable Moments
- “If the AI bubble bursts, their share prices probably have the furthest to come down.” —Jin Jo Lee (03:28)
- “All I had to do was look at a camera, speak three numbers aloud, move my head around, and then it created this impersonation of me, this digital avatar…” —Nicole Nguyen (05:32)
- “Once you hand your likeness over to these AI apps, you lose control of them.” —Nicole Nguyen (08:24)
- “He said, ‘I have a dream that Sora will change its content violation policy,’ which is something that Martin Luther King Jr. never said, but it was definitely in his voice and in his likeness.” —Nicole Nguyen (07:59)
Timestamps for Important Segments
- 00:24 — Introduction to speculative energy startups and their ties to the AI boom
- 01:18–03:41 — Analysis of Oklo and Fermi valuations; risks of AI-driven investment bubbles
- 04:25 — Introduction to Sora and new AI video tools
- 05:14–07:05 — Nicole Nguyen’s firsthand review of generating digital avatars and their limitations
- 07:12–08:17 — Privacy features, restrictions, creepy AI reenactments of historical figures
- 08:21–09:29 — Broader risks: loss of control, data sharing, and model training concerns
Takeaways
- Generative AI is revolutionizing both energy investment and digital media—sometimes recklessly.
- Tools like Sora make it trivially easy to fabricate highly realistic fakes of anyone or anything, with safeguards still lagging behind.
- Users, lawmakers, and tech companies urgently need to clarify who controls digital likenesses and how to rein in misuse before deepfakes and unauthorized avatars become commonplace.
