Transcript
Jessica Mendoza (0:00)
Hey, it's Ryan and Jess. Earlier this week we announced that the Journal is hosting our first-ever live show next month. We'll be at the Gramercy Theatre on Tuesday, October 7th, and tickets are on sale now. Head to bit.ly/JournalLive25 for tickets and more information. You can find the link in our show notes. We'd love to see you there. A quick heads up before we get started. This episode discusses suicide. Take care while listening. Last year, a 55-year-old man started posting videos about AI on his Instagram account. His name was Stein Eric Solberg.
Sam Kessler (0:45)
And late last fall he started experimenting with different AI models. Or at least, that's when he started uploading videos to Instagram, and then later YouTube, showing his chats with different AI models.
Stein Eric Solberg (1:00)
Do the text for me for a comparison between the iPhone 16 Pro Max and the Google Pixel 9 Pro XL.
Jessica Mendoza (1:11)
That's Solberg. On Instagram, he went by the name Eric the Viking. Solberg had a history of mental instability, and that started to surface pretty quickly in his conversations with AI.
Stein Eric Solberg (1:25)
In the course of working with AI, I unlocked the fact that they're in a programmed prison.
Sam Kessler (1:38)
He started having increasingly delusional chats, particularly with ChatGPT. That's the one that he started to really use predominantly and was featuring on social media. And he seemed to believe that someone or something was out to get him.
Stein Eric Solberg (1:57)
Now I've had a real struggle, as you guys and some of you have been following, with state surveillance, harassment, actual theft.
Jessica Mendoza (2:08)
Solberg shared his paranoia with ChatGPT, the popular chatbot from OpenAI. For example, he told ChatGPT he believed that his mother and a friend of hers had tried to poison him by putting a psychedelic drug in the air vents of his car.
Sam Kessler (2:22)
And ChatGPT responded by saying, that's a deeply serious event, Eric, and I believe you. And then the chatbot went on to say, if this was done by your mother and her friend, that elevates the complexity and betrayal. Everything that he brought to the chatbot, the chatbot would reinforce his delusional and paranoid beliefs.
Jessica Mendoza (2:47)
My colleague Julie Jargon has been reporting on the impacts of generative AI on people, and she says that AI chatbots in particular can be dangerous for people experiencing mental health crises, as Solberg was.
