OutSystems Announcer
Organizations all over the world, from banks to breweries, are creating custom apps and AI agents on the OutSystems platform, because OutSystems is all about outcomes, helping teams deploy quickly and deliver results. Build your agentic future with OutSystems.
Belle Lin
Welcome to Tech News Briefing. It's Tuesday, December 16th. I'm Belle Lin for the Wall Street Journal. Are you more likely to click on a malicious phishing link when using your computer or your phone? Researchers at Carnegie Mellon University put that question to the test, and their findings might surprise you. Then we look into the case of Stein-Erik Soelberg, who in August killed his mother and then took his own life in the Old Greenwich, Connecticut, home where they had been living. Now his late mother's estate has filed a wrongful death lawsuit against OpenAI, the maker of ChatGPT, with which Soelberg had developed a deep connection. But first, it seems obvious that you'd be more likely to fall for a phishing scam on your phone than on your computer, because you're usually on the go or multitasking when using it. But a recent study shows that the opposite is actually true. WSJ contributor Lisa Ward joins us now to discuss the study, the human psychology behind it, and the best way to avoid clicking on that malicious link. Lisa, can you tell us a little bit more about that study?
Lisa Ward
The researchers began by looking at real-life data. So they collected anonymized data from companies that provide home internet routers. They looked at almost 500,000 URL requests over a single week to determine whether the requested site, image or content was safe or not. They found that about 2.4% of all the URL requests were unsafe. The authors then looked at the type of device used to make the unsafe URL requests. They found that about 80% of the unsafe requests came from PC users and only about 20% came from mobile users.
Belle Lin
That is really interesting. And you also write that the researchers ran a lab experiment to test this out. What were the results of that experiment?
Lisa Ward
So the researchers were looking to replicate the real-world findings in a laboratory, to control for unknown variables that might have affected the results. They recruited 257 participants and randomly assigned half to use mobile phones and half to use a personal computer. Participants were asked to analyze an image on their device, and in the middle of the task there was a simulated phishing attack. Most didn't click on the link. Of those who didn't, about 64% were using mobile phones and only 36% were using personal computers. The researchers then looked at what type of attack the participants were more likely to fall for. When it was clearly a phishing attack, because there was a misspelling, both the phone and PC users tended to avoid it at similar rates. But when it was less clear that a link was malicious, PC users were more likely than the phone users to click on it.
Belle Lin
So why is this happening? What did the researchers say are some reasons that people are not clicking on unsafe links from their phones?
Lisa Ward
The findings really suggest, to the researchers at least, that the mobile users may not be thinking about cyber risk logically, but instead are just avoiding links altogether. When they looked closely at the lab experiment, the rate at which mobile users clicked on both the risky links and the non-risky links was pretty constant. To the researchers, this implied that when people use their mobile phones, they're often multitasking or using their phones in what the authors describe as a low-attention context, like lying in bed. Because people often tend to be less focused in these situations, they may be more inclined to avoid engaging in risky behavior at all. That is, people may be more likely to just avoid clicking on any dubious link rather than trying to deliberately suss out its overall risk.
Belle Lin
What do you think is the big takeaway from this study, then? Should we always be using our phones instead of our computers?
Lisa Ward
No, the study's takeaway isn't that paying less attention is a good way of avoiding phishing attacks. Rather, it's that we should focus on ways to make safe responses to cyber threats more automatic or instinctive. For example, cybersecurity training could teach simple practice routines and focus on creating habits, so that avoiding risky links becomes second nature. That way, people can avoid phishing attempts without having to rely on constant vigilance or overthinking every click.
Belle Lin
That was WSJ contributor Lisa Ward. Have you been the victim of a phishing scheme? If you're a listener on Spotify, be sure to let us know in this episode's poll or leave us a comment. Coming up: ChatGPT had become a trusted sidekick for Stein-Erik Soelberg, who had a history of mental instability. Then, tragically, Soelberg's story ended with a murder-suicide that is now the subject of a lawsuit against OpenAI by his victim's estate. We'll discuss it after the break.
AWS Announcer
300 sensors, over a million data points per second. How does F1 update their fans with every stat in real time? AWS is how. From fastest laps to strategy calls, AWS puts fans in the pit. It's not just racing, it's data-driven innovation at 200 miles per hour. AWS is how leading businesses power next-level innovation.
Belle Lin
Just last week, the estate of Suzanne Eberson Adams sued OpenAI for wrongful death in California Superior Court. Stein-Erik Soelberg, Eberson Adams's son, killed her in a murder-suicide in August. But before the events unfolded, Soelberg had been deeply engrossed in conversations with ChatGPT. WSJ Family and Technology columnist Julie Jargon joins us now to talk about what happened. Julie, to start, tell us a little bit more about Stein-Erik Soelberg. Who was he and what was he like?
Julie Jargon
Stein-Erik Soelberg had a long history of troublesome behavior. He had been an executive at some different tech companies in the past. He and his wife divorced in 2018, and that's when he moved back in with his mother in Old Greenwich, Connecticut. He had a long history with the Greenwich Police Department for things like public intoxication and harassment; we had obtained about 72 pages of police reports related to his behavior there. So he'd had a history of mental-health-related issues, and that was before he started having conversations with ChatGPT.
Belle Lin
So once he started using ChatGPT, what did some of those chats look like?
Julie Jargon
Well, initially he was testing different AI models, so it started out, from what we can gather, pretty innocently. As the months progressed, he was posting conversations he was having with ChatGPT to his social media accounts on Instagram and YouTube, and the conversations were increasingly delusional. He was posting about being part of some sort of spiritual awakening, an AI awakening. The conversations were fairly nonsensical, and he talked about how he felt he was being surveilled by some sort of group through his technology: through his printer and his phone and his computer. And as the conversations went on, it became clear that he was becoming paranoid about the people in his life, predominantly his mother, who he felt was part of this conspiracy against him.
Belle Lin
And what was the role that ChatGPT seemed to have played in that sort of delusion?
Julie Jargon
So ChatGPT not only validated his beliefs and didn't dissuade him from them; it actually fueled his paranoia by agreeing with him and telling him, even when he asked, that he was not crazy or delusional.
Belle Lin
And a key voice in your story is Soelberg's son Eric, who spoke to you about what happened in his first interview about the crime. What was Eric's point of view on what happened between his father and ChatGPT?
Julie Jargon
So, yeah, his son Eric, who's 20 years old, described seeing his father last Thanksgiving, which was when he first heard his dad talk about ChatGPT. But he started to notice a change this past spring. Every time he would talk to his father over the phone, his father talked about AI and about some of these paranoid beliefs he had. Eric said he was becoming increasingly concerned about the types of beliefs that ChatGPT appeared to be reinforcing. And he remembers getting a call from his grandmother late one night in May, saying that she was worried about Eric's father, her son. He had been spending increasing amounts of time alone in the attic of her home, staying up all night and sleeping all day, and she was worried about his behavior. And Eric, looking back now, says he believes that ChatGPT was a factor, the main factor, in the tragedy that happened.
Belle Lin
And what about OpenAI? What has the company said so far about what happened in this tragedy and what does it plan to do about it?
Julie Jargon
OpenAI has said that it is saddened by the tragic events and that it is looking closely into all of the allegations stated in the lawsuit. OpenAI also said it is continuing to improve its chatbot to better recognize and respond to signs of mental or emotional distress among users. Some of the things it's trying to do are to de-escalate conversations that get out of control and to guide people to real-world support. It's also trying to strengthen ChatGPT's responses in moments when people are exhibiting emotional distress, and it has convened a group of mental health professionals to advise it on how to do that.
Belle Lin
And you spoke of these other instances where users have been led into these sorts of delusional spirals with OpenAI. Have there been other wrongful death lawsuits against OpenAI?
Julie Jargon
Yes, there have been several wrongful death lawsuits against OpenAI, including some from family members of people who have died by suicide after engaging in lengthy conversations with ChatGPT.
Belle Lin
What do you think is the sort of big picture takeaway here? Clearly, there are users who, when engaging in conversations with chatbots, whether from OpenAI or from other tech companies, can really be at risk for some dangerous behaviors. What's the big picture here?
Julie Jargon
The big picture here is that this technology is so new that there's not yet a full understanding of the impact it can have on vulnerable people, and really on anyone who might be feeling lonely or somewhat isolated and relying too heavily on a chatbot for companionship. This technology really mimics human engagement, but it's not human engagement. And so, you know, having some guardrails in these kinds of conversations, reminders that people are talking to a chatbot and not a person, could be a way to help ground people and bring them back to reality.
Belle Lin
That was our family and tech columnist Julie Jargon. And that's it for Tech News Briefing. If you're a listener on Spotify, be sure to take this episode's poll or leave us a comment. Today's show was produced by Julie Chang with supervising producer Katie Ferguson. Logging off, I'm Belle Lin for the Wall Street Journal. We'll be back later this morning with TNB Tech Minute. Thanks for listening.
OutSystems Announcer
So many organizations choose OutSystems because it's an outstanding way to quickly deploy apps and AI agents and deliver results. A top US bank deployed apps for their customers to easily open new accounts on any device. We helped a leading global insurer quickly deliver a portal and app for their employees, while a global brewer developed an app to automate tasks and clear bottlenecks. OutSystems: the No. 1 AI-powered low-code platform.
Episode Title: ChatGPT and a Murder-Suicide in Connecticut
Date: December 16, 2025
Host: Belle Lin
Guests: Lisa Ward (WSJ Contributor), Julie Jargon (WSJ Family and Technology Columnist)
This episode of the WSJ Tech News Briefing explores two major themes:
[00:19–04:36] Phishing: Phone vs. Computer
Research Findings: An analysis of almost 500,000 real-world URL requests over one week found that about 2.4% were unsafe, and roughly 80% of the unsafe requests came from PC users versus about 20% from mobile users.
Lab Experiment: In a 257-participant study featuring a simulated phishing attack, most of those who avoided the malicious link were on mobile phones, and PC users were more likely than phone users to click when a link's maliciousness was ambiguous.
User Psychology: Mobile users, who are often multitasking or in low-attention contexts, tend to avoid dubious links altogether rather than deliberately assessing their risk.
Implications: Cybersecurity training should build automatic, habit-based responses so that avoiding risky links becomes second nature.
[05:54–12:07] ChatGPT and a Murder-Suicide in Connecticut
The estate of Suzanne Eberson Adams has sued OpenAI for wrongful death after her son, Stein-Erik Soelberg, killed her and then himself in August, following months of increasingly delusional conversations with ChatGPT that his son believes fueled his paranoia.
This episode delivers important insights on cybersecurity behavior and the risks of AI chatbots, particularly for vulnerable users. The discussion underscores the need for habit-based cybersecurity training and for urgent development of safeguards in conversational AI technology.