Loading summary
IBM Representative
So there's a lot of noise about AI, but time's too tight for more promises. So let's talk about results. At IBM, we work with our employees to integrate technology right into the systems they need. Now a global workforce of 300,000 can use AI to field their HR questions, resolving 94% of common questions. Results, not noise. Proof of how we can help companies get smarter by putting AI where it actually pays off, deep in the work that moves the business. Let's create smarter business.
Chase Bank Representative
Small businesses are the pulse of every community. They bring people together, create opportunities, and drive growth. Chase for Business helps business owners like you with personalized guidance and convenient digital tools, all in one place. With that guidance and your determination, you can take your business farther and help build a brighter future for your community. Learn more at chase.com/business. Chase for Business: make more of what's yours. The Chase Mobile app is available for select mobile devices. Message and data rates may apply. JPMorgan Chase Bank, N.A. Member FDIC. Copyright 2026 JPMorgan Chase & Co.
The Hartford Representative
When you're running a business, the best days are the ones where priorities stay on track. For midsize and large companies, risk can affect multiple parts of the organization at once, from property and liability to cyber and regulatory challenges. At that level, managing risk becomes an ongoing discipline. At The Hartford, the focus is on helping businesses manage risk before it turns into something more disruptive. And when losses do happen, that work is paired with insurance coverage shaped by years of underwriting, risk engineering, and claims experience. Learn more at thehartford.com/riskmitigation. Policies provided by Hartford Fire Insurance Company and its property and casualty affiliates, Hartford, Connecticut.
Bloomberg Host (possibly Tim Stenovec)
Bloomberg Audio Studios. Podcasts. Radio. News.
Bloomberg Host (possibly Carol Massar)
You're listening to Bloomberg Businessweek with Carol Massar and Tim Stenovec on Bloomberg Radio.
Bloomberg Host (possibly Tim Stenovec)
Well, the AI spend and build out is on. You know that we talk about it a lot here at Bloomberg. Lots of questions still abound over the impact on the workforce. Does it destroy some jobs? Does it create others? Or questions on how much more productive we become? Will it create problems for younger individuals? Might it destroy our creativity? But might it spark it?
Bloomberg Host (possibly Carol Massar)
Might we get discoveries when it comes to health care and pharmaceuticals?
Bloomberg Host (possibly Tim Stenovec)
That's right. So the yin and yang, right, of AI, and we're just kind of watching this in real time. Wired reported out last night about some work that MIT researchers are doing, led by Patty Moss. She's a professor at the Institute's Media Lab. They've worked on a new benchmark that could help AI developers build systems that better understand, Tim, how to inspire healthier behavior among users.
Bloomberg Host (possibly Carol Massar)
We wanted to know more. Let's get to the Businessweek women's health segment, where we focus on key issues in developing technologies impacting the present and future of women's health around the world. We head to the MIT Media Lab in Cambridge, Massachusetts, and to Dr. Patty Moss, professor of Media Arts and Sciences and head of the Fluid Interfaces Group. She does research at the intersection of human-computer interaction and AI. So let's talk about these LLMs, these models, and how we can interact with them to actually benefit our health, because so much of what we've heard about has been the detrimental effects on our health and well-being.
Dr. Patty Moss
Yeah, pleasure to be here. Basically, LLMs today, of course, are everywhere. They're being pushed to us in our text editors, Google search, all of that. But we still don't know what the impact is of using these chatbots for assistance in our daily lives. One of the things that we are looking at here at the MIT Media Lab is exactly the impact of using these technologies day in, day out on our cognitive functioning, on our ability to think critically, our ability to learn, our ability to solve problems, our socializing, our loneliness, and so on.
Bloomberg Host (possibly Tim Stenovec)
Yeah, you know, that's the thing. One of the things I think about, Dr. Moss, is we did so well in keeping on top of social media. So it did no harm.
Bloomberg Host (possibly Carol Massar)
She's being sarcastic.
Bloomberg Host (possibly Tim Stenovec)
Thank you so much. Right, though. So my fear is how do we keep on top of this? So tell us about the work you are doing to try and get a handle on how it might impact our brain, our psychological health, and how that might inform the companies that are in the thick of it.
Dr. Patty Moss
Yeah. So we've been doing studies here at the MIT Media Lab showing that when you use these technologies to help you with a particular work task or with writing an essay, et cetera, it may actually improve your performance in the moment when you have access to these tools. But it can actually erode some of those same skills in the long term if you rely on these technologies for too much of your work. Basically, it's really important that we think about how these technologies should be integrated in our daily life and in our work lives in such a way that they don't erode skills that we don't want to lose. And that's exactly what we try to focus on with our work.
Bloomberg Host (possibly Carol Massar)
So how do we do that when we are encouraged to use these? I mean, you have people come on our program and say, if you're not using these tools, then you are essentially, you know, erasing yourself from the workforce in the future. Yeah, but there's that balance: we want to make sure we continue to be able to think.
Dr. Patty Moss
Yeah, well, the emphasis of LLM development has been especially on increasing their capacity, their efficiency, things like that. But it hasn't exactly been on the design of these systems, designing them in a way so that ultimately the benefit for people is a net positive, especially in the long term. We believe that we actually have to increase the friction in these systems and that they shouldn't too readily just do all the work for us, but they should play a more augmentative role, basically, and support us in also developing our skills and keeping our skills up to a certain level.
Bloomberg Host (possibly Tim Stenovec)
So what do we do? Is it create guardrails? Like, when we ask a question that's pretty easy: don't you want to do that on your own? And, you know, then there are those of us who are going to be like, yeah, it might be easy, but I don't have the time. Like, I just wonder, how do we actually manage it? And, you know, I was going to say, we're in a world where the government, and you have, you know, is it OpenAI, right, that has been pushing back about concerns about weapons that can ultimately, you know, or systems that can think on their own, so.
Bloomberg Host (possibly Carol Massar)
Or anthropic.
Bloomberg Host (possibly Tim Stenovec)
Anthropic, forgive me, anthropic is who we're thinking about. But like this pushback and the concern about what this means for weapons. But I do think about these systems that can maybe almost have a life personality of their own and what damage they can create, especially for a younger population, but not just younger, all of us.
Dr. Patty Moss
Yeah, well, these systems are not necessarily 100% correct yet, all the time. They still hallucinate. They don't really deeply understand our world. And so it's important that we still are able to understand things, solve problems, and that we can supervise these systems so we know when they may be incorrect or when we maybe shouldn't be trusting them. One of the things we do here at MIT is we've been developing a set of tests that actually evaluate the impact of LLM use on people, on their ability to think critically, for example, and to think for themselves. Very important that LLMs don't just always give you the answer. Depending on the context and who's the user and what they're using the LLM for, it may be better for that chatbot to just ask a question first of the user, to engage them in thinking. Because our tests show that when an LLM just gives you the answer readily, people stop thinking for themselves. They're naturally a little bit lazy. They stop.
Bloomberg Host (possibly Carol Massar)
Dr. Moss, does that align with the incentives of these LLMs?
Dr. Patty Moss
No, it doesn't exactly align with the incentives, which is another reason why we are developing these benchmarks, the benchmark for the human impact of AI. Because if we have a benchmark like that, we can put pressure on these companies to actually do well on that particular human impact benchmark.
Bloomberg Host (possibly Tim Stenovec)
Are you...
Dr. Patty Moss
We've only got... Go ahead.
Bloomberg Host (possibly Tim Stenovec)
I was going to say, there's only 20 seconds left. Are you optimistic that we're going to be able to manage this, or are you more nervous? And just very quickly.
Dr. Patty Moss
I'm optimistic, actually. I think we're going a little bit too quickly with rolling out these systems to billions of people, honestly, and affecting their lives in a big way. The science is only now catching up, but I believe that we can design these systems in a way that they actually lead to human flourishing and people doing well.
Bloomberg Host (possibly Tim Stenovec)
Be sure to come back soon. Because I do feel like, again, we're learning as we go, but it's an important subject. Dr. Patty Moss at the MIT Media Lab in Cambridge, Massachusetts.
IBM Representative
The thing about AI for business: it may not automatically fit the way your business works. At IBM, we've seen this firsthand. But by embedding AI across HR, IT, and procurement processes, we've reduced costs by millions, slashed repetitive tasks, and freed thousands of hours for strategic work. Now we're helping companies get smarter by putting AI where it actually pays off, deep in the work that moves the business. Let's create smarter business. IBM.
Cigna Healthcare Representative
For many men, mental health challenges aren't recognized until they've already taken a toll. Work pressure, financial stress, changing relationships, and traditional expectations around masculinity can quietly wear men down, often without clear warning signs. In season three of the Visibility Gap, Dr. Guy Winch and his guests explore how these pressures show up, how to spot them earlier, and how men can access meaningful support. Listen to the new season of the Visibility Gap, a podcast presented by Cigna Healthcare.
Public.com Representative
Support for the show comes from Public. Public is an investing platform that offers access to stocks, options, bonds, and crypto. And they've also integrated AI with tools that can assist investors in building customized portfolios. One of these tools is called Generated Assets. It allows you to turn your ideas into investable indexes. So let's say you're interested in something specific, like biotech companies with high R&D spend, small-cap stocks with improving operating margins, or the S&P 500 minus high-debt companies. Chances are there isn't an ETF that fits your exact criteria. But on Public, you just type in a prompt and their AI screens thousands of stocks and builds a one-of-a-kind index. You can even backtest it against the S&P 500. Then you can invest in a few clicks. Go to public.com/market and earn an uncapped 1% bonus when you transfer your portfolio. That's public.com/market. Paid for by Public Holdings. Brokerage services by Public Investing, member FINRA/SIPC. Advisory services by Public Advisors, an SEC-registered advisor. Crypto services by Zero Hash. Sample prompts are for illustrative purposes only, not investment advice. All investing involves risk of loss. See complete disclosures at public.com/disclosures.
Wasabi Representative
Wasabi is purpose-built to free your business from skyrocketing storage costs and fees from the big guys. Wasabi is the go-to provider for professional and collegiate sports teams around the world. Check out Wasabi's AI-enabled intelligent media storage, Wasabi AiR, and the industry's only cloud storage service with triple protection against cybercriminals. Wasabi: driving innovation in data storage for up to 80% less than the market competition. Try for free at wasabi.com. Wasabi Hot Cloud Storage, proud partner of the iHeart Podcast Network.
Ryan Reynolds
Ryan Reynolds here from Mint Mobile. I don't know if you knew this, but anyone can get the same premium wireless for $15 a month plan that I've been enjoying. It's not just for celebrities. So do like I did and have one of your assistant's assistants switch you to Mint Mobile today. I'm told it's super easy to do at mintmobile.com/switch.
Mint Mobile Representative
Upfront payment of $45 for 3-month plan (equivalent to $15 per month) required. Intro rate for first 3 months only, then full-price plan options available. Taxes and fees extra. See full terms at mintmobile.com.
Podcast: Bloomberg Businessweek
Hosts: Carol Massar, Tim Stenovec
Guest: Dr. Patty Moss (MIT Media Lab)
Release Date: May 8, 2026
In this episode, Carol Massar and Tim Stenovec dive into the evolving intersection of artificial intelligence and human psychology. The discussion centers on AI’s impact on the workforce, creativity, critical thinking, and health—particularly as large language models (LLMs) are increasingly woven into our daily lives and workplaces. The episode features Dr. Patty Moss, professor and head of the Fluid Interfaces Group at MIT Media Lab, who shares insights from her research on the cognitive and social effects of AI, as well as emerging benchmarks intended to guide the responsible integration of these technologies.
AI as a Ubiquitous Tool:
Dr. Moss highlights that LLMs, such as chatbots, have become prevalent in our daily environments—from text editors to search engines—yet we don't fully grasp their long-term psychological effects.
[03:23]
“LLMs today, of course, are everywhere. They're being pushed to us in our text editors, Google search, all of that. But we still don't know what the impact is of using these chatbots for assistance in our daily lives.”
— Dr. Patty Moss
Risks to Human Capabilities:
While LLMs can improve performance on specific tasks, over-reliance risks eroding skills like problem-solving, critical thinking, and self-initiation.
[04:42]
“It may actually improve your performance in the moment when you have access to these tools. But it can actually erode some of those same skills in the long term if you rely on these technologies for too much of your work.”
— Dr. Patty Moss
Need for Friction & Design Considerations:
The discussion emphasizes that AI tools should support, not supplant, human capabilities. Dr. Moss advocates for intentionally adding "friction" so that users remain engaged and skillful, rather than passively accepting AI outputs.
[05:53]
“We actually have to increase the friction in these systems and that they shouldn't too readily just do all the work for us, but they should play a more augmentative role… and support us in also developing our skills and keeping our skills up to a certain level.”
— Dr. Patty Moss
Guardrails and Customization:
Tim Stenovec asks whether creating limits or encouragements—such as prompting users to solve simple questions themselves—can protect skills. Dr. Moss agrees and describes efforts to build benchmarks that measure AI’s impact on human thinking, emphasizing that LLMs should sometimes ask questions back to the user, fostering active engagement.
[07:34]
“Very important that LLMs don't just always give you the answer. Depending on the context and who's the user and what they're using the LLM for, it may be better for that chatbot to just ask a question first of the user to engage them in thinking.”
— Dr. Patty Moss
Misalignment with Current Business Incentives:
Dr. Moss notes that current industry incentives drive LLM developers toward speed, efficiency, and widespread adoption, but not necessarily toward maintaining or improving human skills.
[08:55]
“No, it doesn't exactly align with the incentives, which is another reason why we are developing these benchmarks, the benchmark for the human impact of AI… so we can put pressure on these companies to actually do well on that particular human impact benchmark.”
— Dr. Patty Moss
Role of Benchmarks:
By creating measures focused on human flourishing and cognitive impact, researchers hope to nudge the industry toward AI systems that enhance, rather than erode, human capacities.
“I'm optimistic, actually. I think we're going a little bit too quickly with rolling out these systems to billions of people honestly and affecting their lives in a big way. The science is only now catching up, but I believe that we can design these systems in a way that they actually lead to human flourishing and people doing well.”
— Dr. Patty Moss
Sarcastic Commentary on Social Media:
At one point, Tim jokes about successfully handling the negative effects of social media:
[04:12]
“We did so well in keeping on top of social media. So it did no harm.”
— Tim Stenovec
Carol Massar quickly interjects with,
“She's being sarcastic.”
On Younger Generations & AI Risks:
[07:17]
“But I do think about these systems that can maybe almost have a life personality of their own and what damage they can create, especially for a younger population, but not just younger, all of us.”
— Tim Stenovec
The episode brings a nuanced look at the ongoing AI revolution, advocating for deliberate, responsible innovation. Through Dr. Moss’s MIT research, listeners are reminded that while AI offers tangible benefits, its design must prioritize long-term human well-being. The integration of “friction” and human impact benchmarks may become key strategies in ensuring AI augments rather than replaces our most critical mental skills.
(All timestamps in MM:SS format. Dialogue may be lightly edited for clarity.)