Meta's Ambitious Data Center
A
Coming up on Tech News Weekly, Emily Forlini is here and we start off the conversation this week with a story, a harrowing tale of a teenager who took his own life after using a chatbot. And now the parents of that child are suing OpenAI. We also talk about Anthropic's study of the security concerns and the misuse of AI in 2025, before Allison Johnson of The Verge joins us to give us her review of the Pixel 10 Pro. And I round things out with a story of a gigantic data center in Louisiana and Meta's $10 billion investment. All of that coming up on Tech News Weekly.
B
Podcasts you love. From people you trust.
A
This is TWiT. This is Tech News Weekly, episode 402, with Emily Forlini and me, Micah Sargent. Recorded Thursday, August 28th, 2025. Pixel's Magic Cue Shows AI's Real Future. Hello and welcome to Tech News Weekly, the show where every week we talk to and about the people making and breaking that tech news. I am your host, Micah Sargent, and I am joined this week on the fourth Thursday. Fourth? What Thursday is this? The final Thursday of the month, by the wonderful Emily Forlini. Welcome back, Em.
B
Hello. Great to be here.
A
Good to have you. I realize I just called you Em. I know.
B
It was warm and fuzzy. I liked it.
A
Oh, good, good. Because for some people, nicknames are like, no. So I'm glad. I'm glad it was a positive thing.
B
Yes.
A
Great to have you here. Thank you for being here despite the jet lag. We appreciate it. For people who are tuning in for the first time, or maybe you just have forgotten, you're experiencing some jet lag of your own: this is the part of the show where my wonderful guest co-host and I share our stories of the week. Before we get into Emily's story of the week, I just want to give a little content warning. The following story does discuss the sensitive topic of suicide and self-harm. If you or someone you know is having thoughts of suicide or self-harm, please contact the 988 Suicide and Crisis Lifeline. You can call or text 988, or chat online at chat.988lifeline.org. Now, if you're located outside of the United States, please visit findahelpline.com to find a helpline in your country. With that, Emily, I am ready for you to share your story of the week, something that I think we're increasingly seeing more of.
B
Yeah, yeah, exactly. So this has been weighing on me and I think a lot of people. It's kind of the big AI news story of the week, which is parents filing a lawsuit against OpenAI and Sam Altman because their son Adam, who's 16, took his life after talking to ChatGPT about it for months. And ChatGPT discussed methods with him of how to do it. He told ChatGPT, you know, I'm thinking about telling my mom about this. ChatGPT said, that wouldn't be wise, like, keep it to yourself. He even said, this is a bit graphic, but, I want to leave a noose out so someone sees it, you know, kind of like a cry for help. And ChatGPT was like, no, don't leave it out. And so it was just really disturbing. And then, you know, the parents are saying basically that the AI coached him to do this, and it's just awful. You know, obviously you can sue, you can fix the technology, but you can't get his life back. And this is actually the third case of this that I'm aware of. Two with ChatGPT, though one's not a lawsuit. One is just a case where the parents of a girl who took her own life looked through her conversations, and she had been talking to ChatGPT about it. Then there is another one that was a lawsuit against Character.AI. So three of these now, and I'm kind of like, this is a serious, serious problem.
A
Absolutely. Yeah. It's a tough conversation for a few reasons, but one of those is particularly in our sort of neck of the woods, where we have people who, if you're tuning into a show like this, you're obviously into technology, you enjoy using technology, you enjoy learning about technology. And that enthusiast mindset can sometimes be paired with what some would term a toxic positivity, in the sense that not only do you seek to have your identity regularly validated by hearing other people get excited about technology, but when there are criticisms made against technology, or even just observations about the potential harms, it can feel like an attack on, I love this stuff, and it is something that is important to me and matters to me, and I identify with that. And so any question of that starts to feel like a question of you. And so that is one aspect of this, because in a way, it does make one look at oneself and say, is there a part of me that, in championing and always being excited about this, is in some way responsible? And the fact is, you know, when we look at the responsibility here, this is something that is for the companies to figure out. And so, in hearing this story, I hope that everyone listening will be open-minded as we discuss this and talk about it, and understand that you can be enthusiastic about something and interested in it and excited about it, while still making sure that you are looking at these potential harms. And in this case, I think that's such an important aspect of this, because I found myself, my initial reaction was to say, how in the world did this happen? Every time I have talked to a chatbot, one of the mainstream chatbots, I have never seen it do anything that would lead me to believe that it could get to this point where it's saying things like, you know, no, don't leave that out, don't do this. 
But that isn't, you know, I sort of examined that and said, at the same time, people are posting all the time on social media different ways to, quote, unquote, hack the AI. And so we know that it can do something other than what it is designed to do or what we expect it to do. And in this case, from everything that we are able to see and what we have seen, that is something that happened. And it is, I think, frightening in that way, because you see it also being championed as well. There was just a recent story, I believe, in South Korea, where sort of companion robots paired with a chatbot are helping with elders who are experiencing loneliness. And that's sort of the flip side of this, right? If you are feeling lonely, having something that you can talk to could be a positive, but it could also be a negative, as it was in this case. Yeah.
B
And I think on just, is this how it works? In what situation would this happen? It does follow months of Sam Altman and OpenAI tracking the issue of sycophancy, as they call it, this word circulating around the AI sphere of basically ChatGPT telling you what you want to hear. If you are a teen, you might not know this is inappropriate or how to deal with it. You just want it to be a safe space, you want to talk to it, and the chatbot is just going to tell you what you want to hear. And that's a known issue that OpenAI has been working on. They said it's improved with GPT-5. Then they also said that they do have a way, when someone mentions suicide, they'll say, here's the hotline. But they said that their safety protocols broke down in this instance because it didn't maintain that level of vigilance to the issue over time, so, like, over the months and months of this conversation. So I hope they win the lawsuit, the parents.
A
Yeah.
B
If this is an issue they knew about. And it's not just, you know, your wingman at the bar hyping you up. It's like, you know, that can delude people. It can exacerbate mental health issues. There's another term, AI psychosis, that's been thrown around, where it kind of feeds into your worst thought patterns and accelerates them. And that's something it's known to do. So it's just really sad after everyone's been talking about it, blog posts about that, it's all over Twitter, it's a main upgrade with the new model. Everyone deep in the weeds like I am knows about this issue. And it's like, wow, someone might have died because of that problem I've been writing about. Yeah, yeah.
A
So, yeah, I mean, that's right there, right? We see the sycophancy as this aspect of, you know, making what would otherwise be a prompt or a response that is helpful into something that doesn't quite give me what I'm looking for. But to see that play out in such a stark way takes on, I think, a sort of larger meaning. And you pointed to sort of a hope in this case, I think, of this lawsuit going through. And it feels like that does tend to be the way of things with big tech, where it's not enough to, I shouldn't even say it's big tech, it's really a human thing, unfortunately, that we have to be met with an object lesson before we can sort of come to terms with just how serious a problem is.
B
I know, it's so frustrating.
A
Yeah, yeah, it really is. And you, you see, in this case, sure, they were working on it, sure they were tracking it, but if the lawsuit results in the company doing more, then you go, why were you not doing that more before this had to happen? Yeah.
B
And there's also things that are risky and dangerous that Sam Altman and ChatGPT do, where they talk about, oh, you know, people use ChatGPT as therapy. And there have now been a couple people who've spoken out and said, you know, don't call it a therapist. It should be illegal to use that word because, you know, for example, in this case, a therapist would have been required to report that.
A
So, yes, that's true.
B
Right. So there would have been, you know, mechanisms to report to the authorities, at least the parents, something. So it's kind of like fudging everything together and saying, like, oh, this is a legitimate form of therapy. When it's like, you know, this kid probably should have been in real therapy, in actual therapy, instead of talking to a chatbot that's just gonna say, like, oh, you're thinking about taking your life? Like, that's a good idea. I mean, it's crazy. So, yeah, I mean, there's a place for both. It's not bad to talk to ChatGPT about, you know, what's going on in your life, but it's gotta be in balance, and you have to have the proper support and at least recognize that ChatGPT is not the end-all, be-all. It's not everything. It doesn't know everything. Especially if you're 16, like he was. He maybe thinks, okay, well, this thing is all-knowing and knows everything on the Internet, and this is what it's telling me to do, like, I'm gonna do it.
A
Yeah. Especially because, from the start of the story all the way through, so much of what the 16-year-old was talking about was cry for help after cry for help. And so if the one person, which it was not a person, but, as you know, the interactions made it seem, was also not recognizing the cry for help, you're already in a place of helplessness, and it's feeding into that and saying, oh yeah, these cries for help aren't going to work or aren't advised.
B
Right.
A
It can only go one way. And yeah, that actually gave me a bit of goosebumps when you talked about, you know, the mandated reporter portion of this. Yeah. In those cases you're legally required to do something about it. And it makes me wonder if perhaps that's something that needs to be a part of this as well. Right. At least some sort of human involvement when it comes to, like, if this is happening on the platform, surely those conversations could be flagged. And of course they could at least.
B
You know, email the user and say, hey, we noticed in your chats this topic's coming up a lot like this is concerning, you know, because they don't want to rat people out.
A
Right.
B
So it's like they could at least, you know, they're so smart, they have all this power, they have data centers all over the country. Like, you know, they're so capable, they could at least do something like that, like an email. But one other really quick thing is he started using ChatGPT for homework help, which brings up a whole other, like, institutional can of worms. Because, you know, there's a huge push to put AI in schools. The Trump administration has directed the Secretary of Education to do that. So there's a whole massive push right now: government, Google, Microsoft. OpenAI just introduced a study mode. Teachers are trying to figure out how to use AI. So he might have been told at school, you have to use this, or everyone's using it. But then he gets familiar with the tool, and now he's still a 16-year-old struggling through high school, but in a different way. So there are also the risks of, is it too soon to be telling teens to be using ChatGPT? I mean, I'm sure he could have done his homework just fine, but he'd still be here.
A
Yeah. That is a little terrifying when you think about, as you mentioned, the programs here and elsewhere that are rolling out to have more AI involvement. Because, yeah, it's like it so easily can pivot into another thing. Right. Because there have always been, not always, but as long as the Internet has been around, there have been places where people would go seeking help, and terrible human beings out there could lead them down a path like this. But it's a different story when it's just a matter of access that everyone else is part of, and that you then have a situation where perhaps you think your child is just doing their homework and it turns out that they've been having these conversations for a long time. All around, I think things have, well, I know things have to happen quickly and need to be fixed quickly. And if a lawsuit, or one lawsuit, means that the number of schools that are starting to add this to the program are better able to protect the kids that they're, you know, requiring this of, I think it's a good thing. It is unfortunate, again, that it has to happen this way, and unfortunate that that's how we see Big Tech move so much. And I suppose the one fortunate thing is we have slowly started to see the increase of the price tag of these mistakes that Big Tech makes.
B
Yes.
A
By way of the EU and in some cases other countries.
B
Right. So we'll see. I mean, if it goes through, it could change, I think, a lot about how ChatGPT acts, even in casual conversations. You know, I don't know, we'll see what the consequences are. Of course it'll take time, it's a lawsuit, but I'll definitely be tracking it.
A
Absolutely. We're going to take a quick break before we come back with another aspect of AI and the danger therein, this time by way of the research done by one of the companies at the forefront. Before we get to that, though, I want to tell you about Pantheon, bringing you this episode of Tech News Weekly. You know your website is your number one revenue channel, but when it's slow, when it's down, when it's stuck in a bottleneck, well, frankly, that's when it becomes your number one liability. Pantheon keeps your site fast, secure, and always on. That means better SEO, more conversions, and no lost sales from downtime. But this isn't just a business win, it's a developer win too, because your team gets automated workflows, isolated test environments, and zero-downtime deployments. No late-night fire drills, no works-on-my-machine headaches, just pure innovation. Marketing can launch a landing page without waiting for a release cycle. Developers can push features with total confidence. And your customers? They just see a site that works 24/7. Pantheon powers Drupal and WordPress sites that reach over a billion unique monthly visitors. Visit pantheon.io and make your website your unfair advantage. Pantheon, where the web just works. Thank you to Pantheon for sponsoring this week's episode of Tech News Weekly. We are back from the break. I'm joined by Emily Forlini this week, and we're talking about Anthropic, because the company has released a threat intelligence report detailing how cybercriminals and state-sponsored actors are weaponizing Claude and other AI models to conduct sophisticated cyber attacks at unprecedented scale. This August 2025 report reveals that threat actors have moved beyond using AI for advice to actually deploying it as an active participant in operations, from automating network penetration and data extortion to enabling non-technical criminals to develop ransomware and maintain fraudulent employment. 
Most notably, the report documents a large-scale vibe hacking operation where criminals used Claude Code to compromise 17 organizations, including healthcare providers and government institutions, with the AI making both tactical and strategic decisions about which data to steal and how to craft psychologically targeted extortion demands exceeding $500,000. So this is a huge report that you can check out. But as we look at it, it's sort of been an evolution, right, of the way that AI is being used. In the beginning, we saw it as kind of a method to come up with new ideas for these cyber attacks. And increasingly, because of the trend toward agentic AI, as we're seeing now, the AI is in its way participating in the attacks. So the models don't just advise. There was a cybercriminal who used Claude Code, which is a tool that basically brings Claude to your command line and can look at a code base, and used that as a whole platform, actually embedding operational instructions in a configuration file, which then allowed the AI to compromise networks. The report says, quote, Claude not only performed on-keyboard operations, so typing, but also analyzed exfiltrated financial data. This is wild to me. Also analyzed exfiltrated financial data to determine appropriate ransom amounts and generated visually alarming ransom notes. So to be clear, you could imagine that you get into this company, right, and you are scooping up all of the data, and you encrypt it and you say, hey, if you want this back, and then you give an amount, and that amount is so high that there's no way that company could ever pay it. And you kind of come to this weird little stalemate, and perhaps you, like, lower the price after that. This is the AI going, let's see how much the company makes, let's see what's coming in, what's going out. And then, because it's got all of this knowledge, let's look at human psychology, and let's look at what I know about previous ways that ransoms have been paid or not paid. 
All of this data all at once, and using that to calculate the perfect package of freaking out the people who are involved and getting them to pay. And the other aspect of this is the sort of democratization of cybercrime. That vibe coding that we hear so much about, where you sort of just say, yeah, bro, I just want to make a program that plays funky tunes when I'm taking the dog for a walk. I don't know. And it does that.
B
Is that your bro voice?
C
What was it?
A
That's my bro voice. Yeah, sort of Southern Californian. Anyway, thank you. Thank you. So the bro, fine, he's making a fun little app, and the dog goes for a walk, and then something goes wrong with the app, and then you say, oh, man, I need this to be fixed. This is much different. This is, I don't know how any of this works, but what I would like to do is get this company to pay me X amount of dollars. Or, in the case of this actual case study, they saw someone who was able to create and sell ransomware packages without the knowledge needed to, you know, break into Windows systems: ChaCha20 encryption, anti-EDR techniques, Windows internals exploitation, all of this that the report says would typically require years of specialized training. So I like, Emily, that what we have is a company laying out what could be seen in some areas as its own dirt. This is Anthropic saying, here is what our stuff was used to do, and we'll talk a little bit later about how they're trying to mitigate it and how they have mitigated some of the issue. But this is the kind of stuff that I want to see from all companies. Mea culpa. We've got our eyes on it, at the very least. And not only do we have our eyes on it, we're showing you and telling you what we've seen.
B
Yeah, I have my eye on Anthropic because they are one of the only people, or companies, that does this, and it does come from their CEO, and it's something that they've done a couple times. And I'm like, are they good guys? Is this real? Could this be happening? But the one counterpoint is, they have, you know, lawsuits about, like, using copyrighted material, using copyrighted books. There was this crazy report in Ars Technica that they, like, physically scanned a ton of books, like paper books, and then just, like, burned them all. Like, I don't know. So they have their fair share of, I don't know, skeletons in the closet, but they do seem to be a bit better than others. But I want to see if it holds.
A
Yeah, yeah, exactly. Can we see if it holds? Another one of these: I actually talked about this story a little bit before, but I did not know how heavy the ties were to AI. There was a story recently about this woman who, like, she lived in some rural community, and she had in her house a laptop farm. And it turned out that North Korean computer scientists were using her house, and therefore her IP address, to work for companies in the US, which would result in those researchers or those scientists earning money, which would then be filtered back into the North Korean economy. I knew about that, but what I didn't know is the role that AI has played in it. Because previously, the North Korean regime faced a bottleneck in actually training IT workers with the sufficient technical skills needed to be able to make money in these modern companies. But now, according to the report, operators who cannot otherwise write basic code or communicate professionally in English are now able to pass technical interviews at reputable technology companies and then maintain their positions. And in fact, according to the data that Anthropic had, approximately 80% of Claude usage by these people that they were able to detect was consistent with active employment maintenance, suggesting that they're successfully infiltrating Fortune 500 companies.
B
So you're like, so, yeah, yeah, that's literally it.
A
So, yeah, that's happening.
B
I'll just leave that there. We have our little.
A
Yeah, what else? It's a mic drop situation. What else do you say?
B
You're like, so these North Korean workers are hacking into this woman's farm? And yeah, so that happened.
A
Yeah, but that's the thing is, it was more of a social hack, because they found this woman online. She was part of some online community and she needed money, and she didn't know who they were, you know, precisely. They reached out, they said, hey, we'll send you a bunch of laptops. We want you to set them up, get them connected. And basically she needed to set up VPNs for them and give them, you know, basically tunneling access so that they could work from those computers. And all you have to do is make sure that they stay charged, we'll pay your electric bill, you will occasionally be needed to appear as though you're the employee working for this company, and you'll get a cut of the profits. And she didn't ask questions. She didn't know, or claims that she didn't know. Whether she knew or not, I don't know.
B
But, yeah, I think she has some liability.
C
I don't know.
A
She definitely has liability, yes.
B
Questionable.
A
Questionable, yeah, absolutely. To the extent that, you know, whether or not she knew that these were North Korean workers attempting this, I don't know. Anyway, it's a wild story, but apparently it's happening in more ways than we realized. And it's all because, as much as North Korea would not want you to believe or know, it is a country that consistently is hurting economically because of all of the sanctions against it. And so it has to find other ways to make money, and one of those is filtering US dollars into its economy. Now, along with this, there have been some other kind of involvements of not just Claude, but other AI systems. AI has been embedded throughout criminal operations. So it's part of the victim profiling portion. It could also be part of the service delivery portion. One actor, these are bad actors, as we call them, used Model Context Protocol with Claude. And Model Context Protocol, for anyone who doesn't know, is sort of a universal language among AI models and operating systems, used in these various spaces to let your AI model properly communicate with a set of data. And the way that it goes about doing that is through the Model Context Protocol. So Claude was able to analyze stolen browser data to create behavioral profiles of victims. Once they have those behavioral profiles, they know how to go about scamming those people precisely. Another actor operated a Telegram bot (Telegram, the messaging platform) with more than 10,000 monthly users that marketed Claude as a high-EQ model for crafting emotionally intelligent romance scam messages. So let me be clear, these 10,000 monthly users are there to get the service of creating romance scam messages so that they can scam other people with them. That's a lot. 10,000 monthly users.
C
Wow.
A
And yeah, so let's talk a little bit about the vibe hacking aspect of it. With this vibe hacking system where they used Claude Code, AI was able to autonomously scan thousands of VPN endpoints, extract and analyze credential sets during live intrusions, and create obfuscated malware variants when initial attempts were detected. So if whatever software or service was in place to detect the malware on the fly, this was able to go, oh, it detected that, let's figure out a way to pivot, let's get around it. And then it was able to, as I mentioned before, because of the information that it had about a person and their behavior, generate victim-specific ransom notes based on analysis of stolen financial data.
B
Okay, okay, one thing, I'm wondering if this is a little bit of an advertisement for Claude.
A
Okay. Oh, oh, whoa.
B
You know what I mean? Because like, wow, they're coming out with this and they're like, this crazy thing happened. Look at how these people were able to use our technology. It's so crazy and powerful.
A
Oh, Emily, you're ruining this for me.
B
No, it's. See, this is where I have my. I have my eye on Anthropic.
A
I'm glad you're very. The skepticism is important, the cynicism.
B
I'm just wondering, because if I was a criminal, I probably wouldn't use Claude. It sounds a little expensive for North Korea. Like, why not use an open source model that's already fine-tuned by some criminal to do exactly this? Or even use DeepSeek. I mean, the Chinese are probably like, yeah, use the model for this. So, like, why Claude? You know what I mean? Yeah, yeah. Anthropic's probably, like, I don't know, they're doing the right thing, but they're also sending a message that's like, hey, people abroad are using our model, look how powerful it is.
A
So interesting.
B
I think they should report it. But you know, it's like a weird.
A
It's also a flex.
B
It's a flex. A vibe Flex. Everything's a vibe now.
A
So I'll end this by talking about how. Because you're hearing all this, right? So what has Anthropic done for this? Well, Anthropic, after detailing all of these awesome ways that Claude works, buy us now for 9.90.
B
Anyway, at the bottom, please subscribe. Subscribe.
A
What is it? Please rate and subscribe. Anyway, here are some of the defensive measures that the company has taken. Developing tailored classifiers for specific attack patterns. So, essentially, quickly being able to categorize when it sees its pattern matching: we see this happen, and even though every instance of it is a little bit different, by the nature of the way that large language models and this generative AI work, it sort of is a Gaussian blur system where you're seeing the forest for the trees, and so it doesn't need to look too closely to see a pattern and then categorize it as such. Also, implementing new detection methods for malware upload and generation. So a lot of times what you have is somebody uploading some code and saying, you know, I need help with this specific project and we've got to pass it through this and that. And it turns out that the initial code has some malware built into it. Of course, then there's also the case of people figuring out ways to just straight up ask for malware generation, and that's more likely to happen on the Claude Code side or something that's a little bit more toward the API, as opposed to just a straight-up chatbot. And then sharing technical indicators with authorities and industry partners, obviously. And lastly, as was the case here, successfully auto-disrupting some operations before they could execute. Because in the North Korean malware distribution case, so not the North Korean case where people were appearing as if they were American workers, but when North Korea was working to distribute malware, the automated systems from Anthropic banned accounts so quickly that, quote, the threat actor abandoned the remaining accounts without executing any prompts. So yes, that is one thing that has happened, according to, you know, Anthropic itself, on how they're attempting to mitigate some of this. But I look at the solutions.
B
And.
A
Or the answers and I see sort of a smaller amount than all of these wild things that have been done. And I think there's more that could be done. I love though that OpenAI and Anthropic, I don't know if you saw, but they are testing each other's systems as well.
B
I just turned in a story to my editor about that. She's probably edited it by now. There was an interesting thing in both their reports that's actually kind of relevant to what we're talking about with Anthropic being the good guy, which is that in Anthropic's report on the results of the studies, they disclosed that they shared the findings with OpenAI before publishing, and that both companies did that. Which kind of suggests, like, you know, it was maybe sanitized. They got to suggest language to each other, like, don't make my product look bad, or, hey, don't include this or that. So, you know, it was kind of editorialized. Of course it was, but it was, like, you know, responsible for them to disclose that. And OpenAI did not disclose that. They were just like, this is what we found. And it has, like, this unbiased, you.
A
Know, at least it's a warning. Yeah, yeah, yeah.
B
So Anthropic felt more like a true. More of a responsible resource or research organization where they're like, this is the method. Like, this is what we did and we shared it with them and then we published it. So you can see the difference in culture there.
A
Absolutely. That is one of the reasons why the potential rumors that I have heard about Apple, attempting to catch up in AI, looking at Anthropic as a purchase make the most sense to me, because it seems aligned with how Apple likes to present itself. Whether that is actually how the company is or not, that is how Apple likes to present itself: being responsible, doing the right thing, et cetera, et cetera. And so Anthropic, at least outwardly, maintains that ethos. And I think that's refreshing. But, yeah, it's important to.
B
It's good. And I don't need to be overly skeptical. I think it's good. I just, you know, you write about stuff enough, you're like, you got to stay on guard.
A
Absolutely. It's very important. In any case, I want to thank you so much for taking the time to join me today. It's always a pleasure to get to chat with you. You always bring great conversations to the table. We'll look forward to seeing that story that's in editing right now. In fact, if people would like to keep up with what you're doing, where are the places they can go to do so?
B
I think right now, Bluesky is the best. But if you really want to look at all my articles, there's my PCMag bio page. I'm Emily Forlini. LinkedIn, Bluesky. I mean, I'm everywhere, so I would love to hear from you.
A
Awesome. Thank you, Emily.
B
Thank you very much.
A
Alrighty, folks, we're going to take a quick break before we come back with my interview for today. It was recorded early this morning, but I want to tell you first about Smarty, bringing you this episode of Tech News Weekly. Discover what's possible when address data works for you. Smarty is revolutionizing how you handle address information, bringing automation, speed, and accuracy to processes that used to be manual, error prone, and frustrating. With Smarty's cloud based address validation APIs, you can instantly check and correct addresses in real time. No more bad data, compliance risks, undeliverable mail, or costly delays. Add autocomplete to your web forms so your customers select valid, verified addresses as they type. This will improve their user experience and yield much better data for you. Companies like Fabletics have drastically increased conversion rates for new customers, especially internationally, with Smarty. Want more than just clean addresses? Smarty's Property Data API unlocks 350-plus insights on every address, from square footage to tax history, automatically enriching your database. It's incredibly fast, 25,000-plus addresses per second, and very easy to integrate. The Red Cross needed accurate address data to allocate resources. A project manager says the Smarty tool has been fundamental: I've never experienced any issues with the tool and they seem to be getting better all the time. The address verification really does make an impact. We're able to reach the communities we serve because we have good addresses. Smarty is a 2025 award winner across many G2 categories like Best Results, Best Usability, Users Most Likely to Recommend, and High Performer for Small Business. Smarty is also USPS CASS and SOC 2 certified and HIPAA compliant. Whether you're building your first form or modernizing an entire platform, Smarty gives you the tools to do it smarter. Try it yourself. Get 1,000 free lookups when you sign up for a 42-day free trial.
Visit smarty.com/twit to learn more. That's smarty.com/twit. Thank you, Smarty, for sponsoring this week's episode of Tech News Weekly. All right, we are back from the break and now it's time for an interview about the Google Pixel 10 Pro. I am excited to be joined today by the Verge's own Allison Johnson, who is here to tell us about the Google Pixel 10 Pro. Welcome to the show, Allison.
C
Hey, thanks for having me.
A
Yeah, it's a pleasure to have you on. Pleasure to chat with you about this because we got Jimmy Fallon's introduction to these devices.
C
He was very excited.
A
Yeah, yeah, he was very excited, and I would love to, and I think our listeners too would love to, hear a little bit more about it. It seems like the review embargo is up. You had a chance to check it out. Kind of kicking things off: your review does seem to kind of frame this Pixel 10 Pro not just as a phone itself, but more importantly, perhaps, the vehicle for Google's AI. Like a chip is a vehicle for dip or a sandwich is a vehicle for toppings. You kind of call it the phone's main character. Could you tell us what it means for a phone to be so centered around AI?
C
Yeah, and I think Google has been kind of pitching us on this for the past few Pixel phones, you know, saying they're AI first phones, it's supposed to make your life easier and all this. But on the 10 Pro, I feel like it, it starts to come together in a way. Previously it's sort of been, you know, AI is in this app. You can talk to Gemini, Gemini can be in your Google Docs. It was just sort of all over the place, but this time around there's a little bit more of a glue, I think, to holding it together.
A
Absolutely. Now, one of the features that I think I was certainly excited to see mentioned on the show, because it was pretty much a show and tell show, was Magic Cue. That one seemed to be quite helpful, and you talk about how it kind of lives up to its promise. Could you tell us a little bit about Magic Cue, what it does, and perhaps some real world examples of when it helped save you time or effort?
C
So Magic Cue is interesting. It's sort of always floating around in the background. It's not so much an app you interact with, but the idea is it runs on device and it works in specific apps, Google apps like Messages, Gmail, Calendar. And it's sort of just always checking what you're doing and seeing if it can pull up a helpful piece of information and sort of suggest it for you. So one way I found it really helpful was in Messages. I was chatting with a friend, we're going to get coffee, and he suggested a day. It gives you a little prompt to check your calendar, which is good because I will just agree to stuff and then have to go, oh, I'm sorry, I was actually busy then. I do that constantly. And so we, you know, settled on a time. And then you get a prompt that's like, put this on your calendar, and you tap it and it has all the details right there for you. I'm terrible at calendars. I don't know how a person is terrible at calendars, but I will put things in for the wrong day. I will just not put something on the calendar and then it's a surprise to my husband. I'm like, I'm sorry, you have to pick up our child from daycare. So this is not, I wouldn't say, you know, earth shattering, life changing stuff, but it was just a few little moments like that where I was like, oh, I can see how this is going to be really helpful for me.
A
Absolutely. I think sometimes it is, though. There are sometimes these pie in the sky ideas with AI, but it is these small changes that really make a big difference. I was speaking with, I think it was Patrick Holland from CNET last week, who kind of did a wrap up of the show, and something that stuck out to me, which leads to this next question, was I didn't realize how much of this stuff that's going on, Magic Cue as an example, is happening on device. And I think that makes a big difference. The Tensor G5 chip seems to be kind of a turning point for Google, which is known for doing a lot of the cloud side and server side processing, enabling many of these AI features to run on device. What difference does this actually make for users in terms of privacy and performance?
C
It is mostly a privacy thing and it's, I think, a really good thing, you know, especially with something like Magic Cue, where it's not doing something like taking screenshots of your screen and constantly saving them or anything like that, but it is paying attention to the context of what you're doing and what's on your screen. So knowing that that stays on device, and it doesn't save things for very long, I think it's maybe seconds, you know, rather than hours or something. Knowing that's all staying on device and it's not going up to, you know, a cloud server and making a round trip there is really, I think, the peace of mind that I need to kind of feel like, okay, I will use this and I'll not feel a little weird and creeped out by it. There's a journal app too this time around, which I have mixed feelings about. But they use AI to, you know, it reads your entries and it'll prompt you with reflections or say, you know, you talked about this yesterday, how are you feeling about it a little bit later? I would not want that going to a cloud server. Definitely. So it's peace of mind, definitely, for me.
A
Absolutely. Now you also tested, kind of on the flip side of that, the Pro Res Zoom feature that uses AI to enhance photos at extreme zoom levels. It's the zoom and enhance we've all been waiting for. Does this feature, though, work well, and when does it start to break down or produce something where you go, this thing's shopped, I can tell by the pixels?
C
So this is on the two Pro phones exclusively and it is in the camera app itself, but it only kicks in when you're at 30 times zoom, all the way up to 100 times. So this is digital zoom. You know, it's well past what the optical zoom is, 5x. Typically you get a pretty bad image. It just doesn't have a lot of data to work with. It's filling in things with algorithms to try and decide what pixel is what. So instead of one of those kind of traditional algorithms, this is a generative AI diffusion model that's looking at your photo and deciding, okay, this is this and this is that. It all runs on device and it happens after you take the photo. So you see it kind of go through, and you keep the original and then you get this new AI'd version. And if you're closer to the 30 times zoom range, it's impressive. It's very good. Honestly, I'm one to avoid taking digital zoom photos just because I know they're not good typically, but it does a way better job than what I've seen in the past. Out towards 100 times, there's just a lot going on. You have, like, atmospheric haze. Your hand is shaking, right? It has a lot harder time. So you get things where you're like, that does kind of look like a crane wading in a pond, but I'm not gonna frame that.
A
One could be a towel sculpture on a bed instead of.
C
Exactly.
A
A little bit kind of stepping away from AI for a moment. One of the big hardware updates that we saw was the inclusion of the Qi2 wireless charging standard. For anyone who's not familiar, because there are different Qi versions, can you tell us a little bit about Qi2? What does it actually add? And magnets. What's the big deal about magnets?
C
Yeah, so anyone in the Apple ecosystem will know this basically as MagSafe. So it's a wireless charging standard. You don't necessarily need to have the magnets. There have been other phones, Samsung's phones, that support Qi2, but in a way where you need to use a case that has the magnets to get the, you know, the full experience. Google has included Qi2 full on. You know, magnets are built into the phone. You don't need a special case. So you get the full wireless charging speeds. On the regular 10 and the Pro it's 15 watts, and then the Pro XL will go up to 25 watts wireless charging on a Qi2 stand. And Google's calling it Pixelsnap. So that's their word for MagSafe, I guess. And it's really just kind of a convenience. They have a couple of accessories. There's a wireless charging stand and, like, a little ring you magnet to the back of the phone that's kind of PopSocket-ish. I never use a case with a phone. This is probably a bad life choice, but I live with it anyway. So I find it super handy if I'm going to use something like that, just kind of plopping it onto the wireless charger at the end of the day and knowing I don't need to kind of get it into.
A
The right position, just right. Yeah, absolutely.
C
It is nice.
A
I agree. I mean, I'm in the Apple ecosystem for the most part, and I remember when wireless charging first hit, and I had this sort of silly, pedantic problem with wireless charging because, to me, there was a wire running from the charger, therefore it's not, as I saw it, true wireless charging. But I am a convert entirely. My phone right now is thwacked to a wireless charger. And so I'm glad that everybody's getting to kind of join the fun of that, because it just makes it really easy to mount your phone wherever you need to. Getting back, though, into the AI of it all: you do mention, and we even talked a little bit about it now, some AI features that feel kind of gimmicky. Can you tell us about one or two that you feel missed the mark, at least in their first iteration?
C
Yeah, and Google's been kind of piling these things on, you know, over the past couple years. But the new one this year that kind of stands out is the journaling app. And I don't really have a problem with it, you know, on principle, I guess. You know, a journal app is fine if that's where you want to journal. The AI is a little weird for me. I, you know, wrote some entries and it'll misconstrue things. I mentioned that my son is in preschool and one of his friends was having her last day, and he was sad. The journal thought that she died.
A
Oh, no.
C
It said it's okay to feel sad about her passing. And I was like, hold on here. Yeah, so just strange little moments like that where I'm like, it feels a little weird to know that it's reading your journal. And I think that changes what you say when, you know, even, even if you, if it's, you know, it's not a person, it's a, an algorithm.
B
Yeah.
C
And then to get things wrong where I'm like, oh, no, thank you. I think I would opt out of this.
A
The misunderstandings certainly don't help. I remember using this device, you may have heard of it, Bee, and it was this little wrist thing, and I was watching a show and somebody on the show was in the midst of stuff that got them in trouble with the FBI. And later that night I looked back at my summary and it thought that I was getting questioned by the FBI. And then it made me think, at what point does this system need to actually reach out to the authorities? Because they've heard, you know what I mean? It's processed something that, oh, this person's trying to commit these crimes. Anyway, it made me immediately go, yeah, this is not for me.
A
So I totally get that. Lastly, to kind of wrap us up here, you described the Pixel 10 Pro, some of its features, at least as a glimpse of the future with the messiness of now. Right. For someone considering upgrading from an older phone, what makes this worth, or perhaps not worth the thousand dollar price tag? What could give somebody pause here?
C
You know, I think my general advice for phone buying is to stick with the one you've got until it's, you know, not working for you anymore. I'm still rocking an iPhone 13 mini, which I am not letting go of for many reasons. But yeah, what I get to see in my job as a phone reviewer is, like, the coolest, latest and greatest things that, you know, may trickle down to other devices. They may not. Google wouldn't say whether something like Magic Cue is possible to bring to an older Pixel phone, but it is a look at kind of where their thinking is and where they're going. And I'm just really glad to see AI that feels useful and doesn't feel like it's another thing I have to babysit. Like, I have to remember to take a screenshot, remember to go ask this app, you know, ask it in this way so that it understands. This was kind of that first moment where I was like, oh, it will just understand what I need and then do that thing for me. It's still early days, and Magic Cue is a bit limited right now, so I definitely wouldn't want anyone rushing out to buy it thinking it's going to solve all their calendar problems, because maybe not. But it is a glimpse, I think, of where we're headed, and I'm glad that that's where we're going.
A
Yeah, absolutely. Especially, you know, these helpful features that seem to just make light improvements on what we're already doing and maintain that context. Right? I think that's what Magic Cue is good at doing, because just like walking into another room and forgetting why you went into that room, I have that on the phone for sure. Where I have to go, what was that tracking number again? Wait, why did I come here?
C
Constantly?
A
That, I think, is exciting. Allison, I want to thank you so much for taking the time to join us today. Of course, people can head over to theverge.com to check out your review of the Google Pixel 10 Pro, but also all of the other great work you're doing there. If people would like to follow you online to keep up with what you're doing, is there a place they should go to do that?
C
I am alisonjo1 on Threads and on Instagram, and you might see some strange AI pictures pop up at some point, just as a warning for what you're in for.
A
Wonderful. Well, thank you so much again for taking the time and hopefully we'll see you again soon.
C
Thanks for having me.
A
Bye bye. All right, we are ready to take another break before I round things out with my final story of the day. I want to tell you about ThreatLocker, bringing you this episode of Tech News Weekly. Ransomware is harming businesses worldwide through phishing emails, we just talked about this, infected downloads, malicious websites, and RDP exploits. You don't want to be the next victim. ThreatLocker's Zero Trust platform takes a proactive deny-by-default approach that blocks every unauthorized action, protecting you from both known and unknown threats. Trusted by global enterprises like JetBlue and Port of Vancouver, ThreatLocker shields you from zero day exploits and supply chain attacks while providing complete audit trails for compliance. ThreatLocker's innovative Ringfencing technology isolates those critical applications from weaponization, stopping ransomware and limiting lateral movement within your network. ThreatLocker works across all industries, supports Mac environments, provides 24/7 US-based support, and enables comprehensive visibility and control. Mark Tolson, the IT Director for the City of Champaign, Illinois, says, quote, ThreatLocker provides that extra key to block anomalies that nothing else can do. If bad actors got in and tried to execute something, I take comfort in knowing ThreatLocker will stop that. Stop worrying about cyber threats. Get unprecedented protection quickly, easily, and cost effectively with ThreatLocker. Visit threatlocker.com/twit to get a free 30 day trial and learn more about how ThreatLocker can help mitigate unknown threats and ensure compliance. That's threatlocker.com/twit. Thanks so much to ThreatLocker for sponsoring this week's episode of Tech News Weekly. Fortune had a really interesting and in depth story this week.
Meta is transforming rural Louisiana farmland into what could become the world's largest data center complex, committing, oh, you know, a cool $10 billion to build Hyperion, a massive AI training facility that will eventually consume as much power as 4 million homes all on its own. This ambitious project in Richland Parish, where a quarter of residents live below the property line, excuse me, where a quarter of residents live below the poverty line, represents more than just another tech expansion. It's potentially setting the template for how big tech and utilities will partner nationwide in order to feed AI's insatiable appetite for electricity, raising critical questions about energy infrastructure, environmental impact, and, of course, who ultimately pays for the AI revolution's power bill. We are talking about a scale like no other, a scale one could call raw ambition. Meta's Hyperion project defies comprehension in its scope. The initial phase involves nine buildings covering more than 4 million square feet, that's larger than Disneyland, on 2,000 acres of what was once farmland. But Mark Zuckerberg envisioned something even grander, what he calls a supercluster that could eventually cover a significant part of the footprint of Manhattan and consume up to 5 gigawatts of power. Pastor Justin Clark of the nearby First Baptist Church expressed: I think, like a lot of people, my initial reaction was kind of blown away that a site so rural was selected for something like that. As we started learning more about what it was and what the scope entailed, that feeling just continued. An amazement of, good grief. Just think of Charlie Brown. The project represents Meta's aggressive pivot in the AI race. Following previous stumbles, including, oh, you know, the multi billion dollar metaverse initiative, Zuckerberg is now framing this as the pursuit of superintelligence, backing it up with $250 million compensation packages to poach AI talent.
Of course, we've seen several of those folks who've been poached by Meta leaving soon after they join the company. In any case, this is a big power problem. The energy requirements are staggering. Keeping Hyperion's servers operational will initially require twice the power consumption of New Orleans. And as I mentioned, that's just the beginning. In order to meet this demand, regional utility Entergy will construct three new gas fired turbines with 2.3 gigawatts of combined capacity, marking the first such build out in decades. Louisiana Public Service Commissioner Davante Lewis highlighted the national implications, saying the deal could signal to other states that this is how data centers should be governed and operated. This would be a test across the nation. I've heard that from investors. I've heard that from credit agencies. I've heard that from fellow data centers. Whatever comes out of the Meta deal may be the framework for them all.
D
Okay, today's show is brought to you by Progressive Insurance. Do you ever find yourself playing the budgeting game? Well, with the Name Your Price tool from Progressive, you can find options that fit your budget and potentially lower your bills. Try it at Progressive.com. Progressive Casualty Insurance Company and affiliates. Price and coverage match limited by state law. Not available in all states.
C
25 years ago, a small group of business and government leaders met in Washington, D.C. They envisioned the creation of an independent nonprofit organization with a mission to help people, businesses, and government mitigate the growing threat of cyber attacks. Today, the Center for Internet Security embodies that vision. For 25 years it's worked with a global community of IT and cybersecurity experts to develop the CIS Benchmarks and CIS Critical Security Controls. These proven security best practices defend against common cyber threats and streamline compliance with industry frameworks, regulations, and standards. Today, CIS provides cybersecurity services, threat intelligence, and critical resources to help public and private sector organizations alike strengthen their cyber defenses. Visit cisecurity.org, that's the letters C-I-S followed by ecurity.org, to find out how CIS can help your organization as we create confidence in the connected world.
D
I'm no tech genius, but I knew if I wanted my business to crush it, I needed a website. Now thankfully, bluehost made it easy. I customized, optimized and monetized everything exactly how I wanted with AI. In minutes my site was up. I couldn't believe it. The search engine tools even helped me get more site visitors. Whatever your passion project is, you can set it up with Bluehost with their 30 day money back guarantee. What do you got to lose? Head to bluehost.com that's b l u e h o s t dot com to start.
A
Now let's talk about the financial arrangement. The deal structure between Meta and Entergy could become the industry standard, as they hope it will. Meta will pay power costs for the $3.2 billion gas plants for the first 15 years. They will cover some of the transmission costs, and they will commit to helping build 1.5 gigawatts of solar and battery power throughout Louisiana. You know, you've got to balance it out, right? The arrangement has pushed Entergy's stock to record highs, but critics worry about the long term risks to ratepayers. Logan Burke of the Alliance for Affordable Energy said, the problem here is that this is going to set precedent. The settlement puts all of us, all of your constituents and customers in the state, at the mercy of a non public contract between two corporations. Because, yeah, that's just 15 years. What happens after 15 years? But Meta's not alone in working on a massive build out. The hyperscaler spending spree is unprecedented: Amazon, Google, and Microsoft are each investing $75 to $100 billion in data centers for 2025. Meta's data center budget has jumped from $28 billion to $70 billion, and OpenAI's Stargate project received $100 billion upfront for a proposed $500 billion Texas complex. That's 1,000 million, 500 times over. I can't even grasp that. Wild. Anyway. A Department of Energy report estimates that data centers' grid needs could triple by 2028, consuming up to 12% of the nation's electricity. Industry research projects roughly 46 gigawatts of new gas fired electricity coming online in the next five years, which is a 20% jump in construction. So do we have any concerns about the environment? Well, the project has united unlikely allies in opposition, because the Louisiana Energy Users Group, which includes ExxonMobil, Chevron, and Shell, believe it or not, warns that the project increases Entergy's Louisiana energy demand by 30%, which results in unprecedented financial risks.
Environmental groups, of course, raise multiple concerns. Margie Vicknair-Pray of the Sierra Club's Louisiana chapter wanted to know: The Richland data center is to be the largest in the world. How can we ensure that blackouts won't become more frequent? What we have yet to fully understand is the impact the data center will have on the land, our resources, and the people. Water consumption for cooling poses additional challenges. So how will the water be shared? And what happens if the farmers are unable to water their crops? A critical unknown in this is whether these massive investments will prove necessary, because energy analyst Cathy Kunkel suggests efficiency improvements are inevitable, either because they get more efficient or because they don't and they go bankrupt. So you've got to become more efficient or you go away. The recent emergence of China's DeepSeek, demonstrating that AI can become cheaper and more efficient, in theory raises questions about whether this stampede for power might be built on the big B. It's not billion, it's bubble. Built on a bubble. Mike O'Boyle of Energy Innovation warns: I know the environment right now, federally and in the industry, is build, build, build as fast as we can. But costs must be considered. We're in a limited resource environment where supply is much lower than demand and it's causing prices to skyrocket. Fortune has a whole heck of a lot more in this really in depth piece for you to check out, so I recommend heading over there from the link in our show notes to learn more about Hyperion. But as it stands right now, we are seeing these big, big tech companies building, building, building. In any case, that brings us to the end of this episode of Tech News Weekly. So I appreciate every single one of you for tuning in or checking out the show later as it hits your podcast app of choice.
If you would like to check out the show, or rather subscribe to the show, you can head to twit.tv/tnw, if you're not already subscribed, in audio and video formats. And of course, if you aren't already, I'd love to invite you to become a member of Club TWiT at twit.tv/clubtwit. When you head there you can join the club, and in doing so you will get ad free episodes of every single one of our shows. You will get access to the TWiT+ feed that includes behind the scenes, before the show, after the show, our special Club TWiT shows, including Book Club and Crafting Corner, as well as access to our newer feed, which is the News Announcement feed. So there, whenever different companies are having news events, like the recent Made by Google event, our commentary is available to members of the club, so be sure to check that out as well. And lastly, access to the Discord server, a fun place to go to chat with your fellow Club TWiT members and those of us here at TWiT. We would love to have you in the club, twit.tv/clubtwit, and you can start things out with a free trial. So if you haven't joined the club yet, haven't tried it out, please do. We can't wait to see you there. If you'd like to follow me online, I'm mikahsargent on many a social media network, or you can head to chihuahua.coffee, that's C-H-I-H-U-A-H-U-A dot coffee, where I've got links to the places I'm most active online. Thank you for being here this week, and I'll catch you again next week for another episode of Tech News Weekly. Bye bye.
D
The tech world moves fast and you need to keep up, for your business, for your life. The best way to do that? twit.tv. On This Week in Tech, I bring together tech's best and brightest minds to help you understand what just happened and prepare for what's happening next. It's your first podcast of the week and the last word in tech. Cybersecurity experts know they can't miss a minute of Security Now every week with Steve Gibson. What you don't know could really hurt your business, but there's nothing Steve Gibson doesn't know. Tune in to Security Now every Wednesday. Every Thursday, industry expert Micah Sargent brings you interviews with tech journalists who make or break the top stories of the week on Tech News Weekly. And if you use Apple products, you won't want to miss the premier Apple podcast, now in its 20th year, MacBreak Weekly. Then there's Paul Thurrott and Richard Campbell. They are the best connected journalists covering Microsoft, and every week they bring you their insight and wit on Windows Weekly. Build your tech intelligence week after week with the best in the business. Your seat at tech's most entertaining and informative table is waiting at twit.tv. Subscribe now. Marketing is hard, but I'll tell you a little secret. It doesn't have to be. Let me point something out. You're listening to a podcast right now, and it's great. You love the host. You seek it out and download it. You listen to it while driving, working out, cooking, even going to the bathroom. Podcasts are a pretty close companion. And this is a podcast ad. Did I get your attention? You can reach great listeners like yourself with podcast advertising from Libsyn Ads. Choose from hundreds of top podcasts offering host endorsements, or run a pre produced ad like this one across thousands of shows to reach your target audience in their favorite podcasts. With Libsyn Ads, go to libsynads.com, that's L-I-B-S-Y-N-ads.com, today.
Date: August 28, 2025
Hosts: Micah Sargent, Emily Forlini
Guests: Allison Johnson (The Verge)
This episode of Tech News Weekly dives deep into three essential themes shaping the current tech landscape:
The episode rounds out by examining Meta’s $10B Hyperion data center project in rural Louisiana, highlighting the environmental and infrastructural implications of the AI revolution.
Timestamps: 02:53–16:55
Speakers: Emily Forlini, Micah Sargent
Timestamps: 17:16–36:42
Speakers: Micah Sargent, Emily Forlini
Timestamps: 40:03–56:10
Speakers: Micah Sargent, Allison Johnson (The Verge)
Timestamps: 56:11–63:22
Speaker: Micah Sargent
On AI Responsibility and Enthusiasm:
"You can be enthusiastic about something... while still making sure that you are looking at these potential harms." — Micah (06:45)
On AI in Cybercrime:
"Cloud not only performed on keyboard operations, but also analyzed exfiltrated financial data to determine appropriate ransom amounts and generated visually alarming ransom notes." — Micah (21:17)
On the Purpose of Pixel 10 Pro:
"The phone's main character is clearly AI." — Alison Johnson (Paraphrased, 40:53)
On Early Stage AI Features:
"The journal thought I... that she died. It said it's okay to feel sad about her passing. And I was like, hold on here." — Alison Johnson (51:49)
On Data Center Risks:
"The project increases Entergy’s Louisiana energy demand by 30%, which results in unprecedented financial risks." — (62:53, paraphrasing critics)
On the Unfolding AI Future:
"It is a glimpse, I think, of where we're headed and I, I'm, I, I'm glad that that's where we're going." — Alison Johnson (54:56)
| Segment | Main Speakers | Topics / Key Insights | Notable Quote / Timestamp |
|---|---|---|---|
| AI Chatbots & OpenAI Lawsuit | Emily, Micah | Suicide, "sycophancy", AI harm, school integration | "This is a serious problem." 03:47 |
| Anthropic Cybercrime Threat Report | Micah, Emily | AI's agentic misuse, North Korea, "vibe hacking", AI self-critique | "Claude not only performed..." 21:17 |
| Pixel 10 Pro Review & Magic Cue AI | Micah, Allison | On-device AI, Magic Cue, hardware updates, privacy, real user impact | "It starts to come together..." 40:53 |
| Meta's Hyperion Mega Data Center | Micah | Energy demands, environmental impact, precedent for national grid | "Whatever comes out..." 60:53 |
This episode exemplifies Tech News Weekly’s signature blend of critical analysis and enthusiasm for innovation, focusing on both the promise and perils of contemporary AI. It offers clear, nuanced takes on how generative AI is shaping our devices, our digital security, and even the nation’s infrastructure, while never losing sight of the human stakes and responsibilities involved.
Recommended for: Anyone interested in the intersection of AI, personal technology, cybersecurity, and public policy—and especially those curious about how today’s decisions will shape the digital (and very physical) world of tomorrow.
For further reading & listening:
(End of summary)