
Loading summary
A
OpenAI is giving up on video generation. Here's the real story behind it. Apple is going to make a bunch of AI assistants available in Siri and Meta loses a landmark court case that could spell even more trouble ahead. That's coming up on a Big Technology Podcast Friday edition right after this.
B
Fiscally responsible financial geniuses, monetary magicians. These are things people say about drivers who switch their car insurance to Progressive and save hundreds, because Progressive offers discounts for paying in full, owning a home, and more. Plus, you can count on their great customer service to help when you need it, so your dollar goes a long way. Visit progressive.com to see if you could save on car insurance. Progressive Casualty Insurance Company and affiliates. Potential savings will vary. Not available in all states or situations.
A
Welcome to Big Technology Podcast Friday Edition, where we break down the news in our traditional cool-headed and nuanced format. We have a great show for you today. We're going to talk all about why OpenAI killed Sora, and really the bigger story about what it means for the company's ambitions. We're also going to talk about a couple of new models coming out from OpenAI and Anthropic that have people on the inside buzzing. This new upgrade for Apple and Siri isn't going to amount to anything; we'll touch on that. And then finally, Meta lost big in court this week, and that news may harm it in the future. Joining us as always on Friday to do it is Ranjan Roy of Margins. Ranjan, welcome.
C
Sora is dead. Erotic chatbots are no more. Apple might be fixing Siri. This feels like a good week. I'm currently in Park City, Utah, and there's no snow, so I'm not very happy about that. But I'm happy about all this news.
A
So we will dig into it. Clearly the side-quest era is over, and the biggest casualty so far has been the death of Sora, the video platform that we talked about so much, that went to number one on the App Store not long ago. And not only that, but the API as well. This is from the Wall Street Journal: OpenAI is planning to pull the plug on its Sora video platform, a product that released to great fanfare last year and has since fallen from public view. The move is one of a number of steps OpenAI is taking to refocus on business and coding functions ahead of a potential IPO as soon as the fourth quarter of this year. Sam Altman announced the changes to staff on Tuesday, writing that the company would wind down products that use its video models. In addition to the consumer app, OpenAI is also discontinuing a version of Sora for developers and won't support video functionality inside ChatGPT either. Ranjan, let's look at it micro and then we can zoom out. Let me throw a couple of ideas at you in terms of why Sora died. One is maybe the appeal of video AI just isn't that great. Like, there's an initial thrill when you generate a video, but maybe most people just want to watch instead of create. And then secondly, maybe it's that all the videos ended up looking the same, right? You take AI, it generates the average of averages, and that's what you got. And then all of a sudden the utility of this just nose-dived.
C
Yeah, I think separating out Sora specifically from just the promise, or kind of current state, of video AI is important here, I have to say. I mean, given I'm talking to retail, consumer-goods-type people on the marketing side all day long, video is still very interesting, and Veo from Google seems to be kind of becoming the industry standard. So everyone is actually very interested in it, especially from a true production standpoint. So I still am surprised that they're actually completely ceding that area and not even making it available via API anymore, not making any video functionality available. But Sora, the product, I mean, I don't know, I have not opened it in a long time. What were your greatest Sora hits to date?
A
It was me and Jake Paul walking an old lady across the street very enthusiastically.
C
I went through actually just an In Memoriam of Sora. My son, who's seven years old, I would ask him to give me prompts, and I had like a chicken and a horse running around a toilet bowl. It was basically a lot of stuff like that. And it was funny, and his friends would all get a good laugh out of it. I guess that was not necessarily the market validation one needs to address the TAM of whatever they're going to have to pitch investors for the IPO. But yeah, some seven-year-olds are going to be unhappy about this.
A
Yeah, you burned an entire rainforest just to get the horse running around.
C
The horse running around the toilet bowl.
A
Yeah, but that joke really goes to the heart of the matter. And I actually do have some intel on the real reason why Sora has been shelved. I will say, this week I'm in San Francisco, and yesterday I was meeting with Greg Brockman, the president of OpenAI, and we have an hour-and-fifteen-minute-long interview coming next week. So stay tuned for that on Wednesday. And of course we begin with this sort of new pivot to enterprise and coding. I won't give it entirely away, but I will share this bit from Greg about why video generation became deprioritized and was looked at as a side quest within OpenAI. So first I'll say, and I spoke about this last week, I thought this was a consumer-versus-enterprise thing.
So maybe they thought that the Sora app was more consumery and they're really focusing on businesses. That's actually not what it is at all. So basically, OpenAI has seen that these GPT-style models are working, and there are other ways to try to pursue the most powerful AI. The most famous other method right now is world models, models that actually understand physics, and that is part of what was baked into Sora. And so I spoke with Greg and said, well, what's going on here? And he said the important thing to realize, technologically, is that the Sora models, which are incredible models by the way, are a different branch of the tech tree than the core reasoning GPT series. They're built in just a very different way. And to some extent, we're really seeing that pursuing both branches is very hard for us. So I think this is a needed and very interesting focus that we're seeing from OpenAI. They're basically saying: we want to build the strongest, most powerful models that we can. We're seeing the results in these text GPT-style models that do all the things, that call the tools, that do the reasoning, that do chain of thought, that get things accomplished, and we have to decide where to put the compute. To do that in a world-model way would really limit the company's ability to make progress in the area they see as most promising. And that is why Sora is being deprioritized.
C
My goodness, that is focus. That is real focus right there, I have to say. Because just on last week's episode we were half joking that now all you have to say is "world model" and it's kind of the trendiest term. And Meta's making a big deal about it, Google's talking about it, like that's definitely going to be one of the buzzwords of 2026. So for them to actually acknowledge that that is not going to be another area of investment, okay, that is a pretty big deal, right?
A
So that is the logic, and I think that makes total sense. And of course, Greg and I will speak about it more next Wednesday, so folks, please tune in for that. But now it sort of goes to something interesting, which is: what does this race look like? Because guess who else hasn't really done world models? Guess who hasn't done the side quests of image generation and video generation? It's Anthropic. And so you're starting to see a race, and the race is taking a very different shape than it had not long ago. Remember, it used to be that OpenAI had ChatGPT and it was winning consumer with all these images and videos and the chatbot, and Anthropic was sort of doing the enterprise thing with coding and business and all these applications. Well, what's happened now is that both these companies have centralized on the same use case, maybe you could call it the OpenClaw-style use case, which is what they both seem to be going for: you give the AI access to your desktop, to your phone, whatever it might be. And if you're at work, you have it do work for you. If it's in your personal life, you have it organize your personal life and take action for you. And I think they do see this, and I'm going to hand it to you again, this potential agentic use case where the tech goes after what you need. Whether it's consumer or business, it's the same thing. It's centralizing in this sort of one stack, so to speak. And now it's almost like a battle to the death between the two of them to get this right and go after it.
C
Well, listeners cannot see, but maybe on YouTube you'll see me smiling right now, because it always feels good to be recognized. I have to admit, when I talked about this kind of autonomous knowledge work, and this is what we started seeing at Writer when we started building this last June, it was that magic moment of pulling files from one place, doing something to them, pushing them to another system, and then thinking about how that applies to absolutely everything. I will say, when I made that prediction, probably in October-ish, I did not think OpenAI, ahead of an IPO, would actually be consolidating its entire strategy around it. So I'm going to take the win on this. One thing I'll note: this isn't just two players, and I'm saying that, again, self-interested, because this is what we're working on at Writer. But more and more, so Sierra just this week released something called Ghostwriter, Notion is going into this space. I'm seeing this very closely firsthand: everyone is going after it. Everyone has recognized that's the prize. So it's not just OpenAI and Anthropic; even more traditional SaaS companies are trying to go in this direction. So I think it's definitely clear that I was right on this one. But now I am wondering about image generation and video, especially on the consumer side. I still think there's going to be a big difference between enterprise and consumer. Does Meta start making moves in here and start kind of filling the gap? Does Google just kind of own it? Because to me those kinds of functions are still going to be very important in consumer.
A
Right. Well, this is the thing. Nano Banana has been a huge asset for Google, their image generator. And by the way, something interesting, I don't want to give away too much of the Greg interview, but I think this is important.
C
I'm not going to push you too hard.
A
Because it's newsworthy. Guess what's not going away: image generation in ChatGPT. And when you think about that, what's your initial response? Well, okay, creating images doesn't take as much compute as creating video, which is true. But what Greg says is basically that image generation is being done with the same GPT-style technology, whereas video generation is done with this completely different technology. So it goes to the generality of the models: they can do text, they can do image generation; video generation, they can't. But I think you're right that there is this big opening for somebody to do video generation well, and clearly there are some startups like Runway, but Google is in great shape here.
C
I'm kind of rooting for Runway in this. I don't know, like three years ago, 2023, I started testing every single generative AI tool available, and I remember Runway was probably the first place that I actually did an image generation. And this is back in the six-fingered days, when, you know, four legs, all of that. And then even video: they might have been the first place I started testing and playing around with video. So maybe this opens the door. But yeah, one other note on image generation: it's not just "create an image of a cat." Actually, one of the best Soras I saw circulating around was, I think, a cat with a shotgun shooting a Ring doorbell. That was Sora at its finest. I don't know if you saw the tweet, but it was from them. It was like, we will give credit to everyone who made videos that matter, and it was animals running around toilet bowls and cats shooting Ring doorbells. But there's also image generation in the enterprise, like generating diagrams and slides. It's still visual communication in many ways; it's not just "make me a funny image." So I think it makes sense that they still have to play in that. It's still important.
A
Right. And I think the important part also is that it is along the same tech tree, as opposed to something completely different. But you're right, even if it was different, you'd probably want it in your suite of tools. I want to go back to something you said, actually, that it's not just OpenAI and Anthropic. Yes, there are others, but a lot of these companies are working with OpenAI's or Anthropic's technology underneath. So there's a good chance they'll see the benefit no matter what, even if, let's say, it's Sierra that ends up being the one that deploys this for business.
C
You're coming into my world right now, Alex. At Writer we have our own family of foundation models, the Palmyra family. And actually there was a very interesting thing: Intercom, which now has Fin, announced this week that they have basically trained their own foundation model. So you're starting to see some kind of combination, whether it's like Cursor using DeepSeek, which they basically didn't say they were doing but then were, and which is actually a very thoughtful approach to this. I think more and more companies are going to start taking this approach. It's not just going to be an API call to OpenAI or Anthropic. And I say this with the Notions and the Cursors and the Sierras of the world: a lot of tools to date were just that API call. More and more, people are going to start either customizing or fully training on the foundation model side. That's my prediction.
A
I'm going to come in skeptical here, and if I have to eat crow again, I'll do it. But I do think the foundation model companies are going to be, well, without a doubt, big players here. Let me take this back to you. The battle between OpenAI and Anthropic is shaping up to be not the way it was previously, but going head to head on the same use case, which they hadn't been doing. Like, Claude was happy to not have lots of consumer users, and OpenAI was happy to not go after enterprise. Now they're really going head to head. What do you think that means for this race, how do you see it shaping up, and who do you think is going to win?
C
I don't think this is the right idea for OpenAI. They had a foothold in consumer. I know the business model for consumer has not been figured out yet, but that was still where they had the edge; they could start to go after this. We talked about it last week: what this means between them and Microsoft is a very big question in my mind, because remember, when you say enterprise, this is Microsoft's world, and already there's tension around, you know, OpenAI starting to do deals with AWS, and rumors of a potential lawsuit that have been reported on. So it just puts them in such a different space than they have been. And it's honestly kind of surprising to me, because basically "we want to be Anthropic" is what they're saying. Everything they're doing is, let's just try to catch up to Anthropic. And yes, Anthropic had a very good year. But remember, a year and a half ago people were leaving Anthropic for dead. Maybe that's an exaggeration, but we were even pulling up charts of declining consumer usage and joking that we're no longer Claude heads, we're Gemini guys. There was this moment, and then they really nailed it, and we called this, we said it was a risk and a bet, but going all in on coding meant something very different. I just think it's too late for them to make this switch, and it's reactive, rather than: we are our own unique business, we have 800 million users, we're going to get to a billion, we're going to run ads, people are going to search about everyday life, and there's a lot of ways to monetize that. I need to see this in action versus it just being reactive.
A
That is interesting, but let me put the counterpoint to you here, which is: we all saw the OpenClaw moment, right? And I think many of us, including myself, haven't fully wrapped our heads around what that can be applied to elsewhere. Because OpenClaw is basically: you create a virtual machine or get a Mac Mini, put this AI agent on it, allow it to control that machine for you, plug it into a couple of services that you use, and then basically have it be an assistant with persistent memory that gets stuff done for you. And again, I think it's important to say this is not going to be a breakdown where OpenAI goes after enterprise and not consumer. Think about this type of use case. Imagine you're dealing with a hospital, or dealing with an insurance company, and you're trying to get something covered, or you're trying to understand what your data looks like compared to others. To have this always-on assistant with persistent memory that can go out and negotiate on your behalf with the insurance companies, or go out and monitor your health situation: is that consumer or is that enterprise? That's consumer, but it's still in this agentic world. So to sit that out seems like a bad business decision.
C
Okay, fully agreed, given now you're the one coming up with these broad agentic use cases and visions. And again, I think this is the exciting part, why everyone's so fired up: once you feel that power, you start imagining all the possibilities. And I think you put it well: always on, connected to your data, and able to take action. Those are kind of the three foundations of this whole thing. And again, no one has named it yet. I've been racking my brain. Is it autonomous knowledge work? OpenClaw? I don't know if it's going to stick. Maybe "harness" is the term that takes over, I think. But I fully agree it's not consumer versus enterprise. Every person, I think, will have a lot of things that they will be able to build and do with it. So I agree it's a central part of the battle. I just mean more like, and we're going to get into the shutdown of the erotic chatbot, but even the advertising business. I was just at Shoptalk in Las Vegas all week, and there was a ton of talk around OpenAI and commerce. Again, it becomes one of those interesting things that's consumer but also enterprise, because you have the retailers, but you also have the end consumer who will potentially be shopping on it. So I think I agree there. I just think that overall, as an organization, to start cutting these very consumer-friendly things: are you going to be able to focus on and build a large advertising business when you're trying to do everything else? That's where I think there are issues.
A
I think you will be. I mean, ChatGPT is already at a $100 million annualized run rate.
C
Oh, that one killed me. It's been out for six weeks. Can I make one request? No one gets to go off and say "annualized recurring revenue" when a product has been out for six weeks. It's just not ARR at that point. Don't make us do the extrapolation. Just say it's been out for six weeks and we've made, what would that be, like nine, ten million dollars, whatever it is. That's all it is right now. And it could be much bigger, and that'll be great for them, but reporters, please don't use ARR unless there's some kind of meaningful trend.
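[Editor's note: the extrapolation being objected to here is simple arithmetic. A quick sketch, using only the figures quoted in the conversation rather than any reported numbers, shows how little actual revenue can sit behind a big-sounding annualized figure:]

```python
# "Annualized run rate" extrapolates a short revenue window to a full
# 52-week year; inverting it recovers the actual revenue to date.

def annualized_run_rate(revenue_so_far: float, weeks_elapsed: float) -> float:
    """Extrapolate revenue earned over weeks_elapsed weeks to 52 weeks."""
    return revenue_so_far * (52 / weeks_elapsed)

def implied_revenue(run_rate: float, weeks_elapsed: float) -> float:
    """Actual revenue implied by a run-rate claim after weeks_elapsed weeks."""
    return run_rate * (weeks_elapsed / 52)

# A $100M annualized claim after six weeks implies roughly $11.5M earned,
# in the same ballpark as the nine or ten million guessed above.
print(f"${implied_revenue(100_000_000, 6):,.0f}")
```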
A
I think I'll keep doing it on this show just to annoy you, Ranjan. But message heard loud and clear. Let me end this segment with one thing, which is: this all sounds good in theory, but the problem that I have, and the problem that many people have, is a trust problem. I want the AI to do all these cool things for me, but I do not trust it to have access to my Gmail and calendar and desktop and all these things. And obviously some of the leading providers, even OpenClaw, say: we don't recommend you do this without some precautions, like running it on a separate machine. Do you think that trust barrier is ever going to be overcome?
C
Yeah, 100%. I mean, I see it myself. Well, actually, hold on, to add nuance to it: I have something that, at 7pm every day, says, here are all the emails that you have not answered that are from today or greater than 24 hours old, and here is a suggested response based on your entire Gmail history. So I get this; I don't have it send the email yet. I've not gotten to where I'm actually like, press button, send the email to everyone. But as you start seeing it, start tweaking how you want the responses structured, I do think there is a world where I would just have it send the response. So I think the trust comes with time and quality. The more you start to see, and the more you also start to understand what questions not to ask, or where the data is going to be bad and you're going to get a subpar answer, that is going to be one of the most important skills, I think. But also, that's how people will build trust.
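[Editor's note: a rough, purely illustrative sketch of the nightly digest described here. The Email type and the draft_reply stub are hypothetical stand-ins; a real version would pull from Gmail and have a model write the draft. The key design choice is visible, though: suggestions are returned for human review, never auto-sent.]

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Email:
    sender: str
    subject: str
    received: datetime
    answered: bool

def draft_reply(email: Email) -> str:
    # Stand-in: a real agent would draft this from the full mail history.
    return f"Suggested reply to {email.sender} re: {email.subject}"

def daily_digest(inbox: list[Email], now: datetime,
                 max_age_hours: int = 24) -> list[tuple[Email, str]]:
    """Pair unanswered emails older than the cutoff with a suggested draft.

    Nothing is sent automatically; the caller reviews each draft,
    matching the "I don't have it send the email yet" workflow.
    """
    cutoff = now - timedelta(hours=max_age_hours)
    return [(e, draft_reply(e)) for e in inbox
            if not e.answered and e.received <= cutoff]
```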
A
This is why I always get emails from you at about 7:10, 7:12, right?
C
Claude's like, that's wonderful. Come on, let's make sure to schedule at this time.
A
Well, we're about to find out where this is really going to go, because we have two major models coming, one from OpenAI and one from Anthropic. Let's start with Anthropic. There's this very interesting story in Fortune this week: Anthropic acknowledges testing a new AI model representing a step change in capabilities after an accidental data leak reveals its existence. Anthropic is developing, and has begun testing with early-access customers, a new AI model more capable than any it has released previously, the company said following a data leak that revealed the model's existence. An Anthropic spokesperson said the model represented a step change in AI performance and was the most capable we've built to date. The company said the model is currently being trialed by early-access customers. A draft blog post that was available in an unsecured and publicly searchable database prior to Thursday evening said the model is called Claude Mythos, and the company believes it poses unprecedented cybersecurity risks. Mythos has also been called Capybara. In the document, Anthropic says Capybara is a new name for a new tier of model, larger and more intelligent than our Opus models, which were until now our most powerful. Compared to our previous best model, Capybara gets dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity, among others. Wow. So we could be seeing a new class of model. Anthropic, of course, has its three: Sonnet, Opus, and, what's the other one? Haiku. Yeah. So now we might be getting Capybara. What's your quick-take reaction to this?
C
I mean, a step change in models, now that we've been talking about this all episode and we all kind of know what the battle is, will be very, very interesting to see. But it's still hard to understand when you say on one hand that you're worried about cybersecurity, or recognizing those risks, but then say it gets dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity. It's still just, I don't know, kind of difficult to parse out where they're going with this. What do you think? What's your prediction on what it will feel like the first time you crank out some work on Capybara?
A
Well, I have been thinking about this, because we sit here and we review every update that seems incremental, right? Like, oh, it got a little bit better at this, got a little bit better at that. And it's starting to feel like it compounds, you know what I'm saying? We started with ChatGPT in 2022 and it had all these sorts of flaws, and over time they've been patched up. And so I just started to think about it this week in terms of how all the AI CEOs say there is this exponential happening. And maybe that's just the way you get that exponential, right? Like with interest, for instance: all right, there's 6% on your investment, and then another 6% on your investment plus the 6% you got last quarter, and all of a sudden that starts to really grow. And it seems like that might be what's happening with these AI models. Maybe I'm being overly optimistic.
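[Editor's note: the interest analogy here is just compound growth. A toy calculation, with made-up numbers, shows how modest repeated gains come to look exponential:]

```python
def compound(base: float, rate_per_step: float, steps: int) -> float:
    """Apply the same fractional improvement repeatedly; each gain
    builds on the previous level, not on the original baseline."""
    return base * (1 + rate_per_step) ** steps

# 6% per quarter looks incremental, but over five years (20 quarters)
# it more than triples the starting level.
print(round(compound(1.0, 0.06, 20), 2))  # 3.21
```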
C
No, no, no. That's not even optimistic, that's realistic. Compounding accrual of value is actually the way this is playing out. But the marketing was done before that, like GPT-3, GPT-4, so everything had to be revolutionary and a step change, and then people were disappointed when it wasn't. So I do think that could be the right way to look at it, and people don't. Maybe it is just a marketing limitation that they have to make a big deal. But it would be kind of nice if everyone actually just spoke about it like that: here are our release notes, it's definitely a little better, and here's what you can do. And that's all we should really focus on.
A
They will never, ever speak that way.
C
No, no, no, of course not.
A
Guaranteed. Here's another one: OpenAI's new model is coming out. It's called Spud. OpenAI CEO Sam Altman said the company has completed the initial development of its next major AI model, codenamed Spud. He told staff that the company expects to have a very strong model in a few weeks that the team believes can really accelerate the economy. He added, things are moving faster than many of us expected. You know what's interesting? And now I'm going to really sound optimistic, and I'm trying to check myself. Just as the world implements today's models, and is starting to find it can do things with them that it really couldn't with the previous generations, and that's leading to this explosion of possibilities, it's like, wait, they're building better models than these, some that they say are sizable leaps. It is one of those moments where you sit back and just go, this is crazy.
C
I mean, it definitely feels like that. But sorry, I just have to stop for a moment and say: the name Spud did not jump out to you as, what the hell is going on? And Anthropic is sitting on Mythos and Capybara. But Spud is the codename for their model? Where is this coming from?
A
Not exactly inspiring.
C
No, no. He's literally saying the team believes it can really accelerate the economy. He's not even saying, like, you'll be able to do a little bit more multi-step reasoning. He's being, as we were just talking about, everything has to be grandiose: it can really accelerate the economy. And it's called Spud.
A
I don't know. Imagine you're at your job and your manager walks over and they're like, got to tell you, we're replacing you with Spud. That would hurt my feelings. But it might happen, maybe.
C
You know, it's so bad that it actually makes me think I will never forget the name, versus Mythos and stuff, which might just fade like Haiku; I couldn't remember that one off the top of my head.
A
But Spud, you'll remember.
C
Don't even call it GPT-6. Just come out with: our new class of models, our family of models, Spud. Spud.
A
But look, let's be clear here. It's just a codename. It's not like OpenAI is going to release Spud, the product.
C
I know, I'm talking about the codename. But again, most tech codenames that I've ever come across or been part of, everyone has a somewhat ambitious, kind of strong, grandiose, big name. So that's why this one really jumped out at me. But I kind of like it now.
A
It's like the same people who brand the Pentagon's operations have come in and started doing the codenames.
C
No, but something epic like "Fury" would be more in line with Mythos. Spud, I don't know about that. I don't know.
A
Well, we'll find out soon enough. All right, folks, if you are enjoying seeing Ranjan riled up, well, just wait till the second half where we talk about Siri. We'll be back with hopefully some better news about Siri's direction, but I can't promise anything. We're back right after this.
D
It's not just something you made. It's the privilege that you get to work with your hands. It's building something that serves a purpose, proof that you have the grit to keep going. At Timberland, we understand you take your craft seriously, and we do too, which is why our products are built to the highest quality. We put in the work so you can perfect yours, with purpose in every detail, crafted with intention. Timberland: built on craft. Visit timberland.com to shop.
E
This episode is brought to you by Indeed. Stop waiting around for the perfect candidate. Instead, use Indeed Sponsored Jobs to find the right people with the right skills fast. It's a simple way to make sure your listing is the first one candidates see. According to Indeed data, sponsored jobs have four times more applicants than non-sponsored jobs. So go build your dream team today with Indeed. Get a $75 sponsored job credit at indeed.com/podcast. Terms and conditions apply.
A
And we're back here on Big Technology Podcast Friday Edition. Siri is on its way to a new improvement. This is from Bloomberg: Apple plans to open up Siri to rival AI assistants in an iOS 27 update. The company is preparing to make the change as part of a Siri overhaul in its upcoming iOS 27 operating system update. The assistant can already tap into ChatGPT through a partnership with OpenAI, but Apple will now allow competing services to do the same. The company is developing new tools to allow AI chatbots installed via the App Store to integrate with the Siri assistant. The chatbots will also work with an upcoming Siri app and other features in the Apple Intelligence platform. I'll just quickly share my perspective here and then, you know, let you riff off it, Ranjan. I initially took this as, maybe Siri is saved. But then I realized it's just going to be the same disappointing user experience you use to find ChatGPT in Siri today. In other words: same Siri, new stuff you can access with it. Not a Siri that is actually, you know, a distilled version of Claude or something like that. And that just makes me even further disappointed in what we're going to get on the iPhone in terms of an AI assistant.
C
I will try to remain calm, but this made Spud look like pure poetry of a name. Because reading this announcement... I agree, there have been a lot of very promising things about Siri that have made me question my hatred for it and think there is a possible future. But this got even more confusing, because Gemini being the base foundation of Siri and actually making it valuable is kind of exciting. This, not so much. When's the last time you actually used the ChatGPT integration in Siri?
A
I don't use it. I mean why would I do anything but go to the ChatGPT app?
C
Yeah, yeah. But then the actual technology here, like Siri actually having connected apps, or, I think they called them skills. For many years there was something trying to actually integrate specific functionality within apps; that has existed within Siri. So just having that as the query, I guess it'll be a little less friction if it means that Siri will then be able to directly read out the answer to you, or have it appear, rather than opening up the target app. But I don't know, I agree, this is horrifying if this is still where their heads are at. Siri has to be good enough to compete with those apps. It just has to. It should not trigger some other thing. And I thought that's what they were working on. So the fact that this is going to be some kind of thing... And actually the last line, "this approach should allow Apple to generate more money from third-party AI subscriptions through the App Store," that was the most depressing part. If this is just, well, buy your Gemini through the App Store and then we're going to take a cut, this is troubling. We'll see. I'm still going to give them a chance here, but I'm not positive.
A
Two months till WWDC, so let's see. Not very enthusiastic right now about what's going on. Although, go ahead.
C
I was gonna ask, do you think at WWDC they're gonna have a well-fleshed-out vision of what Apple Intelligence is, what Siri is?
A
No. I wouldn't be surprised if we didn't hear anything about Apple Intelligence this year as well.
C
Well, but even the Siri overhaul and stuff, right, like you think they're.
A
No, nothing, no. Maybe a few minutes. They have to release products. They have to let the products do the talking. They cannot tell us again about what's coming. That seems to be.
C
I agree. I agree at least. Yeah. Okay, agreed.
A
Let's go to our other friends over here in Silicon Valley: Meta and YouTube. This is a pretty big deal, actually. A court in California found them both liable for harming a young user with their features. This is from the New York Times: Meta and YouTube found negligent in landmark social media case. The landmark decision could open up social media companies to more lawsuits over users' well-being. Meta must pay $4.2 million in combined compensatory and punitive damages, and YouTube must pay $1.8 million. The bellwether case, which was brought by a now 20-year-old woman, accused social media companies of creating products as addictive as cigarettes or digital casinos that led to anxiety and depression, and the court found for her in this case. And there are thousands more of these lawsuits coming through now. I was on CNBC just as this happened and I said, you don't want to lose these cases, because you're going to have others. And the other panelist that was with me was basically like, you know, this is a win for them because the amount was so small. It was only a few million dollars, six million total. But when you lose, you open the door for other losses, and some might see the award and say, if they got that, we can get even more. And lo and behold, Meta stock has just tanked over the rest of the week. So how do you read this, Ranjan, in terms of the potential for successive liability for Meta after this?
C
But hold on, I'm trying to... because I had also seen that they were ordered to pay $375 million in civil penalties.
A
Yes. So that's a separate case. They also lost this week in New Mexico, which for some reason wasn't talked about as much. And I have a theory as to why it wasn't talked about.
C
Okay, hold on, walk me through. Because, and just to note, Andy Stone, the head of comms over at Meta, I'd seen a tweet where basically he was like, it's not that much, it's a fraction of what the state sought, even for the $375 million, which was just terrible.
A
$2 billion. But part of the point is that if you lose, you open yourself up to further losses. So that's it to me. I guess I can sort of give my perspective here. The issue here is that the ruling is basically telling them that you can't use Section 230 as a shield anymore. It's not necessarily that you're being sued over the content on your platforms; you're being sued directly over the way that you design them. And courts are finding that, yeah, you can't hide behind Section 230, which protects forum owners from the content that people put on top of them. And now we could potentially see thousands of similar cases. That is a problem.
C
Yeah, I mean, honestly, this is something since 2015, 2016 I've been hoping would be recognized. So it is, ten years late, but still, I'm very positive about it. And it's not only being liable and opening the door. The fact that even the smaller case found, as the reporting put it, that the finding validates a novel legal theory that social media sites or apps can cause personal injury, I mean, that's a huge deal. The actual injury side of it, I think, gets really interesting. How do you see this playing out? Because for how many years, for at least a decade, every one of these has come and gone, and Meta keeps Meta-ing. And Instagram, every single person I know just scrolls on it all day long. They're still doing what they've always done, and have been doing a very good job at it. So how do you see this? Do you see this actually affecting their business?
A
I saw a great interview with a law school professor this week, and his name escapes me, so apologies, who was basically like, Meta has to appeal this, and they will appeal this, because this is effectively setting precedent in the country for whether Section 230 can work to protect you or not. And the court just found no. So the way that this professor sees it going is that it goes all the way to the Supreme Court, and then the Supreme Court rules specifically on the boundaries around Section 230. And, let's say it goes the way that this guy thinks: if the Supreme Court rules that Section 230 is not protective, it's not just the thousands of cases in action now, it could be even more that come in. And, you know, I guess that's somewhat concerning in terms of, all right, if you're a content business, are you now liable? Like, we have the Discord, and luckily everybody's pretty well behaved and happily contributing there. But am I now liable for everything said in our Discord instance? It just opens up this Pandora's box that could cause damage to the Internet, and especially to Meta's business. And on the Meta business side, just one more thought here: they are spending a lot of money this year, $115 to $135 billion, on AI infrastructure. And the reason why the market has, quote unquote, allowed them to do that is because they're making all this money. If you start seeing your margin trimmed by a sort of death by a thousand needle pokes from these lawsuits, then all of a sudden your ability to spend on innovation goes down, because your margin comes down and the market doesn't give you the leeway that it might have otherwise. So that's a potential problem there.
C
Okay, so a couple of other thoughts. See, I guess it's just been so long where they have not been affected that it's almost unbelievable for me to think that they will or could be. It's been years and years. When was Mark Zuckerberg in front of Congress? Way back. That was like 2020? 2018? 2017? Yeah. Remember good memes back in the day? I think I was there. Okay, so yes, it was so long ago. But a couple of things jumped out. One, the cigarette analogy is interesting to me, because I saw some interesting pushback on whether that's really the right analogy. I think there was an op-ed in the New York Times on this. Basically the idea is there is good and there's bad, so it's not like cigarettes, which, I mean, maybe you can argue there's good, but in general, I think most people are not even pretending there's any true good out of them, versus social media can be a net societal positive. It can also be very negative. To me, though, and I've written about this a lot, the algorithm is the cigarette, or the tobacco. It's not the content. It's not even the core technology of posting a photo. It's algorithmically ranked content. And if this finally gets people back to talking about that as a danger, I'm very, very happy about it. I think it's good. But yeah, like fixing Siri, I'll believe there is a material impact to Meta's business when I see it.
A
I know we haven't been in the courtroom, but I'm curious if you agree with the verdict here, because the Meta argument is: teen mental health is complicated. It doesn't come down to one app. You cannot blame everything on a single app. I mean, obviously in some cases they're contributing to teen mental health issues. But then again, there is some merit, I think, in saying that there's a combination of factors and not just one culprit. What do you think?
C
So I will say, and again, this has been a long, long rant. I think it was 2019, we'd written in Margins five ways to fix social media. One of them I still love, which we'll never get, is that the timeline should be reverse chronological by default, so there's no algorithm suggesting the content. Because to me there is one culprit: it's the algorithmic recommendation of content. That's it. Whether it's on YouTube, whether it's on Meta or Facebook, whether it's on Instagram or TikTok, that's the entire platform. That's what it shows. It radicalizes people, makes them feel bad. So I do think there's one culprit here. I think it is interesting.
A
Hold on, you're saying that this person's mental health issues, you would say, are entirely due to the algorithm?
C
I mean, that's like saying, is smoking responsible for lung cancer, or could it be obesity or environmental factors and air quality? Come on, we all use social media. Everyone does. I do love that a lot of the time I have friends who are like, I am not influenced by social media. I'm not influenced by the ads. The posts don't actually make me feel like I'm missing out on something or I need to improve my vacation. But to me, I don't know, is that not the most crystal clear thing to you?
A
Well, I guess this is sort of... I see your point. The counterargument to Meta's argument is: it's not that smoking cigarettes leads directly to cancer, it's that cigarette smoke is a known carcinogen. It raises your odds of getting cancer, and therefore in many ways the cigarette companies are liable for the additional cancer cases that they cause, even though you can't draw a straight line one to one. And maybe there's a similarity with, are more kids depressed today because of social media? If you can prove that, and it's tough... I'm just, again, talking through these arguments.
C
Yeah, causality is very difficult to prove. Which, actually, now that we're talking, I do think makes this a really big deal, because causality, especially in this case with mental health, feels nearly impossible to prove. I don't know, maybe you could, based on their usage statistics, somehow start to draw more of a direct correlation for that specific user, especially if you're looking at individuals. But anyone who has clicked on the YouTube right rail of recommended videos knows it just exacerbates, exaggerates, radicalizes in many cases. I mean, it's just designed to make you feel, and the easiest way to make people feel and stay engaged is to make them not feel great.
A
Maybe, but they keep coming back. Let's say, yeah, people keep coming back. I don't know. I call it doomscrolling, and I want to doomscroll. That's my choice.
C
That's what a good addictive product does. If you want to gamble on sports, bet on sports. If you want to vape, if you want to smoke cigarettes, or whatever your vice of choice might be, it's similar to me. You know, do you want to know one of my hot takes? So Twitter changed from reverse chronological by default in the spring of 2015, and everyone's default feed became algorithmic.
A
Right.
C
And what happened through 2015 into 2016? Or hold on, let me get the exact date.
A
Yeah. That is responsible for the political climate globally right now.
C
Yes.
A
Okay, first of all, a couple of things. Number one, what you described, somebody smoking and vaping and sports gambling and spinning through Reels, that's basically my weekend, where I've got the vape and this. Look, one thing I'll say about the whole algorithmic thing is that it's always going to be weird for me personally, because I was a reporter at BuzzFeed at the time, and I got the scoop that Twitter was moving to an algorithm. That was on, like...
C
Oh, sorry, hold on. February 2016. So it's even more in line with my theory here. February 10, 2016.
A
Well, I mean, look, you could say that Trump, out of all the candidates, played social media well. But I think the thing that really put him in office was those debates, where he just... I mean, Shane Gillis has a pretty good bit on this. Just that, like, one candidate's like, I'm Rand Paul and I believe in schools. And then Trump was like, you're a complete loser. And everyone's like, you can do that?
C
No, but it traveled more because of
A
the time. And then it traveled because... you're right. That's interesting. Yeah, it's possible. I'm not saying... that was a weird election year also. I mean, not to bring us all the way back to it, but when I was at BuzzFeed, I did some reporting on Trump rallies and got retweeted by this Tennessee GOP account that was like, you know, the mainstream media will never show you this. Which was funny because, A, I was part of, I don't know if you call it the mainstream media, but the media at the time. And B, that account, Tennessee GOP, which was massively followed and influential during the election, was run from St. Petersburg. But that's a different story. We could talk about that another time.
C
What a time. Good, good times.
A
But here's my... so I got that scoop that there was going to be this Twitter algorithm on a Friday, and then there was a big thing that happened. It was called RIP Twitter. I don't know if you remember that. Like a million people tweeted RIP Twitter over a weekend after my story came out. And that led Jack Dorsey to say, we were never planning to introduce an algorithmic feed next week. And then my mentions flooded with people saying, you're a liar, your career is over, how does it feel to have no credibility? And I thought I was totally gaslit. I thought I was done. And then they made the announcement that Tuesday, the following Tuesday. Jack... I mean, reporting on that company, that was crazy. All right.
C
Yeah, so that's my theory. I'm sticking to it.
A
Okay. Should we talk about the tech stocks? Very rough week for the tech stocks. This is from CNBC: tech stocks suffer their worst week in nearly a year, driven down by war worries and Meta legal woes. I mean, Microsoft is 30% off its high. 30%! Do you think this is just war, or is it a growing discomfort and unease around the spending and the lack of near-term profits from AI for these guys?
C
So I do think what's happening this week is very important. The headline side of it is, yes, the market's been getting creamed this week, but tech stocks have been on epic runs anyway, so giving a little back feels like a pretty natural thing. But to me, because of all the circular financing that's at the foundation of a lot of what's happening in AI right now, and because of all the follow-on effects, if the tech giants start actually being in a little bit of trouble, what does that mean for the industry writ large? And again, Microsoft, which I think could be a whole other segment, in terms of why they are not doing as well as the others.
A
But give it to us. Give us 60 seconds on Microsoft.
C
Well, I mean, I think it's clear they have fallen behind. There's nothing exciting coming out. They have the install base of everyone using Copilot, but people are not paying up and converting to paid subscribers in any meaningful way. They just replaced Copilot leadership. So on the whole AI thing, it is pretty crazy that they had OpenAI, they were the partner at the very beginning, and now, still, they're not really anywhere notable. Like, what's the last exciting thing from Microsoft in AI that you can think of?
A
Bing.
C
I mean, they had Bing. They actually were the first; they could have pushed this through. Yeah, I think everyone's kind of coming around to that, and maybe it'll just be a good wake-up call. I'm sure, given their install base, given who they are, if they figure this out they will be a force. But I think the market is recognizing it a bit.
A
Okay, let's end this week with one of our traditional product slash feature funerals. Ladies and gentlemen, we're gathered here today to pay our respects to the short and quite eventful life of the OpenAI adult mode, which has left our world indefinitely and doesn't seem like it's coming back anytime soon. From the Financial Times: OpenAI has shelved plans to release an erotic chatbot indefinitely as it refocuses on core products, following concerns from staff and investors about the effect of sexualized AI content on society. Sam Altman's startup had already delayed the release of its adult mode amid internal discussions over whether to scrap the mode entirely. The sexual chatbot faced growing pushback over how it could encourage unhealthy attachments to AI systems and expose minors to problematic sexual content. Rest in peace, adult mode on ChatGPT. Perhaps our planet is better off that you never saw the light of day.
C
How do you feel about this, Alex? Companionship has been one of the cornerstones of the Kantrowitz school of the future of AI.
A
Well, I don't want to be the morality police and say you shouldn't be able to have cybersex with your chatbot, but speaking of businesses that OpenAI shouldn't be in, this seems like one of them. It just opens up this whole can of worms. I think this is the right choice. What do you think, man?
C
Unquestionably the right choice. We've brought this up: the moment they said enterprise, I was like, you cannot have erotic chatbots running around and then still pretend that people are going to trust you. But again, I don't know, maybe they could have done something interesting. Maybe this focused all the creativity in the industry. Are we losing the weirdness of Sora and potentially erotic chatbots now that everyone's just making Claudes?
A
Well, speaking of potential competition, it does open up the door for other chatbot providers to use some of the underlying technology and make an erotic chatbot of their own. Just because you can't use it within the ChatGPT interface, maybe you can use a GPT-based adult mode chatbot, and you can make a pretty good startup that way.
C
Question, and with the disclaimer that we are not lawyers and will not pretend to be: given the idea around liability in the social media use case, should an AI company be responsible for the end content created via its APIs? They have control over the... and again, this actually ties back to the Pentagon and the war question, but should they be responsible, for adults?
A
No, like adults should sign off that they don't know where this is going to go and they shouldn't be liable. But for kids, absolutely. What do you think?
C
Well, hold on. You're saying, if some other service is calling OpenAI's models via API, and then it's adults having erotic chatbots, is OpenAI delivering that service and content? Should they be responsible for whatever happens? And then, if kids are using this, that's a whole other thing. Should the service be liable? Should OpenAI also be liable? Okay, so two separate questions.
A
Yes, that's great. That is a great question, because the comparison is cloud hosting, but it's a little bit different from cloud hosting, right? Cloud hosting is, you store your stuff here and that enables you to do what you want to do. Whereas a chatbot is, you're using this technology to do what you want to do.
C
No, no. And it's actively generating new content.
A
I think it's the person... sorry, I think it's the person that deploys it. I don't think OpenAI should be liable if somebody else uses their technology. I think they should have terms of service, because they want this technology to have a good reputation. Remember their polling issues. But I don't think they should be legally liable if somebody else deploys it in this way.
C
Okay, then in the terms of service, do you think they're going to prevent others from creating erotic chatbots?
A
Probably not.
C
I mean, because if you think about
A
it... are you thinking what I'm thinking? Do we have to, like, make our own version of this?
C
No, no, no, no. Because maybe, maybe.
A
But if we do, you know what we're calling it?
C
Wait, what?
A
WetChat.
C
Oh, no, stop. No, no, no, no. I was going to say something in a whole different direction. Imagine if it's a brilliant maneuver: they could actually see a ton of API-based revenue from basically everyone else creating erotic chatbots, and then put that under the umbrella of enterprise revenue and have hockey-stick charts about, look how fast our enterprise and API business is growing, because that's technically enterprise. But I can't think anymore because...
A
I mean, it would be diabolical, a diabolical plan.
C
I think you have to explain to anyone who missed last week's episode the context of WetChat, this app name.
A
All right, folks. So last week at the end of the show, we talked about dry chatting, which is where you practice a conversation with a chatbot before you go in and do it live with a real person. And so, of course, if you don't do that, the actual live chat with a person wouldn't be called a dry chat. It would be called a wet chat. And it's just disgusting to think about that. But I'm just saying, it would be a good name for a chatbot app.
C
I've gone over the whole... I think we should make this our second episode that ends with that term that I just cannot bring myself to say. And then hope next week it doesn't.
A
Yeah, that will be our biggest hope: that we can end an episode without bringing that up. But if I had to bet, I would say I doubt it. All right, Ranjan, thank you so much for coming on. 50-50.
C
All right, see you next weekend, man.
A
All right, everybody, thank you for listening and watching. On Wednesday, Greg Brockman, president and co-founder of OpenAI, is going to come on and share lots of new information about OpenAI. Don't miss it. Thank you again, and we'll see you next time on Big Technology Podcast.
Big Technology Podcast – Friday Edition
Episode: "Why OpenAI Killed Sora, Did Apple Just Save Siri?, Meta’s Big Loss"
Host: Alex Kantrowitz
Guest: Ranjan Roy (Margins)
Date: March 28, 2026
This episode dives deep into major recent shifts in the tech ecosystem.
The tone is insightful, bantering, and occasionally irreverent, with both hosts blending reporting, industry insight, and tech-wonk humor throughout.
Quote:
“It’s not just OpenAI and Anthropic… everyone has recognized that's the prize… for a lot of tools to date, it was just that API call. I think more and more people are going to start either customizing or fully training on the foundation model side.” (Ranjan, [09:41–14:35])
| Time | Segment/Topic |
|-------------|------------------------------------------------------------------|
| 01:45–14:00 | OpenAI kills Sora; deeper look at platform shift |
| 14:00–18:07 | OpenAI & Anthropic’s strategy convergence; agentic AI |
| 24:08–31:36 | Next AI model war: Anthropic (“Claude Mythos/Capybara”), OpenAI (“Spud”) |
| 32:51–36:55 | Apple/Siri “improvements” and analysis |
| 37:00–44:38 | Meta/YouTube found liable; Section 230 challenged |
| 43:03–49:10 | Social media algorithms as the “tobacco” of tech |
| 52:24–55:03 | Tech stock slide and Microsoft’s AI dilemma |
| 55:03–61:25 | RIP OpenAI “adult mode”; debate over AI liability and weirdness |
| 61:40–end | Closing banter, next week’s preview |
This episode captures a major inflection point in Big Tech and generative AI.
In sum: The AI arms race is sharpening, consumer fun is giving way to enterprise utility, and the legal and trust landscape for big tech is shifting underfoot—setting the stage for deeper changes ahead.