
Paul Roetzer
Google has to just be salivating right now. Like, we've said this before: I would never bet against Google in the end here. And all these things that Sam is now trying to solve for. He's got talent leaving left and right. He's got to raise money just to solve for the fact they're losing $5 billion. He's trying to convince people to build the data centers. Who's got all of that already? Google has it all. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and Marketing AI Institute Chief Content Officer Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join us as we accelerate AI literacy for all. Welcome to episode 117 of the Artificial Intelligence Show. I am your host, Paul Roetzer, along with my co-host Mike Kaput. We basically could just do an entire episode on the week that was for OpenAI. It was an insane week. I mean, Mike and I were joking before we got on, this may be the most links for a single main topic we've ever had to deal with for OpenAI. So that is going to be main topic number one, and it's going to be a rather comprehensive main topic. But before we get into all of that, today's episode is brought to us by rasa.io. If you're looking for an AI tool that makes staying in front of your audience easy, you should try rasa.io. Their smart newsletter platform does the impossible by tailoring each email newsletter for each subscriber, ensuring every email you send is not just relevant, but compelling. And it also saves you time. We've known the team at Rasa for a long time. They've been an early supporter of Marketing AI Institute going back to almost the very beginning.
And no one else is doing newsletters like they're doing it. The true personalization based on behaviors is a real key if you want to scale a newsletter. Plus, they're offering a 5% discount with the code 5MAII. Again, that is 5MAII when you sign up. Visit rasa.io/maii today. Once again, that's rasa.io/maii. So Mike, it was honestly hard to follow everything happening with OpenAI this week. It was a flood of news, and it just seemed to overwhelm everything else that happened last week. Oh my God, it was wild. So kick us off with what in the world we went through with OpenAI last week.
Mike Kaput
We also may be running a record for most things that are going into the newsletter only this week, because there are like 30 topics we considered that we're not going to get into, thanks largely in part to the drama at OpenAI. So it has been a huge week at OpenAI, but not always in a good way. Now, the week started innocently enough. Early last week, OpenAI finally rolled out Advanced Voice Mode to all ChatGPT Plus and Team users. I will say it is pretty amazing. I'm really enjoying it. At the same time, some big reports came out, which were largely exciting, that Sam Altman and other tech leaders had actually been pitching the White House on building enormous data centers, ones that are 5 gigawatts each, the equivalent of five nuclear reactors, all across the US. They're pitching this idea to build these to power the AI revolution. These would roughly cost about a cool hundred billion dollars each. But things quickly got a little messy, because right around the same time as this report, CTO Mira Murati said that she was leaving OpenAI. She posted a letter revealing why that was the case, saying it was time to step back and explore different opportunities. At the same time, we also saw Bob McGrew, the company's Chief Research Officer, and Barret Zoph, Vice President of Research, leave the company at the exact same moment. So instead of just having Mira Murati leave, we had multiple other leaders departing all at once. Now, pretty quickly, Sam Altman goes into damage control mode. He posted on X thanking them for their work and noting, quote, Mira, Bob and Barret made these decisions independently of each other and amicably. But the timing of Mira's decision was such that it made sense to now do this all at once, so that we can work together for a smooth handover to the next generation of leadership. The very next day.
Reuters reports that OpenAI is working on a plan to restructure into a for-profit company and give Sam Altman equity. Now, we had known something like this was in the works, but it seems to confirm that this is moving forward quickly. This is then followed by a series of unflattering reports about what's going on within the company. So one of them, from Karen Hao in The Atlantic, presents a pattern of persuasion and consolidation of power by Altman internally at OpenAI. A second report, from the Wall Street Journal, said, quote, the chaos and infighting among executives at OpenAI is worthy of a soap opera. And a third, from The Information, wasn't directly about the executive shakeups, but it did reveal that OpenAI is now being forced to train a new version of its Sora video model, as, quote, several signs point to the video model not being ready for prime time when OpenAI first announced it earlier this year. So overall, all these departures, the scrambling by Sam Altman to address the issue (Greg Brockman had also posted about it in the moment), and then these kinds of insider reports of shakeups, delays and uncertainties together are painting a bit of a chaotic picture at OpenAI. And Paul, I really liked that in this week's edition of the Exec AI Newsletter, which is a new weekly newsletter you're writing through Marketing AI Institute's sister company, SmarterX, you wrote, quote, I've spent 13 years monitoring and studying the AI industry. This was one of the crazier weeks I can recall. OpenAI alone had what seemed like a year's worth of news condensed into five days. Can you maybe take a step back for us here and tell us what the heck is going on at the company? Like, how worried should we be about this? What's the deal?
Paul Roetzer
Yeah, I mean, it really was hard to follow and understand everything that was happening. And each of these things, like the data center thing, we could probably spend 20 minutes talking about what all that means and unpack that. Each of these items on their own could probably be a main topic. But what we're going to try and do is connect the dots: why all this is happening at the same time and what it probably means. So at a very high level, this is a nonprofit research company that many of the top AI people in the world went to work at, starting back in 2015, to work on building artificial general intelligence, to be at the frontier of developing the most advanced intelligence, non-human intelligence, that the world has ever seen. And that was what they were there for. At some point around 2022, ChatGPT emerges, OpenAI all of a sudden catches lightning in a bottle, and they start becoming a product company. And that appears, since that time, to be creating enormous friction within the organization. There are people who are still there for the pure research side of this, to be at that frontier, to have access to the computing power, to do incredible things and build incredible things. And then there are people who are business people, like Sam Altman, who are trying to capitalize on a potentially once-in-a-generation, or, I don't know, once-in-a-lifetime opportunity to build a massive company. And so there was an article that came out, I think it actually came out after I wrote the Exec AI newsletter on Friday, and it said OpenAI is growing fast and burning through piles of money. This was a New York Times article, and they had gotten access to financial documents that I believe are being shared with potential investors. So as we've talked about, and you alluded to, Mike, they're raising money. Word is they're actually going to finalize the decision this week of who is going to be allowed to invest.
So they have more investors lined up than they're going to take. So a couple of key insights here, and I think this is really relevant. In the New York Times article, they said monthly revenue for OpenAI hit $300 million in August, up 1,700 percent since the beginning of 2023. Now, that's one of those things where you can make data say whatever you want it to say. At the beginning of 2023, ChatGPT was two months old, so the 1,700 percent is kind of a useless number. I'm not sure why they do that. But this number is significant: the company expects about $3.7 billion in annual sales this year. That's a big number. OpenAI estimates that its revenue will balloon to $11.6 billion next year. So this is actually very relevant. We've heard the rumor that OpenAI was being valued at around $150 billion in this investing round, when a couple months ago, when they were doing some internal stuff, it was like $86 billion being thrown around. So $150 billion sounds like a lot. But if they're projecting a one-year forward revenue of $11.6 billion, then that's actually a pretty reasonable multiple range. And I'll explain the multiple range a little bit more when we talk about Anthropic's valuation in one of the rapid-fire items, but roughly 10 to 13 times forward-looking revenue is not unheard of in the technology world. So that $150 billion valuation now starts to make a little sense. But then the New York Times also said they expect to lose $5 billion this year. So even though they're doing $3.7 billion in annual sales this year, they're going to lose $5 billion. ChatGPT on its own is bringing in $2.7 billion this year. So that's $2.7 billion out of $3.7 billion; the vast majority of their revenue is coming from ChatGPT. They did $700 million last year, for comparison. And then the other billion is coming from other businesses using its technology, presumably through the API.
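For anyone who wants to check the multiple math themselves, here's a quick back-of-the-envelope sketch using the figures cited from the New York Times report. The 10 to 13x band below is simply the "reasonable multiple range" mentioned in the discussion, not an official benchmark.

```python
# Back-of-the-envelope check of the forward revenue multiple discussed above.
# Figures are in billions of USD, as cited from the New York Times report.
valuation = 150.0        # rumored valuation in this investing round
forward_revenue = 11.6   # OpenAI's projected revenue for next year

multiple = valuation / forward_revenue
print(f"Forward revenue multiple: {multiple:.1f}x")  # ~12.9x

# Lands inside the roughly 10-13x forward-looking range described as
# "not unheard of in the technology world".
print(10 <= multiple <= 13)  # True
```

So the rumored $150 billion number only looks reasonable if you accept the $11.6 billion projection; against this year's $3.7 billion in sales, the same valuation would be roughly a 40x multiple.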
So roughly 10 million ChatGPT users pay the company a $20 monthly fee. The New York Times had that they're planning on raising that by $2 per month by the end of the year, and then aggressively raising it to $44 in the next five years, which to me seems absurd that they even could forecast, because what happens when there are different versions of the model, more intelligent ones, and maybe it's $2,000 per month? So I wouldn't put a ton of stock in that. And then the big grand finale from the New York Times is OpenAI predicts its revenue will hit $100 billion in 2029. So this is the first time that I know of that we're seeing these sorts of numbers, a true inside look at what's going on. But it does confirm what I've said on the podcast recently, which is that the $6 to $7 billion they're raising isn't enough money. This is just a prelude to everything else. And I don't remember if it was this article or another one I read over the weekend that was talking about the complexity they're going to deal with. So to raise this money, they have to convert the company into a for-profit. But to convert into the for-profit, they can't just wipe away what existed in the nonprofit. The nonprofit has to get assets and value out of this thing, and it's going to be really complicated. So apparently what they're doing is raising this money, and then they have two years to complete the process of converting over from the nonprofit. So this is going to be really, really messy and weird and almost unparalleled. It's not like we've never had a nonprofit become a for-profit, but probably not of this size. And again, they're losing $5 billion this year. Raising $6 to $7 billion isn't going to solve anything. So I still feel like everything happening right now is just a prelude to an IPO, as quickly as they can probably get to an IPO.
And so, you know, again, just keep in context why this is all happening. All this drama is likely coming from the fact that we had this research firm that's trying to become this massive trillion-dollar company within a couple years. Because at $100 billion in revenue in 2029, you're talking about a $1 to $2 trillion market cap publicly traded company at that point. So, one of the, what, 15 biggest companies in the world, basically, is what they're projecting in five years. So this leads to another really good article from the Wall Street Journal, titled Turning OpenAI Into a Real Business Is Tearing It Apart. And this one was fascinating. I think this one came out either Friday night or Saturday morning, if I remember correctly. So I'm just going to go through a few key excerpts from here, because this is stuff we've never heard before. There are some insights in here that, to my knowledge, we had not seen. So the first is, it says some tensions are related to conflicts between OpenAI's original mission to develop AI for public good and then deploying money-making products. This gets into Mira leaving and some of the other people leaving. So Mira is now one of 20 OpenAI researchers and executives who have quit this year, including multiple co-founders: John Schulman, who we talked about recently on an episode, Andrej Karpathy, Ilya Sutskever. They were co-founders. They've been there from the beginning, and they've all quit. That's a trend. That is not an anomaly. Something is going on. The article said that Altman has been largely detached from the day-to-day, a characterization that the company disputes. But you can see it: he's everywhere, globally, trying to raise tons of money. He's doing lots of interviews, he's all over the place, but he's not involved in the technical side of the business and the day-to-day operations.
Meanwhile, he's the CEO of a company that went from 770 employees last November, when he was fired as CEO for five days and then brought back, to 1,700 employees now. So lots of growth. So you could just look at this and say this is just standard growth stuff and it's complicated, but that doesn't actually seem to be the case. So the article got into Ilya and when he left the company. It said that Mira and President Greg Brockman actually went to Sutskever's house and tried to convince him to come back when he left, because the company was in disarray and, quote, might collapse without him, because they saw all the top researchers were going to leave if Ilya was out. And apparently, and this is again the first time I'm seeing this anywhere, he was ready to come back. Like, Ilya was kind of thinking, okay, maybe I'll come back, maybe we'll work this out. And then he got a call from Greg Brockman who said, we're rescinding the offer, you're not welcome back, basically. And apparently, what the article said is that they couldn't figure out his new role. They'd already replaced him, and they couldn't make that person step back down for him. So they couldn't come to an agreement on what his role was going to be, so he left. Then the other one that I thought was really intriguing is, you know, when Greg Brockman took his sabbatical, the day after GPT-4o came out, if I remember correctly. So it's weird: we had Advanced Voice come out, Mira leaves, and these other two executives; a few months earlier, GPT-4o comes out, Greg Brockman leaves the next day on sabbatical. And what I said at the time, and I've said multiple times since, is it's a very oddly timed sabbatical, with very little information. Well, it ends up that apparently Sam and Greg agreed, quote, unquote, mutually, that Greg should maybe step aside for a little bit. So what this article is saying is that people haven't always loved Greg's management style. It says, quote, his management style caused tension.
Though President Brockman didn't have any direct reports, he tended to get involved in any projects he wanted, often frustrating those involved, according to current and former employees. They said he demanded last-minute changes to long-planned initiatives, prompting other executives, including Mira Murati, to intervene to smooth things over. For years, staffers urged Altman to rein in Brockman, saying his actions demoralized employees. Those concerns persisted through this year, when Altman and Brockman agreed he should take a leave of absence. So that was interesting, and it kind of jibes with why all of a sudden the sabbatical happened. And then the final thing I'll end with here is this idea again of Sam pushing growth, because that's what Sam does, versus the research team and the technical team led by Mira that often was pushing back, saying, we're not ready to do these things. And if you recall, Sam's the one that greenlighted ChatGPT and said, basically, you have like three weeks to launch this product. And maybe the technical team, Ilya and Mira, maybe they didn't agree with it back then. And so they gave a couple of examples. It said that Mira repeatedly delayed the planned launches of products, including Search (we still don't have SearchGPT, which was announced months ago), Voice, which we finally just got, and Sora, which is the one you mentioned, which is video. But Mira is the one that took the PR hits over Voice. And with Sora, Mira was the one getting interviewed, asked, well, what's the training data, and her having to lie to people, basically saying, I don't know what the training data is. Of course you know what the training data is. Weird. Come on. So The Information had it that it's just not ready, that what we saw in the demos was misleading, probably at best. One source said it takes 10 minutes to create a few seconds of video like what we were seeing in demos, and that it can't create consistent characters and objects, all these things. So what I wrote on LinkedIn, and I'll just kind of read this and then, you know, see if we have any other thoughts here, Mike. So in the Wall Street Journal article that we'll link to, the one thing that jumped out to me is it said that this spring, tensions flared up internally over the development of a new AI model called GPT-4o, which is the one where Greg left the next day, that would power ChatGPT and business products. Researchers were asked to do more comprehensive safety testing than initially planned, but given only nine days to do it. Executives wanted to debut GPT-4o ahead of Google's annual developer conference and take attention away from their big rival. Which is funny, because we always joke about that: just look at the calendar of events and you know when the next products are going to come, because they're just going to launch them the day before Google does their stuff. It went on to say the safety staffers worked 20-hour days and didn't have time to double-check their work. The initial results, based on incomplete data, indicated GPT-4o was safe enough to deploy. But after the model launched, people familiar with the project said a subsequent analysis found the model exceeded OpenAI's internal standards of persuasion. Now, that's interesting, because that's what I've said multiple times recently on the podcast: these things are far more capable of persuasion than they're letting on, and they're actually extracting that capability by trying to find when it's doing it and stop it from doing it. But natively, these models are insanely persuasive, and they probably have been since GPT-4 was first created.
So then I said, this is a microcosm of the industry now. Companies are racing to preview and launch more advanced models and products before their competitors in order to win market share, investment dollars, stock price increases and ego boosts. This leads to half-baked hardware like Rabbit and the Humane AI Pin; product demos we won't see in production for months or years, like Meta Orion, which we'll talk about in a minute; Apple Intelligence, where they launched phones that don't actually have Apple Intelligence and yet they're running ads featuring Apple Intelligence, which makes no sense; OpenAI's Sora and Advanced Voice; and Google's Project Astra. And then the models may be more capable and dangerous than what we're being led to believe. In fact, I know that they're more capable than what we're being led to believe. And yet the balanced side of this is the tech is still very real. Its impact on businesses, the economy, society is still completely underestimated and underappreciated. And then I said, for the first time in human history, we have intelligence on demand. It's going to be messy. We're going to have instances like this where the companies at the frontier have really bad days and weeks, take a ton of PR hits, and lose really talented people, because this is unparalleled. What we're doing here is like nothing we've ever seen. And so for these companies that are actually out there leading, and the leaders who are pushing us into this frontier of the intelligence age, it's not going to be a straight line. And so that's kind of where it leaves us. And then the one final thought I'll leave us with, Mike, is Google has to just be salivating right now. Like we've said this before, I would never bet against Google in the end here. And all these things that Sam is now trying to solve for: he's got talent leaving left and right, he's got to raise money just to solve for the fact they're losing $5 billion.
He's trying to convince people to build the data centers. Who's got all of that already? Google has it all. Google has the ability to pay two and a half billion dollars to bring back Noam Shazeer from Character.AI, who is one of the authors of the Attention Is All You Need paper. They have data centers, they have infrastructure, they have everything. They're the ones that created all the innovations that drove everything. And so, I mean, if I'm Google, I am just sitting back watching all this turmoil, and I am looking for the opportunity to re-seize the leadership position that they had for two decades. I couldn't help myself all weekend thinking about that. They have to be ready to pounce. Like, right now is when you go all in. The Sora model's not ready. You've got Project Astra sitting there, you've got NotebookLM, which obviously alludes to way more advanced voice capabilities than the public knew they had. There's so much stuff. And I don't think Anthropic can do it. They're trying to raise their own money, they've got all these problems, they don't have their own chips, they don't have distribution. They don't have any of that. Yep, this is Google's game to win again all of a sudden. And I find that fascinating, especially the.
Mike Kaput
Whole on-device AI thing, as they're baking Gemini into, you know, all of your different apps, all of your different phones. Especially where we talked about Apple kind of whiffing a little bit at the moment. It's really, really an interesting time.
Paul Roetzer
Yeah. And to that point, there are rumors now that part of the reason Apple Intelligence isn't out on time is because Apple's pulling back from their OpenAI relationship, that they're not taking the board seat, they may not invest in the next round, and maybe they're seeing these conflicts as well and saying, well, maybe that's not the horse to bet on. Now, I'm not saying that they won't still integrate ChatGPT capabilities into Apple Intelligence. My guess is they will. But I would assume they are now far more aggressively building their own capabilities, so six to 12 months from now they don't need ChatGPT, and it's going to be Apple models doing everything. Or, in a crazy turn of events, maybe it's Google's models, maybe it is Gemini, maybe it's somebody else. They've done plenty of big deals before. So I don't know. I mean, OpenAI is going to raise their money. They're going to announce sometime this week or next week, probably, that they raised $7 billion or $10 billion or whatever that number is, and they're going to have all kinds of amazing investors. But it is not a sure thing right now. That's all I'm saying: there is a lot of instability, and this is just the stuff that's surfaced to the New York Times and Wall Street Journal. Imagine what's happening that hasn't surfaced yet. And then one other quick note: I saw a former Anthropic employee, because there were some people at Anthropic sort of taking a victory lap here that OpenAI was having a bad week, and that person was like, listen, my NDA is up, you better watch yourselves, because everything that's going on at OpenAI is happening at Anthropic too. So don't be pretending like you've got this all figured out. So yeah, it's hard. For companies building at this pace and at these kinds of valuations, there isn't much precedent for how to do this right.
And so it's going to be messy, and maybe we have some unexpected left turns, I guess, for some of these companies that seem like the winners right now.
Mike Kaput
Well, I actually think that's a really good transition into the second topic, because the second topic is also OpenAI-related, but kind of shows, zooming out, where at least some of OpenAI's leaders and researchers think things are going. Because in addition to all this stuff going on, and I'm sure, regardless of what Sam Altman said, he did not plan all of this happening (he tries to pretend there are contingencies and things, which I don't think there are for this stuff), he did find the time to actually publish what I don't think it's crazy to say is a prophetic article called The Intelligence Age. And it basically predicts this awe-inspiring and very disruptive road that AI is about to lead humanity down. Now, this next point is really important: this is not just Sam Altman writing a bunch of corporate hype. Love him or hate him or be skeptical of him, his writings are actually really, really important to pay attention to, because back in 2021 he published an article called Moore's Law for Everything that basically outlined where AI was going, almost two years before the launch of GPT-4. I think we actually read that article at the time and got literally a peek into the future. And this article, which is incredible and well worth reading today, went largely unnoticed by business and government leaders. They would have honestly done well to pay attention to that writing at that time. They would have gotten a leg up on the changes that are coming down the line. And I would argue this time is no different, because in this article Altman outlines how, quote, we'll soon be able to work with AI that helps us accomplish much more than we ever could without AI. And that includes things like having a personal AI team working for you, having virtual AI tutors who can provide personalized instruction in any subject, and, quote, shared prosperity to a degree that seems unimaginable today.
Now, Altman, in this essay, also predicts we will not just reach artificial general intelligence (AGI) soon, but we might also possibly have artificial superintelligence (ASI). This is when AI is smarter than the smartest humans at all cognitive tasks. He even goes so far as to say, quote, it is possible that we will have superintelligence in a few thousand days, and he put an exclamation mark in parentheses after that. It may take longer, but I'm confident we'll get there. So, Paul, I think it's interesting with the OpenAI chaos. There was a post on X we covered when there was the first round of people leaving, where Benjamin De Kraker, who's an AI dev working on Grok right now, had tweeted something to the effect of: if you were so convinced you were building AGI, why would you leave? And we've seen, with the drama, that there are some human elements on this rocky road, but really it's important to remember that that is the key mission here. That is what they're going for. We've talked a bit about why we need to take Sam's writing on this stuff seriously. Why is this article so important for us to pay attention to?
Paul Roetzer
Because he's been right before and people didn't listen. So this is how the week started, actually. This was the Monday thing. The OpenAI week starts with Sam posting this, then Advanced Voice, and then all the articles start flowing. But yeah, back on March 16, 2021 was when the Moore's Law for Everything article came out. I remember vividly, I posted that on LinkedIn, and I could go back and find it, and my guess is there were like a thousand impressions, which is not a lot. People weren't ready yet in 2021 to hear this stuff. They didn't care about AGI. Most people outside of AI and the technical world didn't know who Sam Altman was, or care who Sam Altman was. They just weren't known enough yet for people to pay attention. And I remember I started using an excerpt from Moore's Law for Everything at the start of my talks. So I've been doing the Intro to AI class once a month since fall of 2021, and I use that excerpt every time. I've run that class 42 times now. And a lot of times when I do keynotes, I use an excerpt from that. Because my point was always: we have known this was all coming, if we would have just listened to what the research labs were doing back in 2015, '16, '17, '18, all the way through to when Sam wrote this in 2021. They had already seen the early forms of ChatGPT. Google had early forms of language models that could do these kinds of things. We knew what they were working on. So Moore's Law for Everything was published two years before GPT-4 came out, 20 months before ChatGPT came out. And he said the coming change will center around the most impressive of our capabilities: the phenomenal ability to think, create, understand and reason. Those are what these models now do. So he was telling us where the future was going. So when we look at kind of where we are now.
So in The Intelligence Age, he writes, as you said, that we'll have these personal AI teams, full of virtual experts in different areas, working together to create almost anything we can imagine. Our children will have virtual tutors who can provide personalized instruction in any subject. He's sort of looking out ahead. And then he addresses the question of, well, why have we arrived at this moment? And I thought he was putting a stake in the ground, because there are people who still believe that we're not on the path to AGI, we're not on the path to superintelligence. And so what he says, in 15 words, is: deep learning worked, got predictably better with scale, and we dedicated increasing resources to it. That's truly it. Humanity discovered an algorithm that could really, truly learn any distribution of data to a shocking degree of precision. The more compute and data available, the better it gets at helping people solve hard problems. I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is. Technology brought us from the Stone Age to the Agricultural Age, and then to the Industrial Age. From here, the path to the Intelligence Age is paved with compute, energy and human will. And so my point when I put this one up is, I hope people are listening this time. Yes, there may be some hyperbole here, and maybe it doesn't happen in a few thousand days, the superintelligence thing. But who cares about the superintelligence thing? I'm still more focused on the road to general intelligence; progression along that is enough to disrupt everything. The superintelligence conversation is fine, we can have that, but it's so abstract to people. We're still trying to wrap our heads around what GPT-4o can do, more or less, let alone what superintelligence can do. So yeah, I just think people need to process this stuff.
They need to think about the reality of the world, that we have intelligence on demand and it gets smarter every day. And we can reasonably predict one to two years out what these models are going to be capable of doing. And if we can look out ahead, then we need to be doing way more with educational systems, governments, the economy, businesses. We just have to accept that we're heading into a very different timeline.
Mike Caput
You and I have talked about this in one way or another quite often, but I really do want to emphasize to the audience: despite all the chaos and news and stuff to learn and keep up with, there are only, I would argue, a few things you have to take really deeply seriously, things like these predictions, to get some of the way directionally correct on where we're going. Even if you put aside all the other noise, if you had only taken seriously the Moore's Law for Everything article and now this, I think you'd still be quite far along in terms of preparedness, in terms of at least having a sense of the broad contours of the future. So I think that's an underrated point. People think they always need to learn more, do more. It's like, no, take this seriously.
Paul Raetzer
Yeah, like back in our book, which we published in summer of 2022, there's a section that says, what happens when AI can write like humans? The reason I wrote that section was because I knew that was what was about to happen. I didn't know ChatGPT was coming out that fall, but we had already seen early forms of GPT. We knew where the labs were going. And so yeah, to your point, if you look out ahead and you can make these assumptions about what the AI is going to be capable of one to two years out, think about the head start you can get. Think about how many companies didn't do anything until ChatGPT came out, had no idea that generative AI was even a thing until that moment. It had been a thing for years. We knew what they were building. So yeah, being prepared, as Leopold would say, that situational awareness of: be aware of what is happening right now and the stuff that we can reasonably predict will be true one to two years out. Anything beyond that is a fool's errand. But one to two years out we have a reasonable idea, and we should start planning for that, not for the current capabilities.
Mike Caput
And I would just lastly add, don't assume it all has to be perfect and accurate for it to be disruptive, like we've talked about before. Even if we get 30% down the road that they're predicting, that is one of the biggest disruptions we will see in a long time. It just has to be good enough, not perfect, not superintelligent, to have a huge effect. All right, so our third big topic this week. On September 25th, Meta had its 2024 Connect event, and they debuted a ton of new AI related products and developments. So first up, kind of the headliner of the show were the company's Orion augmented reality glasses. Now, this is not a full fledged product, but a prototype that they revealed of augmented reality glasses that look similar to regular eyewear. These use advanced projection technology and include the same generative AI capabilities as the company's current Ray-Ban smart glasses to basically augment your reality as you're wearing them and engaging with the world. Now, they fully acknowledge this product is not being sold yet. Meta CTO Andrew "Boz" Bosworth actually posted on X the following message, quote: We just unveiled Orion, our full AR glasses prototype that we've been working on for nearly a decade. When we started on this journey, our teams predicted that we had a 10% chance at best of success. This was our project to see if our dream AR glasses, wide field-of-view display, less than 100 grams, wireless, were actually possible to build. Not only do they work, we'll be using them internally as a time machine to help build the core experiences and interaction paradigms needed for the consumer AR glasses we plan to launch in the coming years. Now, Meta also announced the release of Llama 3.2. This now has 11 billion and 90 billion parameter models that can also process and reason about images and are performing comparably to closed models on a range of benchmarks.
That release also includes 1 billion and 3 billion parameter models that are specifically designed for edge and mobile devices. Now, there were also a couple other really interesting updates for other Meta products and areas. There's a new Quest 3S VR headset that costs just $299.99. There are updates to the existing Ray-Ban smart glasses, including improved AI responsiveness. Facebook and Instagram are testing AI generated content right in the platform. And there are some new fun celebrity voices available when you use Meta's AI chatbot. So, Paul, as you're looking at these, what jumped out to you here as worth paying attention to?
Paul Raetzer
This thing is not being produced anytime soon, like, at all. Yeah, everybody, the media went nuts and Twitter went crazy with this Orion thing. And one, I don't know if they're trolling OpenAI by using the Orion name. Maybe that's been the project name all along, but that's been the rumored name of the next model from OpenAI. I think the key thing people have to know is these glasses are not coming anytime soon. The Ray-Ban ones are there; you can go buy them if you want. But I don't even see Meta winning here. So Zuckerberg is obviously all in, and you've got to keep in mind the history here. Zuckerberg hates Apple. He despises the fact that his apps are controlled by another platform and another company. Anything he wants to do with all of his different apps, you know, Instagram and Facebook and WhatsApp, is controlled by the App Store and Apple. He does not want the future to live on someone else's platform. So the reason he tried to do the metaverse, and now he's doing glasses, is because he wants people off of phones. He wants to control the platform where all of this stuff lives. He even said in an interview a couple days ago that he thinks by 2030 these glasses, that glasses replace phones. And he's not shy about this. Go listen to his interviews, he hates Apple. That being said, I'm betting on Apple and Google in this one. So if I'm placing futures on who wins for glasses, it's not Meta, in my opinion. And the reason is they have no hardware capability. I mean, they have Quest and stuff, but they can't manufacture at scale the way Apple does. So if we're talking about a hardware problem, which is what this is now, this is a manufacturing problem. Intelligence will be a commodity. They're all going to have really smart models.
They're all going to have the capability to see and understand the world around them. The multimodal, computer vision stuff, all of that's going to be table stakes. This is a hardware thing in the future, and a software thing, and I'm betting on Apple when it comes to that. The Vision Pro is amazing technology but isn't going to scale: too heavy, too expensive. Apple knows that; they put it out anyway. Google worked on Google Glass 10 years ago. Sergey Brin is back in the building every day working with the team on Gemini and likely working on hardware for Project Astra, which is their demonstration of seeing and understanding the world around you. I just feel like chips, batteries, supply chain logistics, manufacturing expertise, that's what this becomes. And Google and Apple will, in my opinion, crush Meta when it comes to that. Now, it's hard to bet against Zuckerberg. He obviously has the will and the vision and the money to do really complicated things.
Mike Caput
Yeah.
Paul Raetzer
But I just feel like this is going to be a three horse race, and right now I would probably lean more in the direction of Google and Apple eventually winning this. But Zuckerberg's got probably more motivation than everybody else, because he's basically betting the future again on kind of blurring his metaverse and his new AI thing together, and his hatred for Apple.
Mike Caput
That could be a powerful motivator.
Paul Raetzer
Yeah. So again, while I think that this could change, right now I would be putting odds on probably Google and Apple to eventually figure this out. And it'd be interesting to see who buys what apparel companies and eyewear companies and things like that. I don't know, it's going to be fascinating to see it play out. But I agree that we will experience the world through our phones, which can see, which is Project Astra, and through glasses. I think those are the two things. But I don't see phones going away. This was the whole play with the Humane Pin, right? Rabbit and all that crap. Phones aren't going anywhere. It's a great form factor. We did have an article we didn't talk about recently that Jony Ive, who, you know, created the iPhone with Steve Jobs, is supposedly working on something with Sam Altman, something that supposedly isn't a phone. We don't know what it is yet, but there are going to be lots of attempts made at embodying this intelligence in different form factors. But I still feel like glasses and phones are probably the more likely outcome of how this plays out.
Mike Caput
Yeah, it's interesting this got so much attention, because honestly, I thought some of the implications of Llama 3.2 were more exciting. Yeah, open, robust intelligence that can go on devices.
Paul Raetzer
I mean, yeah, they cannibalized their own news with a product that obviously Zuckerberg just wanted to show off, to show they'd done something with the $10 billion he spent over the last 10 years, basically, is what this looked like. And yeah, it's like Elon Musk showing off some car or future concept that we're not going to see for five or 10 years. It gives a nice little stock boost and ego boost. And like we said earlier, we get these hardware examples of things that aren't going to be here anytime soon and likely won't look like anything. And it's interesting, because Apple is the one company who never gave in to that. Apple kept things secret until the day they came out. And even Apple gave in finally and introduced Apple Intelligence months before it was available. And even then, they launched the hardware without it in it. So, you know, the pressure to do things in AI is massive right now for private companies and public companies. And once Apple gave in and did it, I was like, okay, it's game over. Everybody's just going to start showing off all this stuff. And by the way, yes, I'm aware for any listeners that Snap came out with glasses too. Irrelevant. They're not going to be a player in this. And plus, they're terrible. But you heard it here first. I like that.
Mike Caput
All right, let's dive into some rapid fire this week. We've got a lot of really interesting things going on that, honestly, if OpenAI was not just dominating headlines, many of these could have easily been main topics as well. So first up, California Governor Gavin Newsom has vetoed the state's AI safety bill, SB 1047, formally known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The bill would have implemented an extremely strict framework for AI in California. It would have required things like large AI companies in California implementing safety measures like a kill switch and testing protocols to prevent catastrophic harm. It would have applied to AI companies with AI models costing over $100 million to train or $10 million to fine tune. Governor Newsom actually cited a few reasons for vetoing this bill, which some had criticized as being far too strict and stifling innovation. He said it didn't account for whether AI systems are deployed in high risk environments. It applied stringent standards even to basic AI functions. He also said it could give a false sense of security about controlling rapidly advancing technology, and that smaller specialized models could potentially be more dangerous than those being targeted by the bill. Now, this bill had faced opposition from tech companies; Meta, Amazon, and Google argued it would stifle innovation. Some supporters, like Geoffrey Hinton, for instance, one of the godfathers of AI, thought it was reasonable and had published things supporting it. Elon Musk also was a fan. But that is irrelevant now, because the bill is not becoming law. So Paul, were you surprised by the veto actually going through? We had known this was a possibility. I mean, what does this mean for AI regulation and legislation?
Paul Raetzer
I'm not surprised. I don't remember what I said on the recent episode; I feel like maybe I said there was like a 60% chance this thing was going to get vetoed. But yeah, I was kind of in that 60/40 camp, because there seemed to be far more appetite for the current administration to get involved here, and I thought there was going to be a lot of pressure, once Pelosi got involved, on Newsom to sort of pump the brakes on this and let some stuff play out before California stepped in and did something that could seem a little bit overreaching based on the model capabilities today and where we kind of knew they were going. So it's not the end of the story. The big picture here is there's a lot of push from one side that wants regulation at the application layer, not the model layer. Again, I think I used the example of guns or bombs or whatever: you can create dangerous things that aren't dangerous until you use them in dangerous ways. And that's kind of the argument here. The model itself, yes, it has capabilities, just like the Internet, to be used to do evil things, but it's the use of it to do the evil thing that should be regulated, not the actual general purpose technology. Right. And so that's kind of what won the day here, I think: okay, we need to think this through a little bit more, we need to allow innovation to take hold. Doesn't mean we're done. I mean, we said on a recent episode there were like 700 bills currently at the state level in different stages. And I do think once we get through this election cycle, regardless of what the administration in the US is, there is going to be a far greater appetite for the federal government to get involved.
And I think that's in essence what's being allowed to happen here: let's slow down for a second, let's see where these models are going, and let's have higher level conversations at the federal level to figure out if there's some variation of the AI Act that's in the EU, whether we should be doing something like that, more of a nationalized approach to this. And I'm not a believer that's going to happen in the near future. But I think that's basically what's going on here: slow down and let's think about this at a higher level.
Mike Caput
All right, next up, Anthropic is exploring a new funding round that could significantly boost its valuation. The company has reportedly floated a potential valuation of 30 to 40 billion dollars in early talks with investors, which would be roughly double its valuation from earlier this year. Now, as a reminder, we just talked about how OpenAI is trying to raise several billion dollars at a valuation around 150 billion. So Anthropic appears to be attempting to capitalize on the intense investor interest in AI companies. Now, it's worth noting Anthropic's financial position. The company projects annualized revenue of about 800 million by the end of 2024, but it is also expected to burn through about $2.7 billion this year. And a significant portion of its revenue is shared with Amazon, its cloud provider and reseller. In comparison, OpenAI is generating about five times more monthly revenue than Anthropic. And if Anthropic secures funding at a 40 billion valuation, the multiple on revenue, which would be about 50x, would be higher than OpenAI's, which would be less than 40x at the current revenue numbers. So, Paul, as you're looking at the numbers being thrown around here, obviously nothing is set in stone until they actually raise the money. Does their valuation, given their position behind OpenAI, make sense?
Paul Raetzer
Yeah, it does. Actually, one of my favorite podcasts, I've probably referenced it on the show before, is BG2 with Brad Gerstner and Bill Gurley. It's a masterclass in how VCs see the world every time they publish. And I just happened on Sunday to be listening to their podcast, and they talked about this stuff. So what they said was: OpenAI's 150 billion valuation is about 15 times forward revenue. The key here is forward revenue. We're looking one year out at what the revenue is going to be, not the 12 months preceding. They said Google IPO'd in 2004 at about 10 times forward revenue. Microsoft invested in Meta, which people may not know or have forgotten, in 2007 at 50 times revenue. But when Meta IPO'd in 2012, they were at about 13 times revenue. So yeah, 10 to 15 times forward revenue for this kind of company is a reasonable range to be looking at. And so when you look at Anthropic's projected growth 12 months out, to get into the 40, 50 billion range, my guess is they're probably somewhere in that 10 to 15 times forward-looking revenue range. And that's a reasonable way to assess this kind of company.
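For anyone who wants to sanity-check these multiples, the math is just valuation divided by projected next-12-month revenue. Here's a quick sketch; the forward revenue figures in the comments are illustrative assumptions for the arithmetic, not numbers quoted in the episode:

```python
def forward_revenue_multiple(valuation: float, forward_revenue: float) -> float:
    """Return a valuation as a multiple of projected next-12-month revenue.

    Both arguments should be in the same units (e.g. billions of dollars).
    """
    return valuation / forward_revenue


# Hypothetical figures, in billions of dollars:
# a $150B valuation against an assumed ~$10B forward revenue -> 15x
print(forward_revenue_multiple(150, 10))               # 15.0
# a $40B valuation against an assumed ~$3B forward revenue -> ~13.3x
print(round(forward_revenue_multiple(40, 3), 1))       # 13.3
```

The point of using forward rather than trailing revenue is that fast-growing companies are priced on where revenue is headed, which is why the same valuation can look reasonable on one measure and wild on the other.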
Mike Caput
Gotcha. That is good context, because I do think people sometimes, especially if you don't know anything about how these numbers are devised, you just see these eye popping numbers. But as we've talked about, all the numbers are eye popping now in AI. There is logic behind some of this, at least in certain circles.
Paul Raetzer
Yeah, we'll drop the link to that podcast episode in there, because they talked about nuclear as well, and energy and infrastructure and stuff. But yeah, I mean, Brad and Bill are just brilliant. So again, highly recommended. They take the perspective of VCs, but they've been involved in some of the biggest investments and deals over the last 20, 25 years, so they just have so much insight. There are some podcasts where I would gladly pay to listen; this is one of those where I would happily pay, because I feel like the value I get is so insane.
Mike Caput
And I would also argue, too, some people might not be as, you know, investing-inclined, like, what do you have to learn from a VC? But these are the people driving the behavior at these companies. So I would argue even if you have zero interest in public markets or private investments, you can understand the levers that people are pulling and the motivations that are driving these founders and CEOs as well.
Paul Raetzer
Yep.
Mike Caput
All right, so next up, the Federal Trade Commission, the FTC, has launched Operation AI Comply, a law enforcement sweep targeting companies using AI to deceive or harm consumers. Now, these actions from the FTC span a range of what we would call AI related deceptions. They can include both false claims about AI's capabilities and the use of AI tools to actually cause harm, like generating fake reviews. A few notable cases have already been swept up in this. A company called DoNotPay made some misleading claims about an AI lawyer product that they claim does some things the FTC disagrees it actually does. A company called Ascend Ecom made some false promises of AI powered passive income that have gotten swept up in this operation. And then an AI tool called Rytr. Now, that's R-Y-T-R, not the word writer, which is another tool we have talked about. Rytr's AI tool has been flagged for creating fake product reviews. So this initiative is part of a broader effort by the FTC to address AI related consumer protection issues. Now, Paul, this is clearly somewhat of a problem if the FTC is bothering to take action around it. And you know, you and I have talked about it: we've literally spent years observing AI companies exaggerate or even, sometimes unfortunately, deceive about what their technology is capable of. How big a problem is this today?
Paul Raetzer
This is the application level at work, which is what I was saying on the previous topic. So the model level is: we stop the models from being capable of doing things like this; we protect consumers in that way. The application layer, which is the layer the government kept saying we have existing laws to cover, this is misuse of AI at that layer. And this is the government showing its muscles, saying, we will stop this. Now, the timing is interesting, with SB 1047 being vetoed and the government coming out with this at a federal level saying, hey, we've got this, at the same basic time. It may just be a coincidence, but I think the government's trying to demonstrate that we already have laws in place to protect consumers and we will enforce them. And yeah, I mean, it is so prevalent. There's no way the FTC has enough employees, or in the near term AI agents, to monitor and pursue all the ways AI is being misused already and going to be misused. Right. But it's like speeding tickets. Everybody's speeding, but you've got to catch a few people every once in a while so that hopefully it slows everybody else down. That's kind of what this is: we're here, we have laws, we can choose to enforce them when we want to, and we'll make some examples out of people, and hopefully other people don't do this.
Mike Caput
All right, so Scale AI is a company that provides data labeling services for AI developers, and they have been getting a ton of attention lately thanks to their remarkable growth. The company's sales nearly quadrupled to almost $400 million compared to the same period last year, with annualized revenue approaching $1 billion. That's because Scale AI pivoted from primarily serving self driving car companies with their data labeling to becoming a crucial infrastructure provider for major AI developers like Meta Platforms and Google. The company employs hundreds of thousands of hourly workers to fine tune data for AI models and positions itself as a sort of hybrid human-AI system for producing high quality data at a low cost, which is necessary to train the models that we all use every day. Not to mention, it's got a pretty high profile and outspoken founder, Alexandr Wang, who is just 27 years old. Now, Wang just gave an interview on the A16Z podcast that gives us a look under the hood at Scale AI's success and why this company is important in the general AI ecosystem. So, Paul, I know you found a lot to pay attention to in this interview with Alexandr. What's worth noting here?
Paul Raetzer
We've talked about him a few times on the episode, but again, I think the point here is this is someone everyone should know and be paying attention to. He is heavily influential in the training of basically every major frontier model. His company is kind of leading the way in working with all the major frontier model companies. I think it's just when you pay attention to so many different channels, you start to notice these trend lines real quick. And so he was featured in The Information, The Wall Street Journal, and then the A16Z thing dropped simultaneously, which tells me this is a proactive PR effort. They're telling their story, yes, but there's more to this. There are reasons why you do this. Mike and I both came from a PR background; I owned a PR firm. It's pretty obvious when the stage is being set to do something bigger. So, honestly, I wanted to do this one as a main topic, but as you said, Mike, there's so much else going on. So I'm going to do a quick rapid fire with some of the key things from this interview, and I would encourage people to go listen to the A16Z podcast in its totality, because he talks in generalities about things he knows very specific details about. And what I mean by that is he can't say most of what he knows, so he's speaking in these general terms. But if you know what he does and the companies he works with, you can read between the lines about a lot of things. So I'll call a few things out. First, three pillars of AI. He talks about compute, the models or algorithms, and then the data. So compute has been powered by folks like Nvidia. The algorithmic advances have been led by large labs like OpenAI, Google, and others. And data is Scale. That's his company, Scale AI. So they are the data source, what he calls a data foundry. They want to be the place that provides all this data.
And I'll explain why that matters more in a minute, because prior to this the data foundry was "scrape everything from the Internet." That's not going to get us to the next level of models. He talked about three phases of the state of models. Phase one was pure research. This was the invention of the transformer, smaller models, up to GPT-3 basically, he says. So this gets us to roughly 2022 as phase one. Phase two is realizing that the scaling laws seem to be true: we give them more compute, more Nvidia chips, more data, more time to train, and we experiment with those models, and we get more powerful, generally capable models. So that's GPT-3 up until now. And then phase three he considers heavy research, algorithm innovation. He said we're entering a phase where research is going to start mattering a lot more. I think there will be a lot more divergence between the labs in terms of what research directions they choose to explore and which ones ultimately have breakthroughs. So for example, Ilya Sutskever leaving OpenAI to create Safe Superintelligence. Ilya's probably going to push on some areas of research that the other labs aren't yet. You're going to have some bets made, basically. And he said one of the hallmarks of this next phase is actually going to be data production. So now the data is going to matter; it's going to diverge, it's going to be different between the different labs. He talks about AI agents and how they suck and they don't work, which kind of echoes what I was saying last week. We all just talk about agents; the reality is we're not there yet. These are GPT-1, GPT-2 level things. We're still just looking at experimentation, heavy human involvement in the building and oversight of them. He said the reason, which echoes what we had said, is these things have an inability to string together tools through a chain of thought, like Internet, calculator, content management system, knowledge bases.
They're not good at tool use yet. They can use individual tools, but they can't do what humans do and jump around between all these different tools. And, as we've said multiple times, there's a lack of reasoning data. The Internet is full of output data, the final product. It is not full of how humans arrived at the final product. And so that's what his data foundry will do. They will hire experts, PhDs, to teach the models how to think at an expert level across all these different domains. So he said: these reasoning chains, when humans are solving complex problems, we naturally use a bunch of tools. We'll think about things, we'll reason through what needs to happen next, we'll hit errors and failures, and then we'll go back and sort of reconsider. For a lot of these reasoning chains, these agentic chains, the data just doesn't exist. So that's an example of something that needs to be produced. And then the final thing is he talks about enterprise adoption, and he said the proof of concepts just haven't scaled. There's been too much focus for the last couple of years on efficiency and productivity and not enough focus on innovation. And I agree 1000%. This is what we see all the time, and I talk to enterprises about it all the time. Productivity and efficiency gains are the low hanging fruit. Now, a lot of companies aren't doing those well yet, but that's table stakes. It's applying these things to drive innovation, to find new markets, new product ideas, new strategies, where I'm just not seeing it yet. There's such a lack of understanding about how to do that. And that to me is the opportunity across every industry: be the ones that figure out how to drive innovation and accelerate it with AI. And to hear him saying it, it's like, okay, good, it's not just me. Sometimes I think I'm just hallucinating that these problems exist, and then I hear someone like him say it.
It's like, okay, I'm glad I'm not alone on this one. So yeah, just a fascinating interview. He's a major player. You're going to hear his name a lot more moving forward.
Mike Caput
All right, so next up we have a new research paper out called The Rapid Adoption of Generative AI, and it's making some waves because it's reporting on results, quote, from the first nationally representative US survey of generative AI adoption at work and at home. This paper is the work of researchers from the Federal Reserve Bank of St. Louis, the National Bureau of Economic Research, and Vanderbilt, and basically it shows surprisingly rapid and widespread uptake of generative AI. The survey was conducted in August of 2024 and shows that 39% of US adults aged 18 to 64 have used generative AI. 28% said they've used it at work, and just over 10% say they use it daily at work. Now, when asked about what tools they most commonly use, ChatGPT was the most common, followed by Google Gemini and Microsoft Copilot. The study also notes that two years after being introduced widely to the US population, generative AI has reached this level of adoption at a rate that outpaces, quote, both personal computers and the Internet in their early stages. The researchers estimate that between 0.5% and 3.5% of all work hours are currently assisted by generative AI, and they say that generative AI appears to currently be most helpful in writing, administrative tasks, and summarizing information. Now, to get at this data, the researchers used something called the Real-Time Population Survey, the RPS. This is a nationally representative survey designed to collect data on various labor market trends. What they did is incorporate a module within this survey to measure generative AI adoption. The survey was fielded, like I said, in August 2024. It had just over 5,000 responses and focused on workplace use and non-work use. So, Paul, this certainly seems pretty significant just given where it's coming from and who's doing it. Did these findings surprise you at all?
Paul Raetzer
No, but here's what I'll say. And again, I feel like I could do a main topic on this one, but I'm glad that the Federal Reserve is involved in this. They're assuming a correlation with past adoption rates to predict the future impact of generative AI. My general take is this is interesting but not very helpful. Economists generally will look at precedents. They will take a historical view to try and figure out the future. And we're trying to figure out a future that looks nothing like the past. So until we have the kind of approach that layers in the exposure levels we've talked about. So when I built JobsGPT, what I was trying to do is say we can't look at the past to figure this out. If anything, this is an indicator that that holds more true. We're looking at adoption rates far beyond what we saw with PCs and the Internet, so this is not apples to apples even then. But until we look out one to two years and say these things are going to be expert level at persuasion and reasoning, and they're going to have computer vision to understand the world around them, they're going to have all these things that they don't currently have, and it's going to be omnipresent in everything we do, then we can't get into the reality of what the impact on jobs and the economy will be. And that's where, again, I've had conversations with leading economists, and I see the same problem every time. They're looking at the world through the past and the present. They are not truly understanding a future that seems quite apparent when you look at what the labs are building. And that's my concern. This was, by the way, a great use for NotebookLM from Google. I dropped the paper in there and was having conversations with it, chatting with it. Which, by the way, we didn't have time for this rapid fire item, but they announced YouTube and audio support now for NotebookLM.
So you can now drop in a YouTube link. Continued plug: it's just an awesome product. But again, these studies are interesting but not very helpful, and I hope the government isn't basing decisions about the economy on this kind of study.
Mike Kaput
Yeah, I think that's also why it's worth mentioning: even very, very smart people like government economists have blind spots or preferred ways of doing things that may not always map to the transformative effects we're going to see with AI.
Paul Roetzer
Yep.
Mike Kaput
All right, so our last topic today is a little bit of a warning about AI use cases. A post on X from a machine learning expert has a warning for anyone using AI on work or business calls. This comes from Alex Bilzerian, who's a machine learning head at Hive AI, and he posted the following quote: "A VC firm I had a Zoom meeting with used Otter AI to record the calls, and after the meeting it automatically emailed me the transcript, including hours of their private conversations afterward, where they discussed intimate, confidential details about their business." He then noted in a follow-up post, read this with sarcasm, that Otter AI users can quote, "rest easy knowing that DFJ Dragon Fund China is on the board watching closely." He included a screenshot of PitchBook data showing that certain companies on Otter's board may not be ones that are super friendly to your confidentialities. So, Paul, we've griped about this trend before in other contexts, with people being really free about how they use AI recorders on calls without asking permission. This adds a whole new dangerous element. Should people be rethinking broadly how they're using these types of tools on calls?
Paul Roetzer
Yes. I am shocked, honestly, by how often note takers are showing up to meetings without any permissions. There's no permissions level. If you think back to Zoom pre-pandemic, you didn't assume we were going to be on video together. We kind of came to an agreement that we were going to turn the videos on. And then the pandemic hit, and everyone just assumed we were going to be on video all the time. But even now with Zoom, if I want to record our call, it pops up with an alert saying, hey, Mike is recording this call, and I say, okay. I opt in and I know it's being recorded. But the note takers just show up. And now most people are familiar with Otter, but sometimes a note taker shows up and you're like, I've never even heard of that app or that company. What is this, and where is this information going? Personally, I find it kind of offensive that people just assume I'm cool with their random note taker showing up for our conversation, and I immediately become extremely guarded about anything I'll say, because you just don't know. And we're seeing this play out with DNA. 23andMe, I think, is the company that got acquired. Well, good luck if you sent your DNA to them, because whoever buys them at auction is going to own your DNA, which is an interesting concept. So, yeah, this is again the kind of buyer-beware situation. Be aware of the tools you're using, be aware of who is invested in those tools, where those apps came from. I would just take a very cautionary approach to this stuff. And this is why I think, at the end of the day, the companies that win in AI are the big existing tech companies, because for better or worse, they already have all our data and you already trust them to some degree.
Whereas there's plenty of these apps that people use where I'm just shocked at the data they're giving them.
Mike Kaput
Yeah. I also wonder how much people understand where these tools can go wrong, too. As it becomes normal to have AI recording companions sending out summaries, what about someone who wasn't on the call? Does that person understand that AI summaries can just be bad or wrong? Like, if I go on an intro call representing our company to customers, clients, potential speaking engagements, and I get this crappy summary after that doesn't give any of the context to what I said, and someone who wasn't on the call is reading it, that's just not a good look either.
Paul Roetzer
Yeah, again, it goes back to people just not understanding. You're right. You could misrepresent something you said, and people assume it was fact and they're not going to take the time to check it. This is one of those use cases that's become so popular, yet there's so little understanding of it, and there's this assumption that it just works and it's cool if I send my note taker. And people just have it automatically join every call. Like, oh, that's my note taker, it's just invited to every meeting, it's all automated, and then you forget you're even doing it. Slow down on that. If you're one of those people who's just sending your note taker to every meeting, don't assume that people are just cool with everything they say being recorded. I personally am not, but people do it anyway. So, yeah, that's a good.
Mike Kaput
A good warning.
Paul Roetzer
Public service announcement, I guess.
Mike Kaput
All right, Paul, that's all we got this week. A big, busy week in AI, as always, and this one a little more so. As a final reminder to everyone, check out our newsletter at marketingaiinstitute.com/newsletter. Like I said, this week we have literally dozens of things that we could have covered if we had unlimited time. They did not make it into the episode, but they'll be in the newsletter and we'll dive deeper into them there. Also, if you can and have not yet left us a review on your podcast platform of choice, we'd very much appreciate it. It helps us improve and get to more listeners. Paul, thanks so much for breaking everything down this week.
Paul Roetzer
I hope this week is a little slower. I could use a little less drama this week. So, yeah, we will be back next week, but hopefully it is not with the chaos of the week prior. All right, thanks, Mike. Thanks for listening to the AI Show. Visit marketingaiinstitute.com to continue your AI learning journey and join more than 60,000 professionals and business leaders who have subscribed to the weekly newsletter, downloaded the AI blueprints, attended virtual and in-person events, taken our online AI courses, and engaged in the Slack community. Until next time, stay curious and explore AI.
The Artificial Intelligence Show - Episode #117 Summary
Release Date: October 1, 2024
Hosts Paul Roetzer and Mike Kaput dive deep into a whirlwind week in the AI landscape, focusing primarily on OpenAI's dramatic developments, Meta's advancements in augmented reality, and significant legislative actions in California. This episode also touches on broader industry trends, including funding rounds, regulatory actions, and the rapid adoption of generative AI technologies.
The episode opens with an extensive discussion on OpenAI's eventful week, marked by significant leadership changes and strategic shifts.
Leadership Exits: CTO Mira Murati, Chief Research Officer Bob McGrew, and VP of Research Barret Zoph all departed simultaneously. Mike highlights the impact:
“OpenAI is growing fast and burning through piles of money… but they're losing $5 billion this year.” [02:54]
Financial Struggles and Restructuring: OpenAI is grappling with massive financial losses while attempting to raise funds to sustain its operations. Paul elaborates on the complexities of transitioning from a nonprofit to a for-profit entity:
“They're raising money and then they have two years to complete the process of converting over from the nonprofit. So this is going to be really, really messy and weird.” [07:15]
Valuation and Revenue Projections: OpenAI's valuation is speculated to be around $150 billion, supported by projected revenues of $3.7 billion this year and an anticipated $11.6 billion next year. Paul provides context by comparing it to historical tech company valuations:
“Roughly like a 10 to 13 times forward looking revenue is not unheard of in the technology world.” [07:15]
Internal Conflicts and Sam Altman’s Leadership: Reports from the Atlantic and Wall Street Journal suggest internal power struggles and a potential shift towards prioritizing rapid growth over research integrity. Mike notes:
“These companies that are actually out there leading and the leaders who are pushing us into this frontier of the intelligence age, it's not going to be a straight line.” [23:49]
Amidst the chaos at OpenAI, Sam Altman published an influential essay titled "The Intelligence Age," predicting the profound societal transformations driven by AI.
Key Predictions: Altman foresees AI enabling unprecedented personal and professional advancements, including:
Artificial Superintelligence: Altman speculates the possibility of achieving artificial superintelligence (ASI) within a few thousand days, emphasizing the urgency for societal preparation:
“It is possible that we will have superintelligence in a few thousand days(!)” [29:19]
Implications for Businesses and Governance: Both hosts agree on the necessity for proactive measures in education, government policy, and economic strategies to adapt to the imminent AI-driven changes.
Meta showcased its latest forays into augmented reality (AR) during its Meta Connect 2024 event, unveiling several AI-integrated products and prototypes.
Orion AR Glasses Prototype: A significant reveal, the Orion glasses utilize advanced projection technology and generative AI to augment reality. Meta CTO Andrew "Boz" Bosworth shared:
“We plan to launch in the coming years.” [38:24]
Llama 3.2 Release: Meta introduced Llama 3.2, featuring models up to 90 billion parameters capable of processing and reasoning about images, comparable to proprietary models.
Paul critically assesses Meta's hardware ambitions, expressing skepticism about their immediate viability compared to competitors like Apple and Google:
“I'm betting on Apple and Google when it comes to that.” [41:23]
Governor Gavin Newsom vetoed SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which aimed to impose stringent regulations on AI development within California.
Mike and Paul discuss the implications, noting the preference for regulating AI at the application layer rather than the model level:
“The model itself… it's the use of it to do the evil thing that should be regulated, not the actual general purpose technology.” [46:28]
Anthropic is exploring a new funding round that could elevate its valuation to between $30 billion and $40 billion, potentially doubling its current valuation.
Valuation Context: Paul compares Anthropic's valuation multiples to industry standards, suggesting the proposed valuation aligns with forward-thinking revenue expectations:
“The 10 to 15 times forward revenue for this kind of company is a reasonable range to be looking at.” [51:25]
The Federal Trade Commission (FTC) launched Operation AI Comply, targeting companies that misuse AI to deceive or harm consumers.
Paul emphasizes the importance of distinguishing between regulating AI at the model level versus the application level, highlighting the FTC's role in enforcing existing consumer protection laws:
“It's like speeding tickets. Everybody's speeding. But you got to catch a few people.” [54:23]
Scale AI, a data labeling service pivotal for AI development, experienced remarkable growth by pivoting from self-driving car data to serving major AI developers like Meta and Google.
Founder Insights: In an interview with the a16z podcast, founder Alexandr Wang discussed Scale AI’s strategic positioning as a data foundry, emphasizing the critical role of high-quality data in advancing AI models.
Industry Impact: Scale AI is positioning itself as an essential infrastructure provider, enabling the training of sophisticated AI models through scalable and efficient data solutions.
A new research paper titled "Rapid Adoption of Generative AI" reports on the first nationally representative US survey of generative AI adoption in both work and home settings.
Critical Analysis: Paul critiques the study’s reliance on historical adoption models, arguing that AI’s transformative potential necessitates new analytical frameworks:
“We're trying to figure out a future that looks nothing like the past.” [64:58]
The episode concludes with a cautionary tale about the misuse of AI tools in business settings, highlighting privacy and security concerns.
Case Study: Alex Bilzerian from Hive AI warns against the unregulated use of AI transcription tools like Otter AI, which can inadvertently expose sensitive business information:
“Don't assume that [AI tools] just works and it's cool if I send my note taker and… be aware of who is invested in those tools.” [72:01]
Ethical Implications: Paul underscores the need for informed consent and awareness when integrating AI tools into business communications to prevent unauthorized data exposure and misrepresentation.
Paul and Mike wrap up the episode by emphasizing the importance of staying informed through their newsletter and acknowledging the frenetic pace of developments in the AI sector. They encourage listeners to remain vigilant about AI’s applications and implications, advocating for proactive adaptation to navigate the evolving technological landscape.
“Be aware of the tools you're using, be aware of who is invested in those tools, where those apps came from.” [72:00]
Notable Quotes:
Paul Roetzer at [07:15]:
“I still feel like everything happening right now is just a prelude to an IPO.”
Paul Roetzer at [23:49]:
“Google has to just be salivating right now… they're the ones that created all the innovations that drove everything.”
Paul Roetzer at [29:19]:
“We're heading into a very different timeline.”
Mike Kaput at [46:28]:
“It's like we have existing laws… it's misuse of AI at the application layer.”
Paul Roetzer at [64:58]:
“Until we look out one to two years and say these things are going to be expert level… we can’t fully grasp the impact on jobs and the economy.”
Paul Roetzer at [72:05]:
“Public service announcement.”
For more in-depth analysis and additional topics, subscribe to The Artificial Intelligence Show and explore resources offered by the Marketing AI Institute.