
A
This is the Everyday AI show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business and everyday life.
B
There's a new open source AI model leader. Who is it? Why are Google workers reportedly using Anthropic's Claude to train their models? The godfather of AI increased his odds that AI will crush humanity. What does that mean? And why did Microsoft put a $100 billion price tag on artificial general intelligence? Yeah, we might have had the holidays, and we're still in the middle of the holiday craze here in the US, but AI did not sleep. So, at least this week, we won't either. What's going on, y'all? My name is Jordan Wilson and welcome to Everyday AI. This is your daily podcast, newsletter and... wait, podcast, live stream, newsletter? We do so much. It's your daily free live stream, podcast and daily newsletter helping us all not just keep up with what's going on in the world of generative AI, but how we can use it to get ahead, to grow your company and your career. If that sounds like you, bet. Welcome, you are in the right place. The other right place for you is youreverydayai.com. That is our website. So yeah, you can go sign up for the free daily newsletter, where every single day we recap our live stream podcast. But also on there, there's more than 430 episodes. You can go back and listen to them all, sorted by category. It is a free generative AI university, so make sure you go check that out. All right, so if this is your first time listening, maybe it's your 430th. We do this almost every single week on Mondays, going over the AI news that matters so you don't have to spend hours every single day trying to keep up. You can just tune in on Mondays and we cut it all down for you. We break it down, cut through the marketing fluff and tell you what matters. So with that, well, first of all, live stream audience, hey, how's it going? Yeah, like what Jackie said, last few of 2024. Bronwyn, thanks for joining. Richard and Samuel and Brian and Joe, thank you everyone for tuning in. All right, let's get to the AI news that matters, y'all.
A lot going on. A new AI model leader, yes, at least when it comes to open source models, and it is extremely impressive. All right, so DeepSeek has launched its new open source V3 model, competing directly with OpenAI's GPT-4o. Yes, we have an open source large language model that is competing head to head on benchmarks with the most powerful models, the proprietary closed ones. So the V3 model is now available on GitHub, allowing users around the world to utilize and modify the model, promoting collaboration and innovation without the constraints of proprietary software. So yeah, if you're brand new to the large language model game, you're like, what's open source, what's closed source? And there are some things in between. But you know, open source is, anyone can go and download them and run them, right? And costs are extremely low. So you can use DeepSeek online. More on that here in a second. So DeepSeek emphasized that V3 has achieved performance comparable to top proprietary models like GPT-4o and Claude 3.5 Sonnet in various benchmarks, highlighting its capabilities in logical reasoning and problem solving. It uses a mixture of experts, or MoE, architecture, and V3 optimizes its performance while remaining accessible to developers and researchers who may not have the resources of larger corporations. So this release comes at a time when demand for flexible and customizable AI solutions is on the rise, attracting interest from companies and individuals looking to integrate advanced AI into their projects without incurring a high cost. So DeepSeek positions V3 as not only a cost effective alternative, but also a strong contender for organizations focused on developing specialized applications in coding and mathematics. The model is governed by a license that permits reproduction, modification and commercial use, enhancing its appeal among startups and independent developers looking to innovate without facing legal barriers.
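Quick aside for the more technical listeners: since "mixture of experts" comes up every time DeepSeek V3 does, here is a minimal sketch of what top-k expert routing looks like. Everything here is a toy for illustration only. The `experts` are stand-in callables and the `gate_scores` are hardcoded, where in a real model both come from learned networks, and none of this reflects DeepSeek's actual implementation.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(token, experts, gate_scores, top_k=2):
    """Route one token to its top-k experts and mix their outputs.

    Only the chosen experts actually run, which is where MoE models
    get their compute savings: total parameters are large, but the
    active parameters per token are a fraction of that.
    """
    # Rank experts by the gate's score for this token.
    ranked = sorted(range(len(experts)), key=lambda i: gate_scores[i], reverse=True)
    chosen = ranked[:top_k]
    # Renormalize the gate scores over the chosen experts only.
    weights = softmax([gate_scores[i] for i in chosen])
    return sum(w * experts[i](token) for w, i in zip(weights, chosen))

# Toy demo: four "experts" that just scale their input.
experts = [lambda x, k=k: (k + 1) * x for k in range(4)]
out = moe_forward(1.0, experts, gate_scores=[0.1, 2.0, 0.3, 1.5], top_k=2)
print(out)  # a blend of experts 1 and 3, the two highest-scoring
```

Note that with `top_k=2` and four experts, only half the experts fire for this token; scale that idea up to hundreds of experts and you get the "big model, small per-token cost" trade-off these releases advertise.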
So pretty big news here from DeepSeek and their new V3 model. It is a fraction of the cost of others, and it's open source, y'all. So this is from a Chinese company, and y'all, in 2025, get used to a lot of AI news and large language model news coming out of China. But it is a fraction of the cost to use DeepSeek. It's still a brand new model, I've barely had the chance to play with it. But hey, if you have, maybe you're in our live stream audience or listening on the podcast, send me an email, let me know. Do you all want to know more about these models, these open source models, models from overseas, right here on the show? The only kind of open model, and it's not true open source, that we talk about a lot is Meta's Llama models. You know, not true open source, but kind of in between. I don't know how much interest there is out there in these models. You can obviously use this on Hugging Face, so you don't have to have a supercomputer in order to download this. But it's a fraction of the size, a fraction of the cost. We'll be dropping all of those bullet point details in the newsletter. All right, our next piece of AI news that matters. This one's weird. I'm not personally a fan, but it's not going to impact me too much. According to reports, Meta is expecting AI to start posting on social media. Yes, they are wanting to flood their platform with AI users. Yeah, not real humans. They're just saying, yeah, we're gonna have a bunch of essentially AI characters posting on our platforms. Weird. All right, so according to the Financial Times, Meta is set to introduce AI generated characters on Facebook, aiming to enhance user engagement through interactions similar to those with real humans. Yeah. Are we living in a simulation? I don't know, are AIs just going to be running their own social media?
You know, you have one AI posting, a bunch of other AIs fighting with each other in the comments. I don't know, I'm not a huge fan. But Meta did state that these AI characters will eventually function like real human user accounts, complete with bios and profile pictures, the whole thing. So users will be able to create and interact with these AI characters through Meta's AI Studio, which has already gained some popularity for generating virtual relationships. Yikes. So the AI Studio currently features hundreds of thousands of characters, and Meta plans to expand access to more countries over the next two years, focusing on making AI interactions more social. So this obviously raises a ton of safety risks, right, including the potential for exposing users to inappropriate content and obviously the spread of misinformation and disinformation. Also, there's not a ton of information out yet. This is just the one report from the Financial Times, and there's not a lot of information yet on how Meta might plan to police these AI generated users that are going to be out there running amok on their social media platform. So hopefully there will always be some sort of watermark or something to let everyone else know that this is an AI user, whether it's commenting or posting stories. I do hope and assume that Meta will at least go through that. To me, that is the bare minimum of what would actually have to happen in order for this to be successful. But I don't know, I don't personally see the value of this. It does look like people may be able to make, within Meta's AI Studio, a bunch of different personas and then potentially release those personas to go interact on their own behalf, right, to have agency on social media. So you might go assign some of your AI avatars to go interact with, I don't know, real human content, other AI avatars, I'm not sure.
Again, maybe this is just me being old and not cool. I don't see the point of this. It's already been hard enough over the past decade for people, especially on Facebook, to understand what is real and what is not, especially when it comes to news, misinformation and disinformation. So then when you add in this quote unquote human element, I'm not sure I'm a huge fan of it, right? Because I think news that is not accurate already spreads pretty quickly on Facebook, but there are at least ways to verify that. There's not really great ways to verify who's real and who's not on social media. So like I said, hopefully Meta will be extremely strict in how, when and why these AI personas can interact on their platforms. All right, here's a big one. New information is out on OpenAI's new for-profit structure. So OpenAI put a blog post on their website a couple of days after Christmas here in the US, announcing its plan to transform its structure into a public benefit corporation. If you follow AI news, you're probably going to be hearing a lot about PBCs this week and next week. So they've announced their plans to transform their structure into a PBC, highlighting its urgent need for capital to support its ambitious AI research initiatives. So according to reports, and OpenAI did release a blog post on their website, which we will be linking to in our newsletter, OpenAI is aiming to create a public benefit corporation to facilitate easier capital raising, stating it needs to secure more funding than previously anticipated. So the restructuring follows a $6.6 billion funding round that valued OpenAI at $157 billion, contingent on removing profit caps within two years.
So the nonprofit arm, or what OpenAI technically is right now, will still retain a significant interest in the PBC, ensuring it remains well resourced despite the shift to a for-profit structure. Critics, including original OpenAI co-founder, now competitor, Elon Musk, and Meta Platforms, are opposing this transition, raising concerns about prioritizing profit over public benefit. I don't know, y'all. There's obviously a lot of legal drama that's been going on for more than a year between Elon Musk and OpenAI. But y'all, OpenAI was originally established in 2015 as a nonprofit. That's not what it is anymore, right? I think people are griping and complaining about this, right? Like, oh, there's no openness in OpenAI, they're driving profit. And y'all, what do you expect? I don't think the original founders really foresaw how quickly large language models would catch on, or that even to still achieve their quote unquote nonprofit mission, which was essentially to better humanity with AGI and AI, you need to have money to do that. Many billions of dollars, potentially trillions of dollars of funding. So 10 years ago, I don't know if OpenAI could look into its crystal ball and see, oh, we might need trillions of dollars in five years, right? I think even the most optimistic claims 10 years ago could not have really pinpointed where we're at today. So this move obviously aligns OpenAI with everyone else. I don't know why people think OpenAI needs its own set of rules just because it was originally set up as a nonprofit. And y'all, as someone that worked at a nonprofit for 10 years, right?
Obviously not this type of nonprofit, but it is very common, extremely common, for nonprofits to either branch out and create something like a PBC, or to eventually turn into a for-profit corporation and keep the nonprofit as a smaller piece, something that can still help align its long-term goals. This is very common in the nonprofit world. So I don't know why this has turned into a year-long battle. But anyways, OpenAI finally came out with some more information detailing their ambitions and goals around this PBC. So like I said, if you want to read that, we'll be linking to it in our newsletter. So, Dr. Scott asking: any thoughts on the Musk lawsuit trying to block this? It's for show, right? Once the original lawsuit came out, I did like an hour-long podcast on that lawsuit. Again, I'm not a lawyer, right? But it's without merit. I think it's a joke. I would say most of what Elon Musk is putting out there in the lawsuit is really just, how should I say this? He has created a direct competitor in xAI. Their large language model, Grok, is supposed to be upgrading from V2 to V3 here any day. I think Elon Musk wants to keep himself, xAI and Grok in the headlines as much as humanly possible. He now has a position of power, right, in the US federal government, which I'll hold my thoughts on. That's wild to think about. But especially with Elon Musk having an unofficial but kind of official position in the US government, expect this to go on. Expect this legal drama to keep going. I would say, again, from a somewhat objective standpoint, his lawsuit is without merit. I mean, go read it, judge for yourself. It doesn't take long.
You know, even for fun, I put the lawsuit into every single large language model and just said, hey, does this thing have legs? Is this real? Right? And every single large language model, aside from trying to be politically correct about it, essentially said, no, this is a joke of a lawsuit. All right, here's an interesting one, and I have some personal insight on this. So Google contractors are reportedly using Anthropic's Claude to improve Google's Gemini. All right, so according to reporting from TechCrunch, Google contractors are tasked with scoring Gemini's outputs against Claude's, focusing on criteria like truthfulness and verbosity. Is that how you say it? Verbosity. With up to 30 minutes allocated per prompt evaluation. So internal communications, again, this is according to reports from TechCrunch, reveal that references to Claude are appearing in Google's evaluation platform, suggesting a direct comparison between the two models. Notably, Claude's responses prioritize safety more than Gemini's, often avoiding prompts deemed unsafe, whereas Gemini is often being flagged for safety violations in certain responses in the testing. So Anthropic's terms of service prohibit using Claude to train competing models without explicit permission, leading to speculation about whether Google has obtained such approval. A spokesperson from Google DeepMind confirmed that while model outputs are compared for evaluation, they do not train Gemini using Anthropic's models. So, you know, kind of working in the gray area there. And concerns have obviously been raised by Google contractors about their ability to accurately assess Gemini's responses on sensitive topics like healthcare, indicating potential risks in AI generated information. All right, I'm going to go ahead and reveal something I haven't revealed before.
A trillion dollar company reached out to me and recruited me to help them train their models. All right, not gonna say who. Anyways, just for fun, I went through the process just to learn more, right? It's not every day a trillion dollar company reaches out to you to say, hey, can you help us train our large language models? But from the process, and from seeing how these contractor agreements generally work with these large tech companies, I think this is happening all the time, right? Even going back to our initial big story with DeepSeek, the new Chinese large language model, the V3 model: a lot of people are talking about this online. If you ask DeepSeek what it is, what it's based on, you get responses from DeepSeek that say, hey, I am GPT-4, I am based off of OpenAI, right? So I'll probably do a dedicated show on this at some point, maybe in 2025. But I can tell you this. It's the combination of remote and hybrid workers and using contractors to help in various aspects of training large language models, all models. And y'all, I'm not talking about Google, Claude, whatever here. All models use outputs from other models to evaluate, period. And, you know, I'm guessing a little bit here, but I will say it is very safe to say that certain large language model makers also use other large language models, specifically the GPT models, to train their models as well. So if you're someone like myself, this is not shocking, but I think this is grabbing a lot of headlines right now, right? Like, oh my gosh, Google contractors are using Claude to either evaluate or maybe train its Gemini models. The evaluation part? Very common, right? I would say most models, right, when they're training their.
Sorry, most model makers, when training their models, are probably running the exact same prompt through at least eight different large language models to see how theirs compares, right? Especially when they're going through that reinforcement learning with human feedback, right, the RLHF. They're trying to tell a model, hey, when a user enters this input into the model, it should have this type of output. This is an example of an output. This is a good one. This is a bad one. I guarantee everyone's using every other large language model out there, right? Because you want to be safe, you want to make sure that a large language model has good, strict guidelines. But obviously, like we just saw in this story, Anthropic specifically states that you cannot use their model to train, right? So there is this ambiguity that I think a lot of companies operate in: training versus evaluating, right? You're not necessarily using outputs from other large language models to say, hey, this is how we should respond, but evaluating? It's very common. Are you still running in circles trying to figure out how to actually grow your business with AI? Maybe your company has been tinkering with large language models for a year or more, but can't really get traction to find ROI on gen AI. Hey, this is Jordan Wilson, host of this very podcast. Companies like Adobe, Microsoft and Nvidia have partnered with us because they trust our expertise in educating the masses around generative AI to get ahead. And some of the most innovative companies in the country hire us to help with their AI strategy and to train hundreds of their employees on how to use gen AI. So whether you're looking for ChatGPT training for thousands or just need help building your front end AI strategy, you can partner with us too, just like some of the biggest companies in the world do. Go to youreverydayai.com/partner to get in contact with our team.
Or you can just click on the partner section of our website. We'll help you stop running in those AI circles, help get your team ahead, and build a straight path to ROI on gen AI. Our next piece of AI news: Google is making Gemini its top priority in 2025. So according to reports, Google is intensifying its focus on artificial intelligence, particularly its Gemini language models, as it aims for significant growth in the coming year. So the company is working to establish Gemini as a leading AI product, targeting half a billion monthly users by 2025. So according to reports, during a recent strategy meeting, Google CEO Sundar Pichai emphasized the importance of Gemini, stating that the stakes are high amid increasing competition in the AI landscape. So Google aims for Gemini to be the 16th product in its portfolio to reach 500 million monthly users, joining the ranks of popular services like Gmail and YouTube. So Google revealed plans for substantial updates to Gemini, envisioning a universal AI assistant capable of handling various tasks across multiple devices and formats. Pichai acknowledged the rising prominence of rival systems, particularly OpenAI's ChatGPT, which has become synonymous with AI for many users. So yeah, a big piece of this report on Google's internal strategy meeting is essentially that when everyday people out in the world talk about AI, they just automatically assume you're talking about ChatGPT, right? So he's kind of voicing this concern, or wanting to change the narrative, that when people are talking AI and large language models, everyone automatically assumes they're talking about ChatGPT. Right, because ChatGPT was the first, right, the first big product to offer large language models on the front end that users could chat with, that didn't require a backend API or a third-party service. It became synonymous with AI. It exploded overnight.
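Circling back for a second to that Claude-versus-Gemini evaluation story: the side-by-side rating workflow those contractors reportedly use is easy to picture in code. Here is a minimal sketch under stated assumptions. The `models` dictionary holds stub callables standing in for real vendor API clients, and nothing here reflects Google's actual internal tooling:

```python
def compare_models(prompt, models):
    """Run one prompt through several models and collect the outputs
    side by side, ready for a human rater to score on criteria like
    truthfulness and verbosity.

    `models` maps a label to a callable that returns a string. Real
    callables would wrap vendor APIs; these are illustrative stubs.
    """
    return {name: ask(prompt) for name, ask in models.items()}

# Hypothetical stand-ins for real model clients -- purely illustrative.
models = {
    "model_a": lambda p: f"Short answer to: {p}",
    "model_b": lambda p: f"A much longer, hedged, more verbose answer to: {p}",
}

outputs = compare_models("Explain RLHF in one sentence.", models)
for name, text in outputs.items():
    # A rater would now score each response, e.g. 1-5 per criterion,
    # with up to 30 minutes allocated per prompt in the reported workflow.
    print(f"{name}: {text}")
```

The point of the sketch is the shape of the job: same prompt, several models, outputs lined up for a human to judge. That's "evaluating", and the gray area the story describes is whether those judged outputs then leak into "training".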
So Google executives are confident, according to the report, that Gemini will offer a richer and more integrated experience, leveraging Google's extensive platform capabilities to outpace its competitors. So despite facing scrutiny from global regulators regarding potential monopolistic practices, Pichai stressed the need to focus on delivering innovative technologies responsibly. Google employees, though, according to reports, expressed concerns about the potential costs associated with using their AI tools. Essentially they were saying, yo, we don't want a $200 per month plan like OpenAI has. However, leadership assured them there were no plans for high subscription fees such as that $200 per month. Here's the thing, y'all. I'm not going to say it's too little, too late for Google. And I've been extremely harsh on Google, especially early 2023 and early 2024, because Google was asleep at the AI wheel. It was almost a year after ChatGPT that Google released what was then called Bard to the public. For a company like Google, they dropped the bag. I think Google could have a stranglehold on the AI market. They have access to all the users. They completely failed in 2023 and early 2024. Let's be honest. But I think they won the last quarter, right? Their December, with the release of their experimental Gemini 2.0 and Gemini 2.0 Flash, right, those are the experimental, not the final versions, their new research tool, all their agentic tools, a lot of them in beta. I think Google won the fourth quarter of 2024 because of their strong December releases, going head to head with OpenAI when OpenAI had their 12 Days of OpenAI. So I do think Google is now a serious competitor, at least. Whereas if you would have asked me six months ago, and companies do ask me all the time, FYI, right? Because people always ask. Yes, companies pay us, pay Everyday AI, to consult them.
And I've told people, six months ago: don't touch Google. Here's why. Google AI Studio, their Vertex, all of their backend platforms have been great since they launched. But Google's Gemini front end had been abandoned up until December. So when you logged in to gemini.google.com and you were quote unquote chatting with Gemini, it was absolutely terrible up until December. That's because Google's go-to-market strategy with their large language models, whatever it is, was completely backwards, right? What you were using on Google Gemini was generally a model that was six to nine months behind their most recent model. Whereas if you went into Google AI Studio, Google was competing head to head with OpenAI all along. What people don't understand, and let me say this, this is free consulting advice for you people at Google, at Claude, everywhere, right? The people evaluating large language models are not always CISOs, they're not always CTOs, they're not always developers, they're not always software engineers. A lot of times for enterprise companies it's someone like a chief marketing officer, it's CEOs. I've sat in the boardroom with the C-suite of a company doing $20 billion in revenue, so an enormous company, and they're evaluating models with front end platforms. Okay? Because this whole democratizing AI thing, yes, that's happening. But guess what that means, smart people at Google who fumbled the bag: that means you have non-technical people who are evaluating models for the largest companies in the world on the front end. They are not logging into your Google AI Studio, as great as it is. They're going to gemini.google.com, they're going to chatgpt.com, they're going to claude.ai, they're going to Mistral. They're making those decisions because large language models have democratized access to AI.
You have non-technical people going in there to front end products, not using the API, not using developer consoles, and they are making decisions for multi-billion dollar companies, the largest companies in the world. Some of them aren't, but some of them have been evaluating models that way for the last two years. So Google absolutely made, I would say, a long-run trillion-dollar mistake by doing that. All right, I'll get off my rant. Sorry, y'all, I know this is AI news that matters, but I had to get a little rant in there. It's been a while. All right, our next piece of AI news. OpenAI is apparently reconsidering a humanoid robot. All right, so according to reports from The Information, OpenAI is reportedly considering again developing its own line of humanoid robots. So the company has reportedly reestablished its robotics research group after closing it in 2021, indicating renewed interest in the field. OpenAI's potential move into humanoid robotics could position it alongside industry leaders like Nvidia, Tesla, Figure and others, while also highlighting its strategic investments in companies like Figure AI and 1X. So, yeah, OpenAI has been investing a lot of money in other robotics companies, but now, according to reports, has reestablished its robotics research group after closing it about three and a half years ago. So, although humanoid robots are a lesser focus compared to OpenAI's advanced AI models, these developments could significantly impact industries from warehouse operations to consumer applications. Yeah, there were other reports that we shared in our newsletter last week and the week before also talking about how Nvidia is deepening its focus on robotics and humanoid robotics as well. And without giving too much of my AI predictions away for 2025, get used to talking about humanoids and robotics a ton. It is both exciting and extremely scary. I'm not gonna lie to y'all. All right. Could AI be manipulating humans? Yeah. Yeah, it is. All right.
So a new study from the University of Cambridge reveals that artificial intelligence tools may be used to influence online decision making, raising concerns about the potential consequences for consumer behavior and democratic processes. So researchers at Cambridge's Leverhulme Centre for the Future of Intelligence, I'm probably not going to say that right, described an emerging intention economy, where AI assistants can forecast and manipulate human intentions, selling this information to companies. So this intention economy is seen as a successor to the attention economy that we've been living in here for decades, where social media platforms profit from capturing user attention. So now we're going into this AI predictive intention economy, because AI can influence and manipulate humans, according to researchers. So the study suggests that large language models like ChatGPT could be employed to anticipate user needs and steer human decision making based on psychological and behavioral data. So advertisers could create personalized ads using generative AI tools, dramatically adjusting their strategies based on real time user interactions and preferences. So the research indicates that tech executives foresee a future where AI models will accurately predict user intentions and present tailored information to guide their choices. So, yes, we finally have an official study from a reputable institution confirming what we've already known to be true for a couple of years. Yes, AI can manipulate humans. Not only that, but large language models have already been used for a very long time in creating synthetic data, right? And guess what? That synthetic data, the data these large language models are creating, is used to train other models, right? So model distillation, but also synthetic data for synthetic user testing, right?
So big companies, instead of running focus groups with dozens or hundreds or thousands of humans, are just using synthetic AI data. So this is not new. But I do think we are transitioning from kind of 1.0, or step one, models to step two. So going from GPT models, transformers, your traditional chatbots, to models that learn, models that reason, models that self-improve, and models that ultimately can manipulate humans, right, that can kind of predict how humans will react. So I think when you start to combine online user tracking, and the data that all of these companies have on us, with large language models, yes, I don't think it's wild to say, or wild to think, that AI will be able to intentionally manipulate people's opinions, right? I think it's already been being used that way in the darker parts of the web for a very long time. Hey, speaking of doom and gloom news, here's more. All right, so the godfather of AI, Geoffrey Hinton, has raised concerns over AI's now increased potential to threaten humanity's future. So in a striking statement, Geoffrey Hinton, one of the most prominent figures in the history of AI, has increased his estimate of the likelihood of AI leading to human extinction within the next 30 years. So initially he said there was a 10% chance, and now the Nobel Prize winner in physics has said that there is a 10 to 20% chance that AI could lead to human extinction. Yay. Isn't that fun to talk about? So in a recent interview, he expressed concerns that humanity may soon be outsmarted by AI systems, likening humans to toddlers in comparison to these advanced technologies. So like I said, previously Hinton had estimated a 10% chance of catastrophic outcomes from AI, but now suggests that risk has doubled, saying there is a 10 to 20% chance.
So he highlighted the absence of historical examples where a more intelligent entity has been successfully controlled by a less intelligent one, raising alarms about the implications of creating artificial general intelligence, or AGI. So Hinton, who previously worked at Google, resigned from Google specifically to speak more freely about the risks associated with AI, emphasizing the potential for misuse by bad actors. He called for government regulation of AI development, arguing that relying solely on corporate profit motives will not ensure safety. And that's a great way to wrap up today's show, talking about company profit, because, at least according to reports, that is how Microsoft and OpenAI are defining AGI. All right, yeah, a lot of buzzwords in the AI news this week, I know, but artificial general intelligence, AGI, is essentially when AI is way smarter and way better than humans at almost any task, right? And I say this a lot: if you're looking at definitions of AGI from 20 years ago, we've definitely already achieved it. People are saying OpenAI's new o3 model, which costs like a trillion dollars to, you know, ask one question, is AGI. I don't think so, because today's definition of AGI is changing by the minute. But there are some important decisions, I think, being made between OpenAI and Microsoft on defining what AGI even means. So according to reports, the two companies agreed to define AGI as a system capable of generating $100 billion in profits. This is according to reporting from The Information. So OpenAI's public definition of AGI has been described as, quote, a highly autonomous system that outperforms humans at most economically viable, sorry, economically valuable work, end quote. And I would say, even according to that definition, if you've used o1 Pro, right, OpenAI's reasoning model, its most powerful? So we had o1 Preview and o1 Mini.
Now we have o1 and o1 Mini, but if you're on the $200 monthly plan, you also have access to o1 Pro. I have access to it, and it is mind-bogglingly good. Now, it doesn't yet have access to all of OpenAI's other tools: the Internet, the ability to upload spreadsheets, some of these other very important tools. But when it does, I don't think you could argue it isn't AGI. And as you know, we've seen some rumors about autonomous capabilities coming to ChatGPT, like something called Tasks, where you can schedule different prompts to run at scheduled intervals. When you can schedule a model like o1 Pro to run at scheduled intervals, connect to the Internet, and work with all your data, at that point, it's like, yeah, that's AGI, at least according to OpenAI's definition. But that's not the definition anymore. So notably, according to these reports, any system classified as AGI will be excluded from the intellectual property licenses and other commercial agreements in the Microsoft-OpenAI partnership. OpenAI has reportedly received more than $13 billion in investment from Microsoft, and Microsoft reportedly holds a 49% stake in OpenAI. But there was this clause that essentially says: when OpenAI gets to AGI, Microsoft can no longer use that technology in its products. Because right now, even though we saw a lot of reporting recently that Microsoft is reportedly diversifying the models you can use inside its Microsoft 365 Copilot platform, as far as I know it still runs exclusively on OpenAI's technology. So obviously Microsoft has hung its hat on the future of OpenAI's models.
And there's been a lot of talk recently, especially in Microsoft circles, about what happens when OpenAI says, hey, we've achieved AGI. Because previously it was OpenAI's board of directors, a handful of board members essentially, who got to say, hey, we've achieved AGI. And at that point Microsoft would only be able to use pre-AGI technology inside Copilot and its other platforms. And that's not a great future, especially if you are a Microsoft fan or a Microsoft user. So now, according to reports, they've changed that agreement, and I'm surprised it took this long to get to where we're at. Because to have a company like Microsoft, which, depending on the day, is the largest company in the world by market cap, say, hey, board of directors, hey, handful of people at OpenAI, you essentially get to decide when we've quote-unquote achieved AGI, and at that point Microsoft Copilot and the billions of users of Microsoft systems can no longer benefit from the future technology that is considered AGI? Yeah, that's not a great way to determine the future of a trillion-dollar company. So now it is essentially when an AI system has the track record or the capability to reach $100 billion in profits. Interesting. Also, OpenAI CEO Sam Altman has expressed optimism about achieving AGI sooner than expected, suggesting its impact may be less significant than anticipated. Yeah, I think a lot of people are thinking AGI is going to be this catastrophic line in the sand, like, oh, after AGI happens, the world completely changes. I don't think that's the case. The definition used to be arbitrary, but now it's not so arbitrary. Before, it was kind of like a vibe check.
You know, OpenAI's board of directors would be like, yeah, we've achieved AGI. Like I said, take an o1-style model, a model that does chain-of-thought reasoning, or o3, or whatever, and give it access to everything the GPT models have. If o1 Pro had access to everything GPT-4o has, so all the other tools, ChatGPT's Canvas, the ability to browse with ChatGPT Search, DALL-E, Sora, and I know right now you can't access Sora within ChatGPT, but if o1 Pro had all of these things plus the ability to schedule prompts or tasks, I'd say that's probably AGI. But now there's a hundred-billion-dollar price tag on the AGI definition. All right, that's it, y'all. As a very quick recap, here are the AI news stories that matter for the week of December 30th, our last AI News That Matters of 2024. DeepSeek has launched their open source V3 model, and it is benchmarking very close to the world's strongest and most capable proprietary models, such as GPT-4 and Claude 3.5 Sonnet. Meta, and this one's weird, is expecting AI characters to be posting on its social media platforms soon. OpenAI has released more details on its plan to restructure into a public benefit corporation. Google contractors are reportedly using Anthropic's Claude AI to compare and evaluate Gemini's responses. More Google news: Google is saying AI is its prime focus in 2025. About time, Google, you're only like three years too late. According to reports, OpenAI is exploring humanoid robots. Researchers are saying that AI tools could manipulate online audiences. The godfather of AI now says there's a 10 to 20% chance that AI is going to extinguish all us humans.
And Microsoft and OpenAI have reportedly put a $100 billion price tag on the definition of AGI. All right, I hope this was helpful, y'all. If it was, please do a couple of things. Tell someone about it. I think we only have one more show in 2024, and if you gained benefit from Everyday AI this year, please tell someone about it. If you haven't already, go to youreverydayai.com and sign up for our free daily newsletter. We will be recapping all of today's stories and a lot more. If you're listening here on LinkedIn, a lot of work goes into Everyday AI every single Monday through Friday, and we'd appreciate it if you could share it with your network. But also tell them to go to our website, because there's thousands of hours of free AI information sorted by category, no matter what you care about, from the world's leading experts. All right, y'all, thank you for tuning in to our last AI News That Matters of the year. Hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all.
A
And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going for a little more AI magic. Visit youreverydayai.com and sign up for our daily newsletter so you don't get left behind. Go break some barriers and we'll see you next time.
AI News That Matters – December 30th, 2024
This episode is the year-end edition of "AI News That Matters," in which host Jordan Wilson breaks down the most pressing stories in AI as 2024 comes to a close. Jordan brings his signature candid, jargon-free style to spotlight breakthroughs, corporate maneuvers, paradigm shifts, and the ethical debates shaping AI right now, all aimed at helping everyday listeners stay ahead in their careers and businesses.
Jordan wraps the episode by urging AI professionals and enthusiasts to share the knowledge, join the free newsletter at youreverydayai.com, and explore the trove of resources—over 430 episodes to date.
“If you gained benefit from Everyday AI this year, please tell someone about it ... there’s like thousands of hours of free AI info, sorted by category, no matter what you care about, by the world’s leading experts.”
— Jordan [51:08]