Jordan Wilson (22:55)
I'm not using DeepSeek, and I think for the most part you probably shouldn't either, especially if you are a legitimate business in the US. There's a reason why DeepSeek is banned in certain states and in certain countries, right? I won't get into that too much. But I will say, just because it made the list doesn't mean I'm recommending it. Again, when I say this is the top, these are the most popular or most used features or modes from some of the big companies. I did the deep dive, I went through and did all the debunking for you, so go listen to episode 460, the DeepSeek deep dive: an AI Sputnik moment or a national security threat? Yeah, probably more the latter than the former, y'all. All right, number eight: Genspark. So if you don't know Genspark, they actually had a big partner announcement with Microsoft at Microsoft's conference. But it is a multi-agent workspace that distributes tasks across specialized agents working in parallel. I know I just said the word agent a lot there. It's much different than something like a Cursor or an Antigravity, because this is essentially a front-end agent: you're not using terminals and running code, you're just typing in a box like ChatGPT, and you connect your data sources. It connects all your data, then these agents run out and do different things for you, and it connects to different models. So they have research agents, coding agents, analysis agents, creative agents, etc. Essentially this gives you a team of AI specialists instead of a single AI assistant, so you get multiple perspectives on complex problems. So yeah, pretty good, Genspark. All right, and live stream audience, as we go: Genspark, what's your vote? All right, number nine: Gemini 3 Flash.
And this is pretty big for developers, but mainly for Google's AI mode, right? So you might be wondering, what does Google's AI mode have to do with Gemini 3 Flash? Well, Gemini 3 Flash, number one, is a ridiculously good model. On some third-party benchmarks, it's actually a top-five model in the world. So if you don't know what a Flash is, it's essentially the small version of Gemini 3 Pro, and usually these small or mini versions are many, many steps behind. Not the case for Gemini 3 Flash. It is an almost state-of-the-art model; it is that close. But it's technically free to use, because if you have a Google account, even a free Google account, you're just logged in, you Google something, and you go to AI mode. Well, now AI mode is so much better because of Gemini 3 Flash, whereas before it was running on Gemini 2.5 Flash, which, again, is not a terrible model, but this is a huge jump. So this is obviously great for developers, but the reason I'm including it here is because this is probably going to be the most used model aside from some of the ChatGPT tools, because they have 800 million weekly active users, and the ChatGPT modes that you at least get a little bit of on the free side are going to be used a little bit more than this. But worldwide, Gemini 3 Flash is probably going to be one of the most used models, because AI mode is powered by it. And if you don't know AI mode: I tell people, always use AI mode. Don't use AI Overviews. AI Overviews are not good. Use AI mode; it's one extra click. This is best for everyday AI tasks, high-volume applications, or anyone who found frontier models too expensive. So on the dev side, yes, it's obviously a great choice. A lot of developers use Gemini 3 Flash. It's super fast and super cost effective.
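For the developers in the audience, here is a minimal sketch of what calling a Flash-class model looks like with Google's `google-genai` Python SDK. This is a sketch under assumptions: the model id string (`gemini-3-flash` here) and the `build_prompt` helper are illustrative guesses, so check Google's current model list for the real id, and you'd need a `GEMINI_API_KEY` in your environment.

```python
# Sketch: calling a fast, low-cost Gemini "Flash" model via the google-genai SDK.
# Assumptions: `pip install google-genai`, GEMINI_API_KEY set in the environment,
# and "gemini-3-flash" as the model id (verify against Google's model list).

import os

MODEL_ID = "gemini-3-flash"  # assumed id; confirm in Google's docs


def build_prompt(question: str) -> str:
    """Pure helper: frame a question as an everyday, non-technical AI task."""
    return f"Answer concisely for a non-technical business reader: {question}"


def ask(question: str) -> str:
    """Send the prompt to the model and return its text reply."""
    # Imported lazily so the pure helper above works even without the SDK installed.
    from google import genai

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    response = client.models.generate_content(
        model=MODEL_ID,
        contents=build_prompt(question),
    )
    return response.text


# Example (requires the SDK and an API key, so left commented out):
# print(ask("Summarize why small 'flash' models are cheaper to run."))
```

The point of reaching for a Flash-class model in code like this is exactly what's described above: near-frontier quality at a speed and price that makes high-volume use practical.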
But even for non-technical front-end users, this is what you're probably going to be using when you're using Google AI mode, which I recommend you use all the time. Anytime my wife literally gets out her phone and starts typing anything, I'm always like, no AI Overview. AI mode. It's a rule in the house. All right, number 10: Gemini 3 Pro. So this is the big model. This is the behemoth, the newest, the biggest, the baddest, and technically the number one model in the world, both on LM Arena and on Artificial Analysis. So this is the best. It dethroned OpenAI, dethroned everyone. It is multimodal by default, so it can process video, audio, and images with equal depth, with reasoning by default. On the API side, there's a million-token context window, which is a ton. And if you are on one of Google's higher plans, you do get the Deep Think mode. I could have included Deep Think as a top feature, but because it's only available right now, I believe, on the Ultra plan, I'm not going to do that. All right, so what does this unlock? Well, a lot. This unlocks the future of AI models, I think. There's a reason that Google has absolutely dominated 2025, and I think it's because, well, number one, their models are really good. They had a very bad release with Bard and the original Gemini. They were kind of caught with their tail between their legs in 2024, and then they just put their heads down, shipped, and crushed everyone in 2025. And it wasn't really close. If you just look at progress in 2025, Google won. And in large part it's the way they finished with Gemini 3 Pro; it is an extremely capable model. But like I said, the fact that it is multimodal by default means it understands the physics in video. So not only can it create video, right, if you go to Gemini, you can create video (Gemini 3 Pro uses Veo), but it also understands video. People don't know that, right? It can ingest video, not just transcripts.
It can know what's going on. Like, if I threw mine in there, it would describe me; it'd say, oh, I have a Nike shirt on, whatever. It understands everything. This is best for complex multimodal analysis, video understanding, long-document processing, enterprise intelligence, etc. For me, I love using Gemini 3 Pro in AI Studio. That's another top-50 tool, but not quite top 25. I love the new vibe coding updates from Google AI Studio and the team over there, which I talk to a lot. But sorry, y'all, there's too much good stuff; Google AI Studio couldn't crack the top 25. But personally, I'm using Gemini 3 Pro in AI Studio. And speaking of that team over there and Gemini 3, if you want to know a little bit more, go listen to episode 656. That was with Logan Kilpatrick, like an hour after Gemini 3 Pro was released. A very quick chat, but a great chat with Logan. All right, number 11, and let me know as we go along, y'all. I can't believe that Gemini Canvas mode, number 11, was released in 2025. Because I use it all day, every day, for whatever reason it feels like it's been out for like two and a half years. But yeah, Gemini Canvas mode was actually released in 2025. So this is essentially an interactive workspace within Gemini for creating and editing documents, code, and apps in real time. This is another one of those that was kind of closely aligned with the rollout of OpenAI's Canvas, which was released in 2024; Gemini Canvas was early 2025. So like ChatGPT's Canvas, it kind of serves a dual purpose. One is just live document editing with Gemini, so it's more like you're working with a Google Doc. Which, I think in general, is great with both ChatGPT and Google Gemini: more people should be using Canvas mode for exactly that.
So let's say, I don't know, you're having Gemini or ChatGPT create a big outline for you, something like that, and there are a couple things in the middle that are wrong. What do most people do? Oh, go change paragraphs four and seven. And then what do large language models sometimes do? Well, they'll change four and seven, and then they'll change the intro, and they'll change the index, and they'll change all these other things too. Well, use Canvas mode, because you can just highlight certain things, and essentially it's co-editing. But the thing I use Canvas mode for, which is great, is that it can write and render code in real time. So everything from HTML, React prototypes, web pages, infographics, quizzes, so many things. It is quite amazing. And the little tier sheet that I'm going to show you guys here at the end to rank everything was obviously made with Google Gemini. All right, and if you want to know more about Gemini Canvas mode, episode 554. Number 12, here we go. Live stream audience, let me know, what's your vote? This is GPT Image 1.5. This is OpenAI's latest dedicated image generation model, with improved quality and speed over GPT Image 1. And what does this unlock? It's best for marketers, advertisers, communication professionals, anyone, right? And it is really good. Obviously when it comes to name share, I think that Nano Banana Pro took a lot of the media or social media attention when it came to AI visuals. But a lot of people don't know this: on LM Arena text-to-image, GPT Image 1.5 is actually better. And these are blind, kind of like blind taste tests: you put in a text prompt, you get two images, and you vote for which one's better. So there are a lot of things that Nano Banana Pro is better at than GPT Image 1.5, but it is extremely good.
And if you remember the original GPT Image, a lot of people thought it was not that good. Aside from, you know, some fun, cutesy things that went viral, it wasn't very good. GPT Image 1.5? Very, very good. All right, so I'm gonna put the numbers here again on the screen for the live stream audience as I take a sip of my water. But go ahead. You only get one vote. What is your vote for tool or update of the year? And we're going to keep this going a little faster now, because I don't want this to accidentally turn into an hour-long show. All right, 13: the model you go to when you just need it, right? That is GPT-5.2 Pro. All right, this is, again, thinking mode. Y'all, don't use the free version of ChatGPT, the GPT-5.2 Instant, GPT-5.2 Auto. Don't do it. GPT-5.2 Thinking is good. GPT-5.2 Pro literally just yesterday solved some physics problem that humans couldn't solve. This is good. This is for when you need something, right? So this is OpenAI's flagship model with extended reasoning capabilities. In the family of models there are kind of three different tiers: there's the Instant, the Thinking, and the Pro. Don't use the Auto or the Instant; you should be using Thinking. But if you have the time, use Pro, GPT-5.2 Pro. It can be very slow. And I'll tell you this: my usage of ChatGPT's Deep Research has actually gone down, because I think GPT-5.2 Pro is just as good. And then you can tap into some other features by using this model, because you can, as an example, use Canvas mode with it, and a GPT can be powered by GPT-5.2 Pro. So there are so many benefits to maybe using GPT-5.2 Pro where you might usually use Deep Research. Depending on what plan you're on, there are different limits; I get it, it's a little bit of give and take. But I mean, this is human-expert-level performance on complex tasks.
But only when you are using the Thinking mode, obviously. It's best for complex reasoning, financial modeling, multi-step projects, professional coding, etc. Just always make sure you're using GPT-5.2 Thinking or GPT-5.2 Pro. Do not use the default version. I've talked about this many times on the show: it is the 24th best model in the world. You wouldn't want to do that, right? If you have the choice, would you want to use the 24th best dentist in your city, or the first? Probably the first. All right, number 14: Lovable. So this had a lot of good updates in 2025. But if you don't know, Lovable is a platform that builds full-stack web applications from just natural language. They had a lot of improvements in 2025. I believe they launched Lovable Cloud, the backend for auth and everything, whereas before you kind of had to duct-tape whatever you created in Lovable to some different backend or different auth services. So now they kind of have everything with Lovable Cloud, where it's just ready to go, and they also now have agent mode as the default, so it's autonomous multi-step building: it generates front end, back end, database, deploys, and enables conversational iteration. So essentially you describe an app and get it deployed. And this is best, I think, for non-technical founders. Cursor and Lovable are two very different things, right? Cursor, I think, is more for software engineering teams, very serious developers. I think Lovable can still be used by those people.
But Lovable, I'd say, is probably for, maybe, if you're a non-technical person but you've had an idea for an app, or you run a company and, I don't know, maybe you're paying $50,000 a year for a piece of software that you barely use, and you're like, maybe we could just build something like this on our own, and you try to get an MVP together in an afternoon. Lovable might be the thing to look at there. All right, number 15: Manus, just the general-purpose super agent. It was the first kind of big-name general-purpose autonomous agent that can just deliver complete work products from goal descriptions. So this was launched in 2025, but they have the Wide Research feature where you can literally have like a hundred agents researching at the same time. They all spin up their own virtual computer to execute and handle complex multi-step tasks independently. And it's blown up, right? They were actually just acquired by Meta, not even a year after their public launch. They reportedly hit $100 million annual recurring revenue in eight months, which is an astronomical amount of revenue. I use Manus, I'm on their paid plan, and it's really good. What I personally need to get better at with Manus is that sometimes I just hand everything off to it when I probably shouldn't, mainly because of the credits. Like, I have subscriptions to obviously everything, and Manus is great, I think, at quickly logging into all your services; that's what I'm going to start using it more for. But on the research end, unless you're trying to get, you know, lead information, which it's really good at by the way, don't waste your credits on typical research. Go do that in OpenAI Deep Research, Claude, Google, etc.
But it is good for data analysis, resume screening, travel planning, any complex task that you would delegate to an assistant, or if you wish you had an assistant. Manus is pretty good for that. And we did go over Manus and Genspark and a lot more in an episode I think was actually kind of underrated: episode 613, AI agents from automation to super agents, 10 AI agents you should know in 2025. So go listen to episode 613. All right, number 16: ChatGPT Pulse. So I did still include this, even though it's currently only available to those on the $200-a-month plan. I did so because I think it signifies kind of the next step of AI, which is proactive AI that just does tasks for you, even if you don't ask for it, and just delivers you what it thinks you need. So that's essentially what ChatGPT Pulse is: a proactive daily briefing where ChatGPT researches overnight and delivers you a personalized morning update. I don't like it that much, if I'm being honest, but I do think, just given the number of users that OpenAI has... And I'm a little bit different; I'd say I'm a very unique large language model user. A lot of times I'm really trying to test things and break things, and so I was getting a lot of reports that were maybe relevant to my usage, but not relevant to what I wanted. You can obviously provide feedback to ChatGPT Pulse. It can go through your email and your calendar and tell you what's important, say, oh, you have a meeting with this person, and do some research for you. I don't know, to me, even with the more feedback I gave it, I wasn't getting something great. And probably the reason is because I am really good at ChatGPT, and a lot of people don't know this, so here's a secret, lean in for the secret: agent mode. You can schedule agents, right?
So at least for me, I was able to get much better results, especially when we still had connectors. We have apps now; they work a little differently. But the agent connecting to my data on a scheduled run was way better for me than ChatGPT Pulse, because I just had more control over it, and I had less control over Pulse. So, you know, I'm just trying to break things all the time and reverse-engineer things, but for more casual users, and even other power users, I think ChatGPT Pulse is great. All right, number 17, and let me know, live stream audience, keep getting your votes in. Number 17 is the Microsoft 365 Copilot agent modes. So this is an AI assistant embedded throughout Microsoft 365 with autonomous agent capabilities. The agent mode completes multi-step tasks across Office apps autonomously, including the Work IQ memory, so it maintains persistent context and learns your preferences. And there's also the ability to access this in Copilot Studio. So this really gives you the ability to delegate complex workflows across Word, Excel, PowerPoint, Outlook, and Teams. And this is obviously best for Microsoft-native enterprises. We did go over this hands-on in episode 621. By the way, a lot of these episodes that I'm dropping are probably best if you watch them, because a lot of them are hands-on demos. Not all of them, but just FYI, if you ever want the video version, you can always get that for free on our website at youreverydayai.com. All right, number 18, we're almost at the end of the list here: Nano Banana Pro. I already kind of referenced this when we were talking about GPT Image 1.5, but Nano Banana came out amazing, right?
I think Nano Banana was the first time that we looked at images and we were like, wait. Even as a former photographer, right, I have four professional DSLRs in my closet and I've taken more than a million photos over the years. With Nano Banana, I could tell the difference; I could say, okay, yeah, this is AI or Nano Banana. The average person probably couldn't. Nano Banana Pro? No one can. No one. Not even if you're an AI expert, not even if you're a photographer. No one can tell the difference anymore. Nano Banana Pro, and GPT Image 1.5 for certain use cases, are that good. But Nano Banana Pro was the first model that got to that level where no one can tell anymore. Even if you're trying, you can't tell. So this is best-in-class. Text rendering: finally solved. Took a while, but now you can finally get that. And it reasons, which is great. This is a reasoning model; it's part of Gemini 3. We went into this more in depth when we went over Gemini 3 Pro. But it can reason about an image, which is completely different than a diffusion model that doesn't think, that just kind of spits out and fills in pixels. This understands what's happening in the photo and what should happen in the photo. It's extremely accurate. You can also upload 14 different references, and it can combine different things. So, I don't know, if you had a bunch of products, maybe you work at an e-commerce company or something like that, and you're like, oh man, we didn't get any of these into a shoot, but we have individual product shots. Well, you can literally just combine things, and then combine different products with different humans or models, and then fill it in with text. It is that good. This is obviously best for people trying to do marketing assets with text, infographics. It's absolutely bonkers at localized visuals, product mockups, et cetera.
Number 19, this one: NotebookLM updates, including Nano Banana Pro in Video Overviews and slides. So NotebookLM actually had a ton of updates this year that weren't related to Nano Banana. They redid their Studio completely. So now there are personalized and customized reports that you can save templates for, which are really, really good, and you can create unlimited versions of those. The AI podcast, the Deep Dive Audio Overviews: there are different versions of those too, and again, you can create however many you want, because you used to only be able to create one. It's such a huge unlock, even just those things alone, just the Studio. But then you throw in Nano Banana Pro: now NotebookLM has Video Overviews whose visuals are powered by Nano Banana Pro, and they also have slides that are powered by Nano Banana Pro. When I say I don't want to be the product manager at PowerPoint, yeah, don't want to be. Not good. Because Nano Banana Pro is so, so good, especially combined with NotebookLM and being able to run slides in there. It is bonkers. So I'm looking at this one, live stream audience; I'm wondering if anyone's going to give NotebookLM, number 19 here, their vote. We'll see. Now, we did go over this in episode 652. That's another underrated episode; go listen to that one or watch it on our website. All right, number 20: Perplexity Comet. So this is an agentic browser from Perplexity, like ChatGPT's Atlas. It is Chromium-based, so like ChatGPT Atlas, you can sign in with your Chrome profile and it'll sync everything over: your bookmarks, your passwords, your extensions. Same thing with Perplexity's Comet. If I'm telling you the truth, there aren't a ton of blanket differentiators I can make between Perplexity Comet and ChatGPT Atlas.
The big benefit I can point to with ChatGPT Atlas is that it connects to your data, and I think it's a little better on the agent mode. Perplexity is for things where you don't necessarily need a graphical interface, you know, if you don't need an actual agent making 10 clicks on a page. For everything else, I think Perplexity Comet is a little faster and does a little bit better at long-range tasks. In my testing, like I said, I'm constantly testing ChatGPT Atlas, Perplexity Comet, and then just Chrome with the Claude extension. And Perplexity Comet is usually the best for most use cases, and the fastest, and it does better over long-range tasks. So this is great for research across many sites, price comparisons, data collection, any task that would normally require multiple browser tabs or require you to go to a bunch of websites back to back to back and find all this information. Perplexity Comet: great. And we did go over five business use cases for Comet in episode 614, so go check that one out. All right, probably the one I've used least on the list, but I am bullish on it regardless because it is really good: Replit Agent 3. So this is a browser-based AI agent that builds, deploys, and hosts applications entirely in the cloud. So now, with natural language, you can go straight to a deployed app: describe what you want, the agent builds and ships it, and with integrated hosting you have no local setup required. I would say this is kind of the more technical version of Lovable, right? And Replit's been around forever; I would say Lovable is kind of the AI clone of Replit, not in a bad way, it's just non-technical. You can get more technical in Replit if you need to, but with Agent 3 you don't have to if you don't want to. They actually had a funny little ad a couple weeks ago that we shared in the newsletter, with Shaq, right?
Just literally using his voice and creating apps, full front end, full back end. What this unlocks is essentially being able to go zero to deployed in the browser, with complete dev control that's accessible anywhere. So it's great whether you want to learn to code, deploy simple apps fast, do rapid prototyping, or develop without a local environment, whatever it is. Replit Agent 3: really, really good. All right, we got a video model. There we go: Runway Gen-4.5. So Runway was technically one of the OG video models, right? Before we had a Veo, before we had a Sora, we had Runway. So this is actually great for professional-grade AI video generation for both filmmakers and creatives. So what's new this year? Well, Act-One, which is impressive. You can capture facial and body motion from your webcam and transfer that to AI characters. It's bonkers, such a cool feature. So if you are trying to do something a little higher level, trying to get the right facial expression on something, maybe you're just shooting a little ad for your company and you can't get it out of the avatar or whatever you're working with, you can literally act it out, the facial expression, the body expression, and it will do that. They also added the Multi Motion Brush for selective animation, which is good; they've had the single Motion Brush for a very long time, so it's good to get the multi one. And what does this unlock? Well, controlling AI performances using your expressions, stable characters across frames, professional-quality output. And obviously the update going from Gen-4 to Gen-4.5 is a pretty big boost. So who is this best for? Marketers, creators, advertisers, filmmakers, commercial production, music videos. Essentially, if you're a creative professional, you should probably be using Runway Gen-4.5, or at least trying it. All right, next on the list: Sora 2.
So what is this? Well, this is OpenAI's video generation model with native synchronized audio. That's really what was unlocked in 2025. You've got to tip the cap to Veo, which we're going to get to here in a second, which was the first. But Sora? Good. It's really good. The video model is very impressive, and it actually launched via an app, right? So yes, it's part social app, slash brain rot, but still a really impressive video model. You do have to sign up for it via the mobile app, which is technically like a social network, but you only have to do it once, and then you can just use it on your computer from there. So now you can do up to 25-second clips with native audio, including dialogue synced to lips, sound effects, and music. There's a Storyboard mode, which they've had since the original Sora, which, again, is a really underrated feature of Sora, best used, I think, on the desktop. They now have the character cameos, which are just called characters now, right? So you can upload yourself and then just have yourself do anything, or you can make your character public, or you can share your character with friends or coworkers, and then they can use your character to do or say anything. So there are obviously a lot of scary things that can be done, but if you're putting your character out there for anyone to use, well, you can maybe expect people to use it for bad purposes. I think this unlocks just so many creative use cases. I'm not personally using this much; obviously, I used it to test it a little bit, but I think this is best for short-form content, social video, marketing clips, all those things. So if your company is trying to make it on TikTok with vertical video, Sora is a fantastic tool for that. Me? I couldn't care less about social media vertical videos.
I don't scroll vertical videos, so not my thing personally, but extremely impressive. All right, and we did go over that, Sora 2: AI TikTok brain rot or your company's secret creative weapon, in episode 623. All right, two more. Number 24: Suno V5, the AI music generator. Essentially, without going into too much detail, there's a lot new, but the major thing is the quality jump: more natural vocals, better instrument separation. You can actually separate different things; almost think of a song now as layered, and you can control each layer with text prompts. You can also upload samples of yourself, like humming a tune, things like that. But Suno V5, again, this is one of those things where, yeah, I'm gonna get some hate mail for this, sorry: it's better than 99% of musicians, right? Essentially, if you're not on tour or on the Billboard charts, Suno V5 is probably just as good as you or better. The music? It creates bangers, right? Bangers. People didn't know this, but there was actually an artist that was breaking onto all the Billboard charts, and they found out later it was someone just using Suno. And I think that's going to become kind of the norm, because the music is that freaking good. And this is an older one, but it was just a fun interview that I had with the CEO of Suno a couple years ago, episode 207. It's probably a cringey interview because it's so old, but it was a fun one. I'll leave it at that; go listen to it. All right. And then, last but certainly not least, we have Veo 3.1. So this is Google's flagship video model, and it was the first to generate synchronized audio natively, and it is really good. So what is new in 2025? Well, the native audio generation, including ambient noise, music, dialogue, and sound effects alongside video; the physics understanding, which is through the roof because it's Google; and up to 4K resolution.
So this is true end-to-end video generation without the need for extensive post-production audio work. The ability to do beginning-frame and end-frame generations? Bonkers. So good. I wish I had more time in the day; I would be using Veo 3.1 and Google Flow more. But this is best for video content creation and marketing, essentially anyone wanting complete videos without editing audio or without needing a huge team. That took way longer than I thought, y'all. I am physically tired after this one. So many AI tools. But go ahead, get your votes in one last time. So live stream audience, I'm going to give you a second. Go ahead, vote for your favorite; just put the number if you want. You know, I guess you can click pause on a live stream too, right? Yeah, you can click pause on a live stream and then just finish; you'll be five seconds behind, like a delay. Like that time I had a Super Bowl party with two TVs and one was delayed, and everyone's screaming over in this corner and everyone in that corner is confused, right? There you go. All right, if you want the top 50, everyone that didn't make the list, make sure you repost this. All right, so now I need to quickly rank these. Even though we went very long, let's go ahead and rank them, like I said, with our little vibe-coded ranking system here from Google Gemini's Canvas mode. All right, let's do it. Let's get down to business. Where would you put everything, y'all? Where would you put everything? This is a tough decision. You know what, I kid you not, guys. I hear from people all the time, like at OpenAI, Google, Microsoft, other labs, and they're like, oh, I saw you talked so great about this, and they were kind of salty or, you know, hurt. So I don't want to hurt anyone's feelings, but I'm just going to tell you guys the truth here.
And just because something's a D, right, that doesn't mean it's bad. That means it still beat out hundreds of other... and by hundreds, I mean thousands of other extremely capable AI tools. And again, remember, one last reminder on this: this is for non-technical people, right? This is for your everyday business leader. Like I said, there are so many other great tools. If you're talking about certain niches, certain verticals, there are so many great tools. Right. Let's go quick here. This is a long one. All right, here we go. Let's start in the middle. So I think Lovable? Great. I'm going to put that at C level. Also on C level: Manus. I've been using Manus a ton more lately. They've been rolling out some great updates in 2026 already that I'm loving. Like I said, I've got to get a little bit better at, you know, testing out the 1.6 Lite, the normal 1.6, and the 1.6 Max, right? Sometimes I just go straight Max and eat through my credits. One prompt ate through my entire month's credits. All right, let's keep this thing going. You know what? Claude Code. Where do you think, y'all? Where do you think Claude Code is gonna go? Banger. S. Claude Code. It is so good. All right, Canva Visual Studio, or Visual Suite 2.0. That's D tier. Still made the list. I still use it, technically, every single day. All right, let's go here. We're gonna go Suno on the D. Nothing against Suno; it's literally fantastic. I'm just thinking for overall use cases. It's the best one by far, which is why it made the list, and it's extremely popular. All right, let's keep going. Let's do ChatGPT Pulse. That's going to fall on the D. Again, I think if more people had access to it, and if it was a little better at receiving feedback, maybe a little bit higher. All right, let's go. So many options. So many options. All right, let's go Gen Spark. That's going to go D. That was tough for me.
Probably between D and C, but probably right there. All right, let's go. By the way, Gemini Canvas crushed this thing. It's so smooth. There were other ones online, but they were all buggy, and I'm like, I'm just gonna make my own little tier sheet where I can rearrange them. All right. Gemini 3 Flash. There's a lot of things I gotta get to. All right, that's gonna be B. That's tough. It is so good, so fast. It's gonna be used by billions, literally billions of people. But I gotta put it there. All right. GPT-5.2 Pro. Gosh, it's got to go S. It's got to go top, right? It is the best model. Even though Gemini 3 Pro has become a little bit more my workhorse, my day-to-day, when I need something done 100% and I need that chain of thought, I need that transparency, I'm going to GPT-5.2 Pro and I'm not looking back. All right, Runway Gen-4.5. We're going to drop that down to D level. All right. We're going to go ChatGPT Deep Research. This is tough. I'm gonna have to go to A. I can't put everything on S; that's the tough part. All right, let's go to Replit. We're gonna put that at C. Replit Agent 3, so good. DeepSeek, same thing. We're gonna put that at C, and more than anything, that's not even necessarily the model's capabilities; it's what it's done for open source. Right. That's why I'm saying it's a top AI tool and release, because it is impactful. Not because I think people should be using it, but because it pushed all the other companies. Right. We got an extremely capable GPT-OSS from OpenAI. Maybe not in response, but I don't know, they had to be influenced somehow by that. All right, let's do Perplexity Comet. This one's tough. This one's tough. All right, for Perplexity Comet, I'm gonna go B level. I'm gonna go B level with that. All right. And because those two are neck and neck, I'm gonna also put ChatGPT Atlas at B level. Let's start filling in the A level. Okay. Oh, Veo 3.1, man.
The fact that that didn't make the S level... it's going A. All right, all right. Also going A: we're going to go GPT Image 1.5. Where are we putting the Microsoft Copilot agent modes? I think it's going to go B level. That's going to go B. All right. Sora 2. Okay, we're gonna do Sora 2. I think maybe we're going B or C, y'all. I'm gonna go B for now. We might change that. I wanted to try to make these somewhat even, so I might change that. Oh, gosh. Gemini Canvas mode. That's gotta go A. It's so good. Can I put it S? Probably not. I'm looking... I've got smashers and bangers only left. All right, Cursor. I think Cursor we can go C. Extremely capable. Still really good. Oh, my gosh, this is hard. Okay, so I have five at D. All right, so I have Suno V5, ChatGPT Pulse, Runway Gen-4.5, Canva Visual Suite 2.0, and Gen Spark on D. On C, I have Replit Agent 3, Cursor 2, the DeepSeek family, Manus, and Lovable. And then on B, I have Gemini 3 Flash, Perplexity Comet, ChatGPT Atlas, Sora 2, and the Microsoft 365 Copilot agent modes. And I have a couple left that I've got to fill in. I mean, can all of these go S? Let me see. Okay, that would just be... I can do this, right? I mean, Nano Banana Pro. Gotta go S. Claude Opus 4.5. I would catch heat for putting this at A level. It's a really good model. All right, it's gotta go S. Gemini 3 Pro. Gotta go S. And y'all, for the second year in a row, the NotebookLM Studio updates are not only going S, but tool of the year. Yeah. Shout out, NotebookLM team. Tool of the Year again. It has to be. So last year, it was just the tool of the year because it was brand new in 2024. But the fact that in NotebookLM now, again, it's grounded. All right, so what that means is it's not pulling random information from its training data or the web. It's only what you give it.
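Just to make that grounding idea concrete: here's a toy Python sketch of the concept. This is purely illustrative, not how NotebookLM actually works under the hood; the keyword-matching "retrieval", the function name, and the sample sources are all made up for the example. The point is simply that a grounded system answers only from the documents you hand it, and refuses when nothing in those documents matches.

```python
# Toy sketch of "grounded" answering: respond only with text from
# user-supplied sources, never from outside knowledge.
# Illustrative only -- not NotebookLM's actual implementation.

def grounded_answer(question: str, sources: list[str]) -> str:
    """Return the first source sentence that shares a keyword with the
    question, or an explicit refusal if the sources contain nothing relevant."""
    # Crude keyword extraction: lowercase words longer than 3 chars,
    # with trailing punctuation stripped.
    keywords = {w.lower().strip("?.,") for w in question.split() if len(w) > 3}
    for doc in sources:
        for sentence in doc.split("."):
            words = {w.lower() for w in sentence.split()}
            if keywords & words:          # some overlap with the question
                return sentence.strip()   # answer comes straight from a source
    # Nothing in the user's material matched -> refuse instead of guessing.
    return "Not found in your sources."

sources = ["Suno V5 adds better instrument separation. Veo generates audio natively."]
print(grounded_answer("Which model generates audio natively?", sources))
# -> Veo generates audio natively
print(grounded_answer("Who won the 1998 World Cup?", sources))
# -> Not found in your sources.
```

A real system would use embeddings and a language model instead of keyword overlap, but the contract is the same: the answer is constrained to what you uploaded, which is why it's so hard for it to make things up.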
Which is why, if certain companies or individuals are like, oh, we're not sure about large language models: use NotebookLM, and you'll be like, oh my gosh, I'm never going to stop using this. Right. They came out with a great mobile app. You can upload screenshots in the mobile app. All the Studio advancements that they made: the ability to create unlimited Deep Dive podcasts, the different versions of Deep Dives, the customizable Studio reports. My gosh. The videos powered by Nano Banana and the slides powered by Nano Banana. My gosh. I mean, what you can create inside NotebookLM because of these 2025 updates is almost as good as Gemini, as ChatGPT, as Claude, right? It is on the same level. Right. If this was by another company... I think maybe NotebookLM doesn't get the love that it might deserve because it's a Google product, and they're like, oh, well, you have, you know, Google Gemini. Well, okay, I mean, you have NotebookLM, and that's powered by Gemini 3. You know, you have the videos and the slides with Nano Banana. It is that good. It is our tool of the year again this year. But in our newsletter, we're going to announce the people's choice, the audience choice. So I'm going to go through and count whichever one has the most votes from the audience, and we're going to give that the audience award. All right. That was a ton to go over. I'm sorry this turned into an hour-long episode, but I don't talk about AI tools a ton, so, you know, I'm just shoving it all into one episode a year. That's why it's a little longer. All right, I hope this one was helpful. Like I said, make sure to tune in. Tomorrow we're going to be starting our Start Here series. All right. People have been asking for this literally since 2023. You know, they're like, Jordan, you have hundreds of podcasts. Where do I start? And I always have to, like, answer that individually. Well, I'm like, it's January.
Everyone's trying to, you know, double down and learn more about AI. So we're going to have a Start Here series. We're going to probably release two or three a week, you know, off and on. We'll see. But, you know, we're going to release them intermittently through January and February as we get done producing these. So these probably won't be live; they'll be a little faster. All right. But these are great.