
Today on the AI Daily Brief: 15 ways I use AI, and the models I use for each. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Hello friends, quick announcements before we dive into this Long Reads weekend episode. First of all, thank you to the sponsors of today's show: KPMG, Blitzy, and Superintelligent. To get an ad-free version of the show, go to patreon.com/aidailybrief. And if you are interested in sponsoring the show and reaching this very excellent audience of executives, entrepreneurs, builders, hundreds of thousands if not millions of the most engaged and enfranchised AI doers in the world, shoot me a note at nlw@breakdown.network. This week we're doing something a little bit different for our Long Reads slash Big Ideas type of episode. I realized it's been a little while since I walked through some of my current AI workflows. I have found over and over again that one of, if not the single most useful way to help other people on their AI journey is simply to describe what you're doing with AI, how you're doing it, and which specific tools or models you're actually using. So this is not a totally comprehensive list, but it represents a lot of the stuff that I do day in and day out. And first of all, I'm actually going to start with one that isn't me personally, but is at the core of Superintelligent: Agent Readiness Audits. We use a custom voice agent powered by the OpenAI API to help companies figure out what their most relevant agent use cases are. Now, voice agents are of course at the very beginning of their mainstreaming, and over time we're going to see many different applications take advantage of this technology. But I did want to call out, since we're looking at both use cases and models, that when it comes to understanding how people are working currently, in order to give better advice about how they might use agents in their work, we are using the OpenAI API.
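The episode doesn't detail how that voice agent is wired up, but as a rough illustration, the post-interview analysis step might look something like the following. This is a hypothetical sketch, not Superintelligent's actual implementation: the prompt text, function name, and model choice are all my assumptions, and a real pipeline would add speech-to-text and text-to-speech on either side.

```python
import json

# Hypothetical system prompt for the analysis step; illustrative only,
# not the actual Superintelligent prompt.
SYSTEM_PROMPT = (
    "You are an agent-readiness analyst. Given an interview transcript, "
    "identify the team's current workflows and recommend the three agent "
    "use cases most likely to be high-value."
)

def build_analysis_request(transcript: str, model: str = "gpt-4o") -> dict:
    """Build the JSON body for a POST to the OpenAI Chat Completions
    endpoint (https://api.openai.com/v1/chat/completions)."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Interview transcript:\n\n{transcript}"},
        ],
    }

if __name__ == "__main__":
    body = build_analysis_request("We spend hours a week triaging support email...")
    print(json.dumps(body, indent=2)[:120])
```

In a real deployment this body would be sent with an `Authorization: Bearer` header carrying an OpenAI API key, with the voice layer handling audio capture and playback around it.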
As part of the prep for this episode, I also asked ChatGPT itself to go back through what it knows about me, based on its memory of its interactions with me, to put together a list of 10 ways I use AI. It came back with: podcast production and distribution, strategic content creation, enterprise readiness audits, AI agent marketplace, high-end advisory and keynotes, and children's storytelling and game design. I guess one of my ways to relax is custom Magic: The Gathering sets. It also called out market sizing and financial forecasting, marketing analytics and growth hacking, email and relationship management, and creative branding and naming. Although, interestingly, I will have something to say about naming at the end of this. It was a fairly good list, but I want to be both more specific and more generic at the same time. And so, first of all, let's talk about editing the podcast. Now, I have at this point handed over the editing of this show entirely to my team, but we do everything in Descript. Part of the value of Descript, in addition to it being an inherently social and collaborative platform, is that alongside its visual editing timeline it also has an AI suite called Underlord. There are lots of features in Underlord that are extremely useful, if simple: removing all the filler words (ahs, ums, likes, etc.); removing retakes, basically where it can tell that you've done something twice and only uses the final and clearest take. One that's really powerful is eye contact. Instead of using a teleprompter, you can use this to basically automatically change your eyes to look as though you weren't reading. And they have a bunch of other features as well. Now, if you follow me on Twitter, you might have seen me lamenting the fact that on occasion Descript will just up and drop an entire episode that I recorded.
But honestly, it's a testament to how good some of these editing features are that I put up with that happening every couple of months; because the rest of the tools are so good, it's still worth the pain. One other video editing tool that I did want to call out, however, is VEED, and, believe it or not, as a testament to how single features can drive entire usage: with VEED, you can edit and review not just at 1x but at 2x or even 3x speed. When I am editing a podcast and just want to go very quickly, this is an absolute godsend, and it's something that is not easily possible in tools like Adobe Audition. Now, one bonus for something that I should be using but that I'm not. And again, I will talk about this a little bit more towards the end. I really should be using one of the copious number of automatic clipping tools to generate video clips that can turn into short-form videos for places like TikTok, Instagram Reels, and YouTube Shorts. I don't do that currently, but I have taken some time to look across the various tools like Opus, and I do think it's a use case that will enter into, if not my flow, maybe my team's flow at some point in the next six months or so. Now, staying on the podcast theme for a minute, another daily use is descriptions. It is pretty much standard practice for me at this point to drop in an entire transcript from an episode (by the way, the transcript is auto-generated by Descript) to get the description that ends up in the show notes. This to me represents a perfect use of AI for writing, in the sense that it's highly functional. It's not about winning some prize for prose. I will say that I tend not to use the titles that they suggest. I've found over and over again that ChatGPT has a way-over-adherence to conventions like putting colons or dashes into the middle of titles, which just do not perform well in practice. It's also kind of lazy.
The best titles are ones that find a way to be a single, clear, determined statement, not have multiple sections. Now, when it comes to this copywriting-for-podcast use case, I'm using either 4o or the 4.5 research preview. I don't find a huge difference between them for this particular use case. Now, what about when it comes to longer-form writing? Well, first of all, I should note that in general, with one exception that I'll talk about in a minute, when I am doing longer-form writing, it's with Claude, not with ChatGPT; specifically, right now, Opus 4. Although in general this has been my pattern for some time, and I've just used whatever the most advanced Claude model is. Now, when it comes to posts that I'm writing for LinkedIn, or just for a general audience where I'm trying to get my ideas out, I still do not use AI for those. Basically, the gist of it is, at this stage, if I'm posting on LinkedIn, it's because something is in my craw enough that I actually want to write it myself to get it out. But there are certain types of writing that are more rote, that are more for marketing purposes, where I do integrate Claude. For example, we've recently started to do an AI enterprise weekly newsletter for companies, prospects, and partners that have engaged with the Agent Readiness Audit. And for that, I basically drop in the transcript of each of the podcasts from the previous week and give it a very, very lightweight prompt: "Craft a tight email summary of key news and stories in AI last week based on the attached transcripts. Focus on an enterprise audience." And then sometimes I give it a little bit more guidance based on the specific content of the week. So, for example, for this most recent one, I wanted to really focus on a couple of reports that I had covered rather than some of the more headline news. There's a couple of reasons why this works from an AI writing standpoint. The first is that it's appropriately aligned with the stakes of the writing.
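That newsletter workflow, several transcripts plus one lightweight standing instruction, is simple enough to sketch programmatically. This is a hypothetical illustration (the helper name and structure are mine, not from the episode); in practice the transcripts are simply pasted into the Claude app alongside the prompt.

```python
# Hypothetical sketch: assemble a weekly-newsletter prompt from episode
# transcripts, in the spirit of the workflow described above.

INSTRUCTION = (
    "Craft a tight email summary of key news and stories in AI last week "
    "based on the attached transcripts. Focus on an enterprise audience."
)

def build_newsletter_prompt(transcripts: list[str], extra_guidance: str = "") -> str:
    """Join the week's transcripts under the standing instruction, with
    optional per-week guidance appended."""
    parts = [INSTRUCTION]
    if extra_guidance:
        parts.append(f"Additional guidance for this week: {extra_guidance}")
    for i, text in enumerate(transcripts, start=1):
        parts.append(f"--- Transcript {i} ---\n{text}")
    return "\n\n".join(parts)

if __name__ == "__main__":
    prompt = build_newsletter_prompt(
        ["Episode one transcript...", "Episode two transcript..."],
        extra_guidance="Focus on the two research reports rather than headline news.",
    )
    print(prompt.splitlines()[0])
```

The resulting string could then be sent to Claude, for instance via the Anthropic Messages API (`client.messages.create(...)` in the `anthropic` SDK), with a custom writing style applied if you're working in the app instead.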
Again, I'm not going for artistry here. I'm not going for even a highly viral LinkedIn post. I'm going for something competent, informative, and interesting for an email, and Claude is totally sufficient for that. Second, because I am giving it hundreds of minutes of me talking, it's really already building off of the way that I talk and think, in a way that it might not be if it didn't have all this context. The other thing that's important to note about Claude, and another reason it tends to outperform when it comes to writing, is that you can create custom writing styles that you can come back to over and over again. These are akin to something like a Midjourney style template, but for writing instead of for images. So a couple that I've created include Tech Translator. The style summary is: deliver analytical insights through conversational and authoritative communication. Then I created one for my wife's podcast, which is a true crime show. This one is called Cheeky Crime: explore dark topics through sophisticated, witty, and playful narrative commentary. But I found that sometimes that one was a little too opinionated and a little too quirky for the subject matter. So I also created Storyteller's Lens, which dials up the journalistic tone and dials down the playfulness without losing that playfulness entirely. And so over time, as you start to perfect these, they just become a really good shortcut that, again, gives Claude an edge when it comes to a lot of longer-form writing. Now, speaking of my wife's podcast: while she writes her two-hour weekly main episodes literally long-form by hand (which is absolutely insane, but here we are), we do often do bonus posts, Patreon posts, things like that. Sometimes I'll guest; sometimes it's her doing them for her audience. And this is another area where we'll turn to Claude for long-form writing.
And again, part of what makes it suitable for this use case is that, by nature of her or my presentation on that show, a lot of what matters about this scripting is the exposition, not the specifics and the personality. In other words, it gives all of the details in a narrative sequence that makes sense, but she or I or whoever's presenting it is going to be able to fill in and give it personality by dint of actual presentation. Now, one thing that I will note, and this sort of bridges both writing and the start of a new subject, research, is that there is exactly one situation in which I will sometimes turn to O3 as the writer instead of Claude or GPT-4o or 4.5. If I've started with a Deep Research background dossier, sometimes O3 will outperform from a writing standpoint. So in this case, this was a recent bonus episode that Jesse was doing for her show. It was about one of the biggest thefts in British history, which turned out to be conducted by a group of retirees. I had Deep Research prepare a detailed background dossier, which as you can see is dozens of pages long, and then I asked it to turn that into a podcast script. And having just done all of that background research, it did a much better job of the writing than it would have in another context. Now, research is obviously one of my most frequent use cases, with ChatGPT's O3 version of Deep Research being my most general deep research tool. A couple of other types of pretty frequent research use cases for me: one is that I am always pitching something, and so I am very frequently collaborating with O3 to figure out how to pitch or frame some sort of background thing. And this is almost a hybrid of research and strategy. So, for example, recently I wrote: come up with a strong argument for how much is going to be spent building AI agents over the next five years.
And specifically, what I was looking for is the fact that if you look at a lot of the estimates out there of agent economics, they're around total spend by organizations, but that's different than what they're going to spend specifically building them, which, for a variety of reasons that I will not get into right now, is of particular interest to me. And so, as you can see, O3 spent about two minutes searching, researching, and thinking, and combining that into a coherent answer. A variation on this is market research. Recently, for example, I asked for a comprehensive list of boutique strategy consulting firms, giving it the example of Innosight as a comparison. This was a search I did with Deep Research because I wanted a bit of a background dossier to ground myself in an area that I was exploring. Interestingly, although I tend to use OpenAI's version of Deep Research for deep research, when it comes to short, sort of daily mini-research, I find myself tending more towards Perplexity. For example, if I'm ever looking for random P/E ratio research or something like that, I'm going to go to Perplexity. Today's episode is brought to you by KPMG. In today's fiercely competitive market, unlocking AI's potential could help give you a competitive edge, foster growth, and drive new value. But here's the key: you don't need an AI strategy. You need to embed AI into your overall business strategy to truly power it up. KPMG can show you how to integrate AI and AI agents into your business strategy in a way that truly works and is built on trusted AI principles and platforms. Check out real stories from KPMG to hear how AI is driving success with its clients at www.kpmg.us/ai. Again, that's www.kpmg.us/ai. This episode is brought to you by Blitzy. If you're a technology leader, here's something that probably sounds familiar.
Your organization's competitive edge is buried in legacy code that desperately needs modernization, but the resources required feel out of reach. That was the case for a global investment analysis firm. They needed to migrate 70,000 lines of complex MATLAB financial algorithms to Python, algorithms that drive investment decisions for trillions in assets. Their estimate: months of high-cost, specialized engineering work. Instead, they partnered with Blitzy. Blitzy's autonomous AI preserved mathematical precision and generated over 80% of the new code base, completing the migration with just five days of engineering time. They cut the timeline by 95% and saved 880 engineering hours. If your organization is facing similar modernization challenges, visit blitzy.com to schedule a consultation and discover how AI-powered development can transform your technical capabilities. Today's episode is brought to you by Superintelligent, specifically Agent Readiness Audits. Everyone is trying to figure out what agent use cases are going to be most impactful for their business, and the Agent Readiness Audit is the fastest and best way to do that. We use voice agents to interview your leadership and team and process all of that information to provide an Agent Readiness score, a set of insights around that score, and a set of highly actionable recommendations on both organizational gaps and high-value agent use cases that you should pursue. Once you've figured out the right use cases, you can use our marketplace to find the right vendors and partners. And what it all adds up to is a faster, better agent strategy. Check it out at besuper.ai, or email agents@besuper.ai to learn more. Now, since I touched on strategy, let's move over to strategy. If you listened to my episode about three things you can do to get better at AI right now, you will have heard that one of the things I said was to treat O3, for one week, as an actual strategic collaborator, not just as a tool.
Now, I have a strong affinity for O3, but as we'll discuss, I think any of the state-of-the-art reasoning models are going to be useful in this context as well. So how do I actually use it as a strategic collaborator in this way? Well, basically at this point I talk about pretty much every different strategic idea that I've had with O3 in some way, shape, or form, and that includes things that are very, very core to the business, which I, for obvious reasons, can't show you right now. But it's also sometimes about speculating around things that are, for the moment, just side quests. I was driving around where I live in the Hudson Valley in New York a week or two ago, and I was randomly thinking about hypotheticals, like: if I didn't have Superintelligent, how would I use the podcast to interact with startups in interesting ways? One of the things that I find really useful about dumping that into a platform like O3, even though there's no real intent behind it, is that, if you're anything like me, you can get that prickle of an idea or an intent and really get down a rabbit hole before you rip yourself back out and focus on the here and now. I find that dumping it all into O3 and talking about it a little bit honestly, in most cases, just makes it easier for me to leave it behind and focus on the really important stuff that's right in front of me. Although, of course, if some of those ideas stick, it'll be because there's something there, and having O3 or another reasoning model as a strategic collaborator can be a good way to figure that out. In addition to just new ideation, there's also current strategic decision-making, and I did want to flag that while I haven't fully integrated it yet, I am, at least for the moment, giving Grok 4 a try as a competitor in this role for at least a week or two.
I did a few tests in the first couple of days after it came out and was sufficiently impressed that I thought it was worth taking the time to parallel-process and see how it does as compared to O3. Now, speaking of pitching, one particular area of strategy-slash-writing-slash-information-design that the reasoning models are incredibly good at is stuff around pitch. So the way that I tend to use this is not for final products or final copy or anything like that, but to tighten explanations, to go from a ramble or brief outline to more robust outlines and memos. I will note that it has a little bit of an easier time with memos as opposed to decks. I think the pitch decks in the training set are a bit too formulaic as compared to the pitch decks that actually work in practice, so it has a bit of a harder time there. But the reasoning models are really good at information architecture, and I'm pretty sure that you will get value if you use them as a collaborator when it comes to pitching whatever it is you're pitching. Now, one very strong note that I have if that is a particular use case you are using these tools for: you have to assume some level of sycophancy is going to be the norm on your first attempt. So, for example, I recently asked O3 to help improve a deck outline, and improving it, quote unquote, at first just meant producing a slightly more expansive version of exactly the sequence of information that I had given it. That led me to say something to the effect of: now use your own prerogative, don't assume that the way I've architected this pitch is just right, and come back and tell me how you would change it. And invariably, that's where you actually get the better support for this particular type of use. Now, one thing that I did want to note, because I mentioned rambling: if you use an iPhone or an Apple product, you'll note just how unfathomably, offensively bad its voice recognition software is at this point.
ChatGPT's is really good, but for more general use, many, many people are now using Flow, or Wispr Flow (wisprflow.ai). I've installed it and I use it in every app where it's available, which is pretty much across the entire iPhone, and it is a total game changer; I could not recommend it more. Now, what about moving outside of LLMs? I've mostly focused on wordcel types of tasks, because that's a lot of where my day-in and day-out is. But what about some of the image generation stuff? Well, let me tell you about two different use cases there. When it comes to collateral and assets, so things like covers and episode art, Ideogram is my tool of choice. One, it has high understanding of, and high fidelity to, what you're actually trying to accomplish, which is really important when you're trying to present information in a certain way. Two, it's great with text, which, again, if you're doing cover art for a podcast episode, is incredibly important. And three, it basically does the heavy lifting of prompting for you. So, for example, my prompt for this one was: retro-futuristic, 1980s, glitchy red computers, arcade scene, cyberpunk, title "How AI Companies Are Using AI." It turned that into: a retro-futuristic digital art piece depicting a 1980s arcade scene overtaken by glitch effects. The central focus is a row of vintage red CRT computers displaying distorted interfaces, with "How AI Companies Are Using AI" prominently displayed on the larger screen in a neon green pixelated font. Static and digital noise, blah blah blah. You get the idea. It's about five times as long as my original prompt, and because it exposes it, it gives you the ability to go back and edit it and remix it. So if you like some parts of it but not others, you can generally get it to a place where you want it to be. Still, there are other use cases that I have for image generation where I'm still on Midjourney, and that's basically anything where I want just real, deep, interesting creativity.
For example, you can see I'm doing some artistic experiments here for a project where I was trying to combine Renaissance, Vitruvian Man-style, Leonardo da Vinci notebook art with modern technology. But another time that I tend to turn to Midjourney is if I'm just generating cool backgrounds for, say, presentations. I recently did a keynote that had like 150 slides and a bunch of different sort of semi-abstract styles. Honestly, when it comes to sheer fun, I still love Midjourney. It consistently produces just the most interesting and visually arresting things. Although sometimes interesting and visually arresting isn't just accomplishing what you want, which is why I think it's great to have so many of these different models at your fingertips. Another pretty frequent AI use case for me is vibe coding, specifically as a way to explore new feature ideas for Superintelligent. At this point we have a soft ban on describing new feature ideas; you just have to vibe-code it up. We found that, one, it gives the person who's coming up with that feature idea a much better ability to actually refine it and decide if they really like it before they share it with everyone else. And two, when they do share it with everyone else, it's way easier for them to just show rather than tell. Now, one thing that I did want to flag that I think LLMs are phenomenally bad at is naming. Maybe it's because people are terrible at naming things as well, but good Lord, are LLMs just absolutely, phenomenally, almost hilariously bad at naming things. They're either too cringey, or too long, or too long and too cringey. I have never had an LLM come up with a name for anything, side project, feature idea, anything, that I actually thought was even remotely usable.
So I don't know if that will ever be a real use case, but I wanted to call it out, especially because ChatGPT noted that this was a frequent use case for me; it's only frequent because of how bad it is, and I keep trying just to see if it will continue to stay that bad forever. Now, I also wanted to share one area where I think I am behind, again from that "three things you can do to be ahead" episode. While I have spent a bunch of time experimenting and playing around with automations from platforms like Lindy and n8n and Zapier, I do not currently have a set of automations in deployment, and that's absolutely stupid. You have my full permission to come absolutely castigate me if, in six months, I don't have some automated or agentic setup that takes my published content, turns it into short-form videos, and pumps it out across all the social channels in a way that is, again, completely automated, agentic, and out of my hands. It's so obvious that this is something I need to be doing, but it's a great reminder, I think, that even for someone who spends all their time on this, there is a hurdle and a barrier to getting things up and running, and a real new systems thinking that's required, which is going to run up against inertia and just normal human time pressure. But I wanted to plant that flag. It is an area where the capabilities have outstripped what I'm using them for, and that is to my and the audience's detriment. The one other note that I wanted to flag: ever since O3 came out, I have been very, very adherent to OpenAI. But I will note that Gemini just keeps getting better. The models are improving, the interfaces are improving, the tools like AI Studio are improving. Because I use Google Workspace, things like Docs and Sheets, for all my non-AI stuff, I'm finding myself more frequently experimenting with and testing the integrated features that Google puts across those tools.
And I wouldn't be surprised, again, if you dip in in six or twelve months, to see that my general Gemini usage had gone up significantly. For now, though, that's the current status. Certainly when it comes to video, if I ever do have uses, Veo 3 is the model that I'm using. But it'll be interesting to see in a few months where the balance of usage is between Gemini, Grok, Claude, and OpenAI. So friends, those are 15-ish (maybe more, I don't think less) ways that I use AI, the models that I use for each, and even some areas where I think I should be using AI more. Hopefully this is a helpful little journey, and I'm interested to see if any of these mirror or match the way that you guys are using these tools. For now, though, that is going to do it for today's AI Daily Brief. Until next time, peace.
Host: Nathaniel Whittemore (NLW)
Date: July 13, 2025
In this Long Reads weekend episode, Nathaniel Whittemore (NLW) dives deep into the multitude of ways he personally integrates AI into his daily and professional life, breaking down both specific tools and broader use cases. He not only details his own automation and content workflows but also touches on areas where current AI tools excel, fall short, or continue evolving. The episode is practical, reflective, and offers listeners a granular look at modern AI-assisted work.
Timestamps: [03:23] · [05:01–11:30] · [11:31–14:27] · [14:28–20:48] · [20:49–28:15] · [28:16–33:49] · [33:50–35:30] · [35:31–40:15] · [40:16–42:33] · [42:34–44:08] · [44:09–46:11] · [46:12–48:02]
On Accepting AI’s Flaws:
“Honestly, it’s a testament to how good some of these editing features are that I put up with [Descript] dropping an entire episode every couple of months...” [07:04]
On Forming AI Writing Habits:
“The best titles are ones that find a way to be a single, clear, determined statement, not have multiple sections.” [13:33]
On Custom Writing Styles:
“You can create custom writing styles… These are akin to something like a midjourney style template, but for writing instead of for images.” [17:45]
On Pitch Collaboration:
“You have to assume some level of sycophancy is going to be the norm on your first attempt...” [33:28]
On Apple’s Voice Recognition:
“If you use an iPhone… you'll note just how unfathomably, offensively bad its voice recognition software is at this point.” [34:18]
On AI Naming:
“Good Lord, are LLMs just absolutely, phenomenally, almost hilariously bad at naming things?” [43:14]
On Falling Behind (automation):
“You have my full permission to...castigate me if in six months I don’t have some automated...setup...” [45:07]
On the Evolving AI Landscape:
“For now though, that's the current status...But it'll be interesting to see in a few months where the balance of usage is between Gemini, Grok, Claude and OpenAI.” [47:25]
| Task / Workflow | Tool/Model(s) Used | Notable Details |
|--------------------------|----------------------------------|------------------------------------------------------|
| Voice agent audits | Custom, OpenAI API | Strategic client insights |
| Podcast editing | Descript (Underlord), VEED | AI-powered, time-saving features |
| Short clip generation | Opus, etc. (not yet used) | Aspirational, for social video |
| Writing show notes | ChatGPT 4o/4.5 | Functional, but avoids AI titles |
| Long-form copy & styles | Claude (Opus 4, latest) | Uses custom style prompts |
| Research (deep) | O3 Deep Research, ChatGPT | For backgrounders, strategic prep |
| Research (quick) | Perplexity | Daily factoids, numbers |
| Strategy prep/ideation | O3, Grok 4 | Strategic "collaborator" for ideas/outlines |
| Voice transcription | Flow / Wispr Flow | On-device speech recognition |
| Cover and asset design | Ideogram, Midjourney | Ideogram for detail/text, Midjourney for creativity |
| Coding/prototyping | In-house tools, LLMs | "Vibe coding" to demo product/feature ideas quickly |
| Naming | GPT/Claude/others (none satisfy) | Consistently disappointing results |
| Automations | Lindy, n8n, Zapier (not yet) | No full pipeline in production yet |
| Other LLMs | Gemini, Grok | Testing and expanding scope |
NLW’s episode is both a roadmap and candid reflection for anyone using or planning to integrate AI into creative and operational workflows. He’s transparent about the strengths and limits of current tools and models—and honest about simply not having it all figured out. The blend of practical insights, workflow tips, and a realistic look at what’s still hard makes this episode especially valuable for executives, content creators, and anyone trying to keep pace with AI’s rapid advance.