Transcript
A (0:00)
I've been teasing for a while that I made some major podcasting workflow changes on the Audacity to Podcast and asked you to guess. Now here's what I did. Thank you for joining me for the Audacity to Podcast. I'm Daniel J. Lewis. I've been podcasting since 2007, and I started the Audacity to Podcast in 2010. I started the episodes doing them in a particular way, and somewhere along the way, and I don't know exactly when this changed, I changed how I prepared for the episodes, which affected how I recorded the episodes, which affected how I then produced the episodes. More recently, I've made a major change back to an original way I was doing it. I've also made some other workflow changes that I wanted to share with you, because these could inspire your own podcasting workflow changes and improvements. I have been teasing this for a little while, asking you to guess what it was, and a few of my listeners did guess correctly what kind of changes I've made since late 2025. And even though I'm recording this in 2026, I wanted to give enough time to really see how I liked these changes, how they were working for me, and if anyone noticed, especially if anyone complained if they could tell what the changes were. So let's start with number one: chapters and transcripts. But not just chapters and transcripts in general, because I was doing that before. More specifically, chapters and transcripts with Pod Chapters, which is my own product. Please try it out on your next episode over at podchapters.com. Before Pod Chapters, I was spending 30 to 60 minutes across about five different tools, even on two different computers, to do what Pod Chapters now does for me in about 30 to 60 seconds. It creates the transcript for me, or I could bring my own transcript if I really wanted to, as long as it's VTT or SRT.
But it generates the transcript for me, and it does a much better and faster job than my other computer does. My main computer can't do transcripts very well; it's very slow at transcripts, so I'd have to go over to my other computer to have it do the transcript for me and then bring that back to my main computer. It was cumbersome, and there were so many apps to use. I know five isn't a big number, but for such a simple thing, going between five different apps for 30 to 60 minutes, and some of the apps were clunky, slow, or just didn't work very well, it was a massive frustration for me. So I started building Pod Chapters because I wanted to improve my own podcasting workflow. When I first started creating it, I wasn't planning to turn it into a product, but once I started making it work, I realized, wait a minute, other podcasters could actually benefit from this, too. And the podcasters who have signed up for Pod Chapters are loving it, and they're using it episode after episode. It's saving them so much time, too. And it saves me so much time. What used to take me 30 to 60 minutes now takes 30 to 60 seconds, and that's not an exaggeration. It is so fast for me to use. I'm loving it, even just for myself, and other podcasters are loving it, too. So I'd love for you to try Pod Chapters on your next episode. There's a free trial, so that gives you enough time to try it on your next episode. You can bring your own transcript to save on the AI credits, or you can have Pod Chapters transcribe your episode for you in the proper format. Then it can generate the chapters for you, and you can download the transcript it provides, or you can use the transcripts or the chapters hosted by Pod Chapters. And that's going to provide an ability to do some cool stuff in the future.
Some little tricks I have up my sleeve. And one of the tricks I had up my sleeve that's been a significant productivity and workflow boost to me with Pod Chapters is something I've wanted for years. I've asked other developers and apps to try to build in this feature, and no one's done it, so I finally did it myself. And that is that when I record my episodes, I know my outline, and I want my outline to be my chapters. So I know what I want my chapters to be; I just don't know where they should be in my audio. So I wanted a tool that would allow me to paste my own outline without timestamps, and it would find where that outline exists in my episode and then turn that outline into chapters. Pod Chapters now does that, and it does it so fast and so well. Sometimes I just go in and tweak by half a second or so, just to be more precise. But from dropping my episode into Pod Chapters to downloading the episode back again, fully chaptered, transcribed, and ID3-tagged, all of that is about 30 to 60 seconds of work. I love it. This has been a huge boost for me and for other podcasters, too. So I'd love for you to try it. Try it free, in fact, over at podchapters.com. And number two is the bigger change, the thing that I've been hinting at for months because I wanted to see if anyone noticed what was different and could actually guess exactly what I had changed. And that is, I went back to outlines instead of scripts. I don't know exactly when I did this along the way of the Audacity to Podcast, but I basically started scripting my episodes. I would write my show notes ahead of time as an article, and that would help me formulate my thoughts, figure out my transitions, get all my links together, all of that stuff. So it really helped with the preparation of the content. And that way, when I pressed stop on my recording, I already had my show notes done, and they were in article format. And I felt like
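To illustrate the idea behind matching a timestamp-free outline to the audio (this is only a rough sketch of the concept, not Pod Chapters' actual code, and the cue and outline data here are made up for the example), you can fuzzy-match each outline title against transcript cues and take the best-matching cue's start time as the chapter start:

```python
from difflib import SequenceMatcher

# Each cue is (start_seconds, text), as you'd get from parsing a VTT/SRT file.
cues = [
    (0.0,   "thank you for joining me for the show"),
    (42.5,  "let's start with chapters and transcripts"),
    (95.0,  "number two is going back to outlines"),
    (160.0, "number three is my episode articles"),
]

# The outline has titles but no timestamps.
outline = [
    "Chapters and transcripts",
    "Back to outlines",
    "Episode articles",
]

def best_match_time(title, cues, search_from=0.0):
    """Return the start time of the cue whose text best matches the title,
    only considering cues at or after `search_from` so chapters stay in order."""
    best_time, best_score = None, 0.0
    for start, text in cues:
        if start < search_from:
            continue
        score = SequenceMatcher(None, title.lower(), text.lower()).ratio()
        if score > best_score:
            best_time, best_score = start, score
    return best_time

chapters = []
last = 0.0
for title in outline:
    t = best_match_time(title, cues, search_from=last)
    if t is not None:
        chapters.append((t, title))
        last = t

print(chapters)
```

A real tool would match against longer sliding windows of transcript text and handle titles that are paraphrased rather than spoken verbatim, but the ordering constraint (each chapter must start after the previous one) is the key trick that keeps the matches sensible.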
I presented the content very well because I essentially rehearsed it by writing it down. And since I generally speak the way I write, and generally write the way I speak, they were very close to each other. But that's also where a little bit of laziness and perfectionism, somehow at the same time, snuck their way in. It started getting cumbersome to write the script ahead of time, and there were a lot of ideas I had where I just didn't want to write the script. Or even just in recording the script, I would be talking along and stumble over a way I say something, and because I had it written down, I wanted to say it that particular way, so I'd have to go back and say it again. And that actually created more edit points than if I had just spoken from the outlines. And part of the reason I kept doing this, even after being on a very long hiatus from the Audacity to Podcast, is, getting personal here, that my confidence took a really big kick from multiple bad things that have happened in my life that I don't need to go into here. But it really made me doubt my ability to communicate. I even remember, when my unwanted and unfortunate divorce started, that first Podcast Movement. I was basically a walking zombie. Walking around, I still kind of wanted to go, especially because I wanted to shine the light on my friend Dave Jackson, because I got to introduce him as he was being inducted into the Podcast Hall of Fame. So I was really excited to do that and support my friend that way. But the rest of the time, I was like a zombie. I couldn't talk very well. I stumbled over basic concepts, and I was, like, broken. And that carried over for a while. Even when the emotions rebuilt, still the confidence was gone, and the ability to communicate as smoothly. And I'm not saying this to claim I'm some master communicator or anything like that, but the confidence was gone. The feeling of "I know what I want to say and how I want to say it" was gone.
And so I started relying on the script to help me as I was getting back into podcasting. It was then easy to have the script in front of me, look into a teleprompter, and read that script pretty much verbatim. I would occasionally go off script, but since I generally write the way I speak, and generally speak the way I write, it didn't sound too much like it was scripted, though it did still sound somewhat like it. I wish I had actually written down at what episode I changed. I think it was around episode 410 of the Audacity to Podcast where I decided I'm going to go back to outlines instead of a script. For one thing, I was starting to feel more confident, and I was starting to feel like the script was confining me somewhat. I felt like I had the energy again to kind of ad-lib, but not completely ad-lib. It's still prepared, but more off the cuff, more dynamic, more engaging at a human level, like I'm having a conversation with you. And I felt like the script was starting to limit that, and I wanted to get back into that, because that's how I was when I started the Audacity to Podcast, not how I was when I started the Ramen Noodle, my first podcast. That one was totally scripted, and the first nine episodes in two years took so long to do because I was scripting them to perfection. But that aside, switching back to outlines, I could feel the energy was different. And that's what I heard from several of my listeners. A couple of them did guess that I was maybe not using a script anymore. But what all of them who guessed said is, "Your energy sounds different. You sound more passionate, more energetic, like there's more of something of you behind the episodes." And that's true, because it is me. Now, I am still using a teleprompter, like I promoted in my episode about teleprompters a few episodes ago, but now all that's on the teleprompter in front of me is my outline, maybe a couple of reminders
here and there of certain things I want to remember, like an episode number, or a URL, or a particular point I want to make, or maybe a quotation I do want to read. But I'm looking at just my outline as I'm actually looking directly into the camera, and I see my outline. And I like being back to outlines instead of scripts because it reminds me so much of my passion for public speaking, which was something else I missed during those several years of hiatus. I enjoy public speaking. I've enjoyed it since I was a teenager, when I started public speaking, even though I was super nervous back then. I still get nervous when I publicly speak, but it's something I really enjoy. And this feels like I'm back into that passion again, and I have more energy. What's really been interesting to me to discover is that my editor, John Buchinis, has said that he doesn't have to edit me as much now that I'm talking from an outline instead of from a script. And I know one reason for that is that I don't have the verbatim thing in front of me, so I don't have to worry about reading it perfectly or getting my timing just right for how the sentences connect together. Also, when I'm speaking kind of off the cuff, or ad-libbing, or working from an outline, really, if I start going the wrong direction in a sentence, instead of having to stop, figure out what I wanted to say, and start over again with a different sentence, sometimes I just roll with it and take the sentence in a different direction. So I really like this. And if you've been doing your podcast episodes from scripts, maybe you've been doing it long enough that you're ready to cut that umbilical cord, and you could start recording your episodes without a script. That doesn't mean you don't have notes. That doesn't mean you don't plan what you want to talk about. But give it a try if you haven't yet. Maybe it will boost your confidence to try it and see, wow, this actually is fun.
I feel totally different. And I feel totally different as well when I'm talking from a script instead of from the outline. And right there, I'll give a little side note: right there is where I just did one of those things where I meant to say I feel a lot different talking from an outline, but I used the word script first, so I just ran with it, switched how I was going to say it, and fixed it by continuing on with the sentence. And that kind of thing is how we communicate in real life. And I feel like, even though I don't see you, I'm communicating at a more personable level with you. Do you agree? Do you feel like the episodes sound or feel different? Do you feel maybe more connected to me based simply on the way I've been communicating differently in the last several episodes? I'd love to hear from you. Send me feedback at podcastfeedback.com/audacity, or reach out on pretty much any social network as theDanielJLewis. One of the other reasons that I made this change is for my personal schedule, because I have John Buchinis editing my episodes for me. That has been one of the best investments for me in podcasting, having someone else do that, because I do not enjoy editing my own episodes. That's why my first podcast only had nine episodes in the first two years of its existence. When I have to write the script before I can press record, that means it takes me longer to press record, which means it takes me longer to get that episode to John to edit. So if I can get the episode to him quicker, he can edit it at a more convenient time. And I'm still working on my personal schedule to try to give more margin for things and do things at a consistent time each week. But now that I can talk from an outline instead of having to write the whole script ahead of time, I can jump into the episode quicker. But then what about the article that goes along with it?
Or, as many of us would call it, the show notes? Well, that's number three: my episode articles. I've always tried to take the approach of writing them like an article, or maybe kind of like a blog post, because there are a lot of personal pronouns in there, the me, the my, the I, and stuff like that. I've started using AI for this a lot more now. Please hear me out. I am still not a fan of using AI to create the content for you. You create your own content, but you can use AI on your creativity to repurpose, to reformat, to restructure, to do things with that. Pod Chapters, that's what it does. It takes the transcript of you and your co-hosts speaking and does cool stuff with that, turning your spoken content, through your transcript, into chapters. That's AI doing that. So it's not making AI slop; it's making stuff from your original thought and creativity. And that's what I think AI should be used for: not to create things for us, which would make really artificial things, but to do stuff with our own creativity, what we put our own brain cells into making. So I've started using AI to turn my transcript and my outline into the articles and blog posts, but not just simply giving it my transcript and saying, "Turn this into an article." I do more beyond that. And part of this has been built with OpenClaw and continues to be managed by OpenClaw. Or maybe at some point I might switch to Hermes agent. And by the way, I'm still looking for your ideas of how you use any kind of agent like that: OpenClaw, Hermes, Paperclip, anything like that, where you can run software locally to do things on your computer using an AI agent. I'm going to do an episode about different ways you can use that in podcasting in the future. So if you want that episode to come out, send me your ideas or even just your thoughts, suggestions, or questions that I can cover in that episode. Go to podcastfeedback.com/audacity for that.
So, because of what I've been doing with OpenClaw, I've discovered new ways that I could make this better. The first thing was that I took the transcripts from all of my latest episodes, in fact, all of the episodes that have transcripts, which at this point is, I think, maybe more than 50 episodes. All of those transcripts combined together would make a document that would be way too big to feed into any AI engine out there, an LLM, a large language model. It's not that large of a language model to hold that much content in it. So if you hear people talking about "just put all of your transcripts into an AI and have it do something with that," well, it's going to miss some of that content, because these things have certain input limits. Instead, what I had it do, using OpenClaw, is systematically go through every episode's transcript and analyze how I talk, recognizing that, yes, many of those older transcripts are me basically reading a script, but still, the script was written the way I write, which is very close to the way I talk. So it went through episode after episode, one at a time, analyzing how I talk: certain figures of speech I tend to use, certain phrases I repeat over and over, my patterns, things like that. Just the way that I communicate, as well as my tone, my language, and the reading level of how I communicate, whether I'm communicating at a collegiate level or at a third-grade level or anything like that. It analyzes all of that, episode after episode, and it builds out a voice document, or a tone-of-voice document, that describes how I communicate. And currently I have OpenClaw, though maybe in the future Hermes agent or maybe something else, we'll see, because I might switch things around, set up so that every time I publish an episode, it goes and looks at the transcript and updates that tone-of-voice document with any new patterns that it recognizes.
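The shape of that incremental approach, process one episode at a time and fold what you learn into a running document, rather than feeding everything to the model at once, can be sketched in a few lines. This is only an illustration of the pattern, not my actual OpenClaw setup: the sample transcripts are invented, and a simple phrase counter stands in for the LLM's analysis step.

```python
from collections import Counter

# Invented sample transcripts standing in for real episode transcripts.
episodes = [
    "so I'd love for you to try this on your next episode",
    "so I'd love for you to hear the difference in energy",
    "give it a try on your next episode and see how it feels",
]

voice_doc = Counter()  # the running "tone of voice" summary, carried forward

def update_voice_doc(voice_doc, transcript):
    """Analyze ONE transcript and fold new patterns into the running summary.
    In the real workflow, an agent would prompt an LLM at this step; here a
    bigram (two-word phrase) counter stands in for that analysis."""
    words = transcript.lower().split()
    voice_doc.update(" ".join(pair) for pair in zip(words, words[1:]))
    return voice_doc

# One episode at a time, so no single step exceeds the model's input limit.
for transcript in episodes:
    update_voice_doc(voice_doc, transcript)

# Phrases repeated across episodes surface as recurring patterns.
print(voice_doc.most_common(3))
```

The point of the pattern is that only the compact summary, never the full back catalog, needs to fit in the model's context on any single step, which is why it scales past 50 episodes where a one-shot "paste everything in" approach silently truncates.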
And by now, most of the time, it doesn't see anything that's all that different. It's adding just an occasional note here or there to recognize a certain phrase that I repeat sometimes, or maybe a new pattern that it hasn't detected before, but it's not adding a whole lot more. But the point is, now I've got this tone-of-voice document. I take that document, I combine it with my transcript and with my outline, and I ask the AI to use all of this together to turn my transcript into an article, put in my voice, in my tone of voice, from my first-person perspective. And the results are really good. They're very, very close to what I would have written myself. I don't know these days how effective the systems are that try to detect what is AI and what is not. If you dare look at my content and say, "Oh, you have an em dash. You're using AI because there's an em dash." Hey, I've been using em dashes since the '90s. I still remember the code from back when I was on a Windows computer: hold down Alt and, on the number pad, type 0151. That's an em dash. I've been doing that since the '90s. I used em dashes before they were uncool. I guess maybe they were never cool. So I know how to use an em dash, and just because you see an em dash in any of my content does not mean that it was written with an AI. But now, the way that this is working for me is great, because as a single dad, homeschooling my son and running my own business, my time is very valuable. And I really don't want to spend it on stuff like rewriting my content into an article format. That's the kind of thing that AI is great at doing, because it is my content, not the AI's content. It's not AI slop; it's my slop. But I wouldn't even call it slop. It's my content that the AI is then restructuring. And not just restructuring on its own.
It's restructuring with the guidance of my tone of voice, how I speak, in addition to having the actual outline that I used, so it knows not to make a whole bunch of sub-points that didn't exist or give me a whole new outline. It recognizes my outline, just like Pod Chapters does, and it then uses my tone of voice in the writing and uses my content. It is simply reformatting my content into an article. And I think it does a much better job than these systems that automatically take your transcript and turn it into articles or blog posts, because those things are missing the tone of voice. They're missing some of these other things that I've learned how to do with the AI. And the tool I'm using for this, by the way, is not something I've made for myself yet. I've thought about maybe making something like this, but then again, there are so many other tools that try to do something like this that it would be an education issue for another product: trying to explain to people why my tool does things radically differently than others and why they should use my tool instead. So what I'm using is Magai. It's my favorite AI toolbox. I am a paying customer of Magai. I'm also an affiliate, so if you visit theaudacitytopodcast.com/magai, that's M-A-G-A-I, that is my affiliate link, and I do earn a commission if you sign up through there. And I do highly recommend Magai, because it is a super toolbox of AI tools. You've got GPT in there, you've got Opus, you've got Gemini, you've got DeepSeek, you've got MiniMax, you've got Kimi, you've got all of these language models, as well as image and video models. So these are the three major changes I made to my podcasting workflow, and it just feels so much better to me. Number one, I'm using Pod Chapters for chapters and transcripts. Number two, I'm back to speaking from outlines instead of using scripts. And number three, I'm making the episode articles with my outlines and a specially trained AI.
If you'd like to check out these links, like to Pod Chapters or to Magai, then please visit the notes, a simple tap or swipe away, or click the link from the chapters. And now that I've given you some of the guts and taught you some of the tools, it's time for you to go start and grow, and maybe even consider changing your podcasting workflow for your own podcast, for passion and profit. I'm Daniel J. Lewis from theaudacitytopodcast.com. Send me your feedback or podcasting questions at podcastfeedback.com/audacity, and thanks for listening.
