
Hosted by The FIR Podcast Network Everything Feed · EN

Neville and Shel dig into a provocative Harvard Business Review article that argues most marketing teams are structurally unprepared for the speed and scale that agentic AI now enables. The bottleneck, the authors contend, isn’t the technology; it’s the operating model. Neville and Shel connect the piece to conversations FIR has been having for the past year: AI as orchestration rather than automation, professionals shifting from supervisors of tasks to directors of systems, and 2026 increasingly framed as “the year of the agent.” At the center of the Harvard piece is the idea of a “brand code” — a machine-readable knowledge system that lets specialized AI agents continuously create, adapt, test, and optimize marketing in real time. Communications urgently needs its own equivalent: a “narrative code” containing executive voice profiles, message hierarchies, sensitive-topic guardrails, and escalation rules. Whoever builds it first, they warn, will inherit the agentic stack, and if marketing gets there first, comms will be stuck with a system never designed for crisis, controversy, or stakeholder complexity. The episode also includes some concrete examples and early thoughts on Hermes, Wispr Flow, and where human judgment still has to win.

Links from this episode:
Redesigning Your Marketing Organization for the Agentic Age
The Year of the Agent: What it means for the future of communications
Google Summary: The Year of the Agent: What it means for the future of communications

If you work in PR and you’re unsure how AI agents will help you, this should help. The next monthly, long-form episode of FIR will drop on Monday, May 25. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music. 
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Raw Transcript Shel: Hi, everybody, and welcome to episode number 513 of For Immediate Release. I’m Shel Holtz. Neville: I’m Neville Hobson. Over the past couple of years, we’ve heard countless conversations about how AI is changing marketing and communication. Most of those discussions tend to focus on tools — faster content creation, better personalization, workflow automation, synthetic media, analytics — all the things AI can supposedly do more quickly and at greater scale than humans. A new article in Harvard Business Review published last week takes the discussion somewhere much bigger. Its argument is not simply that AI will improve marketing productivity. Its argument is that AI may fundamentally redesign how marketing organizations themselves operate. The article is called “Redesigning Your Marketing Organization for the Agentic Age,” and the authors argue that most marketing teams are structurally unprepared for the speed and scale AI now enables. The reasoning is interesting; we’ll look into this in a minute. AI has already accelerated software engineering and product development dramatically. Products, updates, campaigns, and features are being developed and shipped much faster than before. But marketing organizations, they argue, are still largely built around sequential workflows, siloed teams, approval chains, meetings, handoffs, and coordination-heavy processes. So even when AI speeds up individual tasks, the organization itself still moves slowly. In other words, the bottleneck isn’t necessarily the technology, it’s the operating model. 
What struck me reading this article is that in many ways it feels like the continuation of conversations we’ve already been having on FIR over the past year. About a year ago, Shel demonstrated some of the early agentic AI capabilities we were beginning to see emerge — systems that could move beyond simple chatbot interactions and actually take actions across workflows, tools, and platforms. At the time, it felt experimental, slightly futuristic, and maybe just a glimpse of where things might be heading. Since then, we’ve repeatedly returned to related themes on the podcast: AI as orchestration rather than just automation, and managers becoming directors of systems rather than supervisors of tasks, to name but two. Recently, the wider communications industry has been framing 2026 as the year of the agent, a fundamental shift from generative AI, which creates content based on prompts, to agentic AI, which acts autonomously to achieve long-term goals. The rise of such autonomous agents requires a focus on agentic orchestration, with professionals acting as AI engineers who guide, manage, and audit these digital employees. As we discussed on this podcast last year, communication departments will adopt a hybrid structure where humans focus on high-level strategy and creativity while AI agents handle high-volume procedural communication tasks at machine speed. We’re already seeing a marked impact on marketing and public relations. The Harvard piece explains how companies such as HubSpot and AWS have begun putting this model into practice. They say organizations are achieving measurable gains, with marketing materials adapted up to 98 times faster, unit costs reduced by 80%, and click-through rates increased up to 17 times. Research from BCG has demonstrated these benefits at scale. Organizations embedding agentic AI into marketing workflows, the research has found, can achieve up to a threefold increase in ROI, campaign speed, and content volume. 
That’s why this Harvard article feels so interesting to me. It doesn’t contradict any earlier conversations; it complements them. It takes many of the ideas we’ve been discussing conceptually and places them inside a concrete organizational model. The authors propose something they call an agentic marketing organization — essentially a system where humans and AI agents work together continuously across multiple layers of activity. At the center of this idea is what they describe as a brand code: a machine-readable knowledge system containing brand strategy, customer insights, messaging frameworks, business rules, governance structures, and operational guidance that both people and AI systems can understand and act upon. Once that foundation exists, specialized AI agents can continuously create, adapt, test, distribute, optimize, and report on marketing activity in real time. It’s a vision of marketing that starts to look less like a department and more like an operating system. But what really caught my attention wasn’t the technology itself so much; it was the shift in the role of the marketer. Because beneath all the platform architecture and workflow diagrams is a much deeper question: if AI increasingly handles execution, what becomes the real value of marketers and communicators? The article argues that value shifts away from production and toward judgment — setting intent, evaluating outputs, interpreting signals, shaping governance, and guiding how the system evolves. And that raises some fascinating questions for communicators. But first, Shel, your demo of those early agentic capabilities was about a year ago now. As I mentioned earlier, it felt experimental and slightly futuristic then. So what’s changed since then? Shel: It feels like ancient history now. If I were to look at that, I’d probably shake my head and say, “my God, that’s pretty primitive.” The way it worked was, it took a screenshot of every site it visited and then acted on the screenshot. 
So it was a very slow and tedious process. In the video that I shared, I edited out all of the waiting time while it worked through all of this, because otherwise it showed you everything. And those days are long gone. That was clearly a demo. I don’t remember which of the AI models offered that — I think it was Anthropic — but it was just tedious and not all that functional. It did what it was supposed to do in the end, which was to create a spreadsheet with the information I’d asked for. It was some open-source spreadsheet that it used. I ran a similar exercise just last week using Claude Cowork. And this was for a piece somebody in our sustainability department wrote. It was about two projects that had achieved world-first certifications for zero waste, which is kind of a big deal in the construction industry. The industry is one of the biggest contributors to landfills and the like. So I’m looking to place this article. And what I did was, I told Claude Cowork that I wanted four subage...
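The “brand code” and “narrative code” discussed in this episode are, at bottom, structured, machine-readable data that both people and AI agents can act on. As a purely hypothetical sketch of what a communications “narrative code” might look like (every field name, value, and rule below is our own illustration, not drawn from the HBR article or the episode):

```python
# Hypothetical sketch only: a "narrative code" expressed as machine-readable
# data. All names and values here are illustrative assumptions.

NARRATIVE_CODE = {
    "executive_voice_profiles": {
        "ceo": {"tone": "direct", "avoids": ["jargon", "superlatives"]},
    },
    "message_hierarchy": [
        "safety first",            # the message that always leads
        "sustainability record",
        "product milestones",
    ],
    # Topics an autonomous agent must never publish on without review
    "sensitive_topics": {"layoffs", "litigation", "executive pay"},
    # condition -> who must sign off before anything goes out
    "escalation_rules": {
        "mentions_sensitive_topic": "head_of_comms",
        "crisis_in_progress": "ceo_office",
    },
}

def requires_human_review(draft_topics: set) -> bool:
    """True if a draft touches any sensitive topic, meaning an agent
    should escalate to a human rather than publish autonomously."""
    return bool(draft_topics & NARRATIVE_CODE["sensitive_topics"])

print(requires_human_review({"product milestones"}))     # False
print(requires_human_review({"layoffs", "q3 results"}))  # True
```

The particular schema doesn’t matter; the point is that guardrails and escalation paths live in data an agent can check before acting, rather than in a PDF only humans read.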

Most agency owners think they’re doing their team a favor when they quietly absorb the painful, tedious, or time-consuming work. They’re likely not. In this episode, Chip Griffin and Gini Dietrich look at the sacrifices owners make on behalf of their teams and why those sacrifices often create more problems than they solve. This isn’t about the occasional tactical sacrifice, it’s about the systemic ones: the conscious decisions to absorb entire categories of work because you’ve decided your team would find them too difficult, too unpleasant, or too much of a burden. Gini admits she’s guilty of it herself, sharing that a new COO sat her down with a list of tasks she’d been handling and told her she shouldn’t be doing any of them. The jobs weren’t glamorous, but they weren’t the owner’s job either. Chip extends this into two areas where owner sacrifice tends to do the most damage: new business development, where owners keep proposals and pitches entirely to themselves thinking they’re protecting team time, and org chart design, where flat structures are usually not a deliberate choice but the result of owners absorbing management responsibilities no one else wanted. Both patterns block team growth and overload the owner at the same time. Gini describes a practice she returns to every quarter, sorting her task list into three buckets — things only she can do, things she enjoys but probably doesn’t need to do, and things she absolutely should not be doing. The third list gets delegated immediately. Chip puts it like this: for everything on your plate, ask yourself why you are the one doing it. If there isn’t a good answer, stop doing it. [read the transcript] The post ALP 304: Stop making sacrifices your agency doesn’t need you to make appeared first on FIR Podcast Network.

In this episode, Chip is joined by Karen Swim and Michelle Kane of the That Solo Life Podcast for part one of a special crossover episode exploring the practical effects of AI on agencies, solos, and the communications industry. Karen and Michelle share their view that AI is no longer optional. Practitioners who resist it risk falling behind, while those who embrace it can dramatically expand their capabilities. The conversation goes beyond basic content creation, exploring how AI can elevate strategy, reinvigorate professional skills, and free up time for deeper, more creative thinking. Chip, Karen, and Michelle also discuss the importance of treating AI like a new employee — providing context, voice, and guidance to get the best results — and address common concerns around ethics, privacy, and copyright. They encourage communicators who haven’t revisited these tools recently to dive back in, as the technology has advanced rapidly and shows no signs of slowing down. [read the transcript] The post CWC 113: How AI impacts PR agencies and solos (featuring Karen Swim and Michelle Kane) appeared first on FIR Podcast Network.

In this episode, Chip and Gini open with the analogy of Canadian doubles, the tennis format where two players face one. If your team outnumbers the prospect, you don’t project strength, you project awkwardness. But the conversation goes well beyond headcount. A little preparation goes a long way in making sure every seat on your side is justified. You’ll want to match expertise to whoever the prospect brought, which requires actually knowing who’s coming. Gini described a recent pitch where she reverse-engineered her attendee list based entirely on who was showing up from the prospect’s side. That’s not logistics, it’s strategy. And whoever is in the room during the pitch needs to be the person doing the work after the contract is signed — not a handoff to a team with no context and no ownership. Both Chip and Gini are emphatic that the meeting itself should not feel rehearsed like a school play. Agency owners who show up prepared to have a real conversation before pitching solutions will stand out. Harder for many owners is knowing when to keep quiet. Interjecting while a team member gives an imperfect answer undermines their confidence, signals to the prospect they can’t be trusted, and makes them rely on you. The debrief after the meeting is where the coaching happens. [read the transcript] The post ALP 303: Preparing for your agency’s group presentations and pitches appeared first on FIR Podcast Network.

While there’s no evidence that business leaders are outsourcing the most important decisions to AI, there are reports that many executives are relying on AI to make many — in fact, most — of their decisions. The implications for communications could be huge.

Links from this episode:
AI Is Changing More Than Work, It’s Rewiring Executive Decision-Making
Inside the C-suite: How AI is quietly reshaping executive decisions
AI and the future of human decision making
C-Suite Executives Dominate AI Decision-Making as Strategy Becomes Priority
Decision-Making by Consensus Doesn’t Work in the AI Era
How AI Is Transforming the Way Executives Lead
Leadership at a Turning Point: How AI Is Shaping Executive Decision-Making
Can AI Make Executive Decisions?

The next monthly, long-form episode of FIR will drop on Monday, May 25. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Raw Transcript Neville: Hi everybody, and welcome to episode 512 of For Immediate Release. I’m Neville Hobson. Shel: And I’m Shel Holtz. The inspiration for this week’s report came from a post Brian Solis wrote recently. In it, he argued that AI isn’t just changing work — it’s rewiring how executives make decisions. Once Brian put that in my head, the trend started standing out in other things I was seeing. I’ll summarize the numbers and what they mean for communicators right after this. The numbers Brian pulled together are honestly alarming. 
A Confluent study of UK private sector leaders found that 62% of executives now use AI to make the majority of their decisions. That’s not some — it’s the majority. 70% say they second-guess themselves when AI disagrees with them, and 46% say they rely on AI more than their own colleagues. On the U.S. side, SAP’s research found that 44% of C-suite executives would reverse a decision they had already planned to make based on AI input. 74% place more confidence in AI advice than in the advice they get from family and friends. Meanwhile, McKinsey reports that 92% of companies plan to increase their AI investment over the next three years, but only 1% — 1 percent — describe themselves as mature in deployment. The money to pay for AI and a sort of blind trust in its abilities are racing ahead of the internal competence to use it. Now, I want to be clear before I go on. I’m not anti-AI, Neville — you know this. Anyone who listens to the show knows I’ve been beating the drum for AI as a tool for communicators and for business in general for a long time. AI as a thinking partner, a research assistant, a stress-tester for ideas — that’s enormously valuable. But there’s a meaningful difference between using AI to inform a decision and using AI to make the decision. And Brian puts this well: AI is becoming the new executive influencer. The problem is that it hasn’t earned that role, at least not yet. So let’s talk about what this means for those of us in communication, because the implications are everywhere. Start with employee trust. The implicit deal between an organization and its workforce is that the people at the top got there because they have judgment and experience and pattern recognition that the rest of us don’t have — or at least they’ve been able to employ it really well and get noticed by the people who promote them into those leadership positions. That’s the story leadership tells, and it’s the story employees buy into. 
Now imagine the all-hands where the CEO announces a major restructuring, and somewhere in the Q&A, or worse, on Blind or Reddit a week later, it comes out that the decision was essentially handed to a chatbot. What happens to confidence in leadership? What happens to engagement? What happens to the social contract that says, follow me because I know where we’re going? You can’t credibly ask people to bring their full selves to work, as they say, while you’re outsourcing your own judgment to a language model. Now extend that to external stakeholders — investors, customers, regulators, the board. They’re paying, and in a lot of cases they’re paying a lot, for executive judgment. If a strategic call goes sideways — and you know that happens — the explanation that the AI suggested it isn’t going to land well. It’s going to sound like an abdication, because it is an abdication. And from a crisis communication standpoint, “we trusted the algorithm” is one of the worst defenses I can imagine. I don’t expect that anybody’s going to say that, but it doesn’t mean it’s not going to come out. Just ask anyone who’s worked an aviation incident, a financial services failure, or a healthcare AI misfire. Imagine the reaction when the affected stakeholder hears, either from the leader directly or through a third party, “Well, that’s the decision the AI told me to make.” And there’s a third implication that I think communicators need to surface inside our organizations: the erosion of dissent. I find this particularly interesting and disturbing. Confluent found that 65% of leaders say decision-making has become less collaborative since adopting AI. The Harvard Business Review just ran a piece arguing that consensus is dead in the AI era. That may be — but debate isn’t consensus. Debate is the friction that exposes bad assumptions. It’s what didn’t happen at that auto manufacturer — I think it was Volkswagen with their emissions standards. 
They didn’t have the psychological safety to dissent from the decisions being made. In this case, we’re not even looking forward at the leadership level in some cases. If AI is pushing aside the colleague who would have pushed back, whatever process your organization had for dissent just stops functioning. And when dissent dies, so does the early warning system communicators rely on to spot reputational risks before they get out of control. So what do we do? A few things. We push for governance — and if you already have a governance model, push to revisit it. Your governance needs clear declarations of which decisions AI informs versus which ones it actually makes. We coach our executives to talk publicly about how they actually use AI, with appropriate humility, before the question gets asked for them. We build the internal narrative that human accountability is non-negotiable, no matter how good the model gets. And we keep reminding leadership that machine confidence isn’t the same as strategic clarity. Brian’s right: AI is a test of leadership. It’s also, increasingly, a test of communication. Neville? Neville: Well, just to make my position clear on this, too — I’ve been a drum-beater for AI as a research assistant, as a useful tool, since GPT first came out. The initial kind of hysterical enthusiasm was tempered over time, but I use the tool every single day in what I do for work, or for pleasure for that matter. So it’s something I believe strongly in. But I’ve got this, how could you say, in the back of my mind always — this thought that I don’t accept blindly anything the AI assistant tells me. If I’m researching something, for instance, I’m going to make a recommendation about something, let’s say, or I’m writing a report or even something relatively simple like an article for the blog. If I felt I wanted to say this and it’s telling me that, that’s a simple decision: I’m either going to follow it or not. 
Typically when that happens, I’ll ask it questions to further that angle. But this is something else, what Brian writes about. And The Register — I’ve read their piece — tempered with a bit of hysteria, it seems. I mean, thi...

The entry-level talent pipeline is being entirely restructured. If agency owners don’t figure out what role a young professional actually plays in an AI-assisted agency, they won’t just struggle to hire today. They’ll have no one to promote in five years. In this episode, Chip and Gini dig into what’s happening with entry-level hiring right now, and why the answer can’t be to stop hiring junior staff altogether. The conversation covers why the old model of routine work is gone, what needs to replace it, and why agencies that don’t solve this problem soon are setting themselves up for failure. The episode opens with an observation from Gini: every presentation she gives to college classes lately surfaces the same anxiety from students. Nobody’s hiring at the entry level because AI can handle the work those roles used to cover — news releases, media lists, social drafts, basic research. How can they find jobs today, and get the on-the-job training they need to move forward in their careers? Chip frames the problem as a junction of circumstances: the rise of AI, economic uncertainty, and a higher education system that hasn’t evolved with the workforce reality. Colleges discouraging AI use while their graduates are about to enter workplaces built around it is, as he puts it, the same mistake as banning calculators in math class. The students coming in aren’t unprepared because they’re less capable, they’re underprepared because the institutions that trained them weren’t keeping up with the times. Chip and Gini agree that entry-level hires aren’t obsolete, but the role must change. Instead of being the lowest rung of the ladder, new professionals need to come in already functioning like managers — just managing AI tools and processes instead of people. That requires more on-the-job training, better-documented processes and SOPs, and a genuine commitment to learning and development that most agencies still don’t have. There’s more than one upside, though. 
Better documentation and SOPs don’t just help entry-level hires do their jobs — they make your agency more efficient, reduce owner dependency, and, for those who want to sell someday, significantly improve the value of the business. Their closing argument is not to avoid entry-level hiring just because the old version of the role is antiquated. Rethink what the role is, invest in the systems that support it, and get comfortable giving junior people responsibilities that would have felt premature five years ago. The alternative is a mid-level talent shortage that will be very hard to fix. [read the transcript] The post ALP 302: Rethink entry-level hiring to succeed in the AI era appeared first on FIR Podcast Network.

The policies are clear and well communicated. The guardrails are firmly established. Every last employee has been trained. And someone in your organization still releases a public document littered with AI-generated errors. What went wrong has nothing to do with technology and everything to do with internal culture and accountability. In this long-form April episode, Neville and Shel examine a company that seemingly took all the right steps yet still had to apologize publicly for a court filing riddled with hallucinated citations. Also in this episode: Gartner predicts that, by 2028, 75% of employees will rely on an internal chatbot to get the news that matters to them. How will internal communicators need to rethink their role to ensure everyone knows and understands what they should in order to achieve strategic alignment? One of the promises AI executives have made is a leveling of the playing field, giving lower-level employees the opportunity to excel and rise through the ranks. According to one new study, exactly the opposite has been happening. PR hacks have been accelerating the pace at which they churn out press releases and pitches. That has raised the bar for what it takes to earn a journalist’s trust (and journalists do still rely on press releases, according to a survey of reporters). Apple’s announcement of its CEO transition offers communicators a clinic on how to announce a new top executive. “Slopaganda” from Iran has proven remarkably effective, which means it is undoubtedly coming for your company or clients soon. In his Tech Report, Dan York outlines big changes coming with WordPress’s next update.

Links from this episode:
Elite law firm Sullivan & Cromwell admits to AI ‘hallucinations’
Sullivan & Cromwell law firm apologizes for AI ‘hallucinations’ in court filing
Letter re: In re Prince Global Holdings Limited, et al., No. 26-10769
Sullivan & Cromwell Just Put Every Firm on Notice. And S&C Advises OpenAI on Safe AI Use. 
An AI Screw-Up By… Sullivan & Cromwell?
LinkedIn search results for Sullivan & Cromwell AI
AI, Trust, and the Reinvention of Corporate Communications: Inside Gartner’s 2026 Playbook
Does your intranet still matter in an AI-first workplace?
Chatbots in Internal Communications: Game-Changing Wins
How AI Chatbots Are Redefining Internal Communications?
The future of internal communication: How AI is changing the workplace
High earners race ahead on AI as workplace divide widens
Sarah O’Connor: One early view about AI was that it would share…
How AI is forcing journalists and PR to work smarter, not louder
What journalists want from AI-assisted PR pitches
Journalists Trust Human-Written Pitches Over AI
Journalists Reject AI-Generated Press Releases As Untrustworthy
What communicators can learn from Apple’s CEO transition announcement
Tim Cook to become Apple Executive Chairman; John Ternus to become Apple CEO
Iran’s Meme War Against Trump Ushers In a Future of ‘Slopaganda’
Iran’s ‘slopaganda’ team uses AI Legos to flood social media
Slopaganda wars: how and why the US and Iran are flooding the zone with viral AI-generated noise
Slopaganda Comes of Age
Alberta separatist leader unconcerned about influence of YouTube ‘slopaganda’ videos

Links from Dan York’s Tech Report:
WordPress 7.0 Source of Truth – Gutenberg Times
WordPress 7.0: Real-Time Collaboration Arrives in Core
WordPress 7.0 Release Party Updated Schedule

The next monthly, long-form episode of FIR will drop on Monday, May 25. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. 
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Raw Transcript Shel: Hi everybody and welcome to episode number 511 of For Immediate Release. This is our long-form episode for April 2026. I’m Shel Holtz in Concord, California. Neville: And I’m Neville Hobson, Somerset in England. We have six great stories to discuss and share with you this month and to delight and entertain you, we hope. Topics range from the consequences of not following company guidance on AI use, chatbots, employee use, and the workplace divide, using AI to work smarter, what we learned from Apple’s CEO transition announcement, and the future of slopaganda. Lovely word, that one, Shel. Plus, Dan York’s tech report. But first, let’s begin with a recap of the episodes we’ve published over the past month and some listener comments. In the long-form episode 506 for March, published on the 23rd of March, our lead story was on Anthropic’s view that AI will destroy the billable hour, a topic we’ve talked about before on FIR. We also explored digital monitoring of employee work, Gartner’s prediction that PR budgets will double next year, the escalating misinformation crisis, and Cloudflare’s prediction that bot traffic will exceed human traffic by 2027. That’s next year, by the way. On LinkedIn, you’ll find no shortage of posts stridently deriding the notion that anyone should ever use AI to write them. In FIR 507 on the 30th of March, we roundly rejected that idea and looked at the actual trends in using AI for writing. And that prompted some comments from listeners, right? Shel: Yes, it did. Starting with Susan Gosselin, who’s actually with a client of mine back in my consulting days. She writes, there are many types of writing that I think AI is great for: interpersonal communications, summaries, et cetera. But for marketing writing, that’s another thing. 
There are issues of copyright to consider and what you’re feeding into the channel....

The next two “Circle of Fellows” episodes will offer something different from our panels of the last several years. We welcome Dianne Chase, a veteran communicator and former IABC chair, to the discussion. While Dianne is not a Fellow, she did recruit six Fellows to write all but one of the chapters for her new book, The 7Cs of the New Communication Compass. (Dianne wrote the seventh chapter.) The book, which has five stars on Amazon, “offers both a guiding framework and a practical roadmap for mastering strategic communication in complex environments,” according to its description. “If you are a leader, manager, educator, public official, influencer, or anyone striving to make an impact, this book is an essential and thought-provoking read. It distills communication excellence to foster collaborative results and organizational effectiveness.” The book’s Cs include Collaboration, Connection, Compassion, Cohesion, Community, Congruency, and Calibration. For the first of these conversations, Dianne will join Shel Holtz, Ginger Homan, Jane Mitchell, and Brad Whitworth to discuss Connection (Brad’s chapter), Compassion (Dianne’s chapter), Congruency (Jane’s chapter), and Calibration (Ginger’s chapter). Join us for this very different “Circle” at noon EDT on Thursday, April 23. Participants in the live stream can ask questions and share comments, observations, and experiences, and become part of the discussion. If you’re not able to join us, you can listen to the audio podcast later or watch the YouTube replay. About the panel: Dianne Chase helps organizations and leaders harness the power of strategic communication to navigate crises, build trust, and drive positive change. With over two decades of experience in journalism and corporate communications, Dianne has developed a unique approach for training and consulting clients that combines crisis management expertise with the art and science of business storytelling. 
Dianne is an award-winning media, journalism, and strategic communication professional with profound expertise in communication disciplines, most notably crisis communication, issues and reputation management, media training, and executive communication. She is one of two people in the world accredited in the powerful GENIUS Business Storytelling methodology, created by international communications thought leader Gabrielle Dolan. She is former chair of the International Association of Business Communicators, and author/editor of The 7 Cs of The New Communication Compass. Ginger Homan, ABC, SCMP, IABC Fellow, counsels senior leaders seeking to bring out the best in their people and brands. Her award-winning communication model for driving transformation has been used to change behaviors, align cultures, and build thriving communities worldwide. Her work with senior communication professionals has enabled them to align their department goals with business goals, achieve measurable results, and expand their influence. Founder of Zia Communication, she is a seasoned speaker, coach, and workshop facilitator. Her clients include Walmart, the Walmart Foundation, the Walton Family Foundation, T.D. Williamson, CITGO Petroleum, Phillips Seminary, and MOSAIC. IABC, PRSA, and SMPS have honored Ginger’s work on the local, regional, and international levels. A past chair of IABC, she has received three IABC International Chair’s Awards for her volunteer leadership, and she is a recipient of the Leadership Tulsa Paragon Award for work in her local community. Jane Mitchell’s career began at the BBC in London on live TV programs. She moved on to producing award-winning films and videos for public- and private-sector organizations and to developing groundbreaking employee engagement programs. 
Since 2006, when she formed her own consultancy, she has guided organizations (some of which have experienced cultural trauma) in embedding values and ethics by understanding culture and leadership, and their link to high-performing, sustainable organizations. She has worked with Top 100 companies worldwide and is a regular conference speaker. Jane has been a member of IABC since 2008 and has served on local, regional, and international IABC boards. In 2021, she was Chair of the (virtual) World Conference, and she became an IABC Fellow in 2022. She is based in the UK and now spends the majority of her professional time as a Non-Exec on company boards and Employee-Owned Trusts. Brad Whitworth, ABC, SCMP, IABC Fellow, is a pre-eminent thought leader, lecturer, and author in organizational communication. He has led global internal and executive communication programs at HP, Cisco, Hitachi, PeopleSoft, AAA, and Micro Focus. He holds an MBA from Santa Clara University and undergraduate degrees in journalism and speech from the University of Missouri. Brad lives in California wine country, where he grows Pinot Noir on his property. A former broadcaster, Brad has made more than 300 presentations to executives, communicators, and university classes worldwide. Brad is a past board chairman of the International Association of Business Communicators and a Fellow of the association. He is one of the authors of The IABC Handbook of Organizational Communication and the new IABC Guide for Practical Business Communication: A Global Standard Primer. He chaired the Global Communication Certification Council in 2021. The post Circle of Fellows #127: The 7 Cs of The New Communication Compass, Part I appeared first on FIR Podcast Network.

Employees have long found ways to use software tools to get the job done, even when those tools are not approved. It’s called Shadow IT, but ever since generative artificial intelligence hit the scene in 2022, employees have adopted a new version: Shadow AI. The company approves Microsoft Copilot, but employees opt to use their smartphones or personal laptops, along with their personal accounts with ChatGPT, Gemini, Claude, Midjourney, or whatever best suits their needs. For most companies, this is a problem that needs to be addressed through repeated policy announcements and vigorous crackdowns. One company, though, took a different approach. In this short, midweek FIR episode, Neville and Shel outline what the company did and how communicators might advocate for a version of this approach to aid AI adoption and speed up productivity gains. Links from this episode: The Hidden Demand for AI Inside Your Company Shadow AI Threat Grows Inside Enterprises as BlackFog Research Finds 60% of Employees Would Take Risks to Meet Deadlines FIR #419: Is Shadow AI an Evil Lurking in the Heart of Your Company? The Rise of Shadow AI is a Double-Edged Sword for Corporate Innovation The next monthly, long-form episode of FIR will drop on Monday, April 27. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Raw Transcript Shel Holtz: Hi everybody, and welcome to episode number 510 of For Immediate Release. I’m Shel Holtz. 
Neville Hobson: And I’m Neville Hobson. There’s a quiet tension playing out inside many organizations right now. On one side you have leadership teams, IT, legal, and compliance, all trying to put structure, governance, and control around how artificial intelligence is used at work. On the other side you have employees who’ve already moved on. They’re not waiting for official tools. They’re not sitting through pilot programs. They’re not asking permission. They’re opening ChatGPT on their phones. They’re using Claude in a browser tab. They’re experimenting quietly, often invisibly, finding ways to make their work faster, easier, and sometimes better. And in many organizations, this shadow AI behavior is still being treated as a problem — something to restrict, monitor, or shut down. It’s a topic Shel and I discussed on this very podcast in episode 419 nearly two years ago, and it hasn’t gone away. Neville Hobson: In fact, recent data suggests it’s accelerating. A study last November by BlackFog and Sapio Research found that nearly half of employees surveyed in the UK and US are using unsanctioned AI tools. Even more striking, 60% said they would take security risks with those tools if it meant meeting a deadline. So this isn’t fringe behavior — it’s become normal. An article in the Harvard Business Review this month argues that instead of treating unauthorized AI use as a compliance issue, organizations should see it as a signal — a sign that people are already finding value in these tools, even if the organization hasn’t caught up. We’ll explore that idea in just a moment. Neville Hobson: The article calls this the hidden demand for AI inside your company. And when you look at it through that lens, the picture changes quite dramatically. Because instead of asking, “How do we stop this?” you start asking, “What are we missing?” The piece goes further than theory. 
It looks at what one organization actually did when it recognized this dynamic: BBVA, a Spanish multinational financial services company with more than 125,000 employees. Rather than clamping down on shadow AI use, they moved quickly to provide a secure enterprise environment. But more importantly, they didn’t try to control everything from the center. They took a different approach. They identified and empowered what they call “champions” and “wizards” — the people already experimenting, already curious, already building things. They created a network, a community of practice, a way for ideas, use cases, and practical solutions to spread peer to peer across the organization. Neville Hobson: And the results, at least as reported, are striking: thousands of employees actively using AI tools, thousands of internally created applications, and measurable time savings of hours per person every week. But perhaps the most interesting part isn’t the numbers — it’s the philosophy behind it. The idea that successful AI adoption doesn’t start with a perfectly designed top-down strategy. It starts by recognizing that innovation is already happening, just not where leadership expects it. So the question becomes: do you try to control that energy, or do you find a way to harness it? And that opens up a much broader conversation, one that goes well beyond technology. It touches on leadership, trust, and culture — on how change actually happens inside organizations. And, importantly for communicators, on how you surface, legitimize, and guide behavior that may already be happening under the radar. Neville Hobson: Because if employees are already using these tools — and most evidence suggests they are — then silence or restriction alone isn’t really a strategy; it’s a gap. So in this conversation, we want to explore that gap. 
What shadow AI really tells us about organizations today, whether the BBVA approach is something others can realistically replicate, and where the risks still sit, because they have not disappeared. And we should be clear: BBVA may be an outlier. It’s a highly data-mature organization with strong leadership alignment. Many organizations don’t have that foundation. So the question isn’t just whether this works — it’s whether it can work anywhere else. And what that means for the future of work, and for the role communicators play in shaping that future. Shel? Shel Holtz: Well, a few thoughts, starting with the fact that BBVA has the financial resources to provide a secure environment for those tools that employees are using. There are many organizations whose IT budgets are razor thin and don’t have those resources, so they would need to figure something else out. But I think there’s a caution here worth raising. The numbers from BlackFog are real, even if the framing from the Harvard Business Review is optimistic: 34% of employees using free versions of tools when paid, approved versions exist; 58% of unsanctioned users on free tiers with no enterprise protections. The reframing from threat to signal doesn’t eliminate the exfiltration risk — it reframes how we need to respond to it. Shel Holtz: Communicators should be careful not to let the BBVA-style narrative become an excuse to ignore governance. The right frame is: harness the demand, don’t suppress it, and build the governance at the same time. Employees using unsanctioned tools and putting secure data and company information into them — that’s a governance risk, and I don’t think we can ignore it. I mean, I think what BBVA did is great, and I think they baked it into some governance while looking at a new approach they could afford to take. But for many organizations, governance is still a requirement. Neville Hobson: Well, I agree. It’s important, and it’s not to be ignored by any means. 
I think, Shel, you fleshed out a little bit the survey that I mentioned, which is actually useful to have that level of detail. But the big question for me is: if this is the picture in many organizations, according to that survey — compared to data previously — this is getting worse, or rather, it’s happening more frequently. People are just going ahead and using what works for them as opposed to what’s the official thing. What is that a symptom of? Maybe a lack of trust? It’s probably a mix of things. And to me, the communicator’s role here seems to be to try and help people on the one hand understand what the tools can do for them, and on the other hand to help the organization understand that we need to address this issue. People aren’t using the approved ones. They’re doing stuff on their own, and that isn’t good. Neville Hobson: You mentioned security risks. The Harvard article goes into some detail about that, as indeed do the people ...

When bad actors use AI tools to clone a musician’s voice and upload synthetic versions of their songs, they can then file copyright claims against the original artist’s content — and win, at least initially. That’s because the systems platforms use to validate copyright claims are automated and configured to treat whoever files first as the rightful holder. The result: musicians like Murphy Campbell, a folk artist from North Carolina, lose both revenue and control of their own creative identity. The same mechanism works just as well against any organization that publishes audio or video content online. In this midweek episode, Shel Holtz and Neville Hobson break down how the scam works, why it matters to communicators, and what you should be doing right now — before an incident forces your hand. Links from this episode: AI Cloned Her Voice, Then Claimed Her Songs ‘This Is Not Me’: Inside the AI Scams Driving Musicians Crazy A Folk Musician Became a Target for AI Fakes and a Copyright Troll A traditional musician became a victim of AI imitations and a copyright aggressor ‘AI slop’: Emily Portman and musicians on the mystery of fraudsters releasing songs in their name The next monthly, long-form episode of FIR will drop on Monday, April 27. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Raw Transcript Neville Hobson: Hi everyone and welcome to For Immediate Release, this is episode 509. I’m Neville Hobson. Shel Holtz: And I’m Shel Holtz. 
And today we’re going to talk about something else that communicators need to worry about. I think we need to develop a worry list for communicators. This one starts with a tale about a folk singer from the mountains of Western North Carolina. She’s named Murphy Campbell. She plays banjo and dulcimer and records old Appalachian ballads, some of them written by her own distant relatives. And she posts videos of herself performing in the woods. She has about 7,800 monthly listeners on Spotify. And she is, as Shelly Palmer put it in a recent column, exactly the kind of artist the copyright system was designed to protect. In January, some of her fans started messaging her about songs on her Spotify profile that she had never uploaded. Someone had taken her YouTube performances, run them through AI voice cloning tools, and posted synthetic versions of her songs under her name on streaming platforms. These fake tracks, not to put too fine a point on it, were really bad. Her dulcimer sounded like — and these were her words — a warbled metallic mess. Her voice had been deepened and auto-tuned into what she called a bro country singer. But here’s where it gets interesting for those of us in communications, because that’s not the end of the story. It didn’t stop at impersonation. Whoever uploaded the fakes through a legitimate music distributor called Vydia (V-Y-D-I-A) then filed copyright claims against Campbell’s original YouTube videos — the very videos the AI had been trained on. Because YouTube doesn’t use humans to review initial copyright claims, Campbell stopped earning revenue on her own content. That revenue started going to the person who had filed the copyright claims. She described herself as being in a weird limbo where “I’m telling robots to take down music that robots made.” Shelly Palmer called this a reverse copyright scam, and he confirmed, speaking to other content creators off the record, that this is more common than he might have believed. 
Now, I know what you’re thinking — music streaming platforms, artists, what does this have to do with me? And the answer is everything. Because the mechanism that elbowed Murphy Campbell out of earning royalties for her own music will work just as well against any organization that publishes content on platforms with automated enforcement systems. That is virtually every organization that has a YouTube channel, a podcast feed, or any kind of public video or audio presence. So here’s the structural problem as Palmer frames it. The copyright system we have was built on a foundational assumption that the first entity to register a claim is the rightful owner. That assumption held when human creativity was the bottleneck. It breaks completely when AI can generate a synthetic version of any content in seconds using any voice. Think about what your organization puts out there publicly — executive speeches, earnings calls, thought leadership videos, branded audio, training content, podcasts, content marketing pieces. Every one of these is a potential training data set for someone who wants to clone your voice, your leaders’ voices, and then upload a synthetic version through a low-cost distributor. We’re talking about something that costs $25 to $90 a year. Then they file a claim against your legitimate content before a human ever reviews it. Neville Hobson: (pause) Shel Holtz: That means the system is going to see them as the first one to file that claim and assume they are the legitimate copyright holder. Now, Rolling Stone confirmed that this isn’t an isolated case. Paul Bender, Veronica Swift, Grace Mitchell — these are just a few of the artists who have faced the same attack. One musician even ran an experiment he called Operation Clown Dump, uploading fake content under his colleagues’ names across platforms. His success rate was 100%. So what do communicators need to do? First, audit your public content footprint. Do it now, before an incident forces you to. 
Know what you’ve published, where it lives, and what revenue or visibility is attached to it. Second — and here’s something that’s new for a lot of communicators — register your copyrights. Formal registration is the prerequisite for meaningful legal recourse in the United States. Third, build a rapid response protocol for platform disputes. The organizations that survived these attacks fastest were the ones that knew whom to call and what to say. And fourth, have this conversation with your legal team today, not after something goes wrong. Murphy Campbell eventually got Vydia to withdraw its claims, but only after her story went viral. Most organizations won’t have that option. Your story won’t go viral. The bad actor doesn’t need to win permanently — they just need the automated system to act before you do. And that is the lesson, and it’s one we’d better learn from musicians before we have to learn it the hard way. Neville Hobson: Extraordinary, isn’t it, Shel? I guess you could call it a new phenomenon, only in the sense of the speed with which this can be done. I must admit, I’m astonished that the system is such that the first person to file the copyright claim is assigned ownership. Maybe that’s similar here in the UK — every jurisdiction is different, of course — but that’s rather unsettling. It obviously goes back to a time when people weren’t exploiting the syste...