
Hosted by Srini and David

Hosts: Srini Annamaraju & David Royle. Guest: Ravi Ramchandran.

Welcome to episode 8. AI agents are getting easier to build. That’s the exciting bit. The risky bit is that organisations can now create weak, badly governed automations before leadership has worked out what “good” actually looks like.

In this episode, Ravi joins Srini and David to pull the conversation out of buzzword-land and into real work. He walks through a practical example of building an agent that turns meeting transcripts into status reports, then digs into what matters underneath: prompt discipline, guardrails, safe experimentation, risk metrics, and why handing people tools without changing operating practice is asking for trouble.

The conversation moves from macro AI noise to enterprise reality. How should leaders think about the 70-20-10 split of routine, experimental, and visionary work? Where does human friction still belong? And how do you encourage innovation without creating a quiet flood of low-quality AI output across the firm?

What we cover
Macro AI reality check - Why the sensible middle matters more than the hype-or-panic cycle.
Productivity is starting to show up - Early signs of measurable uplift are emerging, even if the landing is still messy.
The 70-20-10 work model - How to cut routine work and create more room for experimentation and higher-order thinking.
Innovation becomes everybody’s job - The barrier to building has dropped so far that innovation can’t stay in a corporate side room.
A live agent example - Ravi demonstrates how meeting transcripts can be turned into weekly status reporting.
Why prompts are not enough - One decent output is not the same as a repeatable capability.
Risk metrics for the AI era - Traditional productivity measures are no longer enough.
A seven-day build plan - Ravi shares a practical way to identify, scope, and build useful agents.

Chapters
AI noise vs real enterprise adoption
Why productivity gains are starting to matter
The 70-20-10 model for redesigning work
Innovation becomes everybody’s business
Live demo: agent for weekly status reports
Prompting, grounding, and hallucination risk
Guardrails, policy, and engineering practice
Risk metrics and trust in production
A seven-day framework for useful agents

Top-5 Takeaways
Tools alone do not transform organisations
Agents need boundaries, not vibes
AI risk is now operational risk
Safe experimentation needs leadership air cover
Frameworks beat random enthusiasm

Who it’s for
Enterprise leaders in all functions interested in AI adoption.

Help Spread the Word
Enjoyed the episode? Follow us!

Template Takeaways
Ravi has kindly shared the two templates he walked us through for general open access. Please feel free to download them from this Google Drive folder: https://drive.google.com/drive/folders/1yKGryaEQ4lM8hLSf1il3jrqbZj4XgHrt?usp=sharing
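To make the “prompts are not enough” point concrete, here is a minimal sketch of the transcript-to-status-report idea: a guarded prompt template plus a crude grounding check on the agent’s output. The function names, the template wording, and the keyword-overlap heuristic are our illustration, not Ravi’s actual templates, and the LLM call itself is deliberately left out.

```python
# Illustrative sketch of a "meeting transcript -> status report" agent.
# The prompt text and helper names are hypothetical, for illustration only.

STATUS_PROMPT = """You are a delivery PM. Using ONLY the meeting transcript
below, draft a weekly status report with sections: Progress, Risks, Next steps.
If something is not in the transcript, write "Not discussed" rather than guess.

Transcript:
{transcript}
"""

def build_status_prompt(transcript: str) -> str:
    """Fill the guarded template; the actual LLM call is out of scope here."""
    return STATUS_PROMPT.format(transcript=transcript)

def flag_ungrounded(report_lines, transcript: str):
    """Crude grounding check: flag report lines whose key terms never appear
    in the transcript. A stand-in for a real eval, not a substitute for one."""
    source = transcript.lower()
    flagged = []
    for line in report_lines:
        terms = [w.strip(".,:;") for w in line.lower().split() if len(w) > 4]
        if terms and not any(t in source for t in terms):
            flagged.append(line)
    return flagged
```

The point of the second function is the episode’s theme in miniature: one decent output is not a capability, but a repeatable check you run on every output starts to be one.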

Hosts: Srini Annamaraju & David Royle.

“Evals are the weak link in enterprise AI adoption.” And we say it like it is in our Maven cohort Lightning Lesson. Enrol here or see the recording - or join the waitlist for the paid 4-part course (tba): https://shorturl.at/lA9ig

This episode is a proper grilling on AI Evals: what they are, why boards should care, and why “ship it now, eval it later” is how you end up with a quiet disaster. We also do a quick sweep on vendors going more “enterprise-native” (less benchmark theatre, more workflow reality).

What we cover
Enterprise AI news: vendors shifting from benchmarks to enterprise workflows
OpenAI’s Enterprise report highlights UiPath as the “plug-in hybrid” of automation: deterministic RPA meets GenAI via connectors (and why that blend might win)
What evals actually are: accuracy, citations, groundedness, hallucinations
Vendor reality: some push AI first and worry about evals later, others oversell eval tooling. Error analysis still matters
Evals as the connective tissue between value, risk, and operations. Proactive, not a post-mortem after the horses have bolted
The EDSO “four hats” operating model (Echo, Delta, Sigma, Omega) and why boards need the Omega translation layer
Maturity and scaling: small firms can fuse hats, even one-person pods for bounded scopes
Agentic future: “checker agents”, Delta agents writing eval harnesses, humans steering fleets of agents
Why SMEs lag, and how eval expectations will percolate through supply chains

Chapters
00:02 Intro: Episode 7, cold UK afternoon, messy middle of enterprise AI
00:56 AI news: enterprise context is the new battleground
02:45 OpenAI Enterprise report headlines
10:16 UiPath, hybrid automation, and the “plug-in hybrid” analogy
12:53 The grilling starts: what are evals?
17:02 Is AI risk being exaggerated to sell governance tools?
19:45 Evals as connective tissue, and why proactive matters
21:55 The EDSO roles and what “good” looks like
25:21 Maturity levels and how smaller firms scope it
26:58 Checker agents and agentic operating models
28:58 Business case problem: cost vs avoided disaster
32:14 Evals in SMEs and supply-chain pressure
33:26 Close: “survived the grilling”

Takeaways
Evals are not paperwork. They’re how you keep the value chain connected to operations without risk blowing up later.
Don’t let vendors sell you “tooling-as-a-substitute-for-thinking.” You still need human error analysis and clear accountability.
Treat EDSO as hats, not headcount. Start bounded, prove value, then scale.
Evals is becoming a career lane (think “AI eval controller” the way finance has controllers).
The agentic world will add “checker agents” and automated harness-writing, but humans still steer the system.

Who it’s for
CIOs, CDOs, CAIOs, Heads of Risk, and anyone trying to ship enterprise AI without quietly lighting their control environment on fire. Also, anyone building a real career edge around AI trust and operational quality.
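To show what an eval suite looks like at its absolute simplest, here is a toy harness that scores answers against their sources for two of the dimensions discussed in the episode: citation presence and a crude groundedness proxy. The function names and thresholds are our invention; production harnesses use judge models, richer rubrics, and human error analysis on top.

```python
# Toy eval harness: score each (answer, source) pair, then aggregate pass
# rates across a suite - the kind of number a board can actually read.
# All names and thresholds here are illustrative, not a real product's API.
import re

def eval_case(answer: str, source: str) -> dict:
    """Return pass/fail signals for one answer/source pair."""
    has_citation = bool(re.search(r"\[\d+\]", answer))  # expects [1]-style cites
    # Groundedness proxy: share of substantive answer words found in the source.
    ans_words = {w.lower().strip(".,") for w in answer.split() if len(w) > 3}
    src_words = {w.lower().strip(".,") for w in source.split()}
    overlap = len(ans_words & src_words) / max(len(ans_words), 1)
    return {"has_citation": has_citation, "grounded": overlap >= 0.5}

def run_evals(cases) -> dict:
    """Aggregate pass rates across the whole suite."""
    results = [eval_case(c["answer"], c["source"]) for c in cases]
    n = len(results)
    return {
        "citation_rate": sum(r["has_citation"] for r in results) / n,
        "grounded_rate": sum(r["grounded"] for r in results) / n,
    }
```

Even this toy version makes the episode’s “proactive, not post-mortem” point: the suite runs before shipping, and a falling grounded_rate is a drift signal, not an incident report.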

Hosts: Srini Annamaraju & David Royle.

“The AI bubble is the wrong fear.” The real threat sits inside your own walls: shadow AI you don’t see, boards that confuse risk aversion with risk management, and leaders trying to govern a technology they don’t actually understand.

We unpack why mid-market boards are exposed, how shadow AI reveals the truth about how your org really works, and what an actually realistic 12-month AI plan looks like. And yes - why people, not models, are now the biggest AI risk vector.

The conversation revolves around a recent paper that David authored; a link to the post with the details is here.

What we cover
Bubble noise vs fundamentals - Valuations swing wildly, but enterprise AI maturity rises daily. We explain why the bubble has nothing to do with the technology reshaping your org.
Shadow AI as diagnosis - It’s not a tooling problem but a symptom of mismatched expectations.
Boards: from passive listeners to owners - Why literacy is step zero, and why chairs need to move fast.
Risk aversion trap - The boards that “get it” flip from “should we?” to “how quickly, safely, and visibly can we?”
90-day governance playbook - Inventory → Validate → Govern.
Top-down vs bottom-up AI - How grassroots use cases and board-led operating models collide.
12-month reality check - You won’t be AI-first in a year. But you can be an AI-literate, AI-safe, AI-enabled organisation in 12 months.
Explainability anxiety - Why boards demand transparency from AI that they never asked of spreadsheets or humans.
The uncomfortable truth - The biggest AI risk isn’t the model. It’s your people.
Evals preview - Why audits, trust contracts, drift checks, and forward-deployed evaluators will soon be board-level concerns.

Chapters
AI bubble vs enterprise fundamentals
Shadow AI as a symptom
Boards falling behind
Risk aversion vs risk management
90-day governance plan
A realistic 12-month AI horizon
The real AI risk: people
Intro to enterprise evals

Takeaways
Shadow AI is a mirror - it reveals gaps in culture, process, and leadership direction, not tooling.
Boards must lead, not observe - active literacy and ownership are key.
Governance is the stabiliser - inventories, validations, guardrails, and oversight reduce drift and exposure.
Explainability is contextual - set boundaries, not magic expectations.
People are the attack surface - don’t miss non-malicious misuse.
12 months = foundations - literacy, safety, and one high-value use case per function. That’s the win.

Who it’s for
Board members, CEOs, COOs, CIOs, CROs, and mid-market operators needing a grounded, real-world view of AI risk, governance, and organisational maturity.

Help Spread the Word
Enjoyed the episode? Follow the show, leave a review, and share with a colleague grappling with shadow AI, governance gaps, or board-level AI decisions. Want to join as a guest or sponsor a future episode? Get in touch!

Hosts: Srini Annamaraju & David Royle.

“AI kills jobs” is the wrong headline. The real story is structural: org pyramids flatten into diamonds, managers run fleets of agents, SMEs unlock backlogs without hiring sprees, and skills go modular with micro-credentials. We break down what changes now - and how to lead it without face-planting.

What we cover
Jobs vs. roles: Why the entry-level layer thins, the manager layer thickens, and how to redesign spans of control when agents do the doing.
Agents on a spectrum: Start with human-in-the-loop, graduate to AgentOps. Where to set autonomy today, what to monitor, and how to keep audits, drift checks, and safety rails sane.
Backlog > headcount: Use AI to attack the work you never had people for - deterministic, high-volume tasks that finally move the needle.
Operational resilience: Outages and dependency chains aren’t hypotheticals. We outline layered BCP/DR for an agentic stack so one failure doesn’t cascade.
Early-career paradox: Apprenticeships still matter - how to select, coach, and rotate juniors in a world with fewer traditional entry roles.
Skills that rise: Cognitive prompting, judgment, people leadership - and why short, role-tied micro-credentials beat semester-long generalities.
SME timing & tactics: Where mid-market buyers actually are on the curve, what to build vs. buy, and how to avoid “pilot purgatory.”

Chapters
Jobs headline vs ground truth
From pyramid to diamond orgs
Agents, autonomy, and HITL → AgentOps
Managing hybrid teams (humans + agents)
Resilience playbook for outages and dependencies
Early-career design: apprenticeships, reverse mentoring
Micro-credentials and fast upskilling
What SMEs should do this quarter

Takeaways
Jobs aren’t vanishing; roles are morphing. Plan for fewer juniors, more AI-enabled managers, and explicit oversight of agent fleets.
Governance is the unlock. Treat agents like teammates with performance records, audits, and clear escalation paths.
Resilience is strategy. Design for failure before agents touch critical workflows.
Upskill in sprints. Tie micro-credentials to roles, not buzzwords.

Who it’s for
Operators, CTOs/CIOs, and line leaders who need practical steps to reshape teams, govern agentic workflows, and build real resilience - especially in SMEs.

Help Spread the Word
Enjoyed the episode? Follow the show, leave a quick review, and share with a colleague wrestling with agent governance or workforce design. Interested in joining as a guest or sponsoring a future episode? Get in touch.
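The human-in-the-loop → AgentOps spectrum can be sketched as a simple autonomy gate: the agent acts alone on low-risk, reversible tasks and escalates everything else to a person. The class, threshold, and risk scores below are invented for illustration; real deployments would plug in their own risk scoring and audit trail.

```python
# Minimal sketch of a HITL autonomy gate for agent actions.
# The AgentAction fields and the 0.3 threshold are hypothetical examples.
from dataclasses import dataclass

@dataclass
class AgentAction:
    description: str
    risk: float        # 0.0 (safe) .. 1.0 (dangerous), from your own scoring
    reversible: bool   # can a human cleanly undo this later?

def decide(action: AgentAction, autonomy_threshold: float = 0.3) -> str:
    """Return 'auto' for autonomous execution, 'human' for escalation.
    Irreversible actions always escalate, whatever their risk score."""
    if not action.reversible:
        return "human"
    return "auto" if action.risk <= autonomy_threshold else "human"
```

Graduating from HITL toward AgentOps then becomes a measurable dial: raise the threshold only as the agent’s audit record earns it, which is what “treat agents like teammates with performance records” looks like in code.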

In this new Enterprise AI Field Notes deep dive, Srini Annamaraju (aka ‘the tech guy’) and David Royle (‘the business guy’) take the story past design into delivery - from the Target Operating Model (TOM) to the everyday reality of running AI inside the enterprise.

Through the lens of a real bank’s AI copilot rollout (name changed to “Albion Bank”), they map how real transformation happens inside the Enterprise AI Honeycomb - a connected system of data, models, patterns, platforms, and guardrails that must all work in harmony.

💡 What we cover:
Why most AI “pilots” stall before production - and how to stop the fade
How data decisions shape every downstream fork in the journey
What “brains, behaviour, and nervous system” really mean in AI design
How to build hybrid platforms that stay compliant and fast
What it takes to shift from ad-hoc prompting to disciplined LLMOps
Why security, governance, and economics are the body’s immune system and heartbeat

Srini breaks down the technical scaffold - how the ten cells of the honeycomb connect to deliver measurable ROI. David probes rigorously from the business side - questioning trade-offs, accountability, and real-world friction. Together they turn AI from a keynote fantasy into a hard-nosed operating reality.

AI dazzles in demos but stalls in the enterprise. In this deep dive - featuring a new rapid-fire questions section from Srini Annamaraju, the resident tech strategy expert on the No Effing AIdea podcast - David Royle reveals why, and how a smarter Target Operating Model (T.O.M.) can finally bridge the gap between ambition and adoption. This episode connects TOM blueprints to the people dimension: the rise of AI workflow architects, the tension between agents and humans, and the messy middle where governance meets innovation.

💡 What we cover:
Why enterprises confuse AI projects for AI infrastructure
How to design a T.O.M. that balances guardrails and greenlights
The rise of new ‘hot’ AI roles - and which ones will fade fast
What happens when test kitchens meet board-level control
Why scaling AI isn’t a tech problem - it’s an operating model one

David brings creative clarity and a rare mix of strategic and operational design chops. Srini brings decades of enterprise GTM and technology experience. Together, they unpack what it really takes to make AI run the enterprise - not just impress it.

Episode 2 – Enterprise AI Field Notes: Traps, Hard Truths, Hallucinations, Shadow AI, and the 3 AI Rooms

Hosts Srini Annamaraju and David Royle are back for episode two - and yes, the feedback is in. Some said we were a bit too serious last time. We’ll try not to become a Sunday love songs show, but we’re working on upping the “gags per minute.”

This week’s conversation covers:
Listener reach & feedback: Almost 100 plays already, with listeners tuning in from the UK, US, India, and even Slovenia. The appetite is real for discussions that go beyond hype and get into the messy middle of enterprise AI.
Event notes from Big Data London: A buzzing show, but still very tech-heavy. We debate whether AI conversations are stuck in the IT lane, and why that’s a problem when the real impact is business-wide.
NBER study on ChatGPT usage: 700M weekly users. Surprisingly, 70% of usage is personal rather than work, with a heavy skew toward under-26s. We unpack what that means for adoption inside enterprises.
US tech investment in the UK: Nvidia and OpenAI committing eye-watering sums (hundreds of billions over time). A rare bit of good economic news for the UK, with national implications for jobs, productivity, and independence from US/China dominance.

Enterprise field news:
Citi experimenting with agentic AI for wealth advisors, using Claude and Gemini inside secure workspaces.
FT analysis showing CEOs hype AI on earnings calls, but get risk-heavy and muted in regulatory filings.
JLR cyberattack fallout: £3.5B revenue hit, no cyber insurance in place, and knock-on effects on suppliers and the supply chain.

The “three rooms” where AI decisions get made:
C-suite (value, governance, risk)
Technical teams (architecture, data quality, safe design)
Operations (ongoing management, compliance, usage quality)

Traps to avoid:
The Whac-a-Mole Trap - hallucinations never disappear, they just reduce.
The Origami Trap - clever prompts aren’t a moat; without guardrails, they fold fast.
The IT-Only Trap - AI left to technologists will fail; business P&L owners need to lead.
The Corporate DNA Trap - over-automating risks erasing what makes your org unique.

Shadow AI is real: Even if companies ban AI tools, staff use them on personal devices. Risks around leakage and compliance multiply.

We close with a look ahead:
How frontier model labs (OpenAI, Cohere, Mistral, etc.) are approaching enterprise go-to-market.
Real use cases from our own client work - what’s working, what’s not.

Next steps for listeners: Got topics you’d like us to cover? Message us on LinkedIn. The more specific, the better - we’ll dig in and bring field notes to the next episode.

Episode 1 – No Effing AIdea!

AI ethics, job disruption, GenAI “failures,” the AI bubble, India SMB adoption, coding risks, consultants falling behind, biotech breakthroughs, and AI in education - all collide in our first episode of No Effing AIdea! Welcome to the first episode. We are David Royle and Srini Annamaraju.

We set the tone with a stark cold open: new Stanford data shows a 13% drop in jobs for 22–25-year-olds in AI-vulnerable roles since ChatGPT launched. Then, in our Reality Check, we unpack the last two weeks of enterprise AI news with a fresh lens:
MIT’s “95% failure” GenAI claim - and why that’s too simple.
Why the so-called AI bubble might actually be good for business.
Reliance & Meta’s $100M JV bringing enterprise AI to India’s SMBs.
AI coding tools: 30% faster, but 2x more vulnerabilities.
Big consultants left behind by in-house AI adoption.
Stanford’s autonomous AI lab slashing drug discovery timelines.
Khan Academy’s Khanmigo AI tutor bringing hope to 180M learners worldwide.

Our Deep Dive asks: why does AI suddenly get its own moral panic when cloud, ERP, and digital never did? We explore what “ethics” really means for enterprises today:
How ethics shows up on the P&L - as fines, lawsuits, and PR disasters.
Where ethics must live in the AI stack to avoid “governance theatre.”
The trade-offs leaders underestimate - speed vs. trust, open vs. proprietary.
A pragmatic three-step ethical readiness checklist for 2025.

Finally, in The Paradox Box, we tackle three dilemmas from the field:
Boards demanding ROI and revolution at the same time.
Compliance vs. engineers in the race for velocity.
Customers rebelling against brilliance.

Listen in for pragmatic tactics, not theatre - and a candid take on why AI ethics isn’t an afterthought. It’s the seatbelt that lets enterprises drive faster.