Embracing Digital Transformation
Episode: The Rise of AI PCs: A New Era in Computing
Host: Dr. Darren Pulsipher
Guest: Dan Salinas, COO, Lakeside Software
Date: October 28, 2025
Overview
This episode delves into the transformative potential of “AI PCs”—personal computers equipped with Neural Processing Units (NPUs)—and the profound shift they are catalyzing in how organizations and individuals run artificial intelligence workloads. Dr. Darren Pulsipher talks with Dan Salinas from Lakeside Software about the technical innovations, security and privacy enhancements, cost savings, and future possibilities made possible by the rise of AI at the edge. They also discuss practical use cases in the public sector, implications for IT management, and the broader economic and social impacts.
Key Discussion Points and Insights
1. Personal Stories and Setting the Stage
- Origin Stories: Both host and guest open with lighthearted personal anecdotes, establishing rapport and underscoring the episode's focus on people at the heart of technology.
- Dan reveals he has quadruplets, and they joke about the parallels between managing a large family and overseeing tech organizations.
- Darren shares he has ten children, highlighting the complexities of managing chaos—useful experience in tech leadership.
- (01:07–04:35)
2. What Are AI PCs? Defining the Platform Shift
- Definition: AI PCs leverage NPUs to locally accelerate AI and ML workloads, moving capabilities from the cloud to the individual device.
- “It’s basically being able to use the compute power of a PC at the edge to run these advanced AI models…instead of spending all your money in the cloud, you can leverage the power of a distributed device.” — Dan Salinas (04:53)
- Difference from CPUs and GPUs:
- NPUs are optimized for AI tasks, providing higher efficiency and lower power usage than CPUs and GPUs (a minimal dispatch sketch follows this list).
- “GPUs take a lot of power…fans spinning in your laptop…NPUs are tailored specifically for that type of workload.” — Dr. Darren (06:46–07:09)
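To make the NPU idea concrete, here is a minimal sketch of how an application might route an ONNX model to an NPU and fall back to the CPU when none is present. The episode does not prescribe a toolchain, so the provider list and model path are illustrative assumptions; Qualcomm's QNNExecutionProvider is one real NPU backend for ONNX Runtime.

```python
import numpy as np
import onnxruntime as ort

# Ask ONNX Runtime for an NPU backend first (QNNExecutionProvider is
# Qualcomm's NPU backend); fall back to the CPU if no NPU is available.
# "model.onnx" is a placeholder path, not a file from the episode.
providers = ["QNNExecutionProvider", "CPUExecutionProvider"]
session = ort.InferenceSession("model.onnx", providers=providers)
print("Running on:", session.get_providers()[0])

# Run one inference with a dummy input (shape assumed for illustration).
name = session.get_inputs()[0].name
outputs = session.run(None, {name: np.zeros((1, 3, 224, 224), np.float32)})
```

The point of the pattern is that application code stays the same whether the workload lands on the NPU or the CPU; only the provider list changes.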
3. Privacy, Security, and Edge AI
- On-device, Private AI:
- The shift to local LLMs means data no longer needs to leave the device, greatly enhancing privacy and security, especially in public sector and other sensitive environments (a local-LLM sketch follows this list).
- “Do you want your data to be in the cloud, shared…or do you want to have something private to yourself? From a security perspective, without an NPU, you wouldn’t really be able to efficiently run those.” — Dan Salinas (07:41)
- Cost and Efficiency:
- Running AI at the edge reduces dependency on costly cloud resources (a back-of-envelope comparison follows this list).
- “Being able to do it on the edge… you’re going to be able to do the processing much cheaper.” — Dan (10:26)
- The economics mirror the earlier shift from centralized mainframes to distributed client devices.
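As a concrete illustration of the "personal genie" idea, here is a minimal sketch of running a quantized LLM entirely on the local machine with llama-cpp-python. The model file and prompt are assumptions, not artifacts from the episode; the privacy property is simply that inference happens from local disk, so nothing leaves the device.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a quantized model from local disk; inference makes no network
# calls, so prompts and responses never leave the laptop.
# The model filename below is a placeholder, not one named in the episode.
llm = Llama(model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=4096)

result = llm("Summarize this internal memo in three bullets:\n<memo text>",
             max_tokens=256)
print(result["choices"][0]["text"])
```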
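To put rough numbers on the cloud-versus-edge economics, here is a back-of-envelope comparison. Every figure is an assumption for illustration (the episode quotes no prices), and device amortization is deliberately ignored.

```python
# Illustrative arithmetic only: all figures are assumptions, not numbers
# from the episode, and hardware amortization is ignored.
users = 1_000
tokens_per_user_per_day = 50_000           # assumed daily workload
cloud_price_per_million_tokens = 2.50      # assumed $ per 1M tokens
device_watts = 15                          # assumed NPU power draw
inference_hours_per_day = 1
electricity_per_kwh = 0.15                 # assumed $ per kWh

daily_cloud = users * tokens_per_user_per_day / 1e6 * cloud_price_per_million_tokens
daily_edge = users * device_watts * inference_hours_per_day / 1000 * electricity_per_kwh

print(f"cloud: ${daily_cloud:,.2f}/day  edge energy: ${daily_edge:,.2f}/day")
# -> cloud: $125.00/day  edge energy: $2.25/day (under these assumptions)
```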
4. Scaling Out: AI Everywhere in the Organization
- AI at Scale:
- Organizations deploying AI PCs to every employee could tap into thousands of NPU- and GPU-powered endpoints, fundamentally changing workload distribution and performance.
- “All of a sudden I have a thousand NPUs out there running…that means I could have a thousand or two thousand large language models running close to where the data is being generated.” — Dr. Darren (14:03)
- Management Challenges:
- New tools and methodologies are required to orchestrate and monitor fleets of edge devices running sophisticated models (a minimal work-queue sketch follows this list).
- “There’s gonna need to be tools built… system management tools… it's conquered ground a little bit…like SETI@home.” — Dan (14:32)
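The SETI@home analogy suggests a familiar shape: a coordinator queues work and idle endpoints pull it. The sketch below shows that pattern with an in-process queue standing in for what would really be a network service (gRPC, MQTT, or similar); this is not Lakeside's implementation, just an illustration of the pattern.

```python
import queue
import threading

# A coordinator holds inference jobs; each "edge device" pulls work when
# it has spare NPU cycles, SETI@home-style. The in-process queue stands
# in for a real network service.
jobs: queue.Queue = queue.Queue()
for doc in ["report-1", "report-2", "report-3", "report-4", "report-5"]:
    jobs.put(doc)

def edge_worker(device_id: int) -> None:
    while True:
        try:
            job = jobs.get_nowait()
        except queue.Empty:
            return  # queue drained; the device goes back to idle
        print(f"device {device_id}: running local model on {job}")
        jobs.task_done()

threads = [threading.Thread(target=edge_worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

A pull model like this tolerates devices appearing and disappearing, which is exactly the condition a fleet of laptops creates.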
5. Emerging and Invisible Use Cases
- Real-time Applications:
- Language translation, object recognition, and even “cohort” models for collaborative AI within organizations (a translation sketch follows this list).
- “I could be talking English, you could be talking in a totally different language…language models could translate this for you on the fly.” — Dan (15:30–16:38)
- Invisible AI:
- Enhanced audio/video in conference calls, lower latency, improved battery life, and seamless security scanning.
- “If you’ve ever been on a conference call…with kids screaming in the background…and they can’t hear it, that’s AI in the background filtering that out.” — Dan (22:35)
- Tangible Benefits:
- Battery life, network efficiency, lower cloud costs, real-time malware detection, and zero perceptible performance hit despite powerful background AI.
- “You can do that analysis right there locally…not destroy the performance of the machine.” — Dan (25:17, 25:41)
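For the on-the-fly translation scenario, here is a minimal sketch using a small open translation model that runs locally once downloaded. The specific model (Helsinki-NLP/opus-mt-en-de) is an illustrative choice, not one named in the episode.

```python
from transformers import pipeline  # pip install transformers sentencepiece

# Marian translation models are small enough to run on a laptop; after
# the first download they are cached locally, so translation itself
# needs no network connection. The model choice is illustrative.
translator = pipeline("translation_en_to_de",
                      model="Helsinki-NLP/opus-mt-en-de")

print(translator("The meeting starts in five minutes.")[0]["translation_text"])
```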
6. Future-Proofing IT and the Next Processing Frontier
- Specialized Units Beyond NPUs:
- Expect continued specialization (vision processing for cameras, etc.)—potential for more types of co-processors for even more narrowly defined tasks.
- “I could definitely see specific workloads optimized on those devices or single purpose devices versus general purpose devices.” — Dan (27:04)
- Cyclical Tech Patterns:
- History repeats: mainframe to client/server to cloud to edge—a cycle shaped by efficiency, cost, and where data is generated.
- “Nobody knows where it goes in the end, but certainly we've seen these cycles in technology …the same will be with AI.” — Dan (28:14)
Notable Quotes & Memorable Moments
- “Maybe we should call it Genie…a personal genie…A personal gen AI running on your laptop means that your data is not leaving your laptop. That's huge.” — Dr. Darren [08:23–08:44]
- “Doing [AI] in the cloud is expensive...so being able to do it on the edge, you're going to be able to do the processing much cheaper.” — Dan Salinas [10:26]
- “There isn’t enough electricity…power and data center capability to support where computing models are going.” — Dan Salinas [13:14]
- “All of a sudden I have a thousand NPUs…that means I could easily have a thousand or two thousand large language models running close to where the data is being generated.” — Dr. Darren [14:03]
- “AI is not bringing computing to an end, it’s unleashing even more, it sounds like.” — Dr. Darren [28:06]
- On security vs. usability: “Traditionally there’s this sort of line between protecting the user…and giving them performance…If we can make it more effective, you can be secure and be able to do what you want.” — Dan Salinas [25:41]
Timestamps for Important Segments
- 01:07–04:35: Opening stories; host/guest backgrounds & analogies to tech team leadership
- 04:53: What is an AI PC? How does it differ from current PCs?
- 06:19–07:09: What are NPUs vs CPUs/GPUs? Why efficiency matters
- 07:41–08:44: Private, on-device AI; conceptualizing a “personal genie”
- 10:26–12:04: Cost, economics, and the paradigm shift from cloud to edge
- 13:14–14:03: Macro trends: Energy constraints, distributed processing
- 14:32–15:22: Managing organizational scale; SETI@home analogy for cooperative AI workloads
- 16:14–17:35: Real-time universal translation and speech AI
- 19:11–22:13: How Lakeside’s Systrack leverages NPUs and trends in customer adoption
- 22:35–25:17: Invisible AI: practical user impacts in battery, performance, security
- 27:04: Anticipating future, more specialized AI hardware
- 28:06–29:33: Technology cycles, future jobs, and generational impact
Closing Thoughts
Dan and Darren agree that AI PCs with NPUs mark a foundational change for organizations and IT leaders, not just in user-facing capabilities (like voice assistants) but also in invisible, systemic improvements to security, cost, and efficiency. The push toward edge computing will enable fresh applications, democratize innovation, and reshape IT management for years to come.