Podcast Summary: Embracing Digital Transformation
Episode #317: AI, Data, and the Future of Infrastructure
Host: Dr. Darren Pulsipher
Guest: Eran Kirzner, CEO and Founder, Lightbits Labs
Release Date: January 21, 2026
Overview
In this dynamic and forward-looking episode, Dr. Darren Pulsipher welcomes Eran Kirzner to explore the evolving intersection of AI, data, and IT infrastructure. They discuss how recent trends—from AI’s explosive growth to infrastructure modernization—are transforming public sector IT and beyond. Kirzner’s unique background traversing compute, networking, and storage gives him rare insight into what’s driving change and what’s next for data-driven organizations.
Key Discussion Points & Insights
Kirzner’s Origin Story & Superpowers
- Eran Kirzner's Background:
  - Began at Motorola Semiconductor as a CPU architect, then moved into networking startups (Wintegra, acquired by PMC-Sierra), then led development of the first NVMe controller.
  - Assembled deep expertise across compute, networking, and storage, a rare combination in the industry.
  - Quote: "Most silicon engineers get stuck into CPU or into network controllers... You've had the opportunity to handle the whole compute domain, which is compute storage network." (Dr. Darren, 06:04)
- Highlight: Kirzner's rare experience gives him a holistic view of the data center; he is now focused on delivering end products and interacting directly with customers.
The AI Explosion and Its Infrastructure Impact
- AI's Tipping Point:
  - Although AI has been around for years, real adoption took off after ChatGPT's release in late 2022.
  - "Even though it's been around a long time, it's really only been the last three years since ChatGPT launched... and now we're seeing an explosion of AI." (Dr. Darren, 07:02)
- How AI Shifted Workloads:
  - Lightbits Labs sees three customer segments riding the AI wave:
    - Large E-commerce: Fraud detection, analytics, AI processing at massive scale (think Black Friday surges).
    - Financial Sector: Low-latency, mission-critical apps for banks, fintech, and insurance.
    - Cloud & Neo Cloud Providers: The “Neo Cloud” focuses on GPU-based, AI-optimized workloads; these new entrants target inference and training at scale.
- Macro Trends:
  - Move from VMware to open-source/bare-metal or Kubernetes-based containerized environments.
  - Kubernetes becomes the standard orchestration platform; AI workloads are the new default.
  - "Kubernetes kind of become the default orchestration platform and the workload become AI workload." (Eran, 10:01)
Evolving Data and Compute Patterns in AI
- Training vs. Inference
  - Training:
    - Large, throughput-driven data flows; entire datasets are brought in for prolonged training jobs (weeks or months).
    - Data is moved in "big chunks" (e.g., 128k blocks), with regular checkpointing for resilience; see the I/O sketch after the inference notes below.
    - Throughput and data reliability are critical.
    - "Training... is a game of how fast you bring the data in. The training process may take weeks or months... it's about throughput and reliability." (Eran, 12:32)
  - Inference:
    - Small, interactive transactions; many concurrent sessions; latency is paramount.
    - Requires managing multi-tenant, multi-context environments, possibly with multimodal data.
    - Infrastructure must support low-latency, high-concurrency use cases (the contrast is illustrated in the sketch below).
    - "Inference is totally different. Inference, you have the model. And now it's very interactive... latency is the king." (Eran, 15:22)
- Hardware Adaptation for Inference:
  - GPUs are optimized for large batch processing (training), not numerous tiny tasks (inference).
  - Industry is shifting toward hybrid/heterogeneous environments: CPUs, GPUs, and custom silicon (NPUs, VPUs).
  - CPUs may become more important for inference due to efficiency and cost.
  - "As inference becomes more cost effective... we are going to see, fast forward maybe six to twelve months from now, custom silicon, custom devices." (Eran, 18:27)
  - The orchestration layer will need to smartly route workloads based on their needs, e.g., text vs. video inference (a toy routing sketch follows this list).
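The routing idea is easy to picture in code. Below is a toy Python sketch of a dispatcher that sends each inference request to the pool best suited to its modality; the pool names and the text-to-CPU / video-to-NPU mapping are hypothetical policy choices for illustration, not anything prescribed in the episode.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Backend:
    name: str                          # e.g., a CPU, GPU, or custom-silicon pool
    handler: Callable[[bytes], bytes]  # whatever serves the model on that pool

# Hypothetical pools; which silicon serves which modality is a policy choice.
BACKENDS = {
    "text":  Backend("cpu-pool", lambda payload: b"text-result"),
    "image": Backend("gpu-pool", lambda payload: b"image-result"),
    "video": Backend("npu-pool", lambda payload: b"video-result"),
}

def route(modality: str, payload: bytes) -> bytes:
    """Send a request to the backend suited to its modality; unknown
    modalities fall back to the general-purpose GPU pool."""
    backend = BACKENDS.get(modality, BACKENDS["image"])
    return backend.handler(payload)

print(route("text", b"prompt"))   # served by cpu-pool
print(route("video", b"frames"))  # served by npu-pool
```

In production this decision would live in the scheduler (e.g., Kubernetes node selection), but the shape of the problem is the same: match each request's cost and latency profile to the right silicon.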
The End of One-Size-Fits-All Clouds?
- Cloud providers historically deployed generic hardware, but the new AI-driven era demands more specialized, heterogeneous setups.
"What you're talking about here is specialized type of setups... So do you see that we'll move into more heterogeneous type of configurations?" (Dr. Darren, 19:33)
“I absolutely do. I think... people just want to get it to work... but as they start focusing on economics, they’ll shift to purpose-built environments.” (Eran, 20:18)
The Future: Agility, Software-Defined Everything, and Storage Innovation
- Rise of Software-Defined Infrastructure (SDI)
  - Flexibility and elasticity are now crucial for cost and future-proofing.
  - Building rigid, appliance-based solutions leads to obsolescence and poor economics; data centers now rival semiconductor fabs in cost.
  - "If you’re building it in the wrong way... You get state of the art GPUs, guess what, 18 months later, it's obsolete already." (Eran, 23:45)
  - SDI is necessary to shift workloads fluidly between compute, storage, and networking.
- Storage Paradigm Shift
  - Historical transitions:
    - Spindles → Flash → NVMe
    - Focus shifted from sequential access to random, small-chunk access with NVMe.
  - With modern AI:
    - CPUs currently “broker” all GPU data, prefetching as needed.
    - Eran predicts a need for direct interfaces between GPUs/XPUs and the storage/memory layer, possibly bypassing the CPU for certain workloads (see the sketch after this list).
    - "Maybe we need a direct interface between the GPU, the XPU and the storage and the memory... and start to build kind of a new data structure, a new kind of interface." (Eran, 27:13)
    - GPUs aren’t built for filesystems or distributed object stores, so intelligent controllers may emerge to bridge this “data gap.”
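One existing step in the direction Kirzner describes is NVIDIA's GPUDirect Storage, exposed in Python through the RAPIDS KvikIO library, which can move data between NVMe storage and GPU memory without a CPU bounce buffer. A minimal sketch follows, assuming a CUDA-capable machine with the kvikio and cupy packages installed; the file path and size are placeholders.

```python
# Sketch of a CPU-bypassing read path, in the spirit of the direct
# GPU-to-storage interface discussed above. Assumes a CUDA GPU and the
# RAPIDS kvikio + cupy packages; "dataset.bin" is a placeholder path.
import cupy
import kvikio

def load_chunk_to_gpu(path: str, nbytes: int) -> cupy.ndarray:
    """Read bytes from a file directly into GPU memory. Where GPUDirect
    Storage is available, kvikio DMAs the data without a CPU bounce
    buffer; otherwise it transparently falls back to a CPU-mediated copy."""
    buf = cupy.empty(nbytes, dtype=cupy.uint8)  # destination lives on the GPU
    with kvikio.CuFile(path, "r") as f:
        nread = f.read(buf)  # returns the number of bytes actually read
    assert nread == nbytes, "short read"
    return buf

chunk = load_chunk_to_gpu("dataset.bin", 128 * 1024)  # one 128k "big chunk"
print(chunk.nbytes, "bytes now resident in GPU memory")
```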
Lightbits Labs’ Innovations
- Lightbits developed a new protocol, NVMe/TCP (NVMe over TCP), with backing from companies like Intel, Cisco, Micron, and Dell (an illustrative attach example follows this list).
- Provides solutions for today's GPU-era workloads while working toward optimal, efficient inference solutions for the near future.
- Forward Look: Promises new developments in the next 6–12 months, especially around more direct data paths and removing the CPU from data movement.
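For context on what NVMe/TCP means in practice: a host attaches remote flash over an ordinary TCP network and sees it as a local block device. The sketch below shows what that attach step can look like using the standard Linux nvme-cli tool driven from Python; the target address and NQN are placeholders, and the example is generic NVMe/TCP rather than anything Lightbits-specific.

```python
# Illustrative NVMe/TCP attach using the standard nvme-cli tool (Linux,
# run as root). The address, port, and NQN are placeholders, not a real
# Lightbits target.
import subprocess

def connect_nvme_tcp(traddr: str, nqn: str, trsvcid: str = "4420") -> None:
    """Connect this host to an NVMe/TCP target; on success the remote
    namespace appears as a local block device (e.g., /dev/nvme1n1)."""
    subprocess.run(
        ["nvme", "connect",
         "--transport", "tcp",   # NVMe over plain TCP, no RDMA fabric needed
         "--traddr", traddr,     # target IP address
         "--trsvcid", trsvcid,   # NVMe/TCP port (4420 by convention)
         "--nqn", nqn],          # NVMe Qualified Name of the remote subsystem
        check=True,
    )

connect_nvme_tcp("192.0.2.10", "nqn.2016-01.com.example:subsystem1")
```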
Notable Quotes & Highlights (with Timestamps)
- On the Domain Shift in Cloud:
  "Now there is kind of a new domain, a new segment of tier 2 cloud provider... called Neo Cloud which focus on, on GPU base, focus on AI. And this is where we are playing." (Eran, 10:03)
- On Data Patterns in AI Training:
  "Training... it's a game of how fast you bring the data in... The training process may take weeks or months... it's about throughput and reliability. Super important." (Eran, 12:32)
- On Inference Patterns:
  "Inference is totally different... Now latency is the king. Before, it was throughput... Now... it's the latency, the interactive session." (Eran, 15:22)
- On the Coming Shift to Heterogeneous Infrastructure:
  "As inference becomes more cost effective... we're going to see, fast forward maybe six to twelve months from now, custom silicon, custom devices... a combination of GPU, CPU and custom devices." (Eran, 18:27)
- On Why Agile, Software-Defined Infrastructure is Essential:
  "You need to build your environment in a way that's more sophisticated, agile, elastic cloud... and in my vision should be software defined." (Eran, 24:00)
- On Storage Innovation for AI:
  "Maybe we need a direct interface between the GPU, the XPU and the storage and the memory... and start to build kind of a new data structure, a new kind of interface." (Eran, 27:13)
- On Lightbits' Value Proposition:
  "We always feel that we are a partner of our customers; we’re working with them together on the requirement... Compute, storage and networking... not just a point solution." (Eran, 32:07)
Key Timestamps for Important Segments
- [01:04]: Eran Kirzner’s origin story and career evolution
- [07:02]: The “AI explosion” and ChatGPT’s inflection point
- [10:01]: Macro trends: Kubernetes, open source, Neo Clouds
- [12:32]: Deep dive into AI workload patterns: training vs. inference
- [18:27]: Hardware architecture shift: heterogeneous compute for inference
- [22:48]: Why software-defined, agile infrastructure is necessary
- [25:41]: The future of storage for AI and direct GPU-to-storage concepts
- [29:09]: Lightbits’ role in the evolving infrastructure ecosystem
- [33:10]: How to reach Eran and Lightbits for advice and solutions
Memorable Moments
- Dr. Darren joking about ‘superpowers’
"Every superhero has a background story... what's your origin story?" (01:04)
- Eran reflecting on forks in the road
"...I don't know how my life would be if I was in Intel or in Motorola, but I chose Motorola." (02:38)
- Candid discussion on the risk of obsolescence
"...building a data center today become more expensive than building a fab... 18 months later, it’s obsolete already." (23:45)
Conclusion
The episode paints a vivid portrait of a digital infrastructure world in flux—where AI is dramatically altering demand, infrastructure must become agile and specialized, and software-defined everything is the only route to future-ready architectures. Eran Kirzner and Dr. Pulsipher provide both technical depth and big-picture foresight, making this conversation essential listening for anyone invested in the next era of digital transformation.
To learn more or connect with Eran and his team: lightbitslabs.com or via LinkedIn.
