Episode Overview
Title: How 3 CEOs Use AI to Run $10B in Companies | This Week in AI
Date: April 2, 2026
Host: Jason Calacanis
Guests:
- Jeremy Frankel — CEO & Co-founder, Fundamental
- Victor Riparbelli — CEO & Co-founder, Synthesia
- Nick Harris — CEO & Co-founder, Lightmatter
In this in-depth roundtable, Jason Calacanis gathers three pioneering CEOs whose companies are shaping the AI revolution across enterprise data, video generation, and hyperscale computing infrastructure. They discuss how their companies leverage AI at scale, the impending transformations in enterprise workflows, the technical bottlenecks of today’s AI infrastructure, and what’s next for real-time media and data processing.
Key Discussion Points & Insights
1. The Next Frontier in Enterprise AI: Tabular Data’s “ChatGPT Moment”
[02:20] Jeremy Frankel (Fundamental):
- Fundamental has built Nexus, a “large tabular model” (LTM), aiming for a ChatGPT-like breakthrough for structured enterprise data (spreadsheets, databases, etc.).
- Unlike traditional LLMs which excel at unstructured data, LTMs are specifically designed to work with rows and columns—critical for sectors like banking, healthcare, and logistics.
“LLMs really mostly solve unstructured data issues—text, audio, video, coding—but they really didn’t impact structured data... That part of enterprise AI has never had its ChatGPT moment.”
— Jeremy Frankel [02:35]
How LTM Differs from LLMs
- LLMs are autoregressive, predicting each token from the sequence that precedes it; that ordering sensitivity is undesirable for tables, where column/row order shouldn’t alter the output.
- LTMs eliminate this positional sensitivity, aiming for deterministic outputs critical for high-stakes predictions (fraud detection, forecasting, etc.).
“If you change the order of your columns, you shouldn’t want a different output… That’s what we’ve focused on.”
— Jeremy Frankel [04:23]
Use Cases:
- Credit card fraud detection, ride-hailing ETAs, retail demand forecasting—all improved by higher-fidelity tabular models that move beyond older pre-LLM ML approaches.
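The column-order invariance Frankel describes can be sketched as a property check. The `predict` function below is a hypothetical stand-in (not Nexus’s actual API): any model with the LTM property must pass this kind of test, whereas an autoregressive model fed the table as a token sequence generally would not.

```python
import random

# Toy stand-in for a tabular model (hypothetical, NOT Nexus's real API):
# it scores each row from its cell values only, so column order is irrelevant.
def predict(rows, columns):
    return [sum(row[c] for c in columns) for row in rows]

columns = ["amount", "merchant_risk", "velocity"]
rows = [
    {"amount": 120, "merchant_risk": 3, "velocity": 2},
    {"amount": 9500, "merchant_risk": 9, "velocity": 14},
]

baseline = predict(rows, columns)

# Property check: shuffling the column order must not change the output.
shuffled = columns[:]
random.shuffle(shuffled)
assert predict(rows, shuffled) == baseline
print(baseline)  # [125, 9523]
```

The same check run against a sequence-sensitive model would fail whenever the shuffle changes the input order, which is the failure mode LTMs are built to avoid.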
2. AI Video’s Evolution: From PowerPoint to Real-Time Interactive Media
[07:09] Victor Riparbelli (Synthesia):
- Synthesia’s model: empower business users (think PowerPoint users) to move seamlessly from slides to highly produced, AI-generated videos—making corporate communication more engaging and accessible.
- Initial hypothesis: AI video wasn’t ready for Hollywood, but business video generated at scale was an immediate addressable need.
- Sits at the crossroads of voice, video, and avatar models—now moving toward real-time interactive video agents.
“There was a very real use case in taking all the world’s PowerPoint users and enabling them to communicate in video, as opposed to slide decks or documents.”
— Victor Riparbelli [08:21]
Why OpenAI Abandoned Sora & Why Focus Is Core
- Even OpenAI had to “learn the lesson of focus,” dropping video model development to concentrate on code generation and follow the market’s strongest immediate demand (Anthropic’s laser-focused B2B strategy is cited as the successful contrast).
- Video, however, remains central as the next leap for interactive communication and “native” media formats for the AI era.
“Doing too many things at once is rarely a good idea... OpenAI's cutting all the side quests and focusing on the market that's really going to matter.”
— Victor Riparbelli [09:53]
3. Infrastructure Revolution: Photonics & The Race for AI Data Center Scale
[12:10] Nick Harris (Lightmatter):
- Lightmatter is pushing the Moore’s Law frontier by replacing copper with photonic interconnects (fiber optics) in AI data centers, dramatically increasing bandwidth and reducing energy use per computation.
- With single-chip compute scaling flatlining, building bigger chips and networking many chips together are now the dual tracks for advancing data center performance.
- Just launched a chip (M1000) with bandwidth on par with the entire transatlantic Internet backbone.
“We just announced a chip with Qualcomm: with each glass fiber, we're packing 16 wavelengths of light, pushing 1.6 terabits over a single optical fiber… It’s like 1600 houses’ worth of Internet.”
— Nick Harris [15:21]
Data Center Design & Bandwidth Economics
- Copper cables force racks to be tightly packed; photonics let racks be kilometers apart, drastically improving scalability and efficiency.
- Practical analogy: at these rates, the entire Netflix catalog could be sent across the Atlantic in seconds.
“We're building chips that have just an obscene amount of bandwidth and it’s all needed to drive AI scaling.”
— Nick Harris [17:41]
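The quoted figures are internally consistent, as a quick back-of-envelope check shows (the per-wavelength rate and the ~1 Gb/s home-link figure are inferences from the quote, not numbers stated in the episode):

```python
# Figures from the quote: 16 wavelengths per fiber, 1.6 Tb/s per fiber.
wavelengths_per_fiber = 16
fiber_gbps = 1600                  # 1.6 Tb/s expressed in Gb/s

# Implied per-wavelength rate (an inference, not stated on the show):
per_wavelength_gbps = fiber_gbps // wavelengths_per_fiber
print(per_wavelength_gbps)         # 100 Gb/s per wavelength

# "1600 houses' worth of Internet" lines up with an assumed
# gigabit-class home connection:
home_gbps = 1
print(fiber_gbps // home_gbps)     # 1600 homes per fiber
```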
4. The Cost, Playability, and Personalization of Real-Time Video
[18:20] Victor Riparbelli:
- Envisions a future where video is no longer a static broadcast medium, but an interactive, real-time experience (think: live negotiation with avatar agents, personalized learning, or participatory entertainment).
“What does video look like if you were to reinvent it in 2026 with all the new primitives we have? ... Real-time video, real-time diagrams, real-time interactive avatars—almost closer to a game or a website.”
— Victor Riparbelli [19:08]
- Bandwidth and inference cost remain the primary bottlenecks: real-time video interactions are orders of magnitude heavier than static video, and will need advances like Lightmatter’s before they become ubiquitous and affordable.
- Examples: Personalized Disney movies, interactive corporate training.
“The more we can reduce these [inference/bandwidth costs], the more accessible this becomes. That’s the core of Nick’s work—very exciting.”
— Victor Riparbelli [20:44]
5. Hardware Platforms, Custom Silicon & The AI “Arms Race”
[24:51] Jason & Nick Harris:
- Hyperscalers (Amazon, Google, Meta) are investing massively in custom chips (e.g., Trainium, Inferentia, TPUs), driven by staggering infrastructure costs ($100B+) and the need for optimized AI workloads.
- Nvidia remains dominant due to CUDA and robust software ecosystem, but hyperscalers are building in-house for cost and control.
“When you're spending [over $100B a year]… developing your own custom silicon is almost a rounding error.”
— Nick Harris [25:13]
6. Data Movement: Tables Rival Video for Scale & Complexity
[28:00] Jeremy Frankel:
- It’s not just video that’s data-intensive: tabular datasets (billions of rows, hundreds of columns) can quickly dwarf the context window of any LLM.
- These workloads demand millisecond response times (e.g., fraud detection, IoT data streams), making high-bandwidth, low-latency infrastructure just as critical for tabular AI as for video.
“With a table of 10 million rows and 100 columns – that's a billion cells, orders of magnitude more than the largest LLMs can even take in... so moving data faster and lowering cost is essential.”
— Jeremy Frankel [28:31]
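Frankel’s arithmetic checks out; a quick sketch (the 1M-token context window is an illustrative assumption for comparison, not a figure from the episode):

```python
# Figures from the quote: 10 million rows x 100 columns.
rows, cols = 10_000_000, 100
cells = rows * cols
print(f"{cells:,} cells")  # 1,000,000,000 (a billion cells, as quoted)

# Compare against a generous long-context LLM window
# (1M tokens is an illustrative assumption, not from the episode):
context_tokens = 1_000_000
print(f"table is {cells // context_tokens:,}x larger than the window")
```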
Notable Quotes & Memorable Moments
- “You can taste the singularity. At this point, I can't even imagine the answer. End of this year is going to be shocking.”
— Nick Harris [00:54]
- “Even a company like OpenAI… still had to learn the lesson of focus. Anthropic focused on codegen B2B—no voice models, no video models, and that’s clearly paid off.”
— Victor Riparbelli [09:49]
- “Hyperscalers are becoming very heavy-duty infrastructure players. From cement, to energy, all the way to chips... There’s just so much money in this space.”
— Nick Harris [26:15]
- “The idea of doing a video call for an hour with someone across the world… that was an absolutely ludicrous idea 10 years ago. In a few years, we’ll be generating content in real time, live, within your subscription.”
— Victor Riparbelli [22:16]
Timestamps for Important Segments
- [02:20] Introduction to Fundamental & Large Tabular Models (Jeremy Frankel)
- [07:09] Synthesia’s journey & the future of AI-generated video (Victor Riparbelli)
- [09:45] Focus and lessons learned: Why OpenAI cut Sora (Victor)
- [12:10] The photonics revolution in AI data centers (Nick Harris)
- [17:26] Bandwidth analogies: petabits and the future of AI networking (Nick)
- [18:20] Real-time, interactive, and personalized video: the new business interface (Victor)
- [24:51] Custom chips: The hardware arms race among hyperscalers (Nick)
- [28:00] Data scale for tabular AI and why infrastructure matters (Jeremy)
Flow and Tonality
- The discussion is passionate, future-focused, and highly informed, blending technical depth with real-world strategic considerations.
- Each CEO speaks in practical, grounded language, often using analogies or business examples (“PowerPoint users”, “Netflix in 10 seconds”, “video calls as science fiction”).
Summary Takeaways
This roundtable crystallizes the current and coming waves of the AI transformation:
- Foundational models are branching beyond unstructured data: enterprise tables and business process automation are the next AI frontier.
- AI video is evolving into a versatile, real-time medium, converging with the tools of corporate learning, communication, and entertainment.
- Infrastructure bottlenecks—notably bandwidth and compute—are being addressed through photonics, custom chips, and massive “hyperscale” engineering.
- Personalization and interactivity will drive the next generation of enterprise and consumer digital experiences, but only if costs (bandwidth, inference) can be radically reduced.
For anyone tracking the future of tech, enterprise, or AI, this is required listening—or, for now, reading.
