Podcast Summary
Reshaping Workflows with Dell Pro Precision and NVIDIA RTX PRO GPUs
Episode: Live from GTC: Supercharging Next-Gen Video with Yaron Inger of Lightricks
Date: March 19, 2026
Host: Logan Lawler
Guest: Yaron Inger, Co-founder and CTO of Lightricks
Episode Overview
Recorded live at NVIDIA’s GTC 2026, this episode spotlights the impact of Dell Pro Precision workstations paired with NVIDIA RTX PRO GPUs through a conversation with Yaron Inger, co-founder and CTO of Lightricks. The discussion centers on the evolution and real-world application of Lightricks’ LTX AI video model, especially the transition from LTX1 to the open source, multimodal LTX2. Yaron explains how these workstations enable advanced AI video workflows and underscores the role of open source in democratizing creativity for studios and creators worldwide.
Key Discussion Points & Insights
Lightricks’ Evolution: From Mobile Apps to AI-First Company
- Background
- Lightricks has evolved over 13 years—from popular mobile apps like Facetune to an AI-first creative company.
- In the past three years, the company shifted its focus to foundational AI video models, culminating in the release of LTX2.
- LTX2 is open source—accessible to all, with over 5 million downloads on Hugging Face ([00:40]).
"In the last three years we completely transitioned to become an AI-first company... The latest version is LTX2... ranked number one in the world doing video and audio. So multimodal. And it's also open source which is great because everybody can just download and run it."
— Yaron Inger, [00:40]
LTX2: A Major Leap Forward
- Gen Over Gen Improvements
- LTX2 features massive scalability: more parameters, broader data, and—for the first time—audio support.
- The model is truly multimodal, with audio and video generation tightly integrated.
- Audio-to-video capabilities enable new creative workflows like generating music videos with character lip sync and expressions driven by the audio track.
- Efficiency: LTX2 can run both in the cloud and on local hardware, like the Dell Pro Max T2 workstation with RTX PRO 6000 GPUs ([02:20]).
"We scaled everything... we also added the audio support which is amazing because who nowadays creates a video which is without an audio... the audio really drives the expression of the characters."
— Yaron Inger, [02:20]
- Open Source Commitment
- LTX2’s weights, inference code, and training code are all open source—enabling full customization, local inference, and fine-tuning.
- Enterprise studios can run LTX2 fully on-premises, crucial for IP protection.
- The model is highly efficient and compressed, supporting both on-prem data center GPUs and consumer GPUs ([02:20]).
"We open source not just the weights, but also the inference code and the training code, which allows you to fine-tune the model... The ability to run locally, on-prem, is also something that's very, very important because it means that you can run it in your own network."
— Yaron Inger, [02:20]
Workflow Innovations and Creative Control
- Key Features
- Retake Workflow: Modify part of any clip (AI-generated or real)—critical for personalized ads or nuanced content tweaks.
- The system preserves lip sync, voice, and emotional tone—even with zero additional training.
- "Video to video" transformations and depth map generation expand creative control.
- Features are driven both by internal development and an active open source community ([05:16]).
"One really great workflow is called Retake, which allows you to take an existing clip... and just modify part of that clip. ...The model knows how to fill in the blanks really amazingly well and also preserve the audio and the voice signature of the speaker with zero training."
— Yaron Inger, [05:16]
- Community Impact
- Open source approach empowers community-driven extensions and integrations, rapidly expanding available features and use cases ([05:16]).
Real-World Hardware Synergy
- Dell + NVIDIA RTX PRO Workstations
- LTX2 performs smoothly on the Dell Pro Precision series with RTX PRO 6000 Blackwell GPUs, enabling high-end AI video tasks even for local users.
- Flexible workflows allow mixed hardware use: e.g., low-res creation on consumer GPUs, upscaling in the cloud for efficiency and scalability ([02:20], [06:55]).
How to Get Started
- Access & Documentation
- Recommended entry point: LTX.io, which aggregates downloads, documentation, and links to Hugging Face and GitHub.
- Both API and open source documentation are available ([07:16]).
"It would be best to go to LTX.io. From there, go and download the model. All the links are there: Hugging Face, GitHub. We have online documentation for open source and for the API."
— Yaron Inger, [07:16]
Notable Quotes & Memorable Moments
"I'm not a creative by any stretch of the imagination... but it [LTX2] definitely rivals anything... you guys are kind of blowing them out of the water."
— Logan Lawler, [01:37]
"We have many control features like that, many video to video features where you can take human acting and transform it to a different video. Use depth maps to generate videos from."
— Yaron Inger, [06:18]
Key Timestamps
- 00:40 — Yaron introduces Lightricks, LTX, and the open source approach
- 02:20 — What’s new in LTX2: scaling, audio, multimodal generation, on-premise support
- 05:16 — Favorite workflows, Retake feature, and the value of creative flexibility
- 06:55 — Hybrid local/cloud workflows and hardware flexibility
- 07:16 — Where to download and access docs
Conclusion
This episode provided an insider’s look at the evolution and possibilities unlocked by Lightricks’ LTX2 video AI model, made possible by robust hardware from Dell and NVIDIA. The open source strategy, combined with high-end workstation capabilities, is democratizing access to cutting-edge video generation and empowering creators and enterprises alike.
To learn more or try LTX2 yourself, head to LTX.io.
