A
And this has been a huge missing part of the quantum computing community: access to open AI models, to really use the latest in AI technology to help us accelerate how we get to these useful quantum applications.
B
Welcome to the Nvidia AI Podcast. I'm Noah Kravitz. A quick note before we begin. You can now watch the AI podcast in full video. Check us out on the Nvidia YouTube page. And of course, if you prefer the audio only feed, you can still get us wherever you get podcasts. Nick Harrigan is here. Nick is a product marketing manager for quantum computing at Nvidia, and we're here to talk about the state of quantum computing, what AI means for quantum computing, and really all kinds of things that sound like science fiction until you hear Nick explain them. So, Nick, thank you so much for joining the AI podcast. Glad to have you here.
A
Thank you for having me. Excited to talk about it.
B
So maybe we can start with the basics. Can you kind of give an overview for us of what quantum computing is and kind of the state of play of things right now?
A
Yeah, absolutely. So quantum computing is a new way to build computing technology. Everything we have in computers today, which is incredible, what you can do with computing today, is fundamentally based on the transistor, a special kind of switch that can be 0 or 1. Quantum computing asks: what if your switch was a quantum mechanical object, something that obeys the quantum laws of physics and can do very strange things? And what if you rebuilt how you compute based on that? It turns out that if you can build such a device and integrate it into a supercomputer, you can start to solve problems that we just wouldn't have even thought of as being addressable. They were too hard, or outside the scope of computing. So it's really a new kind of technology that augments our existing GPU supercomputers with whole new capabilities.
B
And is it, and you're going to have to correct me here as we go, but to kind of break it way down, is it that quantum allows for faster computation, or more computations at once, or something different? How does it work in that regard?
A
Yeah, so in some application areas it might be that it can perform some things faster than a conventional computer can, but that really undersells the difference. In many cases it's so much faster that the problems just were not tractable at all. It wasn't that today's technology was a bit slow and maybe some future generation would catch up. In some cases, quantum computing can give us an exponential, or even a very strong polynomial, advantage over a normal computer, meaning that as you make the problem you're trying to solve just a bit bigger, very quickly it becomes impossible on any kind of conventional computing device. But an important caveat is that quantum hardware isn't necessarily better for all kinds of applications. There are some very specific areas we know where it can be really transformative, but crucially, those are areas that we really care about.
B
And so what's kind of the, the current state of the art with quantum. Are, are people using quantum computers to start to solve some of these problems? Is it more in the R and D stage? Where are we at with things?
A
Yeah, so we're at a really exciting point, because today people are building quantum hardware. They've been doing that for a while, but we're really seeing an inflection point where we're transitioning from experiments, or sort of demonstrative systems, to the larger-scale systems that you need and that you can integrate with supercomputing to start to solve some of these really promising and important problems. Things like developing new drugs, simulating and developing new kinds of materials. Those are all things that we can't quite do today with quantum computing. But the kind of progress we're seeing, like literally this year, what people are starting to work on, really brings those into focus. And we think it might not be too long before we can build systems capable of realizing that promise.
B
Before we get deeper into some of the applications, kind of looking forward: what are the current challenges to building quantum systems?
A
Yeah, so when you try and build a quantum processor, as we call it, a QPU, a quantum processing unit, instead of bits like you'd use in a normal computer, it uses what we call quantum bits, or qubits. They're very difficult to control and to keep going. Basically, they experience a fundamental kind of noise, and you have to continually try and correct them. That process is called quantum error correction. And it's really important, because all of the big applications people talk about for quantum computing assume that your qubits are not noisy. So one of the big challenges in building useful quantum hardware, hardware that can do useful quantum-accelerated supercomputing tasks, is to master quantum error correction. And that's actually something at Nvidia that we're working towards, because a huge part of performing quantum error correction is a classical, conventional algorithm you need to run, called a decoder, that enables the quantum error correction. So there are a lot of challenges we still face to scale up quantum computing. And in fact, some of those challenges can seem very familiar from the kinds of advances we already know about in supercomputing and classical computing. Right.
B
So kind of along those lines, how is the advent of AI shaping the development of quantum. And I mean, can you use AI to help with things like error correction?
A
Yeah, so that's a huge one. There are actually many areas where it increasingly looks like AI is going to really unlock progress in quantum computing, to allow breakthroughs, and a key one of those is quantum error correction. So when you do quantum error correction, if you sort of double-click on it a little bit, what happens is your qubits are noisy. And the trick with qubits, which I haven't explained at all, is that you can't just look at them, because if you look at qubits, you destroy the quantum information in them. You have to isolate them for them to work correctly.
B
Okay.
A
So you have to be very measured and purposeful and kind of restricted in when and how you interact with them.
B
Okay, yeah, go on. Can I back up a step?
A
Yeah, you can back up.
B
Tell me about this whole "if I look at it" thing. Yeah, yeah, yeah.
A
So let's talk about a qubit a bit, because it will make quantum error correction clearer. The way a qubit works is that instead of just having a 0 or 1, like in a transistor, where, if you think of it like a switch, it's zero or it's one, you can have what's called a superposition of the two. Now, people like to say that means it's both a 0 and a 1. It would be great if it was that simple, but it's much weirder than that. It's hard to explain, but it's a kind of combination of the two. And it's a very delicate, fragile combination. If you go in and you touch the qubit, or something bumps into it, or it interacts with its environment, you'll destroy that delicate superposition that you will have engineered carefully to do your quantum computation.
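The superposition-and-collapse idea above can be sketched numerically. This is a minimal toy in numpy, not how real quantum hardware works: a single qubit is modeled as a two-component complex vector, the Hadamard gate creates an equal superposition, and "looking" (measuring) collapses it to a definite 0 or 1.

```python
import numpy as np

# A toy single qubit as a 2-component complex state vector.
# |0> = [1, 0], |1> = [0, 1]; a superposition mixes the two amplitudes.
ket0 = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate puts |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5] -- equal chance of reading 0 or 1

# "Looking" at the qubit collapses it: after measuring, the delicate
# superposition is gone and the state is just |0> or |1> again.
rng = np.random.default_rng(0)
outcome = rng.choice([0, 1], p=probs)
collapsed = ket0 if outcome == 0 else np.array([0.0, 1.0], dtype=complex)
```

The fragility Nick describes corresponds to the last step: any interaction with the environment acts like an unwanted measurement, destroying the engineered amplitudes.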
B
Okay.
A
And so quantum error correction seeks to do what seems impossible, which is to look at those qubits to find out if they're correct, if they've got errors in them, while at the same time not touching them or looking at them. The genius of quantum error correction, when it was discovered, was a turning point, because before that, people thought quantum computers would just be too noisy; you couldn't build them. And then some very clever people discovered in the 90s that actually, if you have a lot of qubits, you can link them all together in a special way, you can, as we call it, entangle them, and you can look at some of them. You destroy those ones, you sacrifice them, but in return you learn just enough about the other ones, through the links between them, to learn where the errors are, without having disturbed them enough for it to really matter. That's the way quantum error correction works. But to do that process, what you end up having to do is look at some of your qubits, get data from them, and then do a kind of Sherlock Holmes calculation: you have to process that data and infer, retrodict, where the errors must have been for you to see that data, and then go in and do some corrections, thousands of times every second. You have to keep doing this or it will fall over. That inference algorithm, the Sherlock Holmes algorithm, is the decoder, and it's very hard. It needs to process terabytes of data. Like I say, you have to do it thousands of times a second. You have to get the data out and back in very quickly, or you end up with a backlog that builds up and it all falls over. And so decoders are one example of a task in quantum computing where we are seeing that AI can have a really big impact.
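The decoder's "Sherlock Holmes" inference can be illustrated with the simplest possible error-correcting code, the classical 3-bit repetition code. This is only a toy analogy, not the decoder NVIDIA ships: real quantum decoders work on syndrome data from sacrificed ancilla qubits, but the inference step has the same flavor; you never read the protected data directly, only parity checks, and you must deduce where the error is.

```python
# Toy decoder for the 3-bit repetition code: encode a logical bit as
# three copies, then locate a single flipped bit from parity data alone.

def measure_syndrome(bits):
    # Parity checks between neighboring bits -- the "safe" measurements
    # that reveal errors without reading the encoded value itself.
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def decode(syndrome):
    # The Sherlock Holmes step: infer the most likely single-bit error
    # location from the parity data alone.
    return {
        (0, 0): None,  # no error detected
        (1, 0): 0,     # bit 0 flipped
        (1, 1): 1,     # bit 1 flipped
        (0, 1): 2,     # bit 2 flipped
    }[syndrome]

# Encode logical 0 as [0, 0, 0], then hit bit 1 with noise.
bits = [0, 0, 0]
bits[1] ^= 1

error_at = decode(measure_syndrome(bits))
if error_at is not None:
    bits[error_at] ^= 1  # apply the correction

print(bits)  # back to [0, 0, 0]
```

A real quantum decoder does this kind of inference over millions of qubits and far richer syndrome data, thousands of times per second, which is what makes it such a demanding classical computation.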
B
It sounds like it, yeah. Are there things that are stopping, or maybe a better way to phrase it: for researchers who've been working on quantum, are there hesitations about using AI in the process? Are there specific roadblocks or hurdles that have to be overcome? Because the way you describe it, having to do this same process over and over again very fast, it's like, oh, AI is great for that. But what are some of the challenges specific to bringing AI into quantum?
A
So there are challenges, and it's really important that we figure out those challenges, because the example I just gave you was just one. There are other areas where we think AI is going to be really important, like calibration, where you have to keep tuning your quantum hardware. That sounds similar to quantum error correction, you keep trying to fix it, but it's a little different. And it's also hard to do.
B
This is a naive question. Sorry, Nick. But the hardware itself.
A
Yeah.
B
Is it materially quite different than, you know, transistor-based computer hardware?
A
Yes, it is. A quantum processor is an entirely new kind of hardware. The people who build these, and Nvidia does not build quantum processors, but we work with a huge number of partners that do, in fact almost everyone trying to build them, in one way or another, we work with them. They're trying to build something entirely new. They try and utilize existing techniques as much as they can, but it's a new kind of technology. And an important caveat is that whereas with the transistor we really settled on it as the way to build a computer, the way to build a bit, there are lots of different ways people are trying to build qubits. So what's really important, if you think of, for example, building AI tools to help with quantum computing, is that you can accommodate all those different approaches. And that leads into one of the biggest challenges researchers face with AI tools, which is just getting access to them.
B
Yeah.
A
So they need very open tools, because they might need to retrain or fine-tune those models for their specific kind of hardware, because there are different approaches. And just generally speaking, having open models really opens access to this whole broad quantum ecosystem.
B
Yeah, absolutely.
A
And so there's lots of different tasks they'd like to use them for. And I'd say one of the biggest challenges is just those open models being available.
B
Right. So you alluded to this a moment ago, and I wanted to kind of double click into some of how it works first. But let's talk about some of the applications you mentioned. Drug discovery, material science discovery. What are some of the industries or even more specific applications that it looks like are really going to benefit first from quantum computing?
A
Yeah, so there are lots of different application areas. People have ideas for how quantum computing will be transformative across lots of industries. You can list things out: pharmaceuticals, materials development, financial services, logistics. But across those applications, some are ones that we believe earlier-generation quantum computers will be able to handle, and some, at least today, feel like they might be a little further out. If you're looking for the first useful applications, they tend to fall in the area of things where you're trying to simulate a system that's already quantum. So for example, if you're trying to develop a new drug, you might be trying to simulate how some part of a biological cell will interact with a molecule, your candidate drug. Understanding that interaction well enough to see if it sticks, attaches, does what the drug's supposed to do, is deep down a quantum problem: you're simulating molecules and electrons. And in those kinds of problems there's, if you like, very low-hanging fruit. It's an easy win for a quantum computer. So we expect the earlier, relatively smaller quantum computers to be able to work really well on those applications. But those devices are still a little way away, or at least those devices still need to crack quantum error correction and be fault-tolerant, able to deal with those errors.
B
Sure.
A
And another angle to look at this from is that, you know, we do know some applications for quantum computing, but there are many more we just don't know about yet.
B
Yeah.
A
And so that's actually another area where AI looks to be really promising: helping researchers discover new applications. And there's a kind of deep philosophical way in which you might think that would be the case. Quantum mechanics, quantum computing, is very unintuitive to a human. We don't think quantum mechanically; as far as we know, at least at the level where we think, our brains aren't quantum mechanical. And so it might be that AI, deep down, is a great tool to understand the deeper patterns in quantum algorithms, and to build, or even just compile, applications onto quantum processors in ways that might be a bit more mind-bending for humans to do.
B
To go deeper down the mind-bending for a second: when you say that, you know, humans don't think quantumly.
A
Yeah.
B
Can you articulate kind of what that means?
A
Yeah. So maybe a good analogy is when people started to try and parallelize algorithms for GPUs, you had to think in a very different way. You didn't just take something you were doing on a CPU and say, I'll parallelize it. You had to think about whether the problem fit that kind of hardware, or how you could get the problem to fit that kind of hardware. Quantum processors are similar to that, but in a much more esoteric way. It turns out that the way you get an advantage on a quantum processor is to find a way to write your problem such that you can put it in a big superposition, where you do lots of calculations seemingly all at the same time, because it turns out that's what you can do in a quantum processor. But at the final step, very crucially, you have to make it such that although you've got all that super-parallel computation, when you look at the answer at the end, you can't see all of it; like I said, you destroy all the quantumness. So you have to orchestrate your application such that something persists, that some of that power in the middle of all the superposition can still exist at the end, even when you collapse it all. And you have to think in a very quantum way to understand how a problem can survive that process and come out much better off.
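The "something persists through the collapse" idea has a one-qubit illustration: interference. This is a minimal numpy sketch, not a real quantum algorithm, but it shows the shape Nick describes; a superposition is "active" in the middle, yet the amplitudes are arranged so that a single definite answer survives at the end.

```python
import numpy as np

# Two Hadamards in a row: the state is a full superposition in the
# middle, but the amplitudes interfere so that the paths to |1> cancel
# and the paths to |0> reinforce. Only one answer survives measurement.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
ket0 = np.array([1.0, 0.0], dtype=complex)

mid = H @ ket0    # equal superposition: both computational paths active
final = H @ mid   # interference: |1> amplitude cancels to zero

print(np.abs(final) ** 2)  # ~[1, 0]: measurement gives 0 with certainty
```

Designing a quantum algorithm means arranging this kind of cancellation so that the amplitudes of wrong answers destructively interfere while the right answer's amplitude is reinforced, which is the "very quantum way" of thinking Nick mentions.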
B
How did you learn how to think in a quantum way?
A
I never fully did. Well, I mean, I only got so far. It's...
B
Very clearly, it's all an evolving process.
A
It's an evolving process, but just through familiarity. People who get very good at writing quantum algorithms do it through an extreme amount of exposure, which is why it seems promising for AI, because of course that's where AI can really shine: you can train it on something much more quickly, or with a much wider data set, than a human might be able to, and have it learn in the same kind of way, in some sense. So it very much feels like a problem where you would think an AI could be very good at discovering new quantum applications.
B
So, Nick, Nvidia has a family of open models for quantum. I believe they're called Ising.
A
Yes, that's right. This is really exciting.
B
Tell us about it, Tell us about it.
A
So this is the first set of open models specifically for quantum computing.
B
So, the first, period.
A
First, period, yes. The first set of open models with use cases specifically trained, bespoke, for the quantum computing workloads where researchers really need them. And this has been a huge missing part of the quantum computing community: access to open AI models, to really use the latest in AI technology to help us accelerate how we get to these useful quantum applications. Nvidia Ising at launch has got two sets of models in it. It's got models for doing calibration, meaning tweaking quantum hardware very quickly to correct any imperfections in the way things are aligned or the hardware is set up; you need to continually calibrate it. That's a visual language model that looks at the output from a quantum computer and decides what the correction should be. And then we also have Ising decoding, which runs the decoding algorithms you need for quantum error correction, that really crucial task that lets you continually correct the errors that are fundamental to qubits and quantum computing. And so this really marks a change, I think, in how quantum research is going to be conducted.
B
Yeah, that's amazing. What is the throughput like? What's the amount of data like with quantum? We know with AI we're talking about more and more data all the time, but what's it like with quantum?
A
So with quantum, the demands may not be as large as what you might be used to when you talk about things like NVLink, like traditional data transfers. But the task is quite different here. You're trying to ultimately get data from a normal kind of supercomputer, a GPU supercomputer, to an esoteric kind of quantum system and its control systems. So it's a different kind of problem set. But what you need to do is process 10 terabytes of data a second, which is demanding in that environment. And you need to be able to do that with latencies down at the microsecond level, which again is maybe not a lot compared to what people are used to with NVLink, but it's hugely important in this situation and much more challenging. And you need those kinds of performance numbers because things like quantum error correction are really demanding; if you can't hit those requirements, you end up with a quantum processor that just doesn't work. So it's a really exciting time, and we can look forward to that straight away. I think we're going to see a lot of quantum developers being able to draw on AI much more than they could before. And of course, in their hands, we expect them to build on this and really have it act as a platform where they're going to do exciting things. But even looking further ahead, first of all, in Nvidia Ising we'll be adding a lot more functionality, so there'll be more open models to come. But it's also exciting to think where AI might help beyond where people are even thinking today. You could think of tasks like algorithm development; I talked about discovering new quantum applications. To some extent, the sky's the limit.
B
Yeah.
A
And you might also think of things like trying to model the quantum hardware itself. There's a lot of work where people are trying to simulate how quantum chips behave, to understand them better and perfect the designs even more. And there might even be areas where AI can help in that respect. And if you go beyond thinking of AI as just a tool for developing quantum computing, you can think about how quantum hardware and AI might work together.
B
Sure.
A
And deeper down the line, we expect there's a lot of exciting stuff there. But maybe even a little closer to now, you might see people starting to use earlier quantum processors to generate data. It might be data about molecules, very highly accurate molecular data, for pharma or for materials, generating data to then train an AI.
B
Right.
A
So it might be that quantum processors are an incredible source of otherwise effectively impossible-to-obtain data that you can train an AI on. And then the kinds of transformations we've seen in things like biology with open models at the moment, we could see that kind of thing hugely accelerated by access to training data thanks to quantum processors.
B
It is mind bending. How far out are you when you talk about these kinds of things? How far out are we looking?
A
Yeah, that's the question. Everyone wants to know, right? How far until we get a quantum computer? So, we don't build quantum hardware ourselves, but a lot of our partners are really working hard to make that timeline as short as they can. They've got these roadmaps, and they're super exciting, and they're always trying to make them shorter. And we, Nvidia, are also trying to do that. So we don't know when it will be. But we know that the more advances we make, the more we can bring AI, for example, as a tool to quantum developers, the much shorter that timeline is going to end up being.
B
Yep. Do quantum computers scale? Does the technology scale the way we're used to?
A
Right. So that's really one of the exciting things about what's happening this year: people are starting to really face down those questions about how you can scale this hardware. Today, people have been building incredibly impressive, but relatively smaller, systems, and they need to scale that up. They need to scale up because if you want to do this quantum error correction, like I told you before, you have lots of qubits and you sacrifice some of them. So you have an overhead of qubits; you need more than you thought. And depending on how you're doing it, you could need thousands, tens of thousands, hundreds of thousands, millions of qubits. Those are the kinds of numbers. You need a lot. So scaling is critical, and there are challenges to doing that. But a key part of solving those challenges is taking advantage of what we can already do in the state of the art with supercomputing. One of the scaling challenges is controlling all of that quantum hardware using classical algorithms, doing the quantum error correction, doing the control that you need. And so we're working really closely with partners so they can leverage the state of the art in accelerated computing to make that scaling easier, and build upon all of the successes we've already seen in scaling.
B
So this might sound like kind of an odd question, I don't know, but are there any situations where you wouldn't want to bring AI and quantum together? Like, would there be things being worked on now in quantum where, for whatever reasons, it's just like, oh, AI is not something that can be useful here?
A
Well, I mean, anywhere you can bring in AI, you will want to, because it's obviously a powerful tool. But there are definitely problems in quantum computing where you need accelerated computing, you need something to support it, that might not necessarily be an AI model.
B
Yeah.
A
So one example would be simulation. People are trying to simulate quantum devices and quantum algorithms to understand them before we have them, as you're saying. And traditionally people have been doing this using GPU-accelerated software like our CUDA-Q platform. One of the things it does is let people simulate quantum devices. Another thing it does is let people control hybrid quantum-classical systems altogether; it's a platform for that future of quantum and AI supercomputing working together. But it might even be that AI can be useful in those situations. So I think the point to make is, even in areas where we think traditionally AI might not have been useful for quantum computing, it's time well spent to see if we can find ways to use it, because what we've seen so far is that where we can find a use for it, it can actually be a huge deal.
B
Yeah, it sounds like.
A
Yeah.
B
On the developer side, are there, I'm sure there are, but are there benchmarking tools and other methods that developers are using to track and compare the speed of quantum systems, that sort of thing?
A
So when it comes to AI for quantum specifically, there are not benchmark suites in the way that anyone used to AI will be familiar with. There aren't big existing benchmarking systems for how AI helps quantum, but we are working on that as well. When we released Nvidia Ising, we also released a benchmark specifically for calibration, one of the tasks the models in the family do, and that was a carefully curated benchmark that took into account all the nuances of that problem. And it's great that our model is also at the top of the leaderboard on it, but it was not designed for that to be true. So that's another thing we hope we can do with these open models: bring the language the community needs to start to understand where this AI is really going to be useful, and how much more useful it can be.
B
The importance of open models kind of across the board is just growing, gathering momentum. Yeah, it's fantastic to see. Where did the name come from?
A
So Ising.
B
Yes.
A
So Ising is the name of a kind of model, somewhat confusingly, in physics; like, a physics model.
B
Okay.
A
And it's named after the physicist who developed it. The reason it's relevant to us is that it's a model that makes everything simpler. It's a simplified model that people use to study a lot of physics. So it seemed fitting that we are building models to make the physics behind, or the development of, quantum computing simpler. It seemed like a great fit, and most scientists, I think, will understand the name when they hear it.
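For readers who haven't met it, the physics Ising model Nick refers to really is tiny. A minimal sketch of its one-dimensional form: spins of value +1 or -1 sit on a line, and the energy rewards neighboring spins that agree.

```python
import itertools

# 1D Ising model: spins s_i in {+1, -1} on a line, with energy
#   E = -J * sum_i s_i * s_{i+1}
# Neighboring spins that agree lower the energy. Despite its simplicity,
# it captures a surprising amount of physics -- hence the name choice.

def ising_energy(spins, J=1.0):
    return -J * sum(a * b for a, b in zip(spins, spins[1:]))

# Brute-force the ground state for a tiny chain of 4 spins.
best = min(itertools.product([-1, 1], repeat=4), key=ising_energy)
print(best, ising_energy(best))  # fully aligned spins minimize the energy
```

The point of the name, as Nick says, is that a deliberately simplified model can still teach you a great deal, which is the spirit of the open model family.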
B
Yep, yep. Awesome. Love it. So who are the models targeted at initially? Who's actually grabbing these and starting to use them right now?
A
Yeah, great question. So these models are really helping people building quantum computing hardware. QPU builders have been waiting for this tool. They can take these models out of the box, they already come pre-trained, they can start using them, and they can very quickly start to bring AI into their workflows. But also, because they're open, and because we provide a cookbook of recipes and data to help them do this, they can start to retrain or fine-tune them, make them work specifically for their kinds of systems, train them with their proprietary data, and really go to town on bringing AI to what they do.
B
Yeah. Because you mentioned being able to train it for their specific system, how do standards work in the world of quantum? You're talking about the state of the qubits themselves, and then things like the different ways hardware makers are approaching trying to figure out the best way to make systems. Is it the kind of thing where standards similar to what we see in classical computing exist and are evolving, or is it a different way to think about it?
A
Yeah, so it's early days for quantum, of course. And it's so diverse at the moment, because there are so many different kinds of qubits people are still trying to build, and it's not obvious, by the way, whether one of them will win. It's not necessarily a race; it might be that quantum computing uses different kinds of qubits for different parts of the machine. We don't know yet. But it also means it's difficult for standards to emerge, maybe in the way we're used to with classical computing. One of the things that will really help with that, I think, is having a powerful platform that people can use to start to integrate their qubits into existing supercomputing. Because this is how we see quantum computing evolving: it's not going to be a whole new kind of supercomputer; it's going to be that supercomputing as we see it today starts to draw on these quantum processors within that framework. And so we provide CUDA-Q, a software platform; NVQLink, which is a hardware architecture for integrating quantum and classical computing; and of course Nvidia Ising. Those together, I think, really start to define a framework in which standards will make more sense.
B
Yeah, no, that makes a lot of sense. You mentioned, or spoke in depth, of course, about using AI to help with error correction, and the decoder algorithm as well. What other areas of quantum in particular do you see AI really being able to help with?
A
Yeah, so, you know, there are a lot of areas where we think AI is going to be really important. And we are already seeing work on applications development. So we talked about algorithms. If you're trying to build a quantum computer that's useful as quickly as you can, there are two things you can do. You can make the quantum hardware, like, bigger and better.
B
Okay.
A
But you can also make the applications you want to run on that hardware less demanding. And where you meet, when suddenly you've got enough hardware to run the thing you want, that's when you can be useful. So there's also a lot of work you can do in optimizing existing applications, or the algorithms that perform them, or even discovering entirely new ones. And AI is proving very, very promising at that. We've looked at generative models that can be used to start to build quantum applications. Just like an LLM will build a sentence by picking the next word that should go in the sentence, you can train a model on the way quantum applications look when you run them on a quantum computer: what sort of gate, or specific piece of hardware, do you call after each step? And it can, in the same way you might build a sentence from words, learn how to build up an application, something that will run on a quantum computer and produce a desired effect, by putting those primitives, those gates, in the right order. So those kinds of generative approaches to writing, or even just compiling, quantum applications look like a really exciting area of research that we think is going to explode.
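The circuits-as-sentences analogy can be made concrete. A minimal, illustrative sketch (not how NVIDIA's generative models actually work): a quantum "program" is just a sequence of discrete gate tokens, which is exactly the kind of sequence a next-token model can learn to emit. Here the tokens are applied to a single toy qubit with numpy.

```python
import numpy as np

# A quantum program as a token sequence: each token names a gate, and
# running the program multiplies the corresponding matrices onto the
# state, much like words compose into a sentence.
GATES = {
    "H": np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def run(program, state=None):
    state = np.array([1.0, 0.0], dtype=complex) if state is None else state
    for token in program:
        state = GATES[token] @ state
    return state

# "H Z H" is a standard identity: it acts like an X gate (a bit flip).
out = run(["H", "Z", "H"])
print(np.abs(out) ** 2)  # ~[0, 1]: the qubit flipped to |1>
```

A generative model trained on many such token sequences could, in principle, learn which gate to emit next to achieve a desired effect, over many qubits and a much richer gate set than this toy.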
B
I'm getting ahead of myself here, but I mean, can we possibly look forward to things like Claude Code or Nemo Claw for quantum?
A
Yeah. So there's a sense in which that is already kind of happening. In our Nvidia Ising open model family, we have Ising calibration, which helps people calibrate their quantum hardware. It's a VLM, a visual language model. But ultimately, to automate the use of that model, you want to run an agent. So there's a whole agentic workflow for calibrating quantum processors that uses a VLM to look at what measurement results come out, see where things need tweaking, and then go and do the actual tweaking itself. So agentic workflows are probably going to be really critical in controlling quantum hardware in ways that are just beyond the capabilities of humans, or perhaps even other methodologies.
B
Nick, I feel like I've learned so much, and also that I'm just barely beginning to scratch the surface of knowing what questions to ask, let alone what applications to ask an AI bot to help me conceive of. But the super cool thing is that it's all actually happening, right? Because it sounds almost mystical when you talk about the state of the qubits, and how you can't look at them and the data and all of that, but it's happening, and it's happening quickly as far as these things go. For folks like me who want to learn more when they're done listening, are there places online you can direct them to catch up with the latest?
A
Yeah, absolutely. For developers, head to build.nvidia.com and get started with our Nvidia Ising open models for calibration and for quantum error correction decoding. Also check out CUDA-Q. You can download it from GitHub, you can get it from everywhere you normally get software, and you can start to develop for hybrid quantum-classical systems. It's really a great time to start experimenting with this, and an exciting time to accelerate research.
B
The wonders never cease. It's incredible. Nick Harrigan, thank you so much for taking the time to join the podcast and give us, well, it's more than an overview, but an overview of just incredible things to come.
A
Yeah, you're welcome. Thanks so much.
Guest: Nick Harrigan, Product Marketing Manager for Quantum Computing, NVIDIA
Host: Noah Kravitz
Date: April 14, 2026
In this episode, host Noah Kravitz and guest Nick Harrigan take listeners on an insightful journey into the intersection of artificial intelligence and quantum computing. The discussion breaks down foundational concepts in quantum computing, explores how AI is accelerating progress towards practical quantum applications, and unveils new AI models—like NVIDIA Ising—designed to help solve longstanding challenges, such as quantum error correction. The conversation offers both high-level context and deep technical insight, emphasizing why open AI models are vital and how the quantum revolution is rapidly becoming real.
This episode paints an exciting—if "mind bending"—picture of how AI is set to accelerate humanity’s path to useful quantum computing, surmount core technical barriers, and unlock brand new scientific and industrial possibilities. The creation and release of the NVIDIA Ising open model family is poised to empower a wider segment of quantum researchers and developers, seeding the next wave of breakthroughs at the intersection of AI and quantum.