Transcript
A (0:00)
Can you define EBM for us?
B (0:01)
EBMs are naturally non-autoregressive. There's no sequence of tokens, and that's what makes them fundamentally different. Imagine you're trying to navigate a map. As an autoregressive model, you're only allowed to choose one direction at a time, and sometimes you take wrong turns just because you hallucinate. There might be a hole in the road and you're just going to fall in; you might even see the hole, but you cannot turn back, because you're autoregressive. Unlike an LLM, an EBM has the bird's-eye view the whole time. So if you see there's a hole, you choose a different route.
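[Editor's note: to make the contrast concrete, here is a minimal toy sketch of the two decoding styles being described: an autoregressive decoder commits to one token at a time and never revisits a choice, while an energy-based decoder scores complete candidates globally and keeps the best one. All names and scoring functions here (step_score, energy, the toy vocabulary) are illustrative assumptions, not Logical Intelligence's implementation.]

```python
import itertools

VOCAB = ["left", "right", "straight"]

def step_score(prefix, token):
    # Hypothetical local score: an autoregressive model conditions only on the
    # prefix, so a token can look good locally yet steer into a "hole".
    return len(prefix) + (token != "right")  # placeholder heuristic

def autoregressive_decode(length):
    # Commits to one token at a time; earlier choices are never revisited,
    # which is why a wrong turn cannot be undone.
    seq = []
    for _ in range(length):
        seq.append(max(VOCAB, key=lambda t: step_score(seq, t)))
    return seq

def energy(seq):
    # Hypothetical global energy over the *whole* route: a sequence that ends
    # in a hole can be penalized as a unit, not step by step.
    return sum(1 for t in seq if t == "right")  # placeholder: lower is better

def ebm_decode(length):
    # Bird's-eye view: evaluate complete candidate sequences and keep the
    # minimum-energy one, instead of committing token by token.
    return min(itertools.product(VOCAB, repeat=length), key=energy)

print(autoregressive_decode(3))  # greedy, irreversible choices
print(ebm_decode(3))             # globally scored route
```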
A (0:48)
Eve, welcome to the show.
B (0:51)
Hi, thanks for having me.
A (0:52)
Great to have you on. For people who don't know, you are the founder and CEO of Logical Intelligence. Tell us what Logical Intelligence does.
B (1:01)
Logical Intelligence does a few things. First of all, we see ourselves as a foundational AI company, so we work with both EBMs and LLMs. Everything we build in house we prototype on LLMs initially, and we're building the EBM at the same time, which gets plugged in over the long term. As a product, we focus on correctness of software and hardware, because I believe there are a lot of issues with AI being placed in mission-critical systems today. Can we do code gen? Can we do chip design? The answer is yes, people use LLMs for that today, but very few actually question whether the results are correct. Does what it produces make sense? It seems like there's a big gap in the market today for deterministic, verifiable AI, so we're trying to fill that gap.
A (2:11)
The place my brain goes first is: why does correctness, or whether something makes sense, matter if it works?
B (2:23)
Actually, let me ask you a question back. Speaking of correctness: imagine an AI is driving a car, you're in that car, the car is an LLM, and someone tells you that 20% of the time it's going to hallucinate and you might end up in the wrong place. How would you feel about that?
A (2:46)
Well, I think in my case I'd be like, wow, that's kind of interesting. I'm curious where it takes me.
B (2:55)
Let me give you another example.
