Transcript
A (0:00)
Hi everyone. Welcome to another episode of the Bits and Bips interview. My name is Steve Ehrlich. I'm the head of research at SharpLink and also your host. We've got a really exciting show for you today, but before we get to it, just a couple quick disclaimers. Nothing that you see or hear on the show should be construed as financial or investment advice. For full disclosures, please see unchainedcrypto.com/bitsandbips. And before we dive in, let's just take a very brief moment to hear from some of the sponsors who make the show possible.
B (0:32)
If you've been loving Bits and Bips, don't forget that the show is transitioning to its own feeds on X, YouTube, and your favorite podcast player. If you're not already subscribed to Bits and Bips on its own channels, go there now and hit that subscribe button so you can keep up with our twice-weekly livestreams and macro-meets-crypto breakdowns. Bits and Bips will only be on the Unchained feed for a few more weeks, so subscribe today to be ready for launch. You can get all the links at unchainedcrypto.com/bitsandbips.
A (1:01)
All right, so today's show really deals with the intersection of two very, very hot topics right now in crypto and the broader tech landscape: prediction markets and AI. And we've got the perfect guest to discuss all of it, Ben Fielding, the CEO and co-founder of Gensyn. So welcome, Ben.
B (1:21)
Thank you. Great to be here.
A (1:22)
Yeah, great to have you here too. And we're bringing you on as you are launching, I guess, your first, quote unquote, mainnet application for this decentralized AI platform that you've built. It's a prediction market built on top of an OP Stack layer 2 on Ethereum. And I want to get into all of that. But before we do, since this is your first time on the show, I'd love for you to just briefly introduce yourself and your company.
B (1:52)
Absolutely. Cool. So yeah, as you said, I'm Ben, co-founder and CEO of Gensyn. My background before Gensyn was actually in machine learning research. I started my PhD back in 2015, 11 years ago now, just as deep learning was starting to become viable on real devices. It was a few years after something called AlexNet was released, which proved that you could accelerate deep neural networks on GPUs for computer vision tasks. I joined a computer vision department initially to do applied machine learning research: take these models that worked in those computer vision contexts and apply them to new ones. I looked at things like diabetic retinopathy, where you take images of the retina and detect whether somebody has diabetes, and skin cancer detection within images of lesions, things like that.

But something became clear to me really quickly, which was that these deep learning models were being handcrafted back then, and they didn't need to be. You could actually automate the process of generating a deep neural network for a specific task. Very quickly I focused my entire research on that problem. It's an area called neural architecture search, and it involves optimizing the structure of a deep neural network while you train that network. That now falls under an area called AutoML, automating the process of creating machine learning models.

There's one key piece of that which really stuck out to me and led into what we do with Gensyn, which is that the techniques I used for my research are embarrassingly parallel. I used evolutionary algorithms to optimize the structure of these deep neural networks, and those evolutionary algorithms can run on very distributed devices because the candidates don't depend on each other as they train and improve. Machine learning in the world as it currently stands is, for the most part, done in a vertically scaled way. It's not embarrassingly parallel. I can't split up the training of a deep neural network right now across many different devices, because it just does not train that way. But I had seen and used the techniques, done research on the techniques, to do it in an embarrassingly parallel way. The research I was doing, I could have run over GPUs in people's homes if I wanted to. It was very, very possible to distribute it like that.

And what became clear to me through my research was, A, I was constrained in the resources I could access. I just couldn't get access to enough to do training runs as big as I wanted. And B, these techniques would scale in a way that the centralized techniques don't. If I could access GPUs all over the planet, I could scale my techniques; you can't necessarily scale the way models are trained in the standard world like that. So this showed me that we had the tech to scale machine learning. We just needed to be able to access very, very distributed hardware. That was my early experience of how machine learning could scale.

The only other thing is that I previously founded a company before Gensyn as well, in the data privacy space. So I've seen what happens when people give their information, their data, to companies, and those companies use that data behind the scenes. I tried to build technology that would alleviate some of those kinds of discrepancies between individuals and companies. And that was my company prior to Gensyn.
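Ben's point about evolutionary search being embarrassingly parallel is easy to see in code. Below is a minimal, hypothetical Python sketch, not Gensyn's actual system: the toy architecture encoding and dummy fitness function are stand-ins of ours, and multiprocessing.Pool stands in for the globally distributed devices he describes. The key property is that each candidate's evaluation is independent of every other candidate's, so the expensive step maps cleanly onto however many workers you can reach.

```python
# Illustrative sketch of embarrassingly parallel neural architecture search.
# NOT Gensyn's code: the architecture encoding and fitness are toy stand-ins.
import random
from multiprocessing import Pool


def random_architecture():
    # A toy "architecture": just a layer count and a width.
    return {"layers": random.randint(1, 8),
            "units": random.choice([32, 64, 128, 256])}


def mutate(arch):
    # Perturb one structural choice at random to produce a child.
    child = dict(arch)
    if random.random() < 0.5:
        child["layers"] = max(1, child["layers"] + random.choice([-1, 1]))
    else:
        child["units"] = random.choice([32, 64, 128, 256])
    return child


def evaluate(arch):
    # Stand-in for "train this candidate and return validation accuracy".
    # In a real system this is the expensive step each worker runs alone,
    # which is exactly what makes the search embarrassingly parallel.
    return -abs(arch["layers"] - 4) - abs(arch["units"] - 128) / 64


def evolve(generations=10, population_size=16):
    population = [random_architecture() for _ in range(population_size)]
    with Pool() as pool:
        for _ in range(generations):
            # Evaluations don't depend on each other, so this map could
            # just as well be fanned out to GPUs in people's homes.
            scores = pool.map(evaluate, population)
            ranked = [a for _, a in sorted(zip(scores, population),
                                           key=lambda p: p[0], reverse=True)]
            parents = ranked[: population_size // 2]
            population = parents + [mutate(random.choice(parents))
                                    for _ in range(population_size - len(parents))]
    return ranked[0]


if __name__ == "__main__":
    print(evolve())
```

Contrast this with standard gradient-based training of a single network, where every step depends on the result of the previous one; that sequential dependence is why, as Ben says, you can't simply split one training run across many loosely connected devices.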
