Rid
The more that I look at the landscape for tooling, the more convinced I am that the infinite canvas will play a pivotal role in how we interface with AI.
Steve Ruiz
How do you do human plus AI, plus multiple AIs and multiple humans, collaborating in a single document in real time?
Rid
So I wanted to dig into why the canvas is such a powerful primitive and all of the hidden complexities that come with designing a great tool.
Steve Ruiz
There are so few companies, products, anyone working on that problem, that we are in, like, massively uncharted territory, but it's also some of the most ambitious stuff happening in software.
Rid
Welcome to Dive Club. My name is Rid, and this is where designers never stop learning. This week's episode is with Steve Ruiz, who is the founder of tldraw, where he spent over three years and $5 million building the perfect canvas that powers a lot of the startups that we've studied on this show.
Rid
So we're going to do a deep dive into canvas UX and all of the things it unlocks for the future of AI. But one of my favorite parts of this conversation is just hearing about Steve's journey, because it starts in a place that you might not expect.
Steve Ruiz
So I studied fine art in university undergrad, and then again for my master's. I can still paint and draw. I mean, it's something that I was really interested in not just as a craft, but also a career and kind of an industry: how does visual culture get made? How does art get made? What are the lives of artists like? How do I participate in that, but also how do I support that? Between university and grad school, I did a lot of art writing, so I was doing art reviews and journalism around different exhibitions at the time in Chicago. I did two years at the University of Chicago, which is a very conceptual program. So it wasn't just about the craft. It was really very, very academic. During university, I met a lovely woman who is now my wife. She got a job at Cambridge, so I moved to the UK. We got married, and I got a studio, and I was doing my art in England. You know, creative careers are really, really fragile. So after maybe six months of being in my studio, or a little longer than that, about a year, things aren't really moving the way that I want them to, and I'm not even sure I wanted them to move that way anymore. When I was in Chicago, I was basically working for lawyers for part of the week and then working in my studio for the other part of the week. It was a high-paying day job, so to speak, doing legal research, basically something that would probably be replaced by ChatGPT right now. You know: hey Steve, here's a legal question, go to the law library and figure it out. England has different laws, which honestly was not something that I thought about before I landed. I was like, oh yeah, I'll just be able to keep doing this type of thing, and have this kind of analytical side that I use for three days a week, and then maybe the creative side that I use for the rest of the week.
I wasn't able to work in that type of role. So I just had a lot of excess analytical mental energy going on, right? Those were always two things that I had kept in balance. And so I'm like, you know, I don't want to go back to school. I don't want to retrain to do something that I wasn't super passionate about anyway. I'd never really tried to combine these different parts of myself. I'd always been very defensive about getting a creative job, because I thought that creativity was something I saved for my studio, which is true: you only have so much creative juice during the week. So anyway, I was like, you know what, maybe it's time. Let me close my studio and let me try and find a job that lets me take advantage of both parts of myself. I guess the obvious place for that, when we were in Cambridge, was publishing, and design within publishing. I learned how to use Adobe InDesign because I knew that people were going to be using that at these publishing houses. Started going out for interviews, eventually got a job interview, got a job. Now I was working in Adobe InDesign all day, and I was like, all right, cool, this is a good start. You know, I'm employed, but let me keep going into design. What kind of designer do I want to be? My head was in publishing because, again, those were the opportunities that existed in Cambridge. And I got really into ebooks, actually. Ebooks are fantastic. They're basically HTML and CSS. I was cracking open ebooks and figuring out how they work: how they were typeset, how they were packaged and distributed, all that stuff. So I got really into ebooks. Then I was like, you know what, this is all basically just web design anyway, right? It's not just HTML and CSS I'm talking about; it's, like, moving things around.
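He's right that ebooks are basically web tech: an EPUB file is just a ZIP archive of XHTML, CSS, and a little XML metadata (a real one also carries a `META-INF/container.xml` and an `.opf` manifest, omitted here). A minimal sketch of "cracking one open", building a toy EPUB in memory and reading it back with only the standard library:

```python
import io
import zipfile

# An .epub is just a ZIP archive of XHTML, CSS, and a little XML metadata.
# Build a toy one in memory, then "crack it open" with the standard library.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("mimetype", "application/epub+zip")  # first entry, per the spec
    z.writestr("OEBPS/chapter1.xhtml", "<html><body><p>Hello.</p></body></html>")
    z.writestr("OEBPS/styles.css", "p { font-family: serif; }")

with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as z:
    names = z.namelist()
    mimetype = z.read("mimetype").decode()

print(names)     # ['mimetype', 'OEBPS/chapter1.xhtml', 'OEBPS/styles.css']
print(mimetype)  # application/epub+zip
```

Swap the in-memory buffer for a path to any `.epub` on disk and the same `zipfile` calls will list and extract its pages and stylesheets.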
And I had a little bit of technical HTML skill from just making WordPress websites for other artists and for myself during college. So I had a little bit of that technical knowledge. This was like 2016. At the time, some really interesting things were happening. The role of the designer within web design had emerged in a way that really wasn't there 10 years before; it was much more hybrid back then, at least as I understood it. And there was this product design thing that was happening. The really good designers were the ones building stuff that actually worked. You know, this was when Uber and Airbnb were starting to leverage prototyping tools. Origami was out. Very early Framer versions were out, you know, the IDE, after the JavaScript library. And I was like, oh, okay, this is what's really specialized. Because I knew that I needed some sort of specialization, but I also was like, I don't want to get into this in a way that is boring, or in a way that is a dead end. What turned out to be my wedge, my way in, was prototyping. So it was mostly through Framer Classic, or what became Framer Classic, which now doesn't even exist. It was through early versions of Framer, before it was a website design tool, when it was a prototyping tool with code: code on the left, screen on the right, like a little IDE. I was using CoffeeScript, and it was trying to be as simple as possible for designers: here's my Sketch file, let me label my layers, and now I can say, okay, layer.thisButton.
Rid
It's a trend that I've noticed too, where a lot of the designers that I look up to have these origin stories that are tied to either Framer Classic or Origami. And if you got in early with those tools, it really was a differentiating factor. And also, I mean, you had to have an appetite for complexity back then to be able to learn that, which I think says a lot about the level of agency and determination that you even have as a designer.
Steve Ruiz
Highly influential. And I hope that there's a parallel today for people who are coming into the industry. But yeah, that became my thing. I was like, all right, this is going to be the thing that I'm going to get really good at, because no one's good at this. Everyone seems to be into it, but no one can do it. The typesetting jobs that I have, I don't really like. But, you know, I think my contract is up in six months; let me try and find a full-time job in six months doing this thing that I find interesting. I started contracting on the side after about three months, then got my first full-time job over the summer, and then eventually got a normal startup job, I guess, within six months.
Rid
Real quick message, and then we can jump back into it. I saw a scroll-stopping tweet the other day: the creators of Tailwind are working directly on Paper to train the output to be perfect. They even invested in the company. So just think about the possibilities for a second. In the future, you could design something in Paper and then just right-click and copy the perfect Tailwind, as if the creators themselves wrote it by hand. Or maybe you take an existing code component and import it into Paper to make edits directly on the canvas. This is going to totally change how we design and deliver UIs for the web, and it's just another reason why I'm betting big on Paper as the next great design tool. You can try it out today: just head to Dive Club slash Paper. If you're like me, then you know adding motion to your designs is the easiest way to make them feel premium. The thing is, I'm not a motion designer, but that's why Jitter's new AI Brainstorm feature is a game changer. I just drop in my design and then get instant motion ideas that I can tweak, refine, and make my own. It's seriously so easy to animate your work with Jitter. I cannot recommend it enough. Just head to Dive Club Jitter to try it out today. Okay, now on to the episode. I want to zoom ahead in the story a little bit, because you have this art background, you make this determined effort to learn prototyping and break in that way, and it didn't take you that long to start working on a canvas SDK. So how the heck did you get to the point where you're like, I'm going to start a canvas company?
Steve Ruiz
All right, so the wild thing is that my first job in tech was 2017. I started working on tldraw in 2021, right? So I learned to code in 2017, and then I was working on some pretty gnarly dev-tool-y type stuff. Within four years of that, I had the startup; you know, within five. So it was a really, really fast career, so to speak, because I was in this world of being the designer who prototypes. That was my role. It was my role when I worked in an agency, it was my role when I worked in product. And eventually I ended up working for Framer, doing education and content for designers, trying to make more Steves, in a way. Throughout all those roles, I was just really, really aggressively learning, and also speaking a lot about it, writing a lot about it, teaching, running workshops, things like that. So my own knowledge expanded really quickly, especially once I was trying to teach other people. I was like, okay, now I gotta really know how React works. I really have to know how this other thing works. I think if you're a junior developer, you kind of get slow-rolled into the industry a little bit: all right, here's a small ticket, or here's something easy that you can do. And you don't have as much of an opportunity for really ambitious projects. Whereas all of the prototyping that I was doing was really ambitious short projects that I would get very quick feedback on, and then move on to the next one. You know, if I cast myself back to 2003, I was working at a pizza place in Chicago, and my job was to cook the pizzas and cut them. Cut the pizzas, right? Put them in a box and get the box ready. In Chicago, you cut a pizza square cut, right? I don't know, do they do that where you're from?
Rid
I grew up in St. Louis. Square cut was everything.
Steve Ruiz
Same here, right? But sometimes, you know, these uncultured people would order pie-cut pizzas, and so we'd have to do that. It was so easy to get it wrong, because you just get used to cutting it one way or another. But it was a special request: hey, pie cut, you know. Like, oh God, okay. Well, we got this request once for a pie-cut pizza with 11 slices. But you cut it into 11 equal slices. I'm looking at this pizza and I'm like, I don't know, I don't know how to do this. Get your protractor, right? Yeah, I called someone else, you know, they come over, they're like, I don't know how to do this either. And suddenly we have all the cooks, the whole kitchen, sitting around this pizza being like, wait, would you? No, no, it can't be that. It has to be equal. And, you know, three pizzas later, we managed to get this thing just by luck. And afterwards I was always like, yeah, how would you have done that? There must have been a way to cut it. Years later, I asked this question on Quora, like in 2008 or something, when Quora was out. I was like, hey, here's the thing that happened to me. You guys are all mathy people on Quora, it was early Quora, when it was good, so how would you have done this? And it became one of the most popular questions on Quora. Yeah, yeah. And so that question just sort of always lived in my head, these little visual tricks, kind of visual computation things. But it wasn't until much later, when I could kind of code, that I found my way back to this stuff. How do shadows work? How do splines work? These other things in the open source world, that's the type of thing that I started working on. Since it was all visual, I was posting on Twitter and people were liking it: oh, here's how to do isometric stuff, or here's how to do splines, or here's all this.
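For what it's worth, the 11-slice problem gets easy once you think radially instead of in square cuts: an even slice count lets you cut straight across the whole pie, but an odd count like 11 forces 11 cuts from the center out to the rim, each 360/11 ≈ 32.7 degrees apart. A toy sketch of that arithmetic (not the actual Quora answer):

```python
import math

SLICES = 11

# With an even slice count you can cut straight across the whole pizza,
# but 11 is odd, so each cut runs from the center out to the rim.
# The k-th cut sits at angle k * (360 / SLICES) degrees.
cut_angles = [k * 360 / SLICES for k in range(SLICES)]

# Endpoint of each cut on a unit-radius pizza, if you wanted to mark them.
rim_points = [
    (math.cos(math.radians(a)), math.sin(math.radians(a))) for a in cut_angles
]
print([round(a, 1) for a in cut_angles])
# [0.0, 32.7, 65.5, 98.2, 130.9, 163.6, 196.4, 229.1, 261.8, 294.5, 327.3]
```

The hard part at the counter, of course, was eyeballing 32.7 degrees without a protractor.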
This is mostly happening, by now, kind of during COVID. I'm doing prototyping for Play: great team, great product. I'd also worked on a project in the summer, it was like 2020, so COVID, between Framer and Play. I knew I had a couple of months before I would find my next job after Framer ended. I'd become obsessed with state machines and state charts. What I mean by state machines is: there are only so many states that a user can be in, right? They can be logged out. They can be logged in. For each one of those states, there are certain events that can happen while they're in that state. And at the end you have a list: here are all the states, here are all the events in my system that can happen. So it's this recursive way of describing where you are in the application and what can happen. As soon as I learned about it, I'm like, oh, this would be so good for designers, and so good for talking about how a thing works. Kind of a cheesy Steve Jobs quote of, like, it's not just what it looks like, it's also how it works, right? That sounds great, but I don't really have any tools for designing how it works. And so I had built an app called State Designer. And while I was working on that, I needed arrows. And I needed these arrows to point to what was going to be next: you know, if I'm clicking on the sign-in event, I want it to go to the next thing in this graph. But the thing was that the graph needed also to be responsive. Thinking back to it, no, it didn't need to be responsive, but...
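The logged-out/logged-in example can be written down almost verbatim: a table of states, and for each state, the events it responds to and where each one leads (a hypothetical sketch of the idea, not State Designer's actual API):

```python
# A minimal state machine: states, and for each state, the events
# it responds to and the state each event leads to.
# (Hypothetical sketch of the idea, not State Designer code.)
MACHINE = {
    "loggedOut": {"SIGNED_IN": "loggedIn"},
    "loggedIn": {"SIGNED_OUT": "loggedOut", "OPENED_SETTINGS": "settings"},
    "settings": {"CLOSED_SETTINGS": "loggedIn"},
}

def transition(state: str, event: str) -> str:
    # Events not handled in the current state are simply ignored.
    return MACHINE[state].get(event, state)

state = "loggedOut"
state = transition(state, "SIGNED_IN")        # -> "loggedIn"
state = transition(state, "OPENED_SETTINGS")  # -> "settings"
print(state)
```

The payoff he describes is exactly this table: it is an exhaustive answer to "where can the user be, and what can happen there", which is what makes it useful for designing how a thing works, not just what it looks like.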
Interviewer
Either way. But, you know, you said that, and I was like...
Steve Ruiz
Wait, really? I just, I just felt like it should be. And so I didn't know where the boxes were going to be. It wasn't like I could draw an arrow once and just reuse it, because things might be in a different place. I didn't know ahead of time where the things were going to be. And so I needed to come up with a way to draw an arrow between these two boxes. You could do it in a really naive way, which, you know, looked terrible: just a straight arrow. And I'm like, but I've got to be able to do better than that. Like, if you have a box over here and you have a box over here, what's a good arrow? I would just draw it on paper: right, box here, box here, what would that arrow look like? And then, okay, what if this box was bigger and this box was smaller, what would that arrow look like? Or what if the two boxes were horizontally aligned, what would that arrow look like? Okay, what if this one's actually inside of this one, what would that arrow look like? And so I'm just drawing hundreds of boxes and arrows, combinations of different sizes and placements and all that stuff. And then I'm looking at it and I'm like, I sense that there's a sort of pattern here. Let me try and formalize it, let me try and code this up. And it became the most complicated thing that I'd ever tried to do with code, which was just to draw these perfect-looking arrows. The amount of edge cases, oh my God. The intersection would be lost because one thing was inside the other, or it's too close, or when do I decide that something's too close? All this stuff. And it turned out to be highly subjective. At the end, I was just like, well, there is no answer to this. It's just: what do I think is a good-looking arrow? And then I'm like, well, I've been painting and drawing for, like, 20 years.
I know what a good-looking arrow looks like. Of course, if anyone should know this stuff, it should be me. And I was posting about this on Twitter, and some of those posts, even though I had a fairly small audience, were getting a lot of interest, and people were following along. It was like, yo, check out this guy, he's losing his mind over arrows, get a load of this. I was making these debugging tools, and it would look like a kind of spring system, you know, collapsing and expanding as the arrows would bend around the different sides and pick which route they were going to take. And all this stuff. I ended up with something that worked pretty well. It went into my State Designer app, but I also open sourced it and shared it as a library called Perfect Arrows. And that got pretty popular as well, because it was a problem that seemed like it should have an answer. But for an open source project it was very unusual, because there was no answer. It was a design answer, but expressed through code. This was Steve's, by this point educated, take on: here are all the parameters of the problem of drawing an arrow between two things, and here is my best shot at making it look good in all of the circumstances. And people really liked that. And it seemed like something kind of unusual and new, for a developer or programmer to be trying to tackle that type of problem. A lot of the open source that you see is a very well-known problem. You know, I want to be able to sort a very long list, or I want to convert between this data type and this other data type. You kind of knew the parameters of the problem; it was just about implementing that.
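An aside to make the "naive way" concrete: the straight arrow he's dismissing is basically center-to-center, clipped where the segment exits each box's border. Everything Perfect Arrows adds is judgment on top of this baseline (a hypothetical sketch, not the Perfect Arrows source):

```python
import math

# Naive arrow: a straight segment from the center of box A to the center
# of box B, clipped where it exits each box's border.
# Boxes are (x, y, width, height). (Hypothetical sketch, not Perfect Arrows.)

def center(box):
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def clip_to_border(box, toward):
    """Walk from the box's center toward `toward`, stopping at the border."""
    cx, cy = center(box)
    dx, dy = toward[0] - cx, toward[1] - cy
    x, y, w, h = box
    # Scale factor at which the ray hits a vertical or horizontal edge.
    tx = (w / 2) / abs(dx) if dx else math.inf
    ty = (h / 2) / abs(dy) if dy else math.inf
    t = min(tx, ty)
    return (cx + dx * t, cy + dy * t)

a = (0, 0, 100, 100)
b = (300, 0, 100, 100)
start = clip_to_border(a, center(b))
end = clip_to_border(b, center(a))
print(start, end)  # a straight, charmless arrow from edge to edge
```

This works, and looks terrible: it has no arc, no consideration of overlap, containment, or "too close", which is exactly the subjective territory the library had to navigate.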
How do you implement that in a way that is really reusable, really lightweight, and also really fast and performant, with the idea that, on a good day, if you did this yourself, you would end up with the same code? Or if you had a lot of time to focus on this thing, you'd end up with the same code. But you don't have to, because it's an open source thing, it's just shared, right? And you're giving back to the developer community. But for me, I guess it was like: here's this thing where you probably wouldn't obsess over arrows, but if you did obsess over arrows, and you had good taste in arrows, this is probably what you would end up with. That's already just so far from the normal way that open source goes. And I liked that. I was like, oh, this is great. And also: hey, remember that job that I had at Framer, where I was trying to make content that people really liked? That was my job at the time, you know, the design educator. I was trying to make content about the Framer product, but technical content that appealed to designers, because at the time Framer was much more technical, much more code-y. And I couldn't figure it out. Tutorial videos? No one would watch the tutorial videos. Blog posts? No one would read the blog posts. No one actually really wanted to learn. Research, yeah, I think that was part of it.
Rid
There's a reason for that still.
Steve Ruiz
I was like, why couldn't I just make good content? I know that audience exists. I know that when I went to Framer meetups in London, there'd be like 200 people there, and I'd say, how many people have used Framer before? And, like, four people would raise their hands. There was just an interest in this type of content, and a very aspirational interest as well. How do you make content that people with those types of interests would care about? And it felt like this arrows stuff was getting closer. People didn't have to read any code. They could see the complexity of the problem and see the decisions that I was making. And I could talk about that really easily in a short Twitter thread, you know, with GIFs. That was great content. So I wanted to keep going. The next little moment, or vignette, or chapter of the road to tldraw: Play was a mobile app, and I wanted to record some videos about Play, like how to use it. I could connect my phone to my computer and put that into OBS, so you could see my screen. But what you couldn't see, and what I had really relied on when I was making tutorial videos for Framer, was that on the desktop you have a cursor. When you're sharing your screen, the cursor kind of is your hand. You're like, look over here, or here. You're guiding the eye of the viewer using this cursor. In this world, I couldn't do that, because my computer was connected to my phone, and I didn't have a camera on my finger, and I couldn't touch the screen, because if I touched the screen, that was actual input on the system, right? And I'm like, you know what I really want to be able to do? Just draw on top of my screen.
Kind of like an American football game, where you're like, this guy here is going to run over to here, you know, doing these big, fat yellow arrows. And so I did it, and it was really cool. And by the way, I learned a lot about drawing on top of your screen. It's called a telestrator, this thing that was invented in the '60s to draw on top of your screen. I'm like, all right, I'm going to make a telestrator. And I did. And it was really nice. It was an Electron app that sat in front of your screen, but it was transparent, and it would allow events to pass through. But if you flipped a switch, or did a keyboard shortcut, then it would start capturing events and using them to run this drawing tool, and the drawings would fade away. And it was exactly what I wanted. It was really cool. And I used it, and it was great. Then I was like, well, you know, this is cool, but the stylus that I have, it was a Wacom, a digital painting stylus, kind of an expensive, really nice one from the art days. And it could do pressure. I knew that it could do pressure because I could use it in Photoshop and stuff. So I'm like, how do I get that into my little telestrator app? I found out, okay, this is how you get the pressure off of a pointer event. That's really easy. But how do you make a line that gets bigger and smaller based on the pressure? No answer. Dug a little deeper, because I'm like, I'm sure I've seen this before. They're like, oh yeah, there's a signature thing for React Native. How does it work? All right: it works terribly. It's like the worst. I mean, it was cool, it was a good, ambitious project, but I would never have shipped this.
This was clearly not the answer to the problem; there had to be a better thing, right? It works by making lots of little tiny lines and varying the width of each one. Because in the browser, in SVG, you don't have a primitive for a line whose width changes as the line moves along. So you have to fake it. And you can fake it with line segments: wider segment, narrower segment, narrower segment. Or you can do the thing that Photoshop does, which is called a dab brush, where you take a single shape, let's say a circle, and you repeat that same shape lots and lots and lots of times, getting smaller or bigger depending on the pressure. And because the shapes are so close together, you end up with what looks like a single continuous shape. The only problem with both of those approaches was that they only worked for raster images. They only worked in pixels. And I was like, well, I want to do this in SVG. I don't want little shapes and lines that don't look good anyway. I want a polygon: points that wrap around whatever points I drew, and that constitute a vector shape. Turns out no one had really figured that out, and I took it upon myself to figure it out. I started this because I had just watched a video about how you do racetracks. If you were programming a retro video game and you wanted to have a racetrack, you take all the middle points and you go out left and you go out right, and that's how you make the racetrack.
But I'm like, you know, if you just went further to the sides, further left and further right, when there was more pressure, and then less to the left and less to the right when there was less pressure, that would create a line that got thicker and thinner, right? I could do that. I could figure that out. Let me jump into it. And that turned out to be really hard.
Rid
I knew that was about to be the next thing you said.
Steve Ruiz
I'm posting about this on Twitter. I'm sharing these little demos and debug-mode views of the skeletons of these polygons. I'm getting people in my DMs, or in the replies, being like, oh yeah, I worked on this for my PhD, and my whole PhD was just about the corners. And I'm like, oh, shit. I'm really into it now, and I'm way over my head. I'd be having calls and talking to folks who were like, no, you need to learn linear algebra, buddy. If you want to do this right, you're going to need to figure this stuff out. So: vectors, 2D vectors, okay, normals, magnitudes, all right, got it. Because what I wanted to do was make the ink. So I'm like, whatever, I've got to learn it to make the ink. I'll figure that out. And then, again, just obsessively working this problem for months, on Twitter, in public, to the point where folks were just like, dude, you gotta call it, you know? And I'd be like, no, no, no, there's still so much other stuff to try and to learn. I haven't even scratched the surface.
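The racetrack trick he's describing, offsetting perpendicular to the stroke direction by an amount scaled by pressure, is enough to sketch a variable-width outline (a simplified sketch; the smoothing, caps, and corner cases that ate whole PhDs are ignored here):

```python
import math

def stroke_outline(points, base_width=10.0):
    """Turn a list of (x, y, pressure) samples into a closed polygon.

    For each sample, step perpendicular to the stroke direction, left and
    right, by an amount scaled by pressure. Left offsets walked forward plus
    right offsets walked backward form the outline. (Simplified sketch:
    no smoothing, caps, or corner handling.)
    """
    left, right = [], []
    for i, (x, y, pressure) in enumerate(points):
        # Direction of travel, estimated from the neighboring samples.
        x0, y0, _ = points[max(i - 1, 0)]
        x1, y1, _ = points[min(i + 1, len(points) - 1)]
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy) or 1.0
        # Unit normal (perpendicular) to the direction of travel.
        nx, ny = -dy / length, dx / length
        r = base_width * pressure / 2
        left.append((x + nx * r, y + ny * r))
        right.append((x - nx * r, y - ny * r))
    return left + right[::-1]

# A horizontal stroke that presses harder in the middle:
samples = [(0, 0, 0.2), (10, 0, 1.0), (20, 0, 0.2)]
outline = stroke_outline(samples)
print(outline)
```

The result is a single vector polygon, exactly the kind of shape SVG can fill, which is why this approach beats both the segment trick and the dab brush once you leave pixel land.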
Rid
I'm sitting here with a smile on my face, because I feel like I just got to know you a little bit. One: you have this appetite for absurdly hard problems; most people would never go down that rabbit hole. You also very clearly are a fast learner, but then you have this obsession with the details, and a willingness to tease apart every little piece from first principles in order to figure out the best way to build something. So take all that experience, and then there's also this interest in teaching and educating and explaining ideas, and you're able to recognize that that's also, in many ways, probably the best way to learn something. So if we combine all of that: you've spent years in tool design, like really, really deep in tool design. If you were teaching some kind of upcoming content series on tool design, what are some of the main principles that you would hit, in order to help a designer listening really go deep into that world and understand the types of things that you have to think through?
Steve Ruiz
I think of all tools as decision-making tools. A color picker is a great tool, right? It's not the only way of representing color; you could pick a color using hex codes or RGB values manually. It's just that that's bad. It's not a good way of making those decisions. You're not dialing anything in. You're not getting that really quick feedback on one decision or another and being able to compare things. A good tool will allow you to do that: to very safely make a change and go back, to have that safety net of, I'm not destroying anything, I'm not screwing anything up while I'm working on this. Which is why I think design systems help, anyway; you need that safety in order to make those decisions. You also need to be able to compare options, and you need to be able to make those options. It's a lot of dialing in. It's a lot of balance between precision, being able to do things precisely, and a lack of precision in other ways, in order to allow you to make a breadth of decisions. In other words, if the app only lets you adjust the gaps, but gives you really good control over the gaps, that's great, but that's not the only thing that I need to do here. And if you have no choice but to make every decision a precision decision, then the speed of operating becomes really slow, or even too much to hold in your head at the same time, because you're having to make too many decisions, even on things that aren't important. Whereas a good tool will allow you to focus on the decision that you're making, or the work that you're doing, which may or may not be precise, without burdening you with having to make all those other creative decisions.
Rid
You obviously have an eye for what good tools are, and you can see all of the details and the thousands of micro-decisions. So as you're using these different canvas-based tools, I'm sure you get to the point semi-frequently where you're just like, well, that could be improved, or, I could see a way to do that a little bit differently. So I want to dig into this tension: how do you wrestle with, on one hand, the desire to innovate and improve, while simultaneously capitalizing on the familiarity that exists, and, you know, the real value in matching something one-to-one, even if you can see a better way to do it? How do you think about that tension as a tool designer, and someone that's working on that foundational layer like tldraw?
Steve Ruiz
The reason why this can be a commodity product, why we're an open source product, is because the canvas is kind of a known thing. It is a commodity. Or at least that's the theory: when you use a canvas product, you automatically bring with you tons and tons of affordances that you've picked up by using Figma, or Miro, or any of these other tools. And you expect it just to work the way that it's supposed to. You don't bring those same expectations to every piece of software; the tool is a known thing, right? But the known thing for a canvas is very complicated. The example I use for another one of these really well commodified and yet super complex things is a text editor. If you're typing into a text editor and you pause, then keep typing, and then you say, ah, that's not right, and you hit undo: if the thing that was undone was just the last character that you typed, and hitting undo again went just one character back, you'd be like, this thing is broken, right? Because it should go back to where you paused. That's the convention, a very strong convention around undo and redo in text editors. If it doesn't work that way, it's not, oh well, this text editor just works a little different, I've got to press this a lot of times to get back to where I want to go. It's: I cannot use this. That's, like, IRS-website-level terror. It's just completely unusable. Even if the rest of it all works, if you get one of those really important conventionalized features wrong, it's like, well, this is not a text editor, is it? The canvas has the same sort of things that should happen.
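The undo convention he's describing is usually implemented by coalescing: keystrokes that arrive within a pause threshold of each other collapse into one undo entry, so a single undo removes the whole run of typing (a minimal sketch of the idea, with an assumed one-second threshold, not any particular editor's implementation):

```python
PAUSE_SECONDS = 1.0  # gap that starts a new undo entry (assumed threshold)

class UndoHistory:
    """Group keystrokes into undo entries separated by pauses in typing."""

    def __init__(self):
        self.entries = []     # each entry is one undoable run of text
        self.last_time = None

    def type(self, text, at):
        # Coalesce with the previous entry unless the user paused.
        if self.entries and self.last_time is not None \
                and at - self.last_time < PAUSE_SECONDS:
            self.entries[-1] += text
        else:
            self.entries.append(text)
        self.last_time = at

    def undo(self):
        # One undo removes the whole most recent run, not one character.
        return self.entries.pop() if self.entries else None

h = UndoHistory()
for i, ch in enumerate("hello"):
    h.type(ch, at=i * 0.1)   # rapid typing: coalesces into one entry
h.type("w", at=5.0)          # pause, then a new run begins
h.type("orld", at=5.1)
print(h.entries)             # ['hello', 'world']
print(h.undo())              # removes 'world' in one step
```

Get this grouping wrong, one character per undo, and you hit exactly the "this is not a text editor" reaction he describes.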
Like, if you pinch on the canvas, the thing should zoom in or zoom out, but it shouldn't zoom toward the center of the screen. It should zoom in to where you're pinching, and if you're zooming out, it should zoom out from there. That should be the origin of the transformation of the canvas. And if it doesn't work like that, it feels broken. It feels like, oh, this is a tech demo, this is not a real thing. If you select a bunch of boxes and you rotate them, they should all rotate together. That does put me, as a toolmaker, in a spot of having to recognize which of the thousands and thousands of features inside of something like a canvas needs to be the same way every time, and then also identifying, okay, which ones are actually different here? Like, how does alignment or justification work? That is different between these different apps, and is there a reason for that? I do notice that a lot of our decisions with tldraw, or a lot of my decisions with tldraw, are going to be increasingly influential on what those norms are. Because at some point someone's going to say, okay, well, what does ClickUp do in this case? All right, cool. What does inflight do in this case? What does Autodesk do in these cases? And, oh, look, they all do it the same way.
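The pinch behavior described above, where the point under your fingers stays fixed as you zoom, comes down to a small piece of camera math. Here is a minimal sketch, assuming a tldraw-style convention where a page coordinate is screen / zoom minus camera; the function names are illustrative.

```typescript
// Zoom about the pointer, not the screen center: the page point under the
// pointer must stay under the pointer after the zoom changes.
type Camera = { x: number; y: number; z: number };

function screenToPage(c: Camera, sx: number, sy: number) {
  return { x: sx / c.z - c.x, y: sy / c.z - c.y };
}

function zoomAround(c: Camera, sx: number, sy: number, factor: number): Camera {
  const p = screenToPage(c, sx, sy); // page point under the pointer
  const z = c.z * factor;
  // Solve for the camera that keeps p under (sx, sy) at the new zoom.
  return { x: sx / z - p.x, y: sy / z - p.y, z };
}
```

If instead you only scaled `z` and left `x` and `y` alone, the transformation origin would be the screen's top-left corner, producing exactly the "this feels broken" drift Steve describes.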
Rid
Hey, really quickly, let me tell you about the all-new Dive Talent Network. I've hand-assembled over 100 of the most talented designers and builders that I know, so I can recommend them to my favorite companies. So if you're listening to this and you're open to new opportunities, the Talent Network is anonymous and super low pressure. It's just an easy way to see what's out there without having to post on social media. So if you're interested in joining, or maybe you're looking for your next hire, head to Dive Club Talent.
Interviewer
It's almost like you hold up a magnifying glass that the vast majority of people do not have access to. Like, you're seeing things at a level of detail that is unique. I want to zoom out for a second, because there's an interesting trend that even I'm seeing as I do more of these demos, where the canvas as a paradigm for this next era of tooling is quite clear. Like, there is an uptake. It's a good time to be in the canvas SDK business, apparently. But you said something last time we talked that was pretty interesting. You talked about how most of today's apps are still conservative uses of the technology. Can you unpack that a little bit? I ultimately want to help people get inside your brain a little, where you're imagining where this could go, and stretch people's ability to envision the future.
Steve Ruiz
I think within just the category of design tools, there's a lot more that you can do. I think you're doing some of this as well: saying, this is an environment for creating production assets, or this is an environment for creating websites or applications, or this is an environment where the design is executable in some way, kind of like tldraw computer or some of the workflow tools that we're seeing. Or this is a design that maps back to a real physical process. I've even seen tools that use the canvas in pharmaceuticals, for example, where you have these really complex workflows, either for research or production, that need to be designed, but you can only design the parts that haven't run yet. Time is moving along the workflow, and everything behind that point you can't do anything about anymore, because it already happened. But everything ahead of it you can change, and you can learn from what already happened in order to change it. This is fascinating, right? The easiest way to do that, given the complexity and the relations between all these different things, is to do it visually, in a place where you can zoom in and out and move things around. I think workflows themselves are still a really popular, and still expanding, category of what you can do, and I consider workflows a kind of subset of the canvas, within the whiteboarding category that's pretty well known. But then you start going into these verticals, right? Take Miro, for example: people use it for UX research, for team meetings, for education, for remote off-sites, for onboarding people, for so many different use cases. I think that's a strength for Miro, that it can accommodate all those things. But it's also a little bit of a bad fit for all of them.
You know, like you say, what would Miro look like if the only thing it did was facilitate team onboarding events, or remote off-sites, or something like that? What would the feature set be? It would probably be a lot smaller, number one, but it would also probably have things in there that are unique. The way that you represented people on the canvas might be different, or might be richer. I don't know, because I'm not building that product. But what I've seen is that there's interest in developing those different verticals much deeper, in ways that eventually don't really look like whiteboards. They look like things like Padlet, where, God, they're essentially rebuilding HyperCard, because they wanted to give these presentation tools to teachers to build environments for their students to explore. I've seen classroom tools, with even some of the AI stuff coming in, where content is auto-generated on the canvas in response to the questions from the students and the research that they're doing. It's just a visual way of presenting that information. Again, same canvas. I think the multiplayer canvas experience is so good and so underexplored. We've been building a lot of starter kits, and one of the starter kits that I want to create next is just a board game. I want to have dice on the canvas, I want to have a deck of cards on the canvas, and I want to see what people build with that, because it's a multiplayer, real-time canvas. And now I have dice in my hand: all right, let's play some D&D, let's play some poker. You have all the primitives there. I need to select, I need to move, I need to drag, I need to activate. I have collaborators, and they have their different spots and drag-and-drop areas, all that stuff.
So many entire product categories can fit into the canvas, but the ones that we're familiar with are, again, like a really early generation.
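The pharmaceutical workflow idea above, where time moves through the graph and only steps that haven't run yet can still be designed, can be sketched as a tiny edit rule. This is a toy illustration; the step shape and field names are assumptions, not any real product's model.

```typescript
// A workflow where executed steps are frozen: edits are accepted only on
// steps that haven't run yet, so history stays immutable.
type Step = { id: string; ran: boolean; params: Record<string, number> };

function editStep(
  steps: Step[],
  id: string,
  params: Record<string, number>
): boolean {
  const step = steps.find((s) => s.id === id);
  if (!step || step.ran) return false; // the past can't be changed
  Object.assign(step.params, params); // the future is still designable
  return true;
}
```

On a canvas, the same rule becomes visual: everything behind the "now" line renders as locked, everything ahead stays draggable and editable.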
Interviewer
Early generation. Even just seeing like we're recording this a week after Figma announced the Wii V acquisition, like, yeah, just that is going to introduce so many designers to a different way of thinking about what a canvas can do. And these change workflows and we were talking about this before we hit record. Like people come to me with demos and I'm seeing a lot of canvas based demos. It's starting to feel clear to me that a lot of the coding that I'm going to do might be rooted in the canvas where I'm seeing different versions of my local host side by side and comparing and contrasting and creating branches and everything exists on a canvas and it's like, whoa. I'm just more of this future feels like it's going to exist on the canvas in a way that.
Steve Ruiz
It's pretty exciting. To go back to what makes a good tool: code is great, but it's really not good at that ideation stage. It's really not good at making decisions and comparing things together. There are ways to do it, and even some of the AI tools that are out right now, like Cursor with multiple agents working on multiple worktrees and stuff, let you switch between them. But it's all very clunky still. Not for lack of genius in the products or anything, it's just not the way that you compare things. It's not the way that you tweak and dial things in. It's not an accident why people really like canvas tools within the design space. This idea of having an infinite space in order to compare, and branch, and go deeper and deeper, and rewind, and all that stuff: it's way easier to do on the canvas than in a git history or something like that. Not impossible, but prohibitively hard. If you're going to do it, you're going to do it a fraction of the number of times you could do it in an environment that was geared toward that number of iterations. That's such a big thing: how many different versions of this thing can I whip out before I decide to make a decision?
Interviewer
Can we talk a little bit about your experiments? You're always posting these little things on Twitter. I've seen some fairies floating around recently.
Steve Ruiz
Right now I'm in the Marriott in Lisbon, where I just presented fairies at the Lisbon AI conference. And let me tell you, it went really well.
Interviewer
Nice.
Steve Ruiz
I'll talk to you about it. We sell the SDK, we license it to other companies. Our value proposition is that you can use tldraw to build cool things, and very different types of things. So it's kind of incumbent on me to build cool things and very different types of things. I also learned, during those days before tldraw, that the best way to market a tool like this is to just build with it. You build interesting things with it, and you develop it a little bit in public. And unanswered questions, things that are obviously not done, or maybe even not doable, are the most interesting to an audience of builders. You want to follow along as someone's doing something, and you also want to wonder about how you would do it. I think that's true for me, and I think it's just true for most developers and designers. And so, yeah, we've built a lot of things with tldraw, but the things that really went viral, that came out of nowhere, were basically tldraw plus AI in various ways. I'll speed-run this, because that's a whole different podcast. We did Make Real, where you could draw a website, or draw something, and then select it and click this button called Make Real, and it would create a website and put it on the canvas, because our canvas can do that. And it would be whatever you drew. We would take that screenshot, send it to an AI, and say, hey, you're a 4,000-year-old senior web developer who loves their designers and wants them to be happy, and your designers, who you love, just gave you this low-fidelity wireframe and asked for a working prototype. Can you build that? And it would do it. This was 2023; the vision models had just come out. And it was actually a designer named Sawyer Hood at Figma. Sawyer's awesome, by the way.
Sawyer Hood builds all sorts of cool things. He had basically prototyped that using tldraw, and it started going viral. And then we took it and we ran with it. We were like, all right, let's put this back on the canvas instead of in some modal window.
Interviewer
I remember seeing that tweet, scrolling and seeing it for the first time. And I was like, oh my God.
Steve Ruiz
It took over Twitter for a couple of years, because we kept discovering new things. We discovered that if you drew on top of the website, and then we sent that as the next prompt, even just with an empty white box, and said, hey, this white box contains the previous website that you made, here's the code you made last time, make a new iteration using whatever the user has selected. And if you had just crossed out an icon, somehow it would connect the dots: oh yeah, that's in the top right corner, it must be crossing out the menu icon; all right, in my next iteration, I'll just get rid of the menu icon. And then you'd have the next iteration, the next working prototype, which you had created, again, just by drawing. And then people were like, oh, you could just put screenshots next to it and say, hey, make this look like stripe.com, with a screenshot of stripe.com. Or, here's a picture of a certain icon, make this icon. And just next prompt, and next prompt, and next prompt. You could give it figures like you'd see in UX documentation: this is exactly how this rotational dial should work, or whatever. And it would be like, all right, I can do that, I can do that, I can do that. It was fascinating. We were learning this in real time. This was pre-vibe-coding apps. For a lot of people, this was their first time making something that works, producing an artifact of software in any shape. But this madness: there were millions and millions of views on these tweets, and engagement, and my phone was ringing off the hook, and we were all getting RSI from reading Twitter. It was nine solid days of virality, just based on this: just draw a website.
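The loop described above, where each round sends a screenshot of the canvas plus the previously generated code, can be sketched as building a vision-model message. The message shape here follows common vision-chat APIs; the field names and prompt wording are assumptions for illustration, not tldraw's actual implementation.

```typescript
// Make Real-style prompt building: screenshot in, code out, and on later
// rounds the screenshot carries the user's annotations over the old result.
type Part =
  | { type: "text"; text: string }
  | { type: "image"; dataUrl: string };

function buildMakeRealMessage(screenshotDataUrl: string, previousHtml?: string) {
  const parts: Part[] = [
    {
      type: "text",
      text: "You are an expert web developer. Turn this low-fidelity wireframe into a single-file working prototype.",
    },
    { type: "image", dataUrl: screenshotDataUrl },
  ];
  if (previousHtml) {
    // The annotated screenshot plus the old code is what lets the model
    // connect the dots (e.g. a crossed-out icon means: remove that icon).
    parts.push({
      type: "text",
      text:
        "The screenshot shows your previous result with the user's annotations drawn on top. Previous code:\n" +
        previousHtml,
    });
  }
  return { role: "user", parts };
}
```

The design choice worth noting: nothing here tells the model which pixels are annotations. The vision model infers that from the screenshot alone, which is exactly what made the "cross out the menu icon" behavior feel magical.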
After that, some things got a lot easier. Fundraising got a lot easier. But also it was like, well, this is the perfect use case for this canvas. If I hadn't already been building tldraw, once the vision models dropped, I would have started building tldraw. In order to take advantage of this wonderful technology, you're going to need a really good hackable canvas, just to work with images, because this is a model that can understand images, and you're going to need to create those visual prompts beforehand and modify them and all that stuff. But it just so happened that we had the thing, and we'd been building it for years, and it was ready to go. It was a wonderful coincidence. And we started doing more of these AI demos. We did one where you would be drawing, and we used a real-time image generator to draw in real time alongside you. I don't know if you ever saw that one.
Interviewer
Oh, I definitely saw that.
Steve Ruiz
Yeah, yeah. We learned a lot from that one, which is that this stuff is expensive, and we shouldn't spend all of our investment money on demos.
Interviewer
Fun little stuff. So you got lucky on the generative stuff. What are the chances that, in the same way we now recognize that a canvas is obvious for generation, we look at more of these agentic workflows as, oh, of course that happens on the canvas? Given that you're probably not trying to just get lucky 100% of the time, what are some of the ways that you're trying to explore, or push that forward, or capitalize on what appears to be the next big jump?
Steve Ruiz
The conversation that we had at the beginning of 2024 was: where do we think this is going to go? ChatGPT had come out, the image models had come out. Everyone's like, all right, this is going to be the defining technology of our generation. What's our role in this future? What's the role of the canvas? ChatGPT was really blowing people's minds, and we were all thinking about it constantly. We were like, well, chat works because chat works for people. I can chat with my friends, I can chat with my wife and my family. Chat works really well for people; it just also works well for AI. The canvas works really well for people. But could it also just work for AI? Is that going to be a reason why the canvas becomes more popular: because it's already good at collaboration, it's just a good environment for collaboration? We'd spent years already unpicking what is unique and good about the canvas, but this really made us step back and think about why it's actually good for collaboration. What can we do in the canvas that we can't do in a chat? Things like working simultaneously, but in a way where you don't see what other people are making. You know it's over there, and you can come over and look at it and then come back to your own stuff. It's happening, but it's not in your view or in your feed; it's not distracting. You work in parallel, even though you're working in the same document. That's very hard to do, by the way, in Microsoft Word or a Google Sheet: that type of parallel work in the same document. Very tough. And you could do it in a canvas. You could also kind of tell what people were doing based on where their cursors were.
You could also see who's working together based on the clustering of collaborators. You could leave notes that people pick up asynchronously through comments. And as we thought about it more: you could always chat, you could layer in video very easily, or audio. These other modalities could plug in, in a way that makes more sense as a real-time thing than it would in a chat. So we developed this theory, this thesis: the canvas just might be a good place for intelligences to work together. Suddenly you have something where you're like, here's a screenshot of a whole bunch of noisy sticky notes, everything that happened during my weekly planning session; make me an outline of that structure, put it into the database or something. And that had never been possible until 2023, 2024. So the thing that we realized we would really need is this: I want to collaborate with AIs on the canvas. I want little virtual collaborators. I want some of the cursors on the canvas to be people, and some of the cursors to be AIs. And I might want to have my own little private AI assistants, and some of them might just belong to the board, or be other things. And they should be able to work with me, and see what I see, and make the same things that I can make, but be AI. In the same way that I can chat with an AI, or have a member of my Slack that's an AI. That was the vision. Okay, that's what we're going to do. That's the thing that tldraw will be able to facilitate, because we have all the bits and pieces there. Is it possible today? We looked into it. We prototyped, and it was not possible. We could prototype, but the bots were just bad. They were bad.
They would play chess or checkers with you, but they would draw hexes in the wrong spot and not know it. You'd ask them, hey, did you draw that in the right spot? And they'd say, no, I drew that in the wrong spot. I can see that I drew that in the wrong spot. Let me draw it in the right spot. And it would be way over on the other side or something. They couldn't figure out the coordinate systems, all that stuff. We tried to limit it, and teach it, and figure out how to prompt in such a way that it got the right information, but it was just bad. So we're like, all right, they can't do this; it's not like we're going to ship a feature with this. If you remember, there was a company called Diagram, Jordan Singer's company, that ended up selling to Figma, and part of their proposition was: okay, we're going to have these virtual collaborators. They ran into the same thing really quickly, which is, oh, it just doesn't work the way that we thought. There are a lot of things models are great at, and we were in this unique position of having a fairly lo-fi, kind of goofy, creative-looking canvas anyway. So even though this sucks, let's still share it. Let's still build autocomplete, and start tweeting about it and sharing it. Let's try to build some of these agentic prompts for generating content on the canvas. It could be shitty, but it was also going to be amazing. We were all totally impressed by it internally, but it was clearly not enterprise software. It was clearly not anywhere near the maturity where we could stand behind it and say this is as much software as the rest of the app. It was just hilarious, and entertaining, and amazing. And you could see it pointing toward the future. And it was funny. It was really funny.
We did autocomplete over that summer, with Lu Wilson mainly working on it and Ryan Reed also working on it, and we made some videos about that. But it was just too bad to ship, because it would do things like this: we would tell it, here are the last three things that the user did, and ask it for the next three things the user will do. You draw a circle, and it would be like, all right, drew a circle, I don't really know what he's doing here, but let me keep watching. Then I'd draw a smaller circle inside the circle, kind of off to the left, and the model would be like, I know what he's doing, he's drawing a face. I'm going to draw another circle off to the right. And I'd be like, that is right! And I'd hit tab and accept it. And then that would be sent to the model: the user drew a circle, drew a circle, drew a circle. And the model's like, oh, I get it, the user is drawing a line of circles. Let me draw another one. So it would be eye, eye, and then another eye over here, and then, oh, now I really know, he's drawing circles, let's just go circles forever. Or we'd draw an arm, and it would draw this part of the arm, and then this part of the arm, and then the hand, and then the fingers. And then it's like, oh, I see, it's a tree being expressed, where one becomes five; now each one of these should become five. Add fingers on your fingers. It was really kind of body horror. But autocomplete was awesome. We eventually took that code, which involved turning the canvas into text and sending that text to the model, along with the image and other information, like where the user's camera was, and adapted it into an app where you had a box. You could put that box anywhere, and you could make as many of these boxes as you want.
But the box had a text input. You could just type into it: draw me a cat, make me a diagram, do whatever you want. And it would generate that within the little work area you defined for it. It was a very quiet launch; we launched it as teach.tldraw.com, and you can still go there and play with it. It's gotten better because the models underneath have gotten better, just like Make Real has gotten better as we've updated the models. But it was mostly a demo of: hey, this is possible. A demo of getting a model to do the pelican-riding-a-bicycle type of drawing thing, but in tldraw. And it was very cool for it to be generating not an image, which models are much better at, but stuff: the same stuff that I can make in tldraw, the same primitives. The demo that I always do is have it draw a cat, and then I draw a tall rectangle next to it with a little yellow rectangle on top, and I say, make the cat blow out the candle. And again, the thing that it sees is a screenshot and a bunch of XML that describes the different shapes on the canvas. It doesn't get "here's the candle, and it's at these coordinates" or anything like that. But sure enough, it'll make little blue lines come out of the mouth of the cat, and delete the yellow little rectangle, and probably draw some smoke, and be like, I did it, the cat's blowing out the candle. And it's extremely shitty but amazing. No one's losing their job over this thing knowing how to use the canvas. The cat's kind of a bad drawing of a cat, and the smoke is a bad drawing of smoke. It's not great. It's not even good. But it's like, wow.
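The "turning the canvas into text" step above can be sketched as a simple serializer: each shape becomes one line of XML-ish text that the model reads alongside the screenshot. The tag and attribute names here are illustrative assumptions, not tldraw's actual serialization format.

```typescript
// Serialize canvas shapes into a textual scene description a model can read
// next to the screenshot (the candle demo: rectangles it must reason about).
type Shape = {
  id: string;
  type: string; // e.g. "rectangle", "ellipse", "draw"
  x: number;
  y: number;
  w: number;
  h: number;
  text?: string;
};

function shapesToXml(shapes: Shape[]): string {
  return shapes
    .map((s) => {
      const label = s.text ? ` text="${s.text}"` : "";
      return `<${s.type} id="${s.id}" x="${s.x}" y="${s.y}" w="${s.w}" h="${s.h}"${label}/>`;
    })
    .join("\n");
}
```

Note what this deliberately leaves out: there is no semantic tag saying "this is a candle". The model has to infer that from the screenshot plus the raw geometry, which is exactly why the results were shaky but impressive.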
There's something really interesting here. I thought a lot about how we frame this in a way that is narratively appropriate to the skill level of these AIs. Because if I just said, all right, these are your virtual collaborators, well, they're just not good. They do crazy stuff, things no human would do. They're extremely not people. So framing them as virtual collaborators seemed wrong. So I was like, all right, maybe ghosts, something you're kind of invoking, like spirits. Somewhere I still have the little drawings I was sketching as I was thinking through all this. And then I'm like, they could be little bugs, or they could be fairies. They're small, right? They kind of fit the cursor size. They're not people, definitely not people, but they're kind of humanoid, and they're not bonded to you. Turns out that there are a lot of different types of fairies, by the way. There are Scandinavian fairies, which are kind of okay, but not very friendly. There are Irish fairies, which are terrifying. Don't read about Irish fairies. Bad. Steal your kids, come down your chimney, that type of thing. And then there are English fairies, which are just kind of charming. They're the Tinkerbell type of thing that we're used to from Disney lore. So I'm like, all right, they're definitely English fairies. Let's make sure they're English fairies. That's also appropriate, since we're in London. And they have these powers, and they have accessories, and, oh great, let's go deeper on this. As we got deeper into it, a lot of the problems of dealing with agents started teaching me why the canvas is such a good place for collaboration to begin with.
One of the biggest problems, if you've ever tried to vibe code with five agents at the same time, is that it's very hard to remember which agent is doing what, which agent is which, and what context they have. Even just telling them apart: which terminal was that one in? I don't know. Again, I don't know if you've ever done this, but you can imagine a screen full of chats that are just running as these things are coding, and you're like, all right, that one's doing the authentication, this one's setting up the project description, this one is doing this and this. So there's a which-one-is-doing-what problem, and a telling-the-difference-between-them problem. And that actually works really well on the canvas, because they just look different: give them different hats, different wing patterns, different clothes and colors. Then there's the what-is-the-state-of-all-my-agents problem, even if it's just one agent. Is it waiting for me? Is it thinking? Is it working? I want all of my agents to be totally maxed out and running at all times. I don't want anyone to be waiting if I'm really trying to push things, and the system should make that easy. So that's another part of the state that we can represent visually. Then there's what are they working on? Is this one writing this thing, or making wireframes, or something else? The canvas makes that very easy, for the same reason it's good for people: you just see where the fairy is, where the little cursor is. And then there's the complication of addressing tasks to multiple agents at the same time, of orchestrating these types of systems. This is what we've been doing very recently; this is what I was just demoing here in Lisbon. I want to give a task that is more than any one of these agents should work on at the same time.
And then I want the agents, the fairies, to self-organize around solving that task. Say, hey, I want you to create wireframes for my app, but I also need you to write the PRD based on these inputs, and then I want you to make a chart of how those wireframes interact with each other, like a user flow. And the fairy will start flapping, which represents working. It'll think, it'll do one of those little character-on-the-canvas things, tap its chin, and be like, well, that's too much work for me, let me summon some other fairies to assist me. And then the fairy that you talked to switches into an orchestrator mode, where it is now creating tasks and assigning tasks to the other agents in the system to go do those things, and also checking in on the progress of all of them, and assigning more tasks as the others finish. The rules needed to enable that are relatively small, but suddenly this crazy emergent behavior just happens on the canvas. In the talk that I gave, I had eight agents all working at the same time. It was kind of chaos, but it was awesome. It was well beyond what I'd ever done in any other paradigm. And the parts that were bad were still things like, yeah, the model doesn't know how big text is, or doesn't reliably know not to put things on top of other things and to spread stuff out. The parts that were good were that I knew what everyone was doing, I could tell the difference between all of them, and I could know when things were going wrong, and be able to say, oh, no, no, no, stop building that, that's just wrong. Bad fairy. All those types of things. So the thesis isn't very creative. It's just: well, it's good for people, so it'll be good for AI too.
Interviewer
And it's cool to see. Again, you just think through every little detail from first principles, even down to the posture of the fairies as a signal for what they're doing. There are so many micro-decisions in that more agentic foundation. I'm appreciating listening to you talk through your thought process.
Steve Ruiz
I mean, I'm very lucky to work with some other really creative people. Max Drake and Lu Wilson, mainly, on this, and also Mima Cavalo. I've built that team around me so it's not just Steve and his ideas. And we do have some of the craziest conversations when working through these features. Like, all right, what if there was a pond on the canvas? An enchanted pond. And it's like, yeah, yeah, because you'd want to have a fairy that kind of runs the pond, and if anything enters the pond, it should operate on that; that should be the prompt to this agent. In other words, establishing a domain, a folder or whatever, and having an AI agent that essentially manages the contents of that folder: if anything enters this folder, I want you to run this workflow based on what we've agreed. Makes total sense. Okay. Yeah, it's a pond, you know.
Interviewer
Enchanted pond. Naturally. It can't be anything else than.
Steve Ruiz
Enchanted pond. The metaphors actually work. They work in both directions. Some things will start from kind of the technical side, the, you know, the AI world. Okay, how are we going to do MCP? Be like, all right, well, maybe the fairy is, like, kind of warging into Notion, and, you know, it's like its eyes roll back in its head as it's accessing this information, because it's not going to be able to do anything else. Maybe it whispers to a butterfly and the butterfly flies off, I don't know. We need some sort of metaphor for this: accessing information from a different source that's not accessible to the user directly. And then some of it kind of goes in the other direction, of saying, like, yeah, fairies have wands, you know. What do we do? Can we do anything with that? Like, you know, I'm reading PDFs about fairy folklore, and I'm like, yeah, a lot of fairies, like, leave gifts for people. There's a lot of gift leaving and gift giving in fairy lore. How would we do that? And be like, oh, you know, you could kind of leave things for the fairy. Oh, yeah? What would you leave? You know, maybe you leave, like, extra context: if you're working in this area, make sure that you read this first. Which is something that we do all the time in, like, agentic coding, with little context files or agents files or, you know, please-read-me-before-working-on-my-tests type of files. You're like, yeah, we could do that on the canvas. Little scrolls or little notes or little letters or something like that. And they could leave them for each other. And it was like, oh, this is great, right? So having that layer of metaphor is actually really great for coming up with ideas. It has been for.
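The "gifts" idea maps onto a familiar mechanism from agentic coding: context files (AGENTS.md and similar) that an agent reads before working in an area. A tiny sketch of that lookup, with invented names (`Note`, `buildPrompt`) standing in for whatever the canvas version would actually use:

```typescript
// "Scrolls left for the fairy": notes attached to an area of the canvas get
// prepended to the task prompt of any agent working in that area.
type Note = { area: string; text: string };

function buildPrompt(task: string, area: string, notes: Note[]): string {
  const context = notes
    .filter((n) => n.area === area) // only the notes left in this area
    .map((n) => n.text);
  return [...context, task].join("\n");
}

const notes: Note[] = [
  { area: "tests", text: "Read me before working on my tests." },
  { area: "docs", text: "Use British spelling here." },
];
console.log(buildPrompt("Fix the failing test", "tests", notes));
// Read me before working on my tests.
// Fix the failing test
```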
Interviewer
Us. Yeah. I remember talking to Mike Smith from Smith Addiction, and he talked about how once you hit on the metaphor, it just starts unraveling and it, like, accelerates ideation, because it starts to compound. And I can tell you put yourself right in the middle. You know, you gave yourself a nice little fairy lore foundation, which in itself, I think, is a microcosm of who you are as a builder. The fact that you went so deep into fairy lore. But you're at the point now where it's like, oh, you can't even keep up with how fast the ideas.
Steve Ruiz
Are coming. It's great. Our wireframe, our tldraw docs on this stuff. We post a lot on Twitter as we're kind of building these things. Max and Lou are also, like, great follows for this stuff, because we're just figuring it out as we go. But there's just so much that it's really hard to miss. And we are absolutely figuring this out. You know, we have to, like, remind ourselves that the thing that we're figuring out, like, how do you do human plus AI plus multiple AIs and multiple humans collaborating in a single document in real time? There are very few companies, products, anyone working on that problem, that we are in, like, massively uncharted territory, but also, like, some of the most ambitious stuff happening in software, right? So, like, if you wanted to tackle a problem, that's a big problem to tackle, and we just kind of launched ourselves into it. There are really interesting problems of, like, again, yeah, how do you handle orchestration, how do you handle waiting, turn taking, for example, or, like, following of the different kind of states that the AI is going to be in. It's actually very similar to, like, video game AI, where you have, you know, these little strategy games like StarCraft or CRPGs or something, which we look at a lot, which solved a lot of these problems, where you have these little entities that have their own little state machine of, like, I'm waiting, I'm acting, I'm moving, I'm, like, patrolling, all these things, and they kind of, like, move between those states based on an enemy approaching, and now they'll aggro and switch into a fighting mode or something like that. And we're doing the same thing. It's just that we're also using AI now to, like, control the behavior while they're in that state. But it's still a lot of, like, programmatic switching between things and setting up those, like, roles and modes for these fairies, which I think is something that you're going to see elsewhere.
I mean, it just, like, seems to be the way to solve this problem, the multi-agent orchestration problem. It would be the same in code. It just would be impossible to see, in a way. It would be impossible to manage, or at least very difficult. Hopefully they figure it out, because it's really cool to see them working together. And I could imagine having a fleet of AI coders who can talk to each other and email each other or whatever is going to be worth it as.
Interviewer
Well, same. It's becoming clear, and I just appreciate you letting us get in your brain a.
Rid
Little bit. Like, I feel like I.
Interviewer
Got to know you in the way that you think and process things a little bit over the last hour-plus, and I've thoroughly enjoyed it. So I appreciate your time, and I will look forward to more Twitter demos, because they're definitely my favorite parts of that little bird app right now. Before I let.
Rid
You go, I want to take just one minute to run you through my favorite products because I'm constantly asked what's in my stack. Framer is how I build websites. Genway is how I do research. Granola is how I take notes during crit. Jitter is how I animate my designs. Lovable is how I build my ideas in code. Mobin is how I find design inspiration. Paper is how I design like a creative. And Raycast is my shortcut every step of the way. Now, I've hand selected these companies so that I can do these episodes full time. So by far the number one way to support the show is to check them out. You can find the full list at Dive Club Partners.
Host: Ridd
Guest: Steve Ruiz (Founder, TLDraw)
Release Date: December 1, 2025
This episode features an in-depth conversation with Steve Ruiz, founder of TLDraw, a pioneering canvas SDK powering numerous startups at the intersection of design, real-time collaboration, and AI tooling. Host Ridd and Steve explore the evolution of the digital canvas, the nuances of tool design, the nature of collaborative design environments, and what the future holds for collaborative AI on canvases.
The discussion also delves deeply into Steve’s personal journey from fine arts to developer to founder, the creative and technical obsession with seemingly small problems (like drawing the perfect arrow), and practical theory on how the next generation of AI-first tools will operate visually.
“If you got in early with those tools, it really was a differentiating factor... you had to have an appetite for complexity back then.”
— Rid (06:00)
“[Drawing arrows] became the most complicated thing that I'd ever tried to do with code... It turned out to be highly subjective.”
— Steve Ruiz (13:26)
“A good tool will allow you to... very safely make a change and go back, have that safety net... You also need to just be able to compare options.”
— Steve Ruiz (25:11)
“If it doesn't work like that, it feels broken. It feels like, oh, this is like a tech demo. This is not a real thing.”
— Steve Ruiz (28:31)
“So many entire kind of product categories can fit into the canvas, but the ones that we're familiar with are... an early generation.”
— Steve Ruiz (34:54)
“For a lot of people, this was their first time making something that works, like producing an artifact of software in any kind, any shape. But this madness... there were like millions and millions of billions of views.”
— Steve Ruiz (40:25)
“They’re not people, they’re definitely not people, but they’re... kind of humanoid. They're not, like, bonded to you... They have these powers and they have accessories...”
— Steve Ruiz (46:58)
"One of the biggest problems... is that it's very hard to remember which agent is doing what... that actually works really well on the canvas because they just look different, you know, give them different hats, give them different wing patterns and whatever.”
— Steve Ruiz (54:28)
This episode is essential listening for anyone interested in tool-making, generative AI, collaborative environments, and the rapidly evolving role of the canvas in product and workflow design.