A
I think that's probably a good place to be right now as a designer: really understanding what's possible, trying to push the edges, even if it sucks and you get stuck. From that moment on, it just changed the whole way you work. Like, as soon as you realize you can't design half of this stuff in Figma, what you're really designing is the harness for the agent to do longer things and verify its own work. In the same way it would be crazy for a company that did issue tracking or note taking to not take a hard look at themselves in the mirror and try to figure out their role in the future, it would be crazy for designers today to not be doing the same thing. Our obsession with titles is what will screw people over. If you're like, but am I a designer or a PM or an engineer? Like, oh my God, which path? No, no, no, no. These things are going away. All of this stuff is getting very, very blurry. It's getting very, very easy to move between those disciplines. You want to be the kind of person that can move between those types of working very fluidly and not get caught up in, but what's my... what's the box I'm in at this company? Right?
B
Welcome to Dive Club. My name is Ridd, and this is where designers never stop learning. Today's episode is with Brian Lovin, who's one of those people that I pay very close attention to: both how he works and all of the tools that he's using. So we're about to do a deep dive into his design process and all of the ways that he's iterating on both how he designs AI tools and also how he uses AI tools. But before we get into all of that, I wanted to know what it was like when Brian first joined the design team at Notion.
A
One of the things that ended up working really well is before I joined Notion, they invited me to their design offsite. Basically, when we were working on the deal to join Notion (I don't even think we'd closed yet, or we were close to closing), they had their design offsite in New York. I was in New York, so I just went and hung out with the team for a week. And that was helpful. I think it was probably awkward for everyone, because they're like, who the hell is this guy that doesn't work at this company that's just showing up, what are we allowed to talk about, all this kind of stuff. But it was like, hey, Brian's going to join. I'm going to join. And so I just jammed, and mostly I think I just sat back and absorbed and saw what everyone was working on. And one of my favorite sessions at the offsite was, I think, the last day. They'd rented this coworking space and they broke off into teams of three or four people. And each team drew two pieces of paper, where each piece of paper had the name of a feature. And it was like, go design something that combines these two features. You might get lucky and draw, like, AI plus chat, or you could get unlucky and draw formulas plus permissions. And you're like, oh my God, what? I don't know. And then everyone went off and designed for, I don't know, not very long, like half an hour, an hour maybe. And then everyone presents their ideas, and the quality of the presentations blew my mind. I'm sitting there, and people had high-fidelity prototypes, like really impressive Figma click-throughs with animations. Everyone was very quick at pulling together production assets, so it felt very realistic. And some of the demos, everyone was like, oh yeah, we should ship that. Oh yeah, we should ship that. Really cool.
And so that really was my first experience with the Notion design team, and it was just very impressive and I felt excited. Everyone was very welcoming. I don't know, it was just good vibes all around. And so that was in, I think, October '24, and then I joined January '25. I got thrown into the deep end when I joined. The first project I worked on, which internally we called App Builder (I don't think there's any secrets here), was very complicated. It was like, hey, Notion has all of the primitives that we need for AI to build any kind of tool that you might use at work. We have databases and charts and views and pages and AI chat. We have all this stuff. Why not just take the next step and let it write code and generate any sort of app that you might want to live inside of Notion? And that was just a really intense project for the first four months. And it was funny because I actually didn't really work that closely with anyone. I worked closely with one designer who ended up leaving a few months later (he'll know who he is, and that was an awesome partnership), but otherwise I kind of felt a little bit isolated from the team. We never shipped App Builder in that form, but we did split it apart and ship it in chunks. So the first big thing I worked on was Notion Agent, which was one slice of that. And then the team shipped custom agents, and then we shipped workers, where you can write, deploy, and host code on Notion that AI agents can use. And so it's cool, over this arc, seeing that initial thing I was working on: I feel like all the parts have been built and they're working inside of Notion. I don't know if that exactly answered your question, but I just had a really fun onboarding. It was like, this was gonna be the place for me. Very intense first few months where actually the first thing didn't really work, and I had to pivot and adjust and learn on the fly.
And then here we are a year later, and I don't know, nothing has changed. The team is awesome, and I'm very excited about Notion's place in the future. I think everyone here is, like, sufficiently AI-pilled. And that's fun.
B
Real quick message and then we can jump back into it. So Jitter just released image-to-video, and it's a pretty big deal. All you have to do is upload an asset, and Jitter will instantly generate a short video clip from it with the help of AI. It's an easy way to add motion and depth to static backgrounds, maybe create little clips for brand visuals, or just in general make any scene feel more alive with subtle movement. It's super fun to play with and available today. Just head to Dive Club slash Jitter to try it out. If you're like me, then you're prototyping a lot in code lately. But the problem is you kind of have this choice between a tool that isn't hooked up to your actual code base and design system, or a localhost prototype that's super annoying to share. That's why I love what Dessn is doing. In one click, you and everyone on your team can prototype directly in your code base without ever opening an IDE. Dessn extracts your design language and gives you the perfect sandbox to explore without any of the technical hurdles. And when you're ready, there's a nice little share button top right, and you can send it to anybody on your team. It's a pretty big deal. And you can connect your code base and start prototyping today. Just head to Dive Club slash Dessn. That's D-E-S-S-N. Now onto the episode. What's it like being a designer whose surface area pretty much entirely revolves around working with and shaping AI as a material?
A
So when I joined in January, I'd never touched AI. In fact, when I joined Notion, I was pretty skeptical of AI. I remember when we were at the end of working on Campsite, the startup before: at the very end, Cursor tab completion was getting good, just to put a frame of reference on it. We didn't have coding agents. I think this was pre-Claude 3.5 models, and I just thought it was all kind of bullshit. I'm like, nothing it makes is good. And the people who were paying attention, well, they weren't concerned about the current quality, they were concerned about the trajectory and how fast it was getting better. So I joined that January, and one of the reasons App Builder was actually hard (and probably I just made a million mistakes) was that I kind of approached it like a typical design project. Like, cool, we're going to build this interface where users are going to be able to construct experiences with AI, and I'm going to open Figma and design this in Figma. And I was designing chat experiences where it's like, okay, the user is going to ask for an app that does this, and then the agent's going to respond, and it's going to show a little preview of what it's making, and it's going to be perfect. And then we would test that out, you know, engineers would actually build it, and it just doesn't work that way. The models weren't that good. It was very slow. They would make mistakes, they would have to ask clarifying questions. And so there was this moment, probably in February, maybe I'd been a month in, where it's like, none of this is working. I can't keep making Figma mocks to try and simulate what it's going to actually be like to use AI to build AI-type things. So that's when it was like, all right, I've just got to jump into the medium. And so that was when I started working on this internal tool, Prototype Playground.
And really the goal there was, I just need a medium where I can talk to AI and understand what it feels like to have it respond. I'm going to test all the models. Some of the AI SDKs, like from Vercel, were pretty good at helping you execute tools, return structured outputs, whatever. You could start to sort of simulate what it would be like to have a Notion-shaped thing, but in this totally isolated code base. From that moment on, it's just changed the whole way you work. As soon as you realize you can't design half of this stuff in Figma, what you're really designing is the harness for the agent to do longer things and verify its own work. And a lot of that means using AI in your day-to-day work and just getting a feel. And then the other part is talking to the engineers who are actually building the harness and being like, why is this slow? Why did this get worse? Why is this model better than the other model? And luckily for us, we have really, really good engineers working on our harness, and I just try and learn from them and absorb their brains. They're all, you know, living on the next generation of models, and I just try and absorb that into my design process. I think where designers need to be designing right now is right at the boundary of the current model, hoping that the next one solves it. Right? Like, we're all shipping shitty vibe-coded slop, just hoping the next model is definitely going to clean this up. And I think that's probably a good place to be right now as a designer: really understanding what's possible, trying to push the edges even if it sucks and you get stuck, and then just talking to as many smart people as you can. Obviously, working at a place like Notion is very advantageous just because there's a density of people, and we have in-person days to learn from each other.
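The harness idea he's describing (a model returns structured tool calls, and the surrounding code executes them and feeds results back) can be sketched roughly like this. This is a self-contained toy with a scripted mock model, not the actual Vercel AI SDK or Notion's harness; the tool names are invented for illustration.

```typescript
// Toy "harness" loop: a model (mocked here; a real version would call
// something like the Vercel AI SDK with a tools definition) emits structured
// tool calls, the harness executes them and feeds the results back as context.

type ToolCall = { tool: string; args: Record<string, string> };
type ModelTurn = { toolCall?: ToolCall; text?: string };

// Hypothetical tools a Notion-shaped prototype might expose.
const tools: Record<string, (args: Record<string, string>) => string> = {
  createDatabase: (args) => `created database "${args.name}"`,
  addChart: (args) => `added chart over "${args.source}"`,
};

// Mocked model: returns a scripted sequence of structured outputs.
function mockModel(history: string[]): ModelTurn {
  if (history.length === 0) {
    return { toolCall: { tool: "createDatabase", args: { name: "Tasks" } } };
  }
  if (history.length === 1) {
    return { toolCall: { tool: "addChart", args: { source: "Tasks" } } };
  }
  return { text: "Done building your app." };
}

// The harness loop: run tools until the model produces a final text answer.
function runHarness(): string[] {
  const transcript: string[] = [];
  for (let step = 0; step < 10; step++) {
    const turn = mockModel(transcript);
    if (turn.toolCall) {
      const result = tools[turn.toolCall.tool](turn.toolCall.args);
      transcript.push(result); // feed tool results back as context
    } else {
      transcript.push(turn.text ?? "");
      break;
    }
  }
  return transcript;
}
```

The point of prototyping at this layer is that latency, clarifying questions, and mistakes all live in the loop itself, which is exactly what a Figma mock can't show.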
B
We did that with the voice models working on Inflight, where we spent a good month trying all of these different interaction patterns for what it looks like to almost be interviewed as a way to give feedback. And it was clear that it could be good, but it most definitely was not good. And it was kind of just like, all right, we're going to put a pin in that. There's something there, but I guess we'll just wait till the models get better. And it kind of feels like that line of thinking exists in almost every direction at the moment, if you're, you know, especially doing more frontier work. The
A
people who are really engaging with these tools are, I think by definition, early adopters. There's the masses who are using ChatGPT to ask some fairly basic questions, but if you're really trying to use AI right now to automate knowledge work or simplify your job or remove a process at your company, you're a very early adopter. And those people's tolerance for kind of subpar experiences is actually quite high. I think people are still willing to crawl over glass to set up an OpenClaw, right? That is not an easy thing, and people are doing it. And so I think that's actually okay. And it has been weird, because you have to readjust your expectations for your own quality bar, where it's like, I know that this thing isn't perfect, but the people who are using it are very tolerant of those imperfections and are willing to trial-and-error their own prompts over and over again. One of the important things that I got to see at Notion, and I think probably every designer at every company is experiencing this right now, is that every six months, everything that we did before becomes more or less irrelevant. So at Notion, we've rewritten the agent harness, I think, three times since I joined, roughly every six months. Each time, it's just stripping away all these old assumptions that we used to have. And today we're designing around harnesses that are roughly: give your agent the ability to write scripts and search for things. That's pretty much all you need to write a good harness right now. But I'm kind of operating under the assumption that that won't be true in six months. I don't know. Are skills going to be useful in six months? Are we going to need all this?
B
Like, "you are a prompt engineer" has all
A
the range. "You are a senior engineer at..."
B
Yeah, exactly.
A
I don't know if we need all of that stuff in six months. So I have found a little bit of inner peace with: it's just going to keep changing. And my job is to understand what's happening and why it's happening. Like, why is it good at writing code? And what are the implications of that? Okay, well, it needs a safe space to execute that code. Okay, you look around, everyone's spinning up sandboxes. Sandboxes are slow. Let's make them faster. Right? All these things are just downstream consequences of the agent harnesses changing. And they've changed every six months for the last couple of years. And I assume that will continue.
B
Clarifying question on the prototyping playground that you built. I've seen a little bit, but help me figure out where it exists on this spectrum. On one end, it's a fresh Lovable build, no context. On the other, it's a perfect, somewhat recently updated copy of the code base. Where does that exist? Because I think that level of fidelity is something I don't see that often. I'm assuming it's somewhere in the middle. Talk to me about that for a second.
A
It depends on the prototype. It's just a big directory of prototypes, and then we have a component kit that's, like, close to Notion. It could be perfect if we just spent more time on it, but we've tried to do the 80/20 thing: get it pretty close. We have a sidebar that looks kind of like Notion's sidebar, buttons and dropdowns that are kind of like Notion's buttons and dropdowns. And then it's up to each individual designer to push that as far as it needs to be pushed to prove the point of their prototypes. That's why I say it depends. So, for example, we have a designer here named Will Dawson, and he's been working on the editor and some of the inline stuff that you can do with AI while you're working on a Notion page. The only way to really feel what it's going to be like to interact with AI while you're typing a page and selecting text and navigating with your keyboard is to actually be in an editor. And so in Prototype Playground, he more or less recreated a very simplified version of the Notion editor with slash commands and inline AI. And the AI does actual things and actually writes back and updates the page in real time and selects stuff. And it's very high fidelity. If you stare at it, you're like, I think this is Notion. And that's how far you have to push to really prove that this idea is good. So, anyways, to answer your question, it's a wide range. We try and give some components to let people get close enough that each individual designer can then push as far as they want. Now, over time, I don't know what's going to happen. I think two things will happen. One is we'll continue improving those components inside of Playground so that just by default, they look better and better over time. But I've also seen more and more designers just skipping Playground, and they just go and iterate and design stuff in the production code base.
It's not appropriate for every type of thing you're working on, but there are an increasing number of things where it's appropriate to just, I don't know, boot up the Notion code base, throw Claude at the exact same problem, and you might as well get a real thing out the other side if you're going to spend roughly the same amount of time.
B
Anyways, how much of the Notion design team is actually touching prod, working in code at all? Is it still acceptable to be doing predominantly Figma work? Talk to us about that breakdown.
A
I think it's totally fine, depending on what you're working on. But I would say the barrier to entry to try making something in code is so low that I think everybody has dabbled with it. What percentage of people go straight to prod? I'm not sure, maybe 10 to 20%. And it's still task-specific. There's still a lot of work that happens inside of Figma. There's a lot of people playing with Paper now. I don't know, I feel like I'm just surrounded by my people: we're all tinkering and trying the new things. But I think it's still about picking the right tool for the job. And there are some types of design work where it's just more helpful to be in a 2D canvas, taking screenshots and masking over it, and just, I think this looks roughly right. And then on the other side, no, I need the highest-fidelity version of this in a production environment. And we push the engineering team at Notion to make the Notion code base more legible to AI. A lot of work that's happened under the hood, and I assume is happening at most companies, is: how do we enable this type of work from people who don't have engineering backgrounds, so that the code that they write with AI is not shitty, doesn't break the service, is tested correctly, gets reviewed correctly? And a lot of that is code-base infrastructure stuff, right? Everything from skill files to your actual CI pipeline, to make sure that silly things aren't slipping through the cracks as the company funnels more and more energy there. I just feel like we'll all keep climbing this gradient towards trying things in prod more often than not. But there's always going to be experimental, confusing ideas where it's just way better to sketch on a napkin or on a whiteboard, or open Figma and just draw some rectangles. Actually, I'd say a lot of people now just open tldraw and just draw squares.
B
As you move along this gradient, how does that change the way that you all collaborate and work together? Are there pieces of the design process that were previously a staple that have changed? How much does the medium change?
A
Collaboration at Notion, I think it just moves from, hey, check out this Figma URL, to, hey, check out this deploy preview. I think that's pretty much it. What's the artifact that we're going to point at together and try together? It's moving that from one domain URL to another. I think that's been a big change. Prototype Playground has been cool because it's all just one code base. You can go and poke around other people's stuff and yoink cool interactions from other people's prototypes, or just duplicate theirs and riff on it. That type of interaction has been cool to see. But I don't know, I mean, we still just have design crits, and we show up in a meeting room and someone talks about the problem they're trying to solve, and maybe we share a different-shaped URL with each other.
B
It's kind of an in-the-weeds question. But when I think about what I thought my workflow was going to be six months ago, looking into the future kind of thing, I assumed that I would probably be doing a bunch of quick functional prototyping in code and then really sweating the details on the canvas and fine-tuning all of the visuals. And I found that it's actually the complete opposite. Like you mentioned Paper: a lot of my go-wide explorations are actually happening through Conductor and Paper. But then when I want to sweat the details, I'm doing it all in code. I haven't specifically nudged pixels around on a canvas in a lot of months.
A
Actually, I haven't nudged pixels in Figma in a long time. The frustrating thing is AI is still terrible at last-mile fit and finish. And I think there's two ways to go about it. One is what you're describing, which is you just get into code and touch the CSS, because at the end of the day that's the thing that you're going to ship, and that's all that matters. And for that, I think AI sucks. Even all the leading models: you can make them ultrathink and they're just bad at visual interaction design. Some people watching will say skill issue, which I can accept, but I don't know, I've been doing this for a while and I can't get a hold of it to do good things. So you do last mile inside of a coding tool, an IDE. Or, if you're really crazy and don't value your time, you prompt your way to a polished thing. There's people prompting, like, nudge that element four pixels to the left. I'm like, don't. Why are you spending money to nudge something in code? Just go change the class, right? Don't do that. So that's one way. And then the other way, funny enough, is having a higher-fidelity system inside of Figma and just crossing your fingers that the MCP tools, or Paper, are getting better. Figma's MCP, I think, is actually getting better. The last time I used it, I had a design in Figma with named layers, and all the components are roughly named the same as they are in the code base. And I don't know, I can paste a URL of a Figma frame into Claude Code and it spits out a pretty damn good thing on the other side.
B
That's cool.
A
And so we have gone full circle into this funny world where, yeah, it turns out if you name your layers, the model can kind of figure out what you're trying to do with the thing. And so I see teams approaching that on both sides. Invest in the Figma thing: invest in whatever naming scheme and structure will make your designs more legible to AI. And then on the other side, obviously, that has to end up in code at some point. And I think, honestly, that's just getting in there and touching pixels and making buttons feel good or whatever. And then once the button feels good, the AI knows how to reuse it.
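The named-layers point boils down to this: if Figma layer names roughly match component names in code, translating a frame stops being a guessing game and becomes a lookup. A toy illustration (the component names and the slash naming convention here are invented, not Figma's or Notion's actual scheme):

```typescript
// Sketch of why named layers help an MCP-style translation step.
// If a layer is named after a real component, mapping is a lookup;
// an auto-generated name like "Frame 427" forces the model to guess.

const componentsInCodebase = new Set(["Button", "Card", "Avatar", "Dropdown"]);

function mapLayerToComponent(layerName: string): string {
  // "Button/Primary" -> "Button"; "Frame 427" -> nothing to map.
  const base = layerName.split("/")[0].trim();
  return componentsInCodebase.has(base) ? base : "<unknown: model must guess>";
}
```

The same idea is why he suggests naming components in Figma the same as they are named in the code base: the shared vocabulary is what makes the output "pretty damn good" on the other side.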
B
So, yeah, I think that's actually the key, because people talk about how AI is really bad at design. I'm like, you're totally right, but you're missing it, man. If you spend the time really, really, really sweating the details on a good set of primitives once, AI is so good at not only reusing, but also extrapolating. Like, I might be like, hey, I really like this tertiary button, but I'm using it on this card, the height's much bigger, it's not going to be a noticeable effect, so just take that general idea and extrapolate it. Oh, man, I've been getting such good results from that. The other thing that I'm curious if you're using at all: do you use Agentation?
A
I have played with it.
B
That's my antidote to the nudging, I think.
A
I think Agentation is awesome, but I haven't used it very much.
B
I'm okay saying, nudge this four pixels to the left, if I can bundle that with six other prompts like that and then do it all at once. And then I'll just spin up a new tab in Conductor. So I might have five tabs in Conductor, all doing four or five different prompts simultaneously, when I'm doing more of the visual refinements. At least I convince myself it's faster. But maybe I'm also just... I don't know, I don't know. I don't open the CSS anymore, I guess.
A
I don't know. You know, there's the studies where people self-report being way more productive with AI, but objectively, on a measured basis, we're less productive. And the counter to that argument is, it's okay if you're accomplishing the same number of tasks in less time. But it seems like more people are accomplishing more tasks at a lower quality in the same time. So we're in this weird zone where it's like, yeah, you can do more, but each thing needs more follow-up and fixes, and that makes you feel more productive overall. I'm not totally sure. But Agentation, I mean, the whole point of that, and tools like that, right? Like, what's the other one that Josh Puckett's making?
B
Oh, Dial Kit.
A
Dial Kit. All these things, right? We just need to make our intent legible to AI. And the fidelity of that intent is much higher if you can say, this React component in this position with this ID should look like this, instead of having to describe visually something on the screen. Of course the AI will be better at performing operations when it knows the name of the thing or the underlying component, or knows what file it's in. So all of those tools, it's all about just making your intent legible. And that's the kind of stuff where it's changing all the time, right? Agentation came out like a month ago, and now Chrome has all that stuff built into Chrome. Any agent can use any browser now. This stuff is changing so fast, but all in the pursuit of agent legibility.
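As I understand tools in this space, they work by attaching component names, file paths, and stable IDs to the element you're pointing at, so feedback comes out as text an agent can actually grep for. A rough sketch of that idea (the metadata shape and output format here are invented for illustration, not Agentation's or Dial Kit's actual formats):

```typescript
// Sketch: turning a vague visual complaint into agent-legible intent by
// bundling it with the component metadata of the element being pointed at.
// (Field names and output format are invented for illustration.)

type ElementMeta = {
  component: string; // React component name
  file: string;      // source file it lives in
  testId: string;    // stable identifier the agent can search for
};

function legibleIntent(meta: ElementMeta, feedback: string): string {
  // Instead of "the button near the top looks off", the agent gets the
  // component name, file, and ID: things it can actually grep for.
  return `In ${meta.file}, component <${meta.component} data-testid="${meta.testId}">: ${feedback}`;
}
```

So "nudge this four pixels to the left" arrives as a precise reference to a known component rather than a visual description the model has to resolve on its own.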
B
On the topic of things changing so fast, can you walk us through some of the key evolutions of your tool stack over the last three, six months?
A
We're in this really funny timeline where people are tribal about the tools that they use. You have people who use Claude Code, you have people who use Codex, you have people who use Cursor, and everyone kind of thinks the other people are dumb, or they try and justify why their thing is better. And then you get into very, very niche arguments about, well, Claude is better at creative planning and Codex is better at following specific instructions. All these things might be true. So for me, what that means is I just want to try all the stuff, and I try not to be tribal. I generally prefer the Claude ecosystem. But, well, to answer your question: I was using Cursor a lot last year. I stopped using Cursor in favor of Claude Code in the terminal, then switched from Claude Code in the terminal to Conductor. Conductor, to me, was the perfect middle ground: I can see the code, but I can multitask, and I'm mostly just chatting. But then, you know, Cursor comes out with Composer 2, fast. I'm like, okay, let's go try it out. And you open Composer 2, and it's pretty good. And it's fast, and I get to look at the code, and I can tweak it at the same time. Oh, this is the perfect tool for me to do front-end pixel-polishing type work. I don't want to be sitting around waiting for Opus to change the corner radius of things, and then have to switch from the terminal to a browser. I don't want any of that. Cursor is actually a really good tool for that. So right now I have no allegiance. I just pick the best thing for the job, and then if it's at work, I get to benefit from unlimited good free tokens. And if it's personal, I just try and optimize for tokens per second, for some ratio of intelligence. And I pay for the Claude Max subscription, the $200 one. So I'm like, ah, I kind of want to get my money out of that thing.
So I bias towards Claude in Conductor.
B
What's the primary value prop other than speed? Like, I get the speed thing with Cursor. But is that becoming your choice because you are actively getting in and writing, like, handwriting like a caveman? Yeah. Okay.
A
I've handwritten more code in the last two weeks than I did in the last four months. The current meta right now, if you're on Twitter a lot like I am, is nobody's writing code anymore. What are you talking about? Touching code, that's so 2025. And I'm doing that a lot. I use agents to YOLO PRs, especially in Prototype Playground. I YOLO stuff all the time, and I don't need to look at the code. But I think the models are still bad enough that if you really care about the final quality of the thing that you're shipping, you've got to look at code. And I just happen to be working on some stuff recently where I really care about that, where the final fit and finish matters. And so, yeah, I'm opening an editor and writing code like it's 2023, 2024. And it feels pretty good, because now I get to reach over to the side and grab this really fast or smart model to take some of the annoying parts off my plate while I'm in the middle of that. I think it's a good time to not have an allegiance and not get tribal about tools. Just try it all. See what works. And when people say Codex is better at following instructions and Cursor is better at creative planning, try and understand what that actually means. It's one thing to hear that and be like, okay, cool. But then you go and try it, and you try and really understand why you would pick a different model for a different type of task. The better sense you get of which tool is good at which job, the better you can be as a designer or a software creator. It'd be like knowing when to reach for a frame versus a group, or auto layout versus freeform moving of things. You just have a better mental model of what to reach for to accomplish the thing that's on your mind.
B
There's one question that I can't stop asking myself: what if companies applied to talk to you rather than the other way around? That question is the foundation for the all-new Dive Talent Network. And it's working. Right now I'm helping many of the most exciting startups that I know hire the designers and builders who listen to this show. So if you're curious what might be out there, and maybe you want to get on my list, or maybe you're even looking for your next design hire, head to Dive Club slash Talent to join today. I started doing little tests in Conductor just to compare Opus and Codex in different environments. So maybe I'll have Opus create a plan, and then I'll have Codex review the plan, and then make them go back and forth. And that's been really helpful. And then, you know how they have the built-in review button in Conductor? I've just been hitting it. I literally did this this morning: I hit it with Claude, and then I went into the settings and changed it to Codex and hit it again. It's just like, what's... yeah, what does the...
A
I know. I do the same thing.
B
Yeah, I think I like Codex for reviews more, actually. I've had a few instances now where I'm like, I think it's actually better. It's catching things.
A
I do the same thing, because it catches things. I think it's the exact same reason you would do a design review with more than one person, right? Everyone's just going to have a slightly different perspective. The thing that I find really frustrating about that experience is that it's very inconsistent. I do the same thing: I will plan with Opus, have Codex review, and after that I actually don't really care which one implements. And then I'll have one of the two review the implementation, and I actually get very inconsistent quality of review. For example, I'm working on this side project, Shiori, and I was like, I should run a prompt and do a security check. And I gave it to Opus or Codex, I can't remember which of the two, and it was like, I found 10 security issues. And then I gave that thing to the other model, and the other model's like, this is security theater. Literally none of these things matter for your use case or your code base. This is all bullshit; they're wasting your time. And then you could give the same prompt again and they would say the opposite thing. I find that's the pretty exhausting part of using AI today: the inconsistency. But feeling like you have a grip on it, like, oh, if I just get my skills in the right order, if I just review things in the right flow and have this model do this, I think it's a lot of cope for the fact that the models are still inconsistent. They still generally are bad at outputting consistent code. They still write really stupid shit that, if you're not reading the code, three months later you're like, I don't know why my app doesn't work or why it's really slow. It's like, yeah, because you've just been slop-cannoning, thinking that you're doing this genius two-models-reviewing-each-other thing. They don't know yet. Who knows, in a year it might be solved. It's not right now, I don't think.
B
Taking a step back from tools and getting super granular into literally the types of prompts and text and commands or questions, like what you are literally using, language-wise, to interface with the models: are you able to see clear changes or progressions in how that part of your practice has evolved over the last, whatever, six months of really trialing all this stuff?
A
If you think about what the AI is doing, it's taking some number of upstream context tokens that you give it and trying to come up with the best prediction of what follows. Right? That means if you have really high-quality input tokens, you're going to get higher-quality output tokens. But those input tokens are where you get to steer where you want the thing to go. So, for example, and this sounds so basic, but if you want it to pay attention to a certain thing, you have to say the certain thing. You actually have to say the thing you want it to focus on. If you just tell the model, hey, what are some edge cases we've missed? Now, all of a sudden, the words "edge case" are in the context, and it's going to interpret from that point forward and be in the mindset of looking for edge cases. Once you understand how that works, you start to do these weird, almost astrology-or-horoscope types of prompts: how could this be simpler? What are edge cases we might have missed? Is this the most elegant way we could have done this? Or: I'm going to show your work to my boss, can you make sure it's really good? I do that one. These are stupid prompts, but they work, because they flip the right bits so that the model will extrapolate from there. One of my favorites is from Simon, the co-founder of Notion. I have a snippet for it: "Let's step back and think really hard. How can we make this simpler and dumber while still achieving our goal?" I think that's a good prompt, and I run it, I don't know, 20 times a day.
B
Really? Wow.
A
Yeah. After everything, every plan: make it simpler and dumber. Every time it fixes a bug: is there a simpler, dumber way to do that? Every time it makes a new button, I'm like, is there a simpler way to make that button? I bet you made it way too complicated. So that, on the language side, is why anyone who knew how to code before AI is having a way better time using AI than someone who didn't. Because guess what: they know the right words to say. They know what goes in that upstream sequence of tokens. They know how to say things like durable workflow, or background job, or queue parallelization, things that, if you don't know how to program, you just don't have the vocabulary for. But if you've been programming for a while, you have the vocabulary, and it feels very natural to describe those kinds of systems. Those people are obviously getting better outcomes. That's why I have to be very open-minded to the idea that everything I'm bad at is a skill issue. Because in theory, if you can describe your intent more clearly in English, with the right set of words that the model can extrapolate from, it will do the right thing. Right? So a lot of times when it does bad things, I have to self-reflect and think: I wrote a bad prompt, or a lazy prompt, or I was tired when I wrote it. That is changing how I relate to it. I've stopped writing late-night prompts. If I'm tired, I should not ask it to do a thing, because I'm going to ask in a lazy, tired-Brian sort of way.
B
Let's talk about Shiori a little bit. I've been seeing some tweets about things you're building, so maybe give us a little bit of context. But the lens I'm particularly interested in using is, at least for myself, anytime I'm doing something from a blank canvas, no constraints, I can do anything I want, it's an opportunity to try new things, or to tinker, or to do things a little bit differently. So, given the context of what it is, how are you taking advantage of the blank canvas this time around?
A
Yeah, it's very funny and almost embarrassing, because it's like, we can make anything, AI can make anything, and I'm like, I'm gonna make a bookmarking tool. It's like weather app, stock market chart, to-do app, and Obsidian replacement. Yeah, yeah, yeah. I started working on Shiori I guess a year ago, just to solve a problem that I have, which is that no matter what system I have for a read-it-later app, I just never go back and read the things later. I haven't quite figured out why that is, but maybe it's this idea of ownership: having made the thing makes me feel more connected to it, or I want to get juice out of it, you know? So I made my own little read-it-later app, and basically all it did was sync stuff into Notion, and then I used Notion as the list of stuff. I was basically Notion-maxing all of last year: how do I put my whole life in Notion? And that actually worked. It was working pretty well, but it's kind of hard to distribute that as an app. It's a Notion database, and I have some functions running in the cloud over here, but I don't know how to distribute that. Having things in Notion is great, except it's sort of fighting the rest of Notion for attention. Notion has all my other stuff, but sometimes, when I want to sit down on the weekend and read for half an hour, I just want a very simple, focused experience that shows me what's up on my reading list. So that's when I started working on Shiori, which is the standalone version of that. The cool thing about AI is you can stand up a product like Shiori in two seconds. Literally the dumbest prompt will get you there, because there's so much training data, because so many people have made bookmarking and read-it-later apps. The interesting part for me has been figuring out how deep the iceberg goes. On its surface, it's a list of links.
It's not complicated, but under the hood, and especially when you have users saving links, you realize people do weird things with their bookmarks. People have saved links that redirect to localhost. How is this possible? Somebody somewhere set up a redirect for a live domain on the Internet that redirects to a localhost route. Usually those people are trying to hack you, so you have to consider that. I've had people import 10,000 Wikipedia articles. Okay, 10,000 Wikipedia articles: how should we process that import happening simultaneously? Oh, I need to go learn about distributed systems and parallelization. And what happens if one of those fails? Do we retry it? What happens if one of those Wikipedia articles is 30,000 words long and doesn't fit into a column of a database? Right? So you get to hit all these little edge cases. What I'm having a lot of fun doing with Shiori now is going really deep, handling all these edge cases, and at each step trying to write the test, or the verification framework, or add the logging necessary, so that the AI knows about all of these edge cases and can design around them in the future. This is very basic, but I'm kind of proud of it: the flow now is, when someone reports a bug on Shiori, which, yes, bugs still happen, all I have to do is take their email address or their user ID and paste it into Claude. I have Claude hooked up to Sentry for error reporting, and Supabase so it can figure out information about their account, and I use Axiom as a log drain. I can just paste their user ID and the content of their email, and it will go and replay exactly what happened leading up to the moment of failure, because every single event is logged and tracked, all the exceptions and so on. It knows what device they're on, all this kind of thing.
So bug reporting and bug fixing has become quite fast, and to me that is the fun part now. You know the rock-tumbler analogy from Steve Jobs: you put all these rough rocks in the thing and you tumble it around, and if you tumble it enough, what comes out is a set of very polished stones. I'm having fun figuring that part out. What is the tumbler that lets me get a polished stone out the other side? Something that is resistant to edge cases, but if an edge case does come up, I'm able to find it very quickly and easily, or the AI finds it very quickly and easily on my behalf and just fixes it, and then adds that back into the system. So it's like a self-healing, self-reinforcing system. Not all of it's automated yet; in fact, very little of it's automated. But that's sort of the long-term goal: can I put more of this on rails, so that I can just focus on building cool and fun new features, and the infrastructure, the scaling, the edge-case handling, the bugs, all that stuff is a self-healing loop? The cool part for me about working with AI now is I get to stretch just beyond what I would have been comfortable doing as an engineer writing code before AI. Say I want to build an API for Shiori. The 2024 version of me as a programmer would have been like, ah, I don't really know where to even start. How would I build the CLI? How would I build an API? How would I do rate limiting? How would I do abuse detection? How would I do all these kinds of things? I don't know. And now, all of a sudden, that type of work is very accessible. I can just guide the agent in that direction. That has been cool for Shiori: API, CLI, MCP, just pushing a little bit beyond my comfort zone. And I'm probably not pushing enough.
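The bulk-import problem described above, thousands of items arriving at once, each of which can fail or be oversized, has a classic shell-level shape: a per-item worker with retries and truncation, driven by `xargs -P` for bounded parallelism. Here is a minimal sketch; the worker script, the sample data, and the limits are hypothetical stand-ins, not Shiori's actual code:

```shell
# Hypothetical per-item worker: retry transient failures, cap output size.
cat > /tmp/process_article.sh <<'EOF'
#!/bin/sh
title=$1
attempt=1
while [ "$attempt" -le 3 ]; do               # up to 3 tries per item
  # stand-in for fetch/parse; a real worker would hit the network here
  if body=$(printf '%s' "$title" | tr 'a-z' 'A-Z'); then
    break
  fi
  sleep "$attempt"                           # crude backoff before retrying
  attempt=$((attempt + 1))
done
printf '%.20s\n' "$body"                     # truncate rows too big for a DB column
EOF
chmod +x /tmp/process_article.sh

# xargs -P 4 caps concurrency at 4 workers, so a 10,000-item import
# never tries to process everything at once.
printf 'alpha\nbeta\ngamma\n' | xargs -n 1 -P 4 /tmp/process_article.sh
```

A real pipeline would also record permanent failures somewhere inspectable instead of dropping them, which is the logging half of the self-healing loop.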
As a designer who has been writing code for a while, it's hard to shake out of this mindset of like, oh, but how would I build this and get into this new way of working, which is how would I describe my end vision or the end goal in a clear enough way that the AI can do it?
B
It honestly made me go back to what you were talking about earlier in terms of productivity and speed. I think I actually agree, and it's weird: the dominant measuring stick we use for AI is so often speed and efficiency, but I don't really care about that that much. That's not the impact it's had for me. It's not efficiency. It's given me the ability to play in a set of adjacent ballparks that I previously had no business even walking into, you know? It's more about raising the ceiling and increasing what I can bring to the table. I'm not necessarily doing that much stuff faster, but I'm doing many more different types of things that were previously inaccessible.
A
Shiori has kind of become a little bit of an antidote for me, because I spend my week at work on do more faster, write more code, build more prototypes, and then I get to sit down with Shiori and think: what needs to be true for this to stay interesting, to stay a project that I would work on for the next several years? Part of that is I have to understand the code base. I have to care about who's using it and that they're taken care of. I have to care about good infrastructure. And those things actually force you to slow down. I quite enjoy that. What AI is allowing me to do is slow down and focus on the hard parts, and let the computers take care of the rest. So, for example, my current workflow right now is: during the week I plan, and on the weekends I execute. A typical day might be: I wake up, check and see if there are any errors overnight or bugs or user feedback, and then I have a simple skill that's like "fix issues," and I can just paste that into Claude and let it go fix all the Sentry issues or debug things. Cool, that's off my plate. Now I can go talk to the AI: I think I want to do this feature next, but I'm not quite sure how it's going to work; here's roughly the intent or the user goal. Submit, and then I go to work. I come back at night, read the plan, chat with it for a little bit, maybe do one more round. Submit, and then I go to bed. Okay, so that's one day. And then by Friday or Saturday morning, I've got a pretty good plan: okay, here's how I think I'm going to tackle this thing. And then Saturday, all the bullshit's off my plate. There's not a pile of bugs to worry about. I just get to go really study that plan and then get to work building it. I think that works for a side project, but I'm also trying to be intentional about pacing myself.
I scrambled a little bit to get it out the door, as you have to do to get all the features in place. But now I want to get into a methodical rhythm: planning, shipping, polishing the stones, handling the edge cases, and, over time, maybe start slicing off little bits that can be automated. That's the interesting part.
B
We're coming up on time here, and I want to take a hard left, because there are a few potentially unrelated things that I honestly just kind of want to hear about. One of them is, I'm hoping you can heal some of my FOMO from Tokyo Design Conference, because you gave a talk there. Obviously we can't go do the whole thing, but when designers left that room after you were done, what was the core thing you were hoping they took away with them?
A
The thing I wanted people to take away from that talk is that AI is not magic. The underlying systems are very complicated, and I don't think anybody would claim they know what an LLM is doing under the hood, but it is doing things on your computer: writing files, writing scripts, invoking shell commands that were invented in the '70s. And those are knowable things. There are two ways you can navigate that world. One is you can not pay attention to what AI is doing on your computer, the kind of code it's writing, how it writes the code, how it navigates your code base. You can just not pay attention and let the model take the wheel. Or you can be curious about it. Wait, what is grep? And what's grep in relation to sed? Sed, what's that? And you're going to find out about awk, and you're going to be like, what the fuck? Some guy in the '70s just made this thing up, and it has stood the test of time, and now these three little commands are what power all modern agentic coding.
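The three commands in question really are 1970s-era Unix tools with a clean division of labor: `grep` searches, `sed` edits a stream, and `awk` computes over fields. A toy session with a made-up log file shows what each one does:

```shell
# A made-up request log to poke at:
printf 'GET /home 200\nGET /admin 500\nPOST /login 200\n' > /tmp/requests.log

# grep: print lines matching a pattern
grep ' 500' /tmp/requests.log
# -> GET /admin 500

# sed: rewrite the stream as it flows past
sed 's/ 200$/ OK/' /tmp/requests.log

# awk: split each line into fields ($1, $2, ...) and compute over them;
# this counts requests per HTTP method
awk '{count[$1]++} END {for (m in count) print m, count[m]}' /tmp/requests.log
```

Agentic coding tools lean on exactly these primitives to locate and patch code without loading whole files into context.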
B
It's insane.
A
My manager at Notion, his name is Max Schoening, is very, very good at nerd-sniping me, and he nerd-snipes me to be more curious about how computers work. He's sent me on a journey to play with Linux. He's sent me on a journey to understand what's happening when you boot up a terminal. And he's doing the same for many other people at Notion. How do you demystify AI so that hopefully you can feel in control and wield it more effectively, or just have a better opinion, or see where the puck is going? Again, AI is using things that have been around for 50 years, and the people who know about these things are getting a lot of mileage. They're like, oh, I know how computers work, I can point AI at this computer-shaped problem and get cool things out the other side. So I think that's what I wanted people to take away from my talk. And the talk was really, I don't know, I shared some of the prompt examples I gave here, which is really about: how do you get the right upstream tokens that the AI can extrapolate from? How can you nerd-snipe yourself into understanding what's happening when you hit submit on a prompt? I don't know if it was successful, but I hope at least one person became more curious after that.
B
The comment about Max and nerd-sniping makes me want to go down another little rabbit hole, because Max is someone I've always looked up to from a distance. I've never met Max or worked with him. And I think it's representative of the way I view Notion as a whole, where I'm like, there's obviously a bunch of really talented people there building really incredible software, and I've never been in an environment like that before. So, you're now over a year in: how do you think that culture, or the people, or the way of working has shaped you as a designer over the last year-plus?
A
The cool part about Notion right now is that, from top to bottom, we are very comfortable revisiting old assumptions. Does Notion really need to look like this? Will it be this shape forever? And we're starting to poke and prod at that. Some of the designers on the team just redid the left sidebar, making it much more of a chat-forward experience. A lot of us internally pretty much use Notion through chat or through agents. The willingness to embrace that change and question all the old assumptions is very cool. I think that is a hard thing for a company at this size to do, because it's scary. Things are going well, but the world's going to look very different in a few years, and we need to be ready for it. And then the second part, what Max is good at and what the leadership here is good at, is encouraging people to think bigger. Right now, OpenAI and Anthropic are way more YOLO about "wouldn't it be crazy if we did this?", and I think there are a lot of software companies that are a little bit scared to ask that of themselves. It feels outside of their permission boundary, their cultural permission boundary, to be able to play in that space. And so, yeah, I think Max and everyone else is good at pushing people: why not? Why can't we? Here's Notion's box, right? People think of Notion as a wiki, a note-taking tool, a document-collaboration tool. What's right outside that box? That's really interesting. Let's go play there. And that is a very fun place to be.
B
Yeah, we're like two days from Linear's Karri publishing the article saying issue tracking is dead. How many companies on the planet would do that type of thing? I do think Notion is potentially one of them. But it's such a weird moment in SaaS where, like you said, everything on paper could be going well, but you have to be looking who knows how many years ahead. If there's one thing that's certain, it's that the amount of change over the next two years is going to be extraordinary.
A
Nobody knows for certain, but it seems like a good idea for every company right now to understand how to make your systems legible to AI, so that you can participate in that ecosystem in the future. It seems like the ecosystem won't go away. We don't know exactly what the agents will be doing or how they'll want to work in the future, so you just need to design your software in a somewhat flexible way. Which is why we come back to APIs and CLIs: these things have existed for a long time, computers know how to use them, and they're not going to go away for a long time. So let's invest in those things, right? We just shipped a Notion CLI. Notion's not historically a developer tool; we have a developer platform where you can build integrations and build stuff for your team, but hey, Notion has a CLI, so that everything you can do through the Notion API is now legible to an AI. And of course, we were very early with MCP and all that kind of stuff. These are the things I think you have to be doing to poke at the edges and make sure you're in a good position to move quickly as the models change shape or new capabilities come out. So, yeah, kudos to Karri and the Linear team. Any company that's not doing that, not taking an honest look at their current business in this new world we're living in, is in trouble, I think.
B
Final question. Maybe we could shift from company strategy to personal career strategy for a second, because, oh boy, I think the other thing that blew up the Internet, at least our little community of Twitter, was Lenny's tweet about designers being the only role that is not experiencing growth right now. And the correlation between design growth and PM growth is actually not in our favor currently. Do you have any thoughts on what that all means for designers, and also what it looks like to respond well in this time as a designer who maybe is a little bit earlier in their career? Maybe they're not working at a company like Notion, and, you know, there's a pretty hefty uncertainty cloud above us all at the moment.
A
In the same way it would be crazy for a company that did issue tracking or note taking not to take a hard look at themselves in the mirror and try to figure out their role in the future, it would be crazy for designers today not to be doing the same thing. I think our obsession with titles is what will screw people over. If you're like, but am I a designer or a PM or an engineer? Oh my God, which path? No, no, no, no. These things are going away. All of this stuff is getting very, very blurry. It's getting very, very easy to move between those disciplines. And you want to be the kind of person who can move between those types of working very fluidly and not get caught up in "but what's the box I'm in at this company?" Right? So you want a designer who can ship code to prove that an idea is good. You want a PM who can design around their spec to see if their spec holds up to reality. You want an engineer who really cares about building great visual component systems, so they can prototype beautiful things without needing a designer to come in and tell them to nudge stuff around, which is really annoying. You want people who break out of these made-up walls we've built around ourselves and our roles and our titles, and are just focused on shipping good software. So, I don't know, that's hard to say to someone who's new, because there's just so much to learn and so much to do. I would say: just focus on making things for real people that solve a real problem. Iterate quickly, try to make the thing as good as you can, and stop worrying about "but I don't know how to code, I don't know how to write a PRD." Who cares? Just ship the thing and solve the problem, and learn how to do that faster. You'll be fine. I don't know. I think sometimes I don't know how long I'm going to have a job.
Is all this going away? The only thing I know how to do is keep making the best software I can make, solve real problems for real people, pay attention to the tools and the patterns, try to understand what the labs are shipping, try to understand the current meta of skills and context management and prompt engineering. Just try to understand it. And it does feel incredibly noisy. I would say this is the most distracted I've ever been. It is the most context-switch exhaustion I've ever felt. The other thing here is you've got to figure out a way to make this sustainable. I will say, here's my prediction, and this is going to age very well or very poorly in the next three months. When Opus 3.5 and Sonnet 3.5 came out last year, I think in March or April or something, I went on a real bender of AI-pilling everything: I am doing agentic coding 24 hours a day. Me and my fiancé, who is also a designer, were taking a vacation, and we were sending prompts and sessions from our phones. We had a hard time disconnecting, unplugging. We were just like, oh my God, if we're not prompting 24/7, we're wasting valuable time. And if that sounds familiar, it's because we're in the exact same moment right now. Everyone's feeling the same way, a year later. And what I can tell you is that in the middle of last year, oh boy, I hit a wall. You hit a wall because you just can't sustain that level of being plugged in, trying every tool, and trying to be productive every waking hour of the day for more than a few months at a time. At least my brain can't handle it. So I predict that we're going to have a comedown really soon. I would argue it's already happening, but if not, in the next month or two. And the only reason that wouldn't be true, and that this will age poorly, is if some big model capability drops in the next month or two.
But I think people are going to get exhausted. What that means is people will come back to, how do I spend my time more effectively? How do I stay calm in all the chaos and just focus? How do I turn off Twitter for two hours a day and, like, actually get work done without being distracted? I don't know. These things are going to be tried and true. Like, I think the ability to focus and be undistractable for two hours a day is, like, a meaningful competitive advantage right now, which is insane.
B
Brian, appreciate you, man. This is always fun. You're a joy to have on, and you always bring the energy and the laughs. This has been a great time, so thank you.
A
Thanks for having me.
Host: Ridd
Guest: Brian Lovin (Designer at Notion, creator of Shiori)
Date: April 14, 2026
This episode explores how designers can embrace, adapt to, and lead in an era where AI dramatically redefines the boundaries of design, engineering, and product roles. Brian Lovin, a designer at Notion, shares firsthand experiences building and using AI-powered tools, reveals how AI is changing design processes and team workflows, and offers both practical insights and career guidance for designers navigating this shifting technological and professional landscape.
"From that moment on, it’s just changed the whole way you work. As soon as you realize you can't design half of this stuff in Figma, what you're really designing is the harness for the agent to do longer things and verify its own work." — Brian ([00:00], [06:49]).
"Just try it all. See what works... The better sense you get of which tool is good at which job, the better you can be as a designer or software creator." ([27:07])
"If you want it to pay attention to a certain thing, you have to say the certain thing." ([30:54])
"...anyone who knew how to code before AI is having a way better time...because they know the right words to say." ([32:37])
"It’s given me the ability to play in a set of adjacent ballparks that I previously had no business even walking into." — Ridd ([41:00])
"Our obsession with titles is what will screw people over...You want people that break out of these made up walls we've built around ourselves..." ([50:27])
"I think people are going to get exhausted...The ability to focus and be undistractable for two hours a day is a meaningful competitive advantage right now." ([54:23])
For more episodes, resources, and key takeaways, visit Dive Club.