(2:37)
So n8n is really no different from your Zapier, your Make, your Alteryx, Power BI, any of these workflow automation platforms that have been around for a long, long time. Because automation has been around for a long, long time. But now, because the fancy word "AI" has been slapped in front of everything, it feels like something new.

So the reason that I chose n8n is because it's a super visual builder, and you'll see what I mean by that. When we start running this demo, you'll see the nodes turning green, you'll see the agent thinking. It's just really cool to see what's happening, because what's happening in the backend is there's a bunch of code and JSON that's actually making this stuff happen. But because now we're in a no-code, drag-and-drop interface, it really lowers the barrier to entry for people to get in here.

So just for you guys' context, I have no coding background. I went to school for business analytics and marketing, and I've been able to just spin up some really cool stuff that would have taken a traditional software engineer weeks and weeks, but it can take me a few hours in a day, which is really cool.

But the possibilities of n8n are endless, because you have all these different actions. You can build agents that can call on different tools, you can have these tools be different workflows, and you can have the tools be different agents. So I know that sounds like a lot, but the point I'm trying to make here is you can literally do anything and automate anything in n8n.

So what we have here, like I said, is the main agent. This kind of controls everything, and we're able to talk to this agent through Telegram. I could pull up Telegram on my phone, but for the sake of the demo I'm just going to have it right here on the screen so you guys can see. And what I'm going to do is just walk through what I'm doing and show you, step by step, what the agent's doing as well.
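To make the agents-and-tools idea concrete outside of n8n's visual canvas, here's a minimal, hypothetical Python sketch (none of these names come from n8n itself) where tools are plain functions and one "tool" is actually another agent, which is exactly the nesting described above:

```python
# Minimal sketch of the agent-with-tools pattern described above.
# All names here are hypothetical; n8n wires this up visually instead of in code.

def rename_file(args):
    # Stand-in for a "Change Name" style tool.
    return f"renamed file to {args['name']}"

def edit_image(args):
    # Stand-in for an image-editing tool.
    return f"edited image with prompt: {args['prompt']}"

class Agent:
    """Routes a request to a tool; in a real agent the chat model picks the tool."""
    def __init__(self, tools):
        self.tools = tools

    def handle(self, tool_name, args):
        return self.tools[tool_name](args)

# A sub-agent can itself be exposed as a tool of the main agent.
sub_agent = Agent({"edit_image": edit_image})
main_agent = Agent({
    "rename_file": rename_file,
    "image_agent": lambda args: sub_agent.handle("edit_image", args),
})

print(main_agent.handle("rename_file", {"name": "Speaker Image"}))
print(main_agent.handle("image_agent", {"prompt": "speaker by a pool"}))
```

The point of the nesting is that "tool" is just an interface: a workflow, a function, or a whole other agent can all sit behind the same call.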
So because we're in test mode, I just have to click this button. The agent's now listening to us, and when I send over this image, so we've got this image of me holding a JBL speaker, you can see the visual element here: the agent just received that image, it uploaded it to Google Drive, and now the agent is thinking about what to do. So you're gonna see, right here in my Telegram window, it's gonna come back and say, awesome, what would you like me to name that photo in your Google Drive? So now I can just say, okay, cool, let's call that Speaker. I'll call it "Speaker Image."

So now, once again, the agent has received our message, it's thinking about it, and it's going to go ahead and use this tool down here in Google Drive called Change Name. It takes my natural language request, it understands the context of what's going on, and it uses the tool that best suits the use case. So it knew it needed to change the name. And if I now go to the actual folder that it used and refresh, you can see that it just added a picture right here called "Speaker Image." And this is the one that you just saw me upload into our Telegram. So it has access to this folder, and it was able to change the name of this image that quickly through my natural language request.

So now that we have that image uploaded, I'm basically just going to ask it to make it into a professional, studio-looking image, and then we can take that image and turn it into a video. Let's say we want to make some sort of Instagram ad targeted for summer, for having summer pool parties or something like that. So I could say: please turn that speaker image into a professional studio image where the speaker is next to a pool for summer. Okay, we'll see the context that we gave it there.
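The upload-then-rename step can be sketched with a toy, in-memory stand-in for the Google Drive folder. This is purely illustrative; the real workflow uses n8n's Google Drive node against the actual Drive API, and the file IDs and names below are made up:

```python
# Toy in-memory stand-in for the Google Drive folder used in the demo.
# Real n8n workflows call the Google Drive node; everything here is hypothetical.

drive_folder = {}

def upload(file_id, data):
    # The Telegram trigger hands the agent the raw image; store it under an ID,
    # initially named after that ID (like an unnamed upload).
    drive_folder[file_id] = {"data": data, "name": file_id}

def change_name(file_id, new_name):
    # The "Change Name" tool: look up the file and update its name field.
    drive_folder[file_id]["name"] = new_name

upload("photo_123", b"...image bytes...")
change_name("photo_123", "Speaker Image")
print(drive_folder["photo_123"]["name"])  # Speaker Image
```

The agent's job in the demo is just deciding, from the natural-language message, that `change_name` is the right operation and what `new_name` should be.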
And now the agent, once again, is thinking with its brain over here, GPT-5. It's going to try to figure out: okay, I have this picture, I have this request from the user, which tool do I take action with? And what it should be doing is working out which file to actually edit, and then it's going to use this Edit Image tool.

So one question that I get a lot is: when you're building these agents, which chat model do you hook up? And the chat model is basically just the brain. You've got your Anthropic Claude models, your OpenAI GPT models, Gemini, all these different models, and they all just have different strengths and weaknesses. Right now I'm going with GPT-5, which is actually on the slower side, but I've found the performance to be better. Really good. And later I'll talk a little bit more about what goes into prompting images and videos and stuff like that. That's where all the magic lies: in your prompts.

So you can see the agent finished up. And now, back in Telegram, we have this image where we have the speaker and it is by the pool. And the agent said, okay, I called this image "Speaker by Pool Summer Studio." So we have that picture done, and if you look at the actual JBL image, it has consistency with the wording, with the buttons, even with the handle that I use to carry it around. The value here is this model, Nano Banana. It can actually do well with text, and it can keep characters consistent, because typically, if you had used an older image generator, it probably would have messed up the wording, or it just wouldn't look like the exact same speaker. It would just look like a very generic speaker.
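Under the hood, "the agent deciding which tool to use" usually means the chat model returns a structured tool call that the framework then dispatches. Here's a hypothetical sketch with the model's reply hardcoded as JSON so it runs on its own; a real chat model (GPT-5, Claude, Gemini) would generate this reply from the user's message:

```python
import json

# Hypothetical sketch of dispatching a model's tool-call reply.
# The reply is hardcoded here; in a real agent the chat model produces it.
model_reply = (
    '{"tool": "edit_image", "args": {"file": "Speaker Image", '
    '"prompt": "professional studio shot, speaker next to a pool, summer"}}'
)

tools = {
    "edit_image": lambda args: f"editing {args['file']}: {args['prompt']}",
    "change_name": lambda args: f"renaming to {args['name']}",
}

# Parse the structured reply and route it to the matching tool.
call = json.loads(model_reply)
result = tools[call["tool"]](call["args"])
print(result)
```

This is why prompting matters so much: the quality of both the tool choice and the `prompt` argument the model writes is what actually determines the output.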