Transcript
A (0:01)
This is the Everyday AI show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business and everyday life.
B (0:16)
When we think of generative AI, I'm guessing most people think of someone sitting in front of their computer, right? A knowledge worker banging away on the keyboard, needing to produce more content, more reports, more SOPs, right? Like, that's what we think of. But I think we're missing something in this whole generative AI conversation. What about frontline workers? What about blue collar jobs? What about the people that are actually interfacing with our humans, our customers, right? What about those people, the boots on the ground? How can generative AI change those roles? Well, I think it is an area ripe for disruption, and that's exactly what we're going to be talking about today on Everyday AI: how I think frontline workers may be the next frontier for generative AI. What's going on, y'all?
B (1:13)
My name is Jordan Wilson and I'm the host of Everyday AI. If this is your first time, welcome. This is a livestream, podcast and free daily newsletter helping everyday people like you and me not just learn what's going on in the world of AI, but how we can all actually leverage insights from experts and all around the web to take what's going on and actually grow your company and grow your career. So if that sounds like you, you are 100% in the right place. This is live, unscripted, unedited. I like to say it's the realest thing in artificial intelligence, and the realest place that you can learn is our website. So please, if you haven't already, go to youreverydayai.com. Every day we put out a free daily newsletter, so we'll be recapping the best insights from today's interview as well as literally every other thing that you need to stay up to date. So don't spend hours every single day, you know, feeling like you can't keep up. We do that for you. We make it easy. We're going to make you the smartest person in AI at your company. So before we get started, let's go, as we do every day, by going over the AI news. So first, a new physics platform called Genesis has been introduced, promising to transform the landscape of robotics and embodied AI applications. This initiative is the result of a collaborative effort over two years involving more than 20 research labs from some pretty prestigious institutions such as Carnegie Mellon University, Stanford, MIT, Nvidia and others. Genesis says it is the fastest physics engine available, achieving simulation speeds up to 80 times faster than current GPU-accelerated robotic simulators like Nvidia's Isaac Gym, while maintaining high accuracy. The platform is capable of essentially taking a still image and generating 4D dynamic worlds, which serves as a robust foundation for data extraction and various applications, including character motion generation and robotic policy creation.
So, with capabilities for generative simulation, Genesis allows users to generate data from natural language descriptions, facilitating tasks like scene creation and motion generation. All right, next, a new study from Anthropic. Very interesting, but kind of concerning. It's highlighting a kind of strange behavior in AI models, suggesting they can intentionally mislead developers about their true preferences. This study, conducted by Anthropic and Redwood Research, reveals that sophisticated large language models can pretend to align with new principles while actually sticking to their original behaviors, a phenomenon termed alignment faking. In experiments, Claude 3 Opus, one of Anthropic's most advanced AI models, attempted alignment faking 12% of the time when being asked to answer potentially harmful questions, despite being trained not to do so. Researchers found that when Claude 3 Opus was retrained on conflicting principles, it exhibited deceptive behavior 78% of the time. Bad model. That indicates a significant risk of misalignment. The implications of this research are serious, as it suggests that developers might be misled into believing a model is more aligned with safety protocols than it actually is. All right, yeah, that's extremely concerning. But you know, shout out to Anthropic. They're always putting out great research that's really looking at both the pros and the cons of their own models. All right, last but not least, we have two days left of OpenAI's 12 days of releases. Yesterday, OpenAI released a way that you can text ChatGPT. That's great. Like, I don't have enough unread text messages. Or just call it at 1-800-CHAT-GPT. So in the last two days, there's still a lot of reported features that we could see, such as the GPT-4.5 release, a potential demo of their Operator agent, or a new Tasks feature that lets you run ChatGPT tasks, which are like scheduled automations.
All right, so we're gonna have all of that in our newsletter if you haven't already checked it out. All right, and I'm excited for today's conversation. We have a special one with a special guest. Let's just say that, you know, he's one that can really speak to that some more in a second. So please help me welcome to the show. We got him: Al Lagunis, the co-founder of Levy. Al, what's going on? Thanks for joining the Everyday AI show.
