Transcript
A (0:00)
There you are. Come on in. Come on in. We almost have a quorum so we can redistrict. No, there won't be any redistricting today. Not here anyway. Texas? Yeah, maybe. All right, once your comments are working, we'll get busy with the show that you can't wait to hear. Good morning, everybody, and welcome to the highlight of human civilization. It's called Coffee with Scott Adams. And you've never had a better time. But if you'd like to take a chance on elevating your experience up to levels that no one could even understand with their tiny, shiny human brains, well, all you need for that is a cup or mug or a glass, a tankard, chalice or stein, a canteen, jug or flask, a vessel of any kind. Fill it with your favorite liquid. I like coffee. And join me now for the unparalleled pleasure of the dopamine hit of the day, the thing that makes everything better. It's called the simultaneous sip. It happens now. Go. Pretty good. Really good. Not the best, but it's right up there.

Well, the restaurant chain called Cracker Barrel has decided to go broke, but the way they'll get there is by going woke. So apparently they have a newish CEO, a woman who's quite gung ho for all things DEI. And one of the things they did was they removed from their logo the old man, who I always thought was a cracker, and he was leaning on a barrel. So they got rid of the cracker and they got rid of the barrel. I don't know what's left. Well, if you had to guess, is it most likely that their move toward DEI, and making a big deal about it and changing the logo and getting rid of the old white man on the logo, is going to help their bottom line? If you had to guess, would you guess, well, this will all work out well?

Let's check in with Target. Target stores, who went through their own wokeness, tuck-friendly swimsuit kind of event. And I'm reading from Red State. Bob Hoge is writing in Red State that.
Let's see, the Target CEO is leaving his post next year, and it looks like they never really recovered from their wokeness drama. But here's the funny thing. The way CNN describes it is that the problem is a backlash to its retreat on DEI. So if you get your news from CNN, it will say the problem with Target's sales is not that people didn't like them being woke, but rather that they didn't like when they were woke and then became less woke. So it was the becoming less woke, because there was such an uproar, that really hurt their sales. What do you think? Was it the being woke or was it the retreating from being woke that hurt their sales? Well, probably both. My guess is that anytime you change anything, it gives somebody a reason not to shop there. But what it definitely didn't do is give somebody a reason to shop there if they didn't already have one. It could give you a reason not to shop either way, because they got too woke or because they retreated from the woke. But which of those things would cause you to buy more? Would you say, oh, Target's really woke now, I'm going to buy a few extra pairs of pants? No, it can only go one direction. Whether you go woke or you go less woke, it can only cost you customers, just because it's a change, and there's no way there's an upside. So we'll see.

Meanwhile, Kroger stores have announced that they're going to close multiple supermarkets in Washington state due to crime, according to the Gateway Pundit. Mike LaChance is writing about that. And what do you think of that? So Kroger has decided that instead of staying in the high crime area, they're going to get the F out of there. Huh. Well, that's some advice, isn't it? I wonder if they could get canceled for saying that they should get out of a high crime area. I feel like they should be canceled for that. No, no, just kidding. Don't. Don't hurt Kroger. But what will happen? Will Kroger's sales go up or down?
Well, they'll have fewer sales in the high crime area, but they probably were losing money and employees too.

Meanwhile, Bed Bath and Beyond, which at one point was bankrupt, but I guess got rescued by some big money entity, is trying to rebuild, and they have announced that they will not build any stores in California. Can you even hold this in your head, that California is uninvestable if you're a big company? They've just said there's overregulation and taxes, and basically those two things, overregulation and taxes, mean it's just not even worth it. It's too risky. So the risky places to do business, well, four if we count Ukraine, would be China, Ukraine, Gaza and California. But also apparently Washington state, and also Washington, D.C. So there's a whole bunch of places you just don't want to be. And unfortunately I live in one of those places. So I'm thinking of moving to Ukraine for the friendly business environment.

Well, Sam Altman, head of OpenAI, the maker of ChatGPT, apparently, according to Zero Hedge, has hired some top Democrat operatives to help them grease the gears, so to speak, as Zero Hedge puts it, with California politicians, because they need to restructure the company and eventually go public, and they need California to be a friendly business environment. Do you know what will happen if they don't get what they want? This is in Politico, by the way. Well, will they leave California? What will their AI tell them to do? But it seems unbelievable to me that a company as big as OpenAI has to hire people just to figure out how to navigate the Democrat cesspool that is California. That's not good. That's not good. What could you say about the governor of a state that's so poorly run that Bed Bath and Beyond is not willing to do business in the state, and OpenAI had to hire expensive Democrat weasels to try to figure out how to do business with the state? What would happen to that governor?
Well, obviously, obviously his political career would be. Oh, what? Oh, he is the highest polling person to be the presidential candidate. Oh, okay. So we'll talk a little bit later about how Democrats are not taking the best advice.

But what about Walmart? Don't you think Walmart's having some issues with wokeness or DEI, or do you think they're having some issues with tariffs? Well, Walmart is once again, you know, arguably one of the most impressive companies in the history of the United States, because their sales are up. So they've got a 4.6% sales increase in the last three months. And that's even including the fact that they've got tariffs that are built into their prices now. Now, they have raised some of their prices because of tariffs, but only one third of their goods come from overseas. And they're not passing along the entire cost of the tariffs. They're absorbing some and passing some along, but it wasn't enough to decrease their sales. And apparently I haven't heard of them doing anything that would make anybody mad about DEI or trans issues or wokeness or any of that. So somehow they've avoided all of that. Good job, Walmart. Impressive.

Well, speaking of big companies doing stuff, Axios is reporting that Morgan Stanley did some data analysis, and this is what Morgan Stanley came up with. Now, I'm laughing because it used to be my day job at a big corporation to do financial estimates and projections and decide which path was the best one financially. So I have a little bit of appreciation of how accurate you can be in doing this. Morgan Stanley did an analysis of how much money AI could save the big companies, and they said it could save them nearly $1 trillion a year, mostly, I think, by reducing employee costs. They came up with $1 trillion a year. And that's only the beginning. They say long term it could result in 13 to 16 trillion dollars in market value creation for the companies in the index. I figure that means the S&P 500 index.
I think that's what that means. All right, I'm giving it away by laughing at it, but do you believe that Morgan Stanley has somebody on their payroll who can estimate the trillions of dollars of impact of AI? No, no, they don't have anybody who knows how to do that. This is pure bullshit. There was somebody who was no doubt assigned the project. That's the sort of project I would have been assigned. Do you think I would not have produced a number? Of course I would have. If I had been working for Morgan Stanley and they said, Scott, we've got an important assignment for you. It will be up to you to decide how much money could be saved by AI. And I'd be like, all right. And then I'd go off and I'd start making some assumptions. Well, let's assume 46% of all the companies fire 20% of their staff within eight months. Where did you get that assumption? Look over there. It's a deer. Change the subject. Yeah, no, you can't really do that kind of an estimate. It's entirely possible the AI will just be wonderful and companies will make more money and all the people who lose their jobs will be instantly retrained and have AI as a buddy and they'll go off and make more. Oh, it's all possible. But if you tell me that anybody can estimate what's going to happen in even three years. No, no, nobody can do that.

But Google's generative AI team, according to Futurism, Noor Al-Sibai is writing about this, says there would be no point in getting a law degree or a medical degree if you were going to start today. And the reason is that AI will just eat your lunch. You could get that expensive education, and it might take seven to 11 years to become a practicing doctor, but by then there's almost no chance that AI won't do it better and cheaper and faster. You'll still need nurse-type people, you know, to put on splints and do physical stuff.
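By the way, to see how arbitrary a trillion-dollar estimate like that Morgan Stanley number is, here's a toy version of the calculation I was joking about. Every input below is a made-up assumption, which is exactly the point: the assumptions are the number.

```python
# Toy "AI savings" estimate, Morgan Stanley style.
# All three inputs are invented for illustration; none come from any real analysis.
index_total_payroll = 4.0e12    # assumed total annual payroll of the index companies, in dollars
share_of_firms_adopting = 0.46  # "46% of companies adopt AI" -- pulled out of thin air
staff_cut_fraction = 0.20       # "and fire 20% of their staff" -- also thin air

annual_savings = index_total_payroll * share_of_firms_adopting * staff_cut_fraction
print(f"Estimated annual savings: ${annual_savings / 1e12:.2f} trillion")
# → Estimated annual savings: $0.37 trillion

# Nudge one assumption and the headline swings by hundreds of billions:
staff_cut_fraction = 0.30
annual_savings = index_total_payroll * share_of_firms_adopting * staff_cut_fraction
print(f"With a 30% staff cut instead: ${annual_savings / 1e12:.2f} trillion")
# → With a 30% staff cut instead: $0.55 trillion
```

One multiplication, three guesses, and you can produce whatever trillion-dollar figure the assignment calls for. Anyway, back to the doctors and lawyers.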
Well, I guess you could do a lot of that in hospitals, but in terms of analyzing something and prescribing something, I feel like I would agree that your regular doctor's got some problems, and lawyers too. But I'll point out that ChatGPT just had to hire some humans to help them navigate California. And I suspect that one of the big advantages of big law firms is that they have connections. They literally know the judge, their brother-in-law is in some political office. So I suspect that for the big law firms that charge a lot and get the most powerful people and most powerful companies out of trouble, it's probably more about their weaselly ways and who they know and what they've done and who owes them a favor. And I don't know if AI can keep up with that. I mean, they would use AI, but I suspect that the lawyers are going to get together and make it illegal to have an AI-only lawyer. Imagine, if you will, just a few years in the future, an accused felon who goes to trial and says, your honor, I'd like to exercise my right to have an AI attorney. We fed it all the documents and it's ready to go. And then the AI just sits there in a box and argues against maybe another AI. Is that going to happen? I don't know, because you would have to train your AI to be somewhat dishonest. Well, let's say dishonestly persuasive, especially if you were the defense and your client was guilty. The only way your client can win is if your AI is a lying weasel, you know, just like a human would be defending you. So will it ever be legal for AI to be programmed to lie to the jury to get a guilty person off? I don't know. I feel like the existing lawyers are going to find ways to make it illegal to have an AI lawyer. Now, will the medical community do the same? Probably. I would say probably.
It won't be long before you start seeing stories in the news about somebody who died because they took advice from AI. Oh, you know, that's coming. Those stories will be planted by, let's say, the AMA or some doctor-benefiting organization. And suddenly your brain will think, wow, AI just keeps killing people with bad advice. Oh, it told him to take horse paste or whatever. And then you'll say, oh, I only want a human doctor. And it will all be fake. But the doctors will hire the human lawyers to make sure that it's illegal to have an AI-only doctor, because it's far too dangerous. That's what they'll say.

Well, there's a physicist who believes he has a theory. His name is Miguel Alcubierre. He has a theory for how to do faster-than-light engines. So, sort of a warp speed kind of thing, faster than light. And the way he would do it, since it's impossible to go faster than light, is instead of making the object go faster than light, he would bend space. That's his proposition. You could bend space so that there's, let's say, less of it in front of you than there is behind you, or something like that. And then bending the space gives you the functional equivalent of traveling faster than light. But you're technically not, because within your small local domain, you're not faster than light. It's just that you're bending the space in front of you, that you're not in yet, and behind you. Now, does that make sense? I don't know. I mean, I may not have explained it perfectly, but does it seem possible that you could bend space in front of you and behind you? I don't know how you do that. We don't know how to do that now. Right. So I wouldn't hold my breath waiting for that. But hey, you never know. Mario Nawfal found that story. You should follow Mario Nawfal on X. He does great summaries of the news every day.

Elon Musk has made a provocative and non-obvious prediction.
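A quick side note on that warp-drive story, since the verbal description is hard to follow. The spacetime Alcubierre wrote down in his 1994 paper, in its standard textbook form (not quoted in the audio), is:

```latex
% Alcubierre metric: a flat-spacetime "bubble" moving along x at speed v_s(t)
ds^2 = -c^2\,dt^2 + \bigl(dx - v_s\, f(r_s)\,dt\bigr)^2 + dy^2 + dz^2
% where:
%   v_s = dx_s/dt  is the speed of the bubble center x_s(t)
%   r_s            is the distance from the bubble center
%   f(r_s)         is a smooth "top-hat" function: about 1 inside the bubble, 0 far away
```

Inside the bubble, f is about 1, spacetime is flat, and the ship never locally moves faster than light; space contracts ahead of the bubble and expands behind it, which is what carries the ship along. That's the "bending space in front of you and behind you" part. The catch, and one reason nobody is holding their breath, is that this geometry requires negative energy density, which we don't know how to produce at any useful scale. Anyway, back to Musk's prediction.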
He said that AI is going to obviously one-shot the human limbic system. Now, I don't know exactly what he means by that part, but the real prediction comes next. He said, "I predict counterintuitively that it will increase birth rate. Mark my words." And then he goes, "Also we're going to program it that way." Well, the only one he can program is his own, you know, Grok, from xAI, and I could certainly imagine that he would program it to optimize human reproduction, but I don't think the other AIs are going to necessarily do that, are they? And it also seems to me like that could be its own set of problems. I feel like maybe AI should just stay out of it. But hey, he's obviously done more thinking on this specific topic than I have, so he might have something. I'll be open-minded on that. But why would AI increase birth rates? He does say it's counterintuitive, but then he doesn't help us out with the reasoning. Do you see it? How many of you do? Is it because the AI will hypnotize us into reproducing? Is it because the AI will take away all our workload and we won't have much to do and we'll be staying home? And so it'll be like, well, if we're going to be home a lot, we won't have any problem watching the kids. So maybe it just makes life easier, and maybe it makes it easier to afford things too. We might get to the point where energy and housing costs get really low because the robots are building the houses and we've solved energy by just having smarter nuclear power and stuff. So I don't think this is going to happen right away, but I can imagine getting to the point where if you're a family, or let's just say you're married, you wouldn't have anything to do unless you had kids. So it might be that having families is the only thing that will have meaning, because you won't be able to get meaning through work. The robots will be doing the work. So I think he might be right.
As I'm thinking it through, if you could get to the point where people don't have to work and everybody has enough of the basics, yeah, people will be bored and they're going to want to just have babies, probably.

Well, did you know, according to Cell Press, that reading for pleasure in the US has decreased over the past 20 years? Do you think they needed to do a study of that? I feel like I would have known that. Isn't that purely because of alternative uses of our time? If you've got a phone in your hand, you don't need to read that much now. Personally, I read way more now than I did before computers, because back then it was only rarely that I'd pick up a book. But you know, if you're on the Internet all day, if you're on X or you're reading stuff all day, I mean, I read probably the equivalent of about a quarter or half of a book just getting ready to do this podcast. The amount that I read in the past two hours is a pretty large amount. So yeah, reading for pleasure. I was trying to remember the last time I read fiction for pleasure, and I couldn't even remember. You can help me out on this. I've read nonfiction books, of course, but for fiction for pleasure, probably the last one was the second Harry Potter book. So if you told me what year the second Harry Potter book came out, I think that might be the last time I read a book for pleasure. That was a while ago.

Anyway, according to Newsweek, some schools in Florida are going to test out putting armed drones in schools to defend against school shooters. Now, when I say armed, I don't mean necessarily with bullets, but rather they would have pepper spray and some kind of glass-breaking device so the drone doesn't get trapped behind a glass door, I guess.
And what would happen is, if somebody hit the secret button, presumably it would be the administrator who did it, then the drone would take off, and it would be operated remotely by somebody who would know how to do it, and they would look for that shooter, and at the very least they'd get more information about the shooter. But it could also interfere with him. So the drone could try pepper-spraying him, and, you know, the shooter would have to turn his attention to the drone just so the drone didn't take him out. So that would be fun. That seems like a good idea, you know, because you could deploy that drone in like five seconds.

Well, I'm loving watching the bad advice that Democrats are giving to other Democrats. James Carville was on some show talking about what the Democrats should have done when J.D. Vance took his summer vacation, because I guess he went to England, to a place called Oxfordshire, took the family, and that's sort of an upscale place in England. And James Carville says that the Democrats should have hammered him, because there are vacation spots in the United States that are not doing as well as they could be doing, and what's he doing taking his American money and wasting it overseas? And he says that they should have just been all over him on that and made a big deal about it. Is that some of the worst advice you've ever heard? How many people care where the Vice President is taking his family on vacation? How many people care about that? Most Americans would be perfectly happy to take an overseas vacation. You know, there are different countries they might prefer. But is there any American who doesn't think that they would like to take an overseas vacation someday? And do we really think that we're all going to be taking the same kind of vacation as the President and the Vice President of the United States? It's ridiculous. It's just the worst advice.
Can you imagine some Democrat voters, like, well, you know, I was going to vote for Trump, but then I found out that J.D. Vance and his family went on vacation in Oxfordshire, England. That changes everything. Are the Democrats that lost, that that seemed like good advice? Oh, my God. Oh, my God.

In other news, the Guardian was reporting this, there's a new study that says that the Arctic sea ice has not reduced in 20 years. Now, if you believed in climate change and you believe the planet's getting warmer, and it might be getting warmer, wouldn't you also predict that that warming would increase the ice loss? Well, apparently it didn't happen. However, instead of saying, oh, it looks like our prediction models are wrong, because you can't go 20 years without losing some sea ice if the planet's getting warmer, no, instead the climate people say that they have at least two climate models that would allow for such long pauses, including another 10 years. So they say that there are two existing credible climate models which would allow the planet to get warmer for 30 years, but the ice in the Arctic not to change for those same 30 years. Does that sound even a little bit like they know what they're doing and have got a handle on this thing? It sounds like a Dilbert response, right? Well, yeah, my prediction model is no matter how warm it gets or for how long, the ice won't melt. All right, okay, got it.
