Transcript
A (0:00)
Today on the AI Daily Brief: why AI has a PR problem. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI.
A (0:15)
Alright friends, quick announcements before we dive in. First of all, thank you to today's sponsors: KPMG, Blitzy, Rovo, and Robots and Pencils. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts. And if you are interested in sponsoring the show, send us a note at sponsors@aidailybrief.ai. Welcome back to the AI Daily Brief. Today we are talking about a subject which I'm sure will be no surprise at all to any of you: AI's PR problem. There are a number of things going on, and stories from the last couple of weeks, which all point in a very similar direction. So today we're going to talk about those stories, what I think is at the root of these challenges, and some very first, nascent thoughts on what we can do about it. Let's talk first about this Edelman study. Coursera founder Andrew Ng wrote: "Separate reports by the publicity firm Edelman and Pew Research show that Americans, and more broadly large parts of Europe and the Western world, do not trust AI and are not excited about it. Despite the AI community's optimism about the tremendous benefits AI will bring, we should take this seriously and not dismiss it. The public's concerns about AI can be a significant drag on progress, and we can do a lot to address them. According to Edelman's survey, in the US, 49% of people reject the growing use of AI and 17% embrace it. In China, 10% reject it and 54% embrace it. Pew's data also shows many other nations much more enthusiastic than the US about AI adoption. Positive sentiment towards AI is a huge national advantage. On the other hand, widespread distrust of AI means individuals will be slow to adopt it, valuable projects that need societal support will be stymied, and populist anger against AI raises the risk that laws will be passed that hamper AI development." So let's talk about this trust study.
Edelman publishes their Trust Barometer every year, and this year, surprise surprise, the big theme was AI, and frankly, AI consternation of the type that Andrew was just talking about. This survey was conducted very recently, October 17th to the 27th, and boy, is it all telling a single story. Edelman's headlines include that globally, rejection of AI outweighs enthusiasm, with US respondents more than twice as likely to say they reject the growing use of AI than to say they embrace it. Even beyond AI, enthusiasm for innovation is not guaranteed, and trust in AI generally lags behind trust in technology. Now, at the risk of being overly reductive, there is a very clear east-west divide here, although frankly, a better way to put it would probably be developed economies versus developing economies. The survey interviewed people in five countries, Brazil, China, Germany, the UK, and the US, with at least a thousand respondents per country. In Germany, the UK, and the US, a significantly higher number of people said they rejected AI versus embracing it. In Germany it was 42 to 16; in the UK it was 46 to 18. And in the US we had the biggest gap, a 32-point difference, with 49% of people saying they rejected AI and only 17% saying they embraced it. In Brazil and China it was the opposite: Brazil had 24% saying they rejected AI versus 35% embracing it, and China had 10% rejecting it and a full 54% embracing it. Edelman found a big income divide, with people in the lower and middle income brackets more likely to say that AI would leave people like them behind than those in the top 25%, although in the US the numbers were high across the board, with even 47% of high-income folks saying they feared AI would leave people like them behind. This fear of getting left behind is, I think, one of the key issues that we're going to have to contend with. Unsurprisingly, young people have more trust in AI, but US young people are still distrustful, with only 4 in 10 trusting AI. And folks in Germany, the UK, and the US are very skeptical that AI is going to help with any sort of issue, from climate change to work life to mental health to political polarization to poverty. In one bit of good news, there is a correlation between people being more informed about AI and having higher enthusiasm, meaning, in other words, that the more we can get people to engage with it, the more productive a conversation we might have. This reminded me of a report that I saw earlier, as tweeted by Business Insider reporter Bryan Metzger: Senator Josh Hawley, one of the biggest AI critics in the Senate, told me this morning that he recently decided to try out ChatGPT. He said he asked a very nerdy historical question about the Puritans in the 1630s. "I will say," Hawley said, "it returned a lot of good information." Another bit of, I guess, good news, although sort of bad news, but maybe good news, is that a lot of the problems with AI are perception problems rather than things that people have actually experienced. For example, among those who reject the growing use of AI, only 18% said that they had personally had bad experiences with generative AI, versus 70% who said they had not. In general, when it came to why people weren't willing to use the tools, motivation, access, and intimidation, while prevalent, were less common than general trust issues. Unsurprisingly, the more that people have used AI, the more likely they are to report benefits in things like "my speed at getting things done" and "my understanding of complex ideas and concepts," again indicating that if we can get people to use these tools, it may change their perceptions of them. Another thing that becomes clear with this study, though, is that it's not just AI generally, but also the way that companies and people are interacting with AI that is causing issues.
When asked which potential impact of generative AI on society is more likely, that business leaders are fully honest with employees about job cuts, or that they aren't, unsurprisingly 7 in 10 folks in the US said that business leaders aren't being fully honest with employees about job cuts, which is certainly feeding the anti-AI narratives, and something that I absolutely berate the companies that pay me to come talk to them about. When people were asked what would increase their enthusiasm for using generative AI in work and life, two answers relating to employers scored highly: 57% of US respondents said that their enthusiasm would increase if they were getting high-quality training through their employer on how to use AI effectively, and 59% said that their enthusiasm would increase if they felt sure their employer was using AI to increase productivity rather than to eliminate jobs. One of the things that I talk about all the time with any company that will listen is that you have to have an open and honest conversation with your employees about how your leadership is thinking about AI. That does not mean that you have to pretend that there's no situation in which changes in the technology landscape are going to impact certain roles and jobs. But to the extent that your company views AI as an opportunity-creation technology, not just an efficiency and cost-cutting technology, the more you can do to articulate that and be real with it, the better off you're going to be and the more employee buy-in you're going to have. They also found that long-term job security boosts likelihood to embrace AI. Among those who said that their job security was increasing due to AI, 50% said that they embraced AI, as opposed to just 21% of those who said their job security was decreasing due to AI. And by the way, for those who think that this is a partisan issue: it is actually wildly nonpartisan.
Going back to that question of what would increase enthusiasm for using generative AI, among workforce priorities we heard about that high-quality training, but they also asked about the idea of employers being required to retrain or redeploy employees displaced by Gen AI. And very similar percentages of left, center, and right folks said that those things would increase their enthusiasm for Gen AI. On the training question, 60% of the left said it would increase their enthusiasm, along with 61% of the center and 67% of the right. On the retraining requirement, you might think that the right, who historically have antipathy towards markets being forced to do anything, would be the lowest, but they're actually the highest once again, at 60%, as compared to the left's 59% and the center's 54%. But surely when it comes to government priorities like safety nets, we're going to see more of a divide, right? Not according to this study. When asked if an income safety net for those who lost their jobs to Gen AI would increase their enthusiasm, the center was the lowest at 57%, then the right at 59%, and then the left at 63%. All very similar numbers. And around government programs supporting the use of Gen AI, once again the right was highest, with 60% saying it would increase their enthusiasm, as opposed to 57% for the left and 54% for the center. Getting at part of why I think some people are having a hard time with just the barrage of AI in every part of their life, Edelman concludes that people who distrust AI are more likely to say that AI is imposed on them. In the US, 48% of people who trust AI said that they feel generative AI is being forced upon them whether they want it or not, and that jumps to 67% among those who distrust AI. Now, as we move into the remedy section, it's clear that the pathway to changing this is not going to be through business leaders or government leaders, or probably even AI researchers. Instead, it's going to have to come from our peers.
When asked how much they trust different groups to tell the truth about generative AI, in the US, government leaders came in even lower than CEOs, at 24% compared to 27%. AI researchers were at 53%, still significantly lower than friends and family, who were at 71%. All in all, this is a pretty bleak story about the state of AI perception in the US and other similarly developed countries. Now, as I mentioned, this is not the only story I've seen that falls along these themes. Microsoft's Satya Nadella recently talked about AI needing social permission to consume as much energy as it does. In an hour-long interview with the CEO and chair of Axel Springer, Nadella said, "At the end of the day, I think that this industry to which I belong needs to earn the social permission to consume energy because we're doing good in the world." Now, Nadella made a point to downplay the immediate impact of AI on power consumption, which I agree with, as I think that's an overblown narrative in the immediate term, but he did also note that the rapid growth of data centers is putting, and of course will continue to put, a lot of pressure on the electric grid. Nadella argued that the only way the public will accept that pressure is if it results in economic growth that is broadly spread across the economy. Now, as a total aside, one of the catastrophic failures, in my estimation, of the AI industry so far lies particularly with the folks who are building out AI data centers. This is one of the more unique opportunities that any technology has ever had to pair the destruction in creative destruction with creation right from the beginning. Normally those are two sequenced phases, with the destruction happening first and the creation only happening much later, at least when it comes to jobs and displacement. In this case, the infrastructure buildout should be a boon and a bonanza for the places where that buildout is happening.
It's an opportunity to employ local people, to do retraining, to subsidize costs for communities. It is a failure of imagination, of policy, of planning, of basically everything you can imagine that, instead of communities competing to have this infrastructure built there, they are instead protesting it. If you are among my listenership, and I know some of you are out there, who are in data center construction companies or the surrounding industry, you have so much more work to do, and a unique opportunity to help us right this ship from the beginning. Now, as I've mentioned a couple of other times recently, I'm also seeing the anti-AI political discourse ratchet up heading into next year's midterms. Bernie Sanders recently published an op-ed in the Guardian called "AI Poses Unprecedented Threats. Congress Must Act Now." Sanders has become completely Hinton-pilled and is no longer just talking about job displacement, but the, quote, "very real fear that in the not so distant future a superintelligent AI could replace humans in controlling the planet." X-risk is back on the menu, baby. Now, this op-ed reads like a blueprint for how the anti-AI rhetoric, at least from the left, is going to go next year. It's got a big dose of billionaire blame. It connects Trump and the current White House with big tech. It talks about the impacts of AI on democracy. And of course it talks about job displacement, quoting folks like Elon Musk and Anthropic's Dario Amodei. As I was preparing this episode, I noticed that Florida Governor Ron DeSantis, who has also been getting increasingly loud in his AI and general tech antagonism, is putting together a proposal for what he's calling a Citizen Bill of Rights for AI. Now, one thing I will note is that even among folks who are firmly bullish on AI, there are a fair number of things in this idea for a Bill of Rights that don't feel like they would be all that controversial, across the spectrum from AI bulls to bears.
Prohibiting AI from using people's names and likenesses without their consent; requiring notices when consumers are interacting with AI; prohibiting companies from selling or sharing personally identifying information. Like I said, a lot of things that I think a lot of people could come together on. Now, there is also in this a big whack against AI data centers, such as prohibiting utilities from charging residents more to support data center development. But even with that, I still think that there's probably more agreement than you might imagine. Now, I don't want to get fully into it today, but even mainstream media is noticing how this is becoming a bipartisan issue. NBC News recently pointed out that AI is creating odd bedfellows across the parties.
