
A
This episode is brought to you by LinkedIn Jobs. If you've ever hired for your organization, you know that finding the right person is everything. That's why LinkedIn Jobs has launched their new AI assistant, so you can feel confident you're finding top talent you can't find anywhere else. Great candidates are already on LinkedIn, and according to LinkedIn, employees hired through LinkedIn are 30% more likely to stay for at least a year compared to those hired through the leading competitor. That's a big deal when every hire counts. Hiring doesn't have to be complicated. LinkedIn Jobs' AI assistant does the heavy lifting, filtering applicants and surfacing only the best matches. Plus, it suggests 25 great-fit candidates daily, so you can invite them to apply and hire right the first time. Post your job for free at LinkedIn.com/TTD, then promote it to use the new AI assistant, making it easier and faster to find top candidates. That's LinkedIn.com/TTD to post for free. Terms and conditions apply.
B
This episode is brought to you by CarGurus. You know, sometimes I think about how good design solves real problems. And car shopping? That's a problem that desperately needs better design. The uncertainty of buying a car can be exhausting. Is this price fair? Is there a better deal two clicks away? You shouldn't need a detective's intuition to feel confident about a major purchase. That's where CarGurus comes in. They've redesigned the entire experience, ensuring a transparent and hassle-free buying process. With more car listings than any other major online automotive marketplace in the US, you can actually compare and find the best deal. Real data-driven ratings, price-drop alerts, verified dealers. It removes the confusion from the equation. It's no wonder Similarweb estimated traffic data shows CarGurus is the number one most visited car shopping site. Buy or sell your next car today with CarGurus at CarGurus.com. Go to CarGurus.com to make sure your big deal is the best deal. That's C-A-R-G-U-R-U-S.com. CarGurus.com. This episode is brought to you by Ambetter Health Group. Health insurance can put businesses in a tough position. If you're a business owner, a CFO, or an HR leader, this is probably going to sound familiar. It's fall, and you find out your group health insurance premium will be more expensive next year, maybe by a lot. And as usual, you have to pick one carrier and a few plans for all of the employees. But they each have different medical needs, different budgets, and different preferences for doctors. Plus, the carrier's network might not be strong where all employees live. Fortunately, there's a new approach. It's called an ICHRA, and it's a game changer. ICHRAs make costs predictable with stable pre-tax contributions and a larger risk pool. And they make health plans personal, because employees can buy any plan that fits their needs from any carrier. You choose how much to contribute; they choose what works for them. It's about time, right?
For coverage you control, plan on an ICHRA. Learn more at AmbetterHealth.com/ICHRA. You're listening to TED Talks Daily, where we bring you new ideas to spark your curiosity every day. I'm your host, Elise Hu. Is AI changing the very way we talk? Etymologist and content creator Adam Aleksic sounds the alarm on how AI tools are influencing our behavior, down to our very word choices. He encourages us to remember that these emerging tools are not neutral, that they are possibly rewiring the very underlying patterns of our thoughts, and why. Afterward, I sat down with Adam to go beyond his talk and learn more about what sounding human even means anymore, the tools we'll need to build as we continue down this rapidly changing path, and more. Stick around after his talk for our conversation.
C
How sure are you that you can tell what's real online?
You might think it's easy to spot an obviously AI-generated image, and you're probably aware that algorithms are biased in some way. But all the evidence suggests that we're pretty bad at understanding that on a subconscious level. Take, for example, the growing perception gap in America. We keep overestimating how extreme other people's political beliefs are. And this is only getting worse with social media, because algorithms show us the most extreme picture of reality. As an etymologist and content creator, I always see controversial messages go more viral, because they generate more engagement than a neutral perspective. But that means we all end up seeing this more extreme version of reality, and we're clearly starting to confuse that with actual reality. The same thing is currently happening with AI chatbots, because you probably assume that ChatGPT is speaking English to you, except it's not speaking English, in the same way that the algorithm's not showing you reality. There are always distortions depending on what goes into the model and how it's trained. We know that ChatGPT says "delve" at way higher rates than usual, possibly because OpenAI outsourced its training process to workers in Nigeria, who do actually say "delve" more frequently. Over time, though, that little linguistic overrepresentation got reinforced into the model even more than in the workers' own dialects. Now that's affecting everybody's language. Multiple studies have found that since ChatGPT came out, people everywhere have been saying the word "delve" more in spontaneous spoken conversation. Essentially, we're subconsciously confusing the AI version of language with actual language. But that means the real thing is, ironically, getting closer to the machine version of the thing. We're in a positive feedback loop: the AI represents reality, we think that's the real reality, and then we regurgitate it so that the AI can be fed more of our data.
You can also see this happening with the algorithm through words like "hyperpop," which wasn't really part of our cultural lexicon until Spotify noticed an emerging cluster of similar users in their algorithm. As soon as they identified it and introduced a hyperpop playlist, however, the aesthetic was given a direction. Now people began to debate what did and did not qualify as hyperpop. The label and the playlist made the phenomenon more real by giving them something to identify with or against. And as more people identified with hyperpop, more musicians also started making hyperpop music. All the while, the cluster of similar listeners in the algorithm grew larger and larger. And Spotify kept pushing it more and more, because these platforms want to amplify cultural trends to keep you on the app. But that means we also lose the distinction between a real trend and an artificially inflated trend. And yet this is how all fads now enter the mainstream. We start with a latent cultural desire: maybe some people are interested in matcha or Labubu or Dubai chocolate. The algorithm identifies this desire and pushes it to similar users, making the phenomenon more of a thing. But again, just like how ChatGPT misrepresented the word "delve," the algorithm is probably misrepresenting reality. Now more businesses are making Labubu content because they think that's the desire. More influencers are also making Labubu trends, because we have to tap into trends to go viral. And yet the algorithm is only showing you the visually provocative items that work in the video format. TikTok has a limited idea of who you are as a user, and there's no way that matches up with your complex desires as a human being. So we have a biased input. And that's assuming that social media is trying to faithfully represent reality, which it isn't. Instead, it's only trying to do what's going to make money for them.
It's in Spotify's interest to have you listening to hyperpop, and it's in TikTok's interest to have you looking at the Labubus, because that's commodifiable. So once again we have this difference between reality and the representation of reality, where they're actually constantly influencing one another. But it's incredibly dangerous to ignore that distinction, because this goes beyond our language and our consumptive behaviors. This affects the world we see as possible. Evidence suggests that ChatGPT is more conservative when speaking the Farsi language, likely because the limited training texts in Iran reflect the more conservative political climate in the region. Does that mean that an Iranian ChatGPT user will think more conservative thoughts? We know that Elon Musk regularly makes changes to his chatbot Grok when he doesn't like how it's responding, and that he uses his platform X to artificially amplify his tweets. Does that mean that the millions of Grok and X users are subconsciously being trained to align with Musk's ideology? We need to constantly remember that these aren't neutral tools. Everything that ends up in your social media feed or in your chatbot responses is actually filtered through many layers: what's good for the platform, what makes money, and what conforms to the platform's incorrect idea about who you are. When we ignore this, we view reality through a constant survivorship bias, which affects our understanding of the world. After all, if you're talking more like ChatGPT, you're probably thinking more like ChatGPT as well. Or TikTok, or Spotify. But you can fight this if you constantly ask yourself: Why? Why am I seeing this? Why am I saying this?
D
Why?
C
Why am I thinking this? And why is the platform rewarding this? If you don't ask yourself these questions, their version of reality is going to become your version of reality. So stay real.
B
Don't go away just yet. Stick around. My conversation with Adam is coming up right after a short break with a word from our sponsors. Today's episode is brought to you by Wayfair. The holidays are here, and if you're like me, there are still final touches to make and last-minute gifts to find. Thankfully, Wayfair has made it incredibly easy to get everything I need fast. Recently I ordered a cozy faux rabbit fur throw blanket and a gorgeous velvet accent chair from Wayfair, and both arrived so quickly with free delivery. They have added the perfect finishing touches to my space for hosting family and friends this season. Whether you are prepping your guest room with fresh bedding, finding gifts for everyone on your list, or adding those final decorative touches to celebrate the season, Wayfair has everything in one convenient place. There's something for every style and every budget, with fast shipping to get you what you need when you need it. Get last-minute hosting essentials, gifts for all your loved ones, and decor to celebrate the holidays, for way less. Head to Wayfair.com right now to shop all things home. That's W-A-Y-F-A-I-R.com. Wayfair, every style, every home. Congratulations on your talk. How are you feeling now that you're done?
D
Thank you. Feeling good.
B
Tell us what made you so curious about language in the first place and how you got hooked.
D
Yeah, wow. Etymology in particular. I always like to tell people it comes from the Greek word etymos, meaning truth. So you look at etymology and you're actually studying truth. You're studying how humans understand the world, how we relate that to other people. In sophomore year of high school, I read this etymology book. I got super into it, and then I just started really studying it more for myself. I started a little website in high school. I studied linguistics in college, and then I was graduating with a linguistics degree and I was like, well, what do I do now? So I started making linguistics content, and then actively studying the language of the social media space as I was in it, and then wrote a book, Algospeak, on how social media is changing language. And that ended up getting me the TED Talk.
B
Is there a problem with social media changing language? Because our language has always evolved. It's a living and dynamic thing, right? English or any other language in the world. So is there anything wrong with it?
D
No. And our language has always evolved around the constraints of a medium, right? Before we had written history, we would rely on oral tradition. We would tell stories through rhyme and meter. And then we started writing things down. The places that used leaves to write things down developed curly scripts, because that was better for the leaves. And the places that used clay and stone to write things down developed rigid scripts. So again, the medium is literally shaping language. We have chapter books; we have the Internet. The Internet allows for this written replication of informal speech. It's again kind of a paradigm shift in how we speak. And I think algorithms are that new paradigm shift. AI is a new paradigm shift. We're in this really fast-paced moment where our language is rerouting around these new mediums we're interacting with.
B
Yeah, you mentioned fast paced. As an etymologist, how are you managing the speed at which things are changing?
D
Yeah, well, it's really good for me that I am kind of studying things in the open. I make videos about phenomena I'm observing, and then I get tagged in videos where people are using new words. It's sort of crazy in that sense, but I also have to be in it. I scroll TikTok for research. You have to be in the milieu to really know what you're studying. Everything's contextual. There's an aesthetic that words are evolving through. There are communities that words are evolving from. You have to understand at least a little bit about Internet culture to know about these communities, because there's so much depth to them.
B
Right. And it strikes me that everything is changing. Like, as we're talking, within a matter of days a trend can emerge, and then be gone, and then be considered passé or old, which is a much faster cadence than, say, academia, where language was traditionally researched and linguistics was studied. What do you feel are some trends this year in language that have come up and really hit the zeitgeist, that you've had to explain the most?
D
Yeah. Wow. Well, we're definitely on the tail end of "six-seven." I feel like most people understand by now that that's this nonsensical interjection coming from meme communities, originally parodying clip farming. But there's been so much going on. Yesterday I made a video about "lowkenuely," and I guarantee you, by the time this podcast airs, that word will already be passé. But it's a combination of "low-key" and "genuinely," and it's like, "true."
B
Oh, "lowkenuely."
C
Yeah, okay.
D
It's not going to stick. It's like a meme word. But that's exactly kind of illustrating how quickly these words come and go. I really doubt that'll be around beyond like a month.
B
Okay. I'm trying to go through what I'm hearing in my house that sounds like nonsense, because I have a 13-year-old, a 10-year-old, and an 8-year-old.
D
No matter how much you think you know, they know more. I have friends who are middle school teachers. Sometimes they let me sit down in their middle school classrooms, and that's where you really learn the culture. And then older Gen Z people parody their language, and then it becomes brain rot. But it starts with Gen Alpha.
B
Yeah. I do like being called chat though, instead of Mom.
D
"Chat" is funny. And that's definitely a phenomenon that's gotten way more popular this year. I definitely started seeing that around 2023, sort of as a general vocative: "What do we think, chat?" you know? And yeah, it sort of reflects the rise of streaming culture. I've seen a lot of words come out of Twitch spaces. Back when "rizz" was popular, that came from Twitch.
B
Oh, okay. So when I'm referred to as chat, it's from like a live streamer typically saying like, hey, don't forget to subscribe.
D
Yeah. It's addressing an unknown audience. You know that there is an audience; you don't know who's in the audience. "Chat" is a catch-all. There's a sort of collective unity to it. And then there's also the strange dynamic of digital surveillance, where we really don't know who's going to see a message or where it's going to be distributed, even on the surface level. Right now, this will go somewhere, but what if it goes viral? Then it will go in directions you don't know. That's also what "six-seven" was parodying, at the core of it, because it comes from this joke that you could go viral by saying "six-seven." So people said "six-seven" to go viral. It was a little self-referential, a nod to itself. And then it drifts from NBA players who are saying this to go viral, to Gen Alpha kids who are saying this to go viral, and then it goes off camera. And now the implied joke is still this possibility that a camera is watching you. And I think that's maybe a defining trend that I keep seeing: we're kind of aware of this constant surveillance, or panopticon, and in the early iteration of the joke we were ironically performing for the algorithm when we said "six-seven." Now, of course, it's just layered into abstraction.
B
Yeah. I mean, there's the panopticon element of it. But it also strikes me as fascinating that so many young people today, when you ask them what they want to be when they grow up, say a YouTuber, right? Or to go viral, or to be an influencer. So now our life aspirations aren't a particular virtue, but instead, yeah, to be seen.
D
I think it's like 50% or something. It's not astronaut anymore, right? We want to be seen, and there's more seeing going on. I kind of worry about the amount of seeing that we're doing. I was in Washington Square Park a few months back, and I saw somebody with, like, the Meta glasses trying to make rizz content. And he was talking up girls, but it's not a real flirtation. He's performing. He's clip farming, you know?
B
Right.
D
I see politicians saying stuff that they know will algorithmically go viral later, but in the present moment, they sacrifice a moment of decorum. I do worry about what the notion that we could all be perceived does to us. On one end, you could act out more because you want to be perceived. On the other end, it makes you more docile because you're worried about being perceived. There are bad effects for society in both ways. And so we can use words like "six-seven" or "chat" to point to what's happening in culture. And then from there we get into subjective territory, right? But I do want to say that the words that people are using are merely a way to categorize reality. And in that sense, they are just a tool. A tool can be used for good or bad. You can draw your own conclusions about culture.
B
In your talk, you focus on how LLMs, large language models, and chatbot AIs are affecting speech, affecting the way we talk. How do you think this will change our language practices in the future?
D
Yeah, mostly this stuff is happening ambiently right now. First we observed an increase in the word "delve," because ChatGPT overrepresents the word "delve." And maybe you heard of that; maybe you heard that ChatGPT says the word "delve." Maybe you heard about sentence structures like "It's not just X, it's Y." Even if you're trying to avoid that, it's still going to affect you. There are so many other words that it uses at a slightly higher rate, like "surpass" or "boast" or "garner." I caught myself saying "garner." I don't know why I'm saying it. But you see a word being used around you and you use it more; that's how we adopt language. And ChatGPT and these other LLMs represent language as a series of numerical coordinates. These representations are close, perhaps, to how we actually feel about words, but they homogenize it, and they get it a little bit wrong, because representation can never be reality. So they mess things up. And now we have new studies coming out showing that, yes, in spontaneous spoken conversations, people are using the words "surpass" and "boast" more, simply because we see them more and we absorb them. That's how I think AI bots are going to be affecting our language. I'm even more conscious, I think, of algorithms. Ideas really travel; you can visualize it like a virus infecting a population. It starts with a host, it goes to some early nodes in the network of social contagion, and then it diffuses further. Algorithms represent those social networks. They literally accelerate ideas. So if we want to think about how ChatGPT is influencing language: you don't even have to be using AI to be affected by these words, because they're showing up all around us. Now we might be looking at more content on social media using the word "delve." Fourteen percent of all research papers are now written with AI. We have parliamentary speeches being written with AI.
You're going to see it more, no matter how immune you think you are, and then you're going to start saying it more.
B
Yeah, it's this loop, right? It's this unending funhouse mirror, or feedback loop, of our language. We feed it, and it feeds us back to ourselves from, like, an aggregated data set.
D
Right.
B
So what are we losing? Yeah. If our language practices are sort of undifferentiated, we're losing the individual quirks or flair that can be in language or slang or intricacies of dialects. Are we losing connection to each other? Are we losing connection to a certain culture? What's the cost of this?
D
Yeah, with language, again, this is a way to reflect our reality, and our reality is not purely this algorithmic AI reality. You will always have a different dialect with your close friends, with your family. You will code-switch between your regional dialect and this homogenized AI dialect, whatever it is, that we're all talking in. But yeah, you'll always find different ways to communicate given the context. It's not a categorical homogenization of language. But in the domain of public speech, I think we are kind of traveling towards a norm. There's also a really good book by Kyle Chayka called Filterworld, about how algorithms sort of dilute culture down. I think that's very much happening, and so that will be happening with language. We have a language dying out every two weeks. There are only about 7,000 in the world, and every two weeks one goes. This was happening before; the Internet perhaps accelerated it, but globalization already kick-started it by nationalizing and centralizing our languages. We've been on this path since, like, the 1850s. But it's definitely happening even more with algorithms, which are more of a force for homogeneity, because there's an expectation that users have: oh, I want you to be speaking in American English or in British English or whatever. So that's one effect that's certainly happening. I do not think it's going to be happening in every sphere of your life, but it's going to be influencing you. And that's really sad, because with some of these dying languages go such incredible perspectives for looking at the world. There are these different frames. I was just reading this book, Braiding Sweetgrass, which I highly recommend. There's this Potawatomi word for "to be a Saturday." We don't have the verb idea of being a Saturday, but I really like that as a frame you can look at Saturdays through. Different languages have different understandings of time and direction.
And the more you condense down into this sort of Western-centric view we've got going on, the more you lose the color and the beauty of all these different ways we could look at the world.
B
Yeah. And sort of the richness and the diversity and the dynamism of being human, right? So I guess my next question is: what do we do now? In your talk, you say that we are subconsciously confusing the AI version of language with actual language, and that means, as we've been talking about, the real thing is getting closer to the machine version of the thing. What do we do about this? It seems like we have a collective action problem.
D
You can't avoid it. You simply cannot avoid it. Now, I do think that by being conscious about this, the more aware we are of what these platforms are doing to us, the more resistance we have. This is a virus; you are able to form your own kind of antibodies through media literacy. And it goes so much beyond language. I think language is the canary in the coal mine, the sort of proxy for greater cultural shifts that we can pay attention to, because it tells us what's going on in society in that truth-telling kind of way. But I'm worried about political shifts. I'm worried about social shifts. ChatGPT has different political leanings in each language, because it represents the values of those countries differently. It's really concerning to me that there's a direction we are being trained to think in. When you interact with a platform like X, you've got to know that Elon Musk is artificially amplifying his tweets so they go more viral. When you interact with a chatbot like Grok, we also know that Elon Musk changes what the chatbot says so that we align with his picture of reality. We need to know that, so that we can maintain our reality.
B
Who do you think is responsible for not only continuing to have these conversations, but helping make sure that the next generations coming up are media literate and hip to what's happening here?
D
Yeah, and I also want to add a caveat: I do not think the responsibility should merely be on the consumer of media. I'm simply pushing that right now because I think it's the best thing we can do in this current cultural moment, where it doesn't seem like we have any power against the platforms. The moment we can start seriously regulating these platforms, we should be doing that. But in the meantime, in our personal lives, the way that I have been navigating social media is trying to build that radical media literacy for myself. It's my dream that one day, in 10th-grade ELA class, along with poetry scansion, you have a unit on how to look at TikTok. And I know that sounds ridiculous, but it's not a joke. In the same way you should think about the news, you should think about how the New York Times is not printing some stories because they're filtered through layers of manufactured consent. We should teach our kids about engagement-optimizing algorithms and how these are working to trigger your reptile-brain impulses, and how they're not actually aligned with what you want. All of these things, I think, should be taught.
B
This is actually why I haven't joined the whole Wait Until 8th on phones. You know how there's this big campaign to keep kids off of screens? I actually worry that it's making them completely illiterate until they just jump into the deep end.
D
I would say it's pretty bad to go from 0 to 100 like that, but that's such a delicate question. Clearly it is bad for children to be looking at iPads at age 2. I would also really have to navigate that when I become a parent. Sure, it's a scary question to grapple with, but I think you slowly integrate while teaching them lessons about what's being shown: oh, that's an AI-generated video; that is not what things really look like. Or a lesson that I would always want to teach my kid is: why did this show up on your For You page?
B
Right.
D
When you get a video, ask yourself that question, right? Think about what videos are not showing up, because there's always a survivorship bias to what's filtered. The thing that's showing up is generating engagement. It's passed content guidelines for the platform. It's targeted to the platform's idea of who you are, which is an incorrect idea, in the same way that the word "delve" is incorrectly represented. And then, along with that, strategies for how to remain present and mindful and literate. I think the thing we least like about scrolling is that it makes us feel like we just wasted a bunch of time. I've personally found ways to extract meaning and presence from being on social media, but it takes turning off the cognitive frames that they're trying to trick you into, and that's part of it as well.
B
Okay, Adam Aleksic, we could talk about all this stuff with you for so much longer, but this is an excellent extrapolation and building upon your talk. Okay, stick with us. I'm going to hit you with a few rapid-fire general questions unrelated to your talk specifically, but you can tie it back if you want. All right, let's go. What does a good idea look like to you?
D
Something that is weird and different from what other people have done. And you have to draw on something that exists already, I suppose. But remix it in a new way. Yeah.
B
Collisions of ideas that have been previously out there. Right.
D
Drawing the connection between the data points that doesn't exist yet. Yeah, yeah.
B
I like kind of mixing and remixing for creativity. All right, what is a New Year's resolution or intention or ritual of yours, if you have one?
D
Interesting. Okay, ritual. I do have a ritual. So it's become a yearly tradition that my birthday is on January 3rd and the Moby Dick Marathon in New Bedford, Massachusetts, is also on January 3rd. So I'm going with a bunch of friends to read Moby Dick for 25 hours.
B
What? Everybody just gets together and sits and reads.
D
It's terrible. It's wonderful. Yeah, it's a really painful experience, but I think it's the only way you can read Moby Dick. And you kind of go mad along with Captain Ahab.
B
So are you quietly reading in a crowd, or are you reading aloud, popcorn style?
D
No, no. There's one person chosen to read at a time, and it goes for 25 hours. So I have, like, a 1:30 a.m. reading slot. My friend John has a 5:30 a.m. reading slot. Yeah, this is the second year; we're definitely making it a yearly thing.
B
Do you know which part of the book you're going to be at, or everybody's going to collectively be at, during your 1:30 a.m. reading slot?
D
I'm really hoping for the chapter "Stubb Kills a Whale."
B
Yes.
D
That's such a good one. But yeah, I don't know. We'll see. We'll see.
B
Okay, that's so funny. All right. What is a hobby or interest of yours unrelated to your work, that you love so much that you might be able to give a TED talk about it?
D
I don't believe in hobbies. I think hobbies are strange. A hobby is defined as this thing which is slightly less serious than work, but more serious than your other leisure activities. And I want to treat everything in my life with the equal importance of work and leisure. I like sitting. I like eating. I like listening to music. But now I sound boring. I do have activities I do for fun, but I'm anti calling things hobbies.
B
I like that. All right, that might be a hill you're willing to die on, because my next question is, what is a hill you'd be willing to die on? For example, I would die on the hill that, you know, one location's pizza is better than another location's pizza. Do you have anything you feel that way about?
D
I hate the word content. And I know as a linguist, I'm not supposed to hate words. This is from my cultural critic perspective. I think it's so strange that we talk about making content because the word implies something that is contained, like the contents of a box or drawer. And now ask yourself, like, where is it being contained? It's being contained in the medium of social media. So TikTok is like the box and then your content is this thing held in the box. And that implies, first of all, that it's interchangeable with other pieces of content and that, you know, the content doesn't have anything special within it. Another reframe, potentially, is that your video is the container and your idea or message is the content. But we don't talk about it on that level. We've abstracted a level up, and then it becomes this, like, commodifiable thing where you can talk about, oh, this is how you make better content, this is how you make content every day. And then you lose the plot of what you're trying to do, which is spread good ideas. And spreading ideas critically also means your idea leaves the platform, and with content, it's contained. So it's very strange. I try to avoid calling myself a content creator, and I'm perhaps trying to reclaim the word influencer, because I know that's, like, a little negatively coded, but it's what I'm trying to do. I want to influence people. It'd be disingenuous to not say that.
B
Okay, all right. Linguistics influencer, author, etymologist, but certainly not merely a creator of content. Adam Aleksic, thank you so much for sitting down with us.
D
Thank you.
B
That was Adam Aleksic speaking at TEDNext 2025 and in conversation with yours truly, Elise Hu. If you're curious about TED's curation, find out more at ted.com/curationguidelines. And that's it for today. TED Talks Daily is part of the TED Audio Collective. This episode was produced by Lucy Little and edited by Alejandra Salazar. The TED Talks Daily team includes Martha Estefanos, Oliver Friedman, Brian Greene, and Tanzika Sangmarnivong. Additional support from Emma Tobner and Daniela Balarezzo. I'm Elise Hu. I'll be back tomorrow with a fresh idea for your feed. Thanks for listening.
B
TED Talks Daily is sponsored by Capital One. In my house, we subscribe to everything: music, TV, even dog food. And it rocks. Until you have to manage it all. Which is where Capital One comes in. Capital One credit card holders can easily track, block or cancel recurring charges right from the Capital One mobile app at no additional cost. With one sign in, you can manage all your subscriptions all in one place. Learn more at CapitalOne.com/subscriptions. Terms and conditions apply.
D
Hey, it's Adam Grant from TED's podcast WorkLife, and this episode is brought to you by ServiceNow. AI is only as powerful as the platform it's built into. That's why it's no surprise that more than 85% of the Fortune 500 companies use the ServiceNow AI platform. While other platforms duct tape tools together, ServiceNow seamlessly unifies people, data, workflows and AI, connecting every corner of your business. And with AI agents working together autonomously, anyone in any department can focus on the work that matters most. Learn how ServiceNow puts AI to work for people at servicenow.com.
Episode Title: Why are people starting to sound like ChatGPT?
Guest: Adam Aleksic (Etymologist, Content Creator, Author of "Algospeak")
Host: Elise Hu
Date: December 18, 2025
In this provocative TED Talk and follow-up conversation, etymologist and content creator Adam Aleksic explores how AI tools—particularly large language models (LLMs) like ChatGPT—are subtly but profoundly shaping how we speak, think, and even understand reality. Drawing on his linguistic expertise and research, Aleksic cautions that these technologies are not neutral conduits but active agents in molding culture, perceptions, and language itself. The discussion moves from viral trends to deeper questions of identity, cognition, and how we can maintain our "realness" in an increasingly AI-mediated world.
Perception Gaps & Algorithmic Distortions (04:13–07:30)
Quote:
“We all end up seeing this more extreme version of reality, and we're clearly starting to confuse that with actual reality.”
—Adam Aleksic (04:36)
Feedback Loops in AI and User Behavior (07:30–08:57)
Quote:
“We're in a positive feedback loop with the AI representing reality, us thinking that's the real reality, and then regurgitating it so that the AI can be fed more of our data.”
—Adam Aleksic (07:54)
Examples of Algorithmically Created Trends
Subtle Linguistic Convergence (18:51–20:39)
Quote:
"You're going to see it more, no matter how immune you think you are, and then you're going to start saying it more."
—Adam Aleksic (20:39)
Loop of Aggregation and Homogenization
Host Insight:
“…it’s this unending funhouse mirror or feedback loop of our language. We feed it. It feeds us back to us from, like, an aggregated data set.”
—Elise Hu (20:39–20:50)
Language Has Always Adapted to Mediums (12:44–13:26)
Quote:
“Algorithms are that new paradigm shift. AI is a new paradigm shift. We're in this really fast paced moment where our language is rerouting around these new mediums we're interacting with.”
—Adam Aleksic (13:04)
Gen Alpha, Slang, and Performative Speech (14:38–17:54)
Quote:
“Now the implied joke is still this possibility that a camera is watching you. And I think that's maybe a defining trend that I keep seeing, that we're kind of aware of this constant surveillance or Panopticon, and we're ironically performing for the algorithm…”
—Adam Aleksic (16:37)
Language Death and Cultural Flattening (21:09–23:01)
Quote:
“We have a language dying out every two weeks…this was happening before, I think the Internet perhaps accelerated it. But…algorithms which are more of this force for homogeneity…”
—Adam Aleksic (21:47)
The Imperative to Question and Critique (09:06, 23:32–25:47)
Education and Regulation
Quote:
“It's my dream that one day in 10th grade ELA class, along with poetry scansion, you have a unit for how to look at TikTok. And I know that sounds ridiculous, but it's not a joke.”
—Adam Aleksic (25:07)
Gradual Digital Integration for Kids
On AI's Subtlety:
“Essentially, we're subconsciously confusing the AI version of language with actual language.”
—Adam Aleksic (06:30)
Paradox of ‘Content Creation’:
“I hate the word content. … It implies, first of all, that it's interchangeable with other pieces of content and that…the content doesn't have anything special within it.”
—Adam Aleksic (30:01)
Language as Canary in the Coal Mine:
“I think language is the canary in the coal mine. That sort of proxy for greater cultural shifts…”
—Adam Aleksic (23:32)
| Timestamp | Segment Description |
|---------------|-------------------------------------------------------------------|
| 04:13–09:22 | Adam’s TED Talk: How AI & algorithms distort language and reality |
| 11:43–21:09 | Interview: Language evolution, viral trends, youth slang |
| 21:09–23:32 | Risks of linguistic homogenization & loss of cultural nuance |
| 23:32–27:18 | How to resist: Media literacy, education, and regulation |
| 27:18–30:01 | Personal insights: Creativity, hobbies, and pet linguistic peeves |
Adam Aleksic paints a complex but compelling picture of how AI—far from being a neutral accessory—reshapes the fundamental ways we speak, think, and see ourselves. The challenge, he argues, is not to escape these technologies, but to remain vigilant and literate, constantly interrogating the structures influencing our thoughts and words. As language evolves ever faster under algorithmic guidance, the need for collective digital awareness—and perhaps systemic regulation—has never felt more urgent.
For more TED Talks and to learn about their curation guidelines, visit ted.com/curationguidelines.