LLMs are changing how we speak, say German researchers

Like it or not, ChatGPT and other large language models are changing the world, including how we speak, claims a group of researchers, and the end result could be an erosion of linguistic and cultural diversity.

A team from the Max Planck Institute for Human Development in Germany has published a non-peer-reviewed preprint reporting that words ChatGPT uses preferentially have begun appearing more frequently in human speech since the bot was unleashed on the world in 2022.

So-called “GPT words” include comprehend, boast, swift, meticulous, and the most popular, delve. After analyzing 360,445 YouTube academic talks and 771,591 podcast episodes, the team found that these and other terms, such as inquiry, had begun cropping up in more podcasts and videos across a wide range of topics.
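
The preprint's statistical machinery is more involved than this, but the core idea (counting how often tracked words turn up per unit of speech before and after ChatGPT's release) can be sketched in a few lines of Python. Everything below is illustrative: the toy transcripts and the per-1,000-token measure are assumptions, not the paper's actual pipeline.

```python
from collections import Counter
import re

# Hypothetical corpus: (year, transcript_text) pairs standing in for the
# YouTube talks and podcast episodes the researchers analyzed.
transcripts = [
    (2021, "We will look into the data and explain the results."),
    (2023, "Let us delve into this swift and meticulous inquiry."),
]

# Words the paper flags as preferred by ChatGPT.
GPT_WORDS = {"delve", "swift", "meticulous", "inquiry", "comprehend", "boast"}

def relative_frequency(texts: list[str]) -> dict[str, float]:
    """Occurrences of each GPT word per 1,000 tokens across the given texts."""
    tokens = [t for text in texts for t in re.findall(r"[a-z]+", text.lower())]
    counts = Counter(t for t in tokens if t in GPT_WORDS)
    per_thousand = 1000 / max(len(tokens), 1)
    return {w: counts[w] * per_thousand for w in GPT_WORDS}

# Compare usage before and after ChatGPT's late-2022 release.
before = relative_frequency([t for y, t in transcripts if y < 2023])
after = relative_frequency([t for y, t in transcripts if y >= 2023])
for word in sorted(GPT_WORDS):
    print(f"{word:12s} before: {before[word]:.2f}  after: {after[word]:.2f}")
```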

The paper doesn’t go beyond statistically analyzing a handful of words to track their increased usage, and it doesn’t pass judgment on whether the shift is good or bad, but it’s enough to make the researchers ask an important question: Do LLMs have a culture that’s shaping ours?

“The uptake of words preferred by LLMs in real-time human-human interactions suggests a deeper cognitive process at play,” the researchers said, though they noted that the underlying adoption process remains unknown.

That said, long-term AI-human interaction could reshape how both sides use language, potentially creating a “closed cultural feedback loop in which cultural traits circulate bidirectionally between humans and machines,” the researchers noted.

While fascinating from a cultural evolution perspective, the prospect concerns the researchers, who noted that, given sufficient time and reach, that AI-human linguistic loop could lead to cultural homogeneity.

“If AI systems disproportionately favor specific cultural traits, they may accelerate the erosion of cultural diversity,” the team wrote in their paper. “Compounding this threat is the fact that future AI models will train on data increasingly dominated by AI-driven traits, further amplified by human adoption, thereby reinforcing homogeneity in a self-perpetuating cycle.” 
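
The feedback loop the team describes is easy to caricature in code. The toy simulation below is not the researchers' model: the vocabulary, adoption rate, and mixing scheme are all invented for illustration. It shows how a mild machine preference for whatever word is currently most common, plus partial human adoption of machine output, drives one term toward dominance.

```python
import random
from collections import Counter

random.seed(0)

# Toy vocabulary: a stand-in for competing ways of saying the same thing.
vocab = ["delve", "dig", "explore", "examine", "probe"]

# Start with a roughly even mix of usage in the "human" corpus.
corpus = [random.choice(vocab) for _ in range(10_000)]

# Model a mild preference: the "LLM" over-produces whatever is already most
# common, and humans adopt some of its output each round.
for generation in range(10):
    favorite = Counter(corpus).most_common(1)[0][0]
    llm_output = [favorite if random.random() < 0.2 else random.choice(corpus)
                  for _ in range(len(corpus))]
    # The next generation's corpus mixes human text with adopted LLM text.
    corpus = [random.choice([h, m]) for h, m in zip(corpus, llm_output)]
    top, n = Counter(corpus).most_common(1)[0]
    print(f"gen {generation}: '{top}' now {100 * n / len(corpus):.0f}% of usage")
```

Under these made-up numbers, the leading word's share grows from about 20 percent to roughly 70 percent in ten generations; the point is the direction of travel, not the figures.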

In other words, it’s a swift but meticulous delve into how machines comprehend now, and it’s a boring, beige future of AI-defined communication standards later. 

That’s a pretty gloomy picture of a future spent serving our AI overlords. Worse still, model collapse, the degradation that sets in when models train on their own output, isn’t a safety valve here; it’s an additional hazard.

“As specific patterns become monopolized, the risk of model collapse rises through a new pathway: even incorporating humans into the loop of training models might not provide the required data diversity,” the researchers said. 

In short, let’s not start taking our vernacular cues from a robot that has no idea what it’s actually saying, okay? ®
