People are starting to “talk like AI,” according to OpenAI CEO Sam Altman.
While teachers and business leaders complain about people using AI chatbots to write and communicate — and more and more public figures use AI-controlled avatars to communicate on their behalf — even authentic human speech is starting to sound “fake” and “machine-like,” according to Altman, who posted his views Monday on X.
Altman isn’t just suggesting that heavy AI use is influencing word choice and style of expression. He argues the trend is driven by a variety of factors:
- People are adopting LLM-style language in their writing.
- The Extremely Online crowd tends to move in unison, drifting toward similar expressions and framing.
- Social platforms optimize for engagement, amplifying repetition and exaggeration.
- Creator monetization systems push people toward artificial formulas for how they say things.
- Actual bots are involved.
Taken together, these trends make posts on social networks read as fake, according to Altman.
It turns out that his claims about chatbots influencing human speech are supported by science.
The writing’s on the wall
Researchers from UCLA and the University of Copenhagen found that ChatGPT influences word choice. They studied more than 100,000 Reddit posts and comments between April 2023 and January 2024 and found that people picked up new words and phrases from the chatbot.
The research team, led by Carmela Zunino and Michael S. Bernstein, found words like “delve,” “showcase,” and “underscores” grew more common in online discussions — and many were rare before 2023. The team checked older data and found those words became popular right after ChatGPT became well-known.
They also looked at tech and AI-focused groups, finding even bigger changes.
But wait, you say. How do they know these linguistic twists aren’t from people copying and pasting from ChatGPT, rather than being influenced by it? The increase in GPT words was found in unscripted conversation — not just in scripted or edited material.
Even copying and pasting doesn’t fully explain the shift. The paper notes the language change came not only from ChatGPT users but also from people who merely saw others repeat these words. In other words, people pick up chatbot word preferences secondhand.
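The study’s core measurement is simple in principle: count how often the flagged marker words appear per unit of text, before and after ChatGPT’s release, and compare the rates. Here is a minimal sketch of that idea in Python, using toy stand-in corpora (the word list comes from the article; the sample sentences and the per-10,000-token rate are illustrative assumptions, not the researchers’ actual method or data):

```python
from collections import Counter
import re

# Marker words the study flagged as surging in online discussions after 2023
MARKERS = {"delve", "showcase", "underscores"}

def marker_rate(texts):
    """Return marker-word occurrences per 10,000 tokens across a list of posts."""
    tokens = [t for text in texts for t in re.findall(r"[a-z']+", text.lower())]
    counts = Counter(tokens)
    hits = sum(counts[w] for w in MARKERS)
    return 10_000 * hits / max(len(tokens), 1)

# Toy corpora standing in for pre- and post-ChatGPT Reddit comments
before = ["Let's look into the data and see what it shows."]
after  = ["Let's delve into the data; it underscores a clear trend."]

print(marker_rate(before))  # 0.0 — no marker words in the "before" sample
print(marker_rate(after))   # 2000.0 — 2 hits in 10 tokens
```

A real analysis would also need controls the paper describes, such as checking baseline frequencies in pre-2023 data and separating direct ChatGPT output from organic human usage.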
Speaking of speech
AI chatbots affect word choice when people type on social networks. They’re also changing the way we speak, according to another study.
Humans change how we speak based on whom we’re talking to. Subtle speech tones and inflections can reveal a lot about a person, such as their age, where they’re from or what group they belong to. When people converse, they often start to match each other’s speech, altering their speed, pronunciation, and other cues.
If a New Yorker talks to a Texan, the Texan will speak a bit more like a New Yorker after a while — and the New Yorker will talk a little more Texan.
New research by Éva Székely, Jūra Miniota, and Míša Hejná from KTH Royal Institute of Technology in Sweden and Aarhus University in Denmark looked at how talking machines might change the way people talk. Their paper argues that when machines speak to us in realistic voices, they don’t just give answers or help with tasks; they carry their own specific accents and sound patterns.
The authors found that the same matching between two people can occur when a person talks to a machine with a realistic voice.
As we increasingly interact with spoken-language AI, we’re likely to change how we talk to sound more like the machine. Synthetic voices with particular accents or styles could spread and change how people talk in general. Groups, companies, or even political campaigns might use machine-made voices to make some ways of talking seem better or more accepted.
What this all means
Altman is right: AI is changing the way we talk, and the implications are profound.
One possible consequence is the homogenization of language. Because AI favors a polished, standardized register, people increasingly adopt similar phrasing and cadence in writing and talking, flattening the richness of dialects, vernaculars, and cultural variations. The subtle quirks that once defined regional or communal identity now risk being diluted, replaced by a kind of synthetic “GPT English.” While this might make global communication clearer, it also makes it colder and more uniform.
But language doesn’t just reflect how we talk — it also shapes how we think. If AI-conditioned phrasing becomes the norm, thought patterns themselves could shift toward the syntax and cadence of machines. This would encourage clarity and efficiency, but diminish nuance, creativity, and narrative flow, which are often at the heart of human expression. Conversations and writing might start to feel less personal and more formulaic, aligning with the linguistic patterns favored by large language models (LLMs).
If standardized, machine-trained American English, or some even more homogenized average style of speaking, comes to dominate, it could become the prestige form of speech in social and professional contexts. That could mean wider convergence around AI-like intonation, reduced diversity of global accents, and shifts in what society deems “trustworthy” or “professional” speech.
This carries social risks. If most AI voices favor one accent, that could lead to reinforced accent bias. People who don’t sound like AI might even face subtle discrimination in workplaces or customer-facing roles.
The stakes extend into education, where students are already submitting work shaped by “GPT English,” and teachers are adapting to the erosion of authentic voice in favor of AI-polished language. At work, polished machine-like phrasing might well become the standard expectation. People with regional or informal styles could appear less competent.
The reverse could also happen: generic, AI-chatbot-influenced language might create “demand” for more appealing, more human, and more diverse modes of linguistic expression.
Either way, we’re about to find out.