The ChatGPT Effect: Is Your Voice Sounding Like AI?

AI is changing how we speak and interact. (Image: The Verge)

AI isn’t just impacting how we write — it’s subtly changing how we speak and interact with others. Are linguistic patterns shifting towards an "AI voice"? Researchers say this influence is accelerating.

Join any Zoom call, walk into any lecture hall, or watch any YouTube video, and listen carefully. Past the content and inside the linguistic patterns, you'll find the creeping uniformity of AI voice. Words like "prowess" and "tapestry," which ChatGPT favors, are slipping into our vocabulary, while words the model uses less often are in decline.

The Rise of the "Virtual Vocabulary"

Researchers are already documenting shifts in the way we speak and communicate as a result of ChatGPT — and they see this linguistic influence accelerating into something much larger.

In the 18 months after ChatGPT was released, speakers used words like “meticulous,” “delve,” “realm,” and “adept” up to 51 percent more frequently than in the three years prior, according to researchers at the Max Planck Institute for Human Development. They analyzed close to 280,000 YouTube videos from academic channels and confirmed these words align with those the model favors.

The speakers don’t realize their language is changing. That’s exactly the point. One word, in particular, stood out: “Delve.” It has become an academic shibboleth, a linguistic watermark flashing "ChatGPT was here."

"We internalize this virtual vocabulary into daily communication." - Hiromu Yakura, Max Planck Institute

As Levin Brinkmann, a coauthor of the study, puts it, "'Delve' is only the tip of the iceberg."

Beyond Vocabulary: Tone and Trust

It’s not just that we’re adopting AI language — it’s about how we’re starting to sound. Researchers suspect that AI influence is starting to show up in tone, too — in the form of longer, more structured speech and muted emotional expression.

AI also shows up in functions like smart replies. Research out of Cornell found that using smart replies can increase cooperation and feelings of closeness. However, if people believed their partner was using AI, they rated them as less collaborative and more demanding. It wasn’t actual AI usage that turned them off — it was the suspicion of it. We form perceptions based on language cues, and it’s the language properties that drive those impressions.

The Deeper Loss: Human Signals and Trust

This paradox — AI potentially improving communication while fostering suspicion — points to a deeper loss of trust, according to Mor Naaman, professor at Cornell Tech. He identifies three levels of human signals we risk losing:

  • Basic humanity signals: Cues like vulnerability or personal rituals that say, "This is me, I'm human."
  • Attention and effort signals: Proof that says, "I cared enough to write this myself."
  • Ability signals: Showing our sense of humor, competence, and real selves.

Consider the difference: "I'm sorry you're upset" versus "Hey sorry I freaked at dinner, I probably shouldn't have skipped therapy this week." One sounds flat; the other sounds human.

Figuring out how to bring back these signals is crucial, because AI is changing not only how we speak but what we think. The deeper concern is a loss of agency that begins in our speech and extends into our thinking: instead of articulating our own thoughts, we articulate whatever the AI helps us articulate, and in the process we may be persuaded by it.

Without these signals, trust may erode, potentially limiting trusted communication to only face-to-face interactions.

AI and Linguistic Diversity

The trust problem worsens when considering how AI defines "legitimate" communication. University of California, Berkeley research found that AI responses often contained stereotypes or inaccuracies when prompted to use non-Standard American English dialects.

This means AI doesn’t just prefer Standard American English; it can actively flatten other dialects, potentially demeaning speakers. This perpetuates inaccuracies not only about communities but also about what "correct" English is.

The stakes aren’t just about preserving linguistic diversity — they’re about protecting the imperfections that build trust. When everyone sounds "correct," we lose the verbal stumbles, regional idioms, and off-kilter phrases that signal vulnerability, authenticity, and personhood.

The Future of Human Communication

We're approaching a fork between standardization (think templated emails) and authentic expression. Three core tensions will shape which path we take:

  • Self-regulation: Early backlash suggests we may push back against linguistic homogenization.
  • AI evolution: Systems will likely become more expressive and personalized.
  • Loss of agency: The deepest risk is losing conscious control over our own thinking and expression.

The future isn’t fixed; it depends on whether we are conscious participants in this change. Will we actively choose to preserve space for the verbal quirks and emotional messiness that make communication irreplaceably human?
