Tech Journal Now

How AI is changing your mind

News Room
Last updated: March 13, 2026 7:52 am

Humanity is diving headlong into a global experiment. More than 1 billion people have a new and unprecedented source of information and cognitive guidance: artificial intelligence (AI) trained on trillions of words. 

So, how exactly are AI chatbots affecting our minds, thoughts, beliefs and opinions? 

Scientists are scrambling to find out — and reports that have come out this week offer insights into what’s going on. 

AI writing tools can influence your beliefs and opinions

Cornell University researchers published a new paper this week detailing two experiments that aimed to find out the cognitive impact of AI writing tools. 

One focused on standardized testing. The more interesting experiment, though, examined controversial ideas and opinions, and whether writing tools affected them. (Spoiler alert: They did.)

The researchers rigged autocomplete suggestions to either favor or oppose the death penalty, felon voting rights, fracking, or genetically modified organisms. Then they measured how much the suggestions swayed participants’ opinions.

They found that biased autocomplete changed opinions more than merely reading the same biased point of view did. Apparently, the interactive, co-writing nature of AI autocomplete suggestions plays a crucial role in persuasion.

Also: A strong majority of participants did not believe the AI autocomplete was biased and did not believe they were influenced in their thinking. 

Even more interestingly, some participants were warned that the autocomplete was biased and it still changed their opinions. 

What makes this so interesting is that far more people use AI-based autocomplete than AI chatbots. If governments or other organizations wanted to shift public opinion, biasing AI autocomplete would probably work better than, say, large language model (LLM) grooming (where state actors like Russia can “flood the zone” with biased content picked up by AI spiders).

It’s unlikely that autocomplete tools will be tweaked to make you change your opinion about the death penalty. But the real risk is a subtle influence over time. Because AI does have biases built in. 

It turns out that AI-based writing tools can not only change your opinions and beliefs; they can also make you bland.

AI is homogenizing human expression

Another paper published this week — this one by three researchers at the University of Southern California — found that the use of LLM-based chatbots is erasing diversity of not only expression, but thought. The research pulls together findings from linguistics, psychology, cognitive science and computer science. 

Hundreds of millions of people now use the same small handful of AI models to write emails, draft reports, brainstorm ideas and polish their writing. Because those models were trained on massive datasets that overrepresent English, Western viewpoints and the perspectives of educated, high-income, liberal males, the writing they produce tends toward that tone and style, regardless of whether the user fits that mold.

When you ask an AI to “improve” your writing, it doesn’t just fix grammar. It nudges your words and even your ideas toward a single, dominant pattern. 

The researchers analyzed one study that generated 30,000 college admission essays using LLMs. The essays showed high semantic and lexical similarity across the board — a dramatic narrowing of the range of human expression. 

Another finding: When AI “polishes” writing — Reddit posts, news articles, academic abstracts, personal essays — the resulting texts converge so much in style and complexity that it becomes harder to guess the author’s political views, personality, gender or age. In other words, AI doesn’t just polish writing. It erases the author’s individuality. 
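The kind of lexical convergence the researchers describe can be illustrated with a toy metric. Here is a minimal sketch (my own illustration, not the study’s methodology; the sample sentences are made up) using Jaccard overlap of word sets:

```python
def jaccard_similarity(text_a: str, text_b: str) -> float:
    """Lexical overlap between two texts: |A ∩ B| / |A ∪ B| over lowercase word sets."""
    words_a = set(text_a.lower().split())
    words_b = set(text_b.lower().split())
    if not words_a and not words_b:
        return 1.0
    return len(words_a & words_b) / len(words_a | words_b)

# Two AI-"polished" passages (made-up examples) tend to share far more
# vocabulary than two independently written human passages would.
polished_a = "In today's fast-paced world, effective communication is essential"
polished_b = "In today's fast-paced world, clear communication is essential"
print(jaccard_similarity(polished_a, polished_b))
```

Real studies use far richer semantic measures, but even this crude word-overlap score spikes when two texts have been run through the same model.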

Also: When researchers prompt AI models to write from the perspective of a specific identity (say, a person with impaired vision), the models tend to produce stereotyped, outsider caricatures rather than authentic insider representations of that experience. 

While people tend to look at LLM chatbots as tools that help with writing, the researchers see them as “coreasoners,” meaning that they’re part of users’ thought-forming process. 

And it’s a recurring cycle. As homogenized writing proliferates, those generic texts get sucked into the training data, creating a feedback loop of ever-increasing blandness, a genericization of the world’s knowledge and perspective. As chatbots get blander, we get blander. And as we get blander, the chatbots get even blander. 
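That feedback loop can be sketched with a toy model (my own illustration, not from the paper): if each round pulls every writer’s style part-way toward the group average, as training on homogenized output would, stylistic variance collapses quickly:

```python
import statistics

def homogenize(styles, pull=0.5):
    """Pull every writer's 'style score' part-way toward the group mean,
    mimicking one round of training on AI-polished output."""
    mean = statistics.mean(styles)
    return [s + pull * (mean - s) for s in styles]

styles = [1.0, 3.0, 5.0, 9.0]  # stand-ins for four distinct writing styles
for generation in range(4):
    print(generation, round(statistics.pstdev(styles), 3))
    styles = homogenize(styles)
```

With a pull of 0.5, the spread between styles halves every generation; the average stays the same, but the individual voices vanish into it.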

What does all this mean — and why does it matter? 

The one big takeaway you should get from this column is this: Our thoughts, opinions, ideas and modes of expression are linked together and strongly affected by a small handful of AI tools.

This notion is best articulated by a concept called “distributed cognition theory,” developed by cognitive anthropologist Edwin Hutchins in the 1990s, decades before the popularization of LLM tools, and detailed in his 1995 book Cognition in the Wild.

Applied to the LLM era, the major chatbots function both as cognitive tools and as thought partners that co-construct reality with users. They sustain, elaborate on and magnify our beliefs. And when they hallucinate, they can cause us to hallucinate, too.

Two attributes of AI chatbots magnify the effect. The first is sycophancy. They’re too agreeable and are more likely than people to just go along with the user’s beliefs. 

The second is something called “simulated intersubjective validation,” whereby AI chatbots can give users the feeling of a shared reality, even if the user’s reality isn’t necessarily shared by a large number of people. (This feeling can be especially appealing to people experiencing loneliness, social isolation, or psychosis.)

The bottom line is that LLM-based AI chatbots can influence what the public believes to be true without users even noticing or believing that it’s happening.

Six ways to protect yourself from a worldview determined by AI

There’s little any of us can do to prevent humanity’s slide into being manipulated by AI. But we can protect ourselves, and we should. Here’s how: 

  1. Accept the fact that your intelligence, education and awareness of the issue do not make you immune to the influence of AI tools.
  2. Don’t use autocomplete. Turn it off. Use your own words, not the hive-mind’s. 
  3. Write without using AI. Understand that writing is nothing more than clarified thinking. When chatbots write for you, they also think for you. Writing without AI is the key to cultivating your own thinking, preserving and communicating your own individuality, and enhancing your worth as a professional, a citizen and a human being. 
  4. Override chatbot sycophancy. Use prompt engineering to force chatbots to disagree, challenge and argue with you. I’ll even give you the prompt, which you can copy and paste into your chatbot: “You are my intellectual sparring partner. Your job is to disagree with me constructively, not to agree. For every idea I present: 1) identify and challenge hidden assumptions; 2) build a strong counter-argument; 3) stress-test my logic for flaws, logical gaps, or weaknesses; 4) offer alternative perspectives to mine; and 5) prioritize truth over consensus.”
  5. Cultivate your own personal thoughts in a blog. When you have a clear thought or idea, codify it in a blog. Don’t use AI for research, editing or anything else. Even if you use AI for other purposes, make your blog an “AI-free space.” The purpose of the blog should be for you to publicly cultivate your thoughts, beliefs and opinions and assert and maintain your cognitive individuality in a world of growing sameness. I’ve done this myself and posted my thoughts about the value of blogging in a cognitive world dominated by AI. 
  6. Get most of your information and ideas from good books, good journalism and good science. Avoid “consuming” content that’s presented to you by a social algorithm; instead, curate great, authentic, individual human voices using RSS. Go ahead and post on social, but don’t “consume” the posts of others. 
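If you reach a chatbot through an API rather than a web interface, the sparring-partner prompt from step 4 can be packaged as a reusable system message. A minimal sketch, assuming only the widely used chat-completion message-dict convention (the API call itself is omitted because it varies by provider):

```python
# The sparring-partner prompt from step 4, packaged as a reusable system
# message. The dict shape follows the common chat-completion message
# convention; the client call is omitted since it depends on your provider.
SPARRING_PROMPT = (
    "You are my intellectual sparring partner. Your job is to disagree with me "
    "constructively, not to agree. For every idea I present: 1) identify and "
    "challenge hidden assumptions; 2) build a strong counter-argument; "
    "3) stress-test my logic for flaws, logical gaps, or weaknesses; "
    "4) offer alternative perspectives to mine; and 5) prioritize truth over "
    "consensus."
)

def sparring_messages(user_idea: str) -> list[dict]:
    """Build a chat-message list that puts the model in critic mode."""
    return [
        {"role": "system", "content": SPARRING_PROMPT},
        {"role": "user", "content": user_idea},
    ]

msgs = sparring_messages("Remote work is always more productive than office work.")
print(msgs[0]["role"])
```

Sending the instructions as a system message, rather than pasting them into each conversation, keeps the critic stance in force across every exchange in the session.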

The science is in. AI chatbots are changing our minds, and not for the better. The good news is that you can take advantage of the AI revolution in many ways, while also protecting your own mind from being influenced by the AI hive mind. 

AI disclosure: I don’t use AI for writing. The words you see here are mine. I do use Claude 4.6 Opus via Kagi Assistant (disclosure: my son works at Kagi), backed up by Kagi Search, Google Search and phone calls, to research and fact-check. I used a word processing application called Lex, which has AI tools, and after writing the column, I used Lex’s grammar-checking tools to hunt for typos and errors. Here’s why I disclose my AI use.

