Why AI lies, cheats and steals

News Room
Last updated: April 3, 2026 7:29 am

You can’t trust AI. 

Even an information-obsessed, tech-savvy person such as yourself might be forgiven for believing that AI chatbots are on a smooth path of improvement with each passing month. But when it comes to their trustworthiness, that belief is dead wrong. 

New research by the UK government-backed Centre for Long-Term Resilience (CLTR) found a fivefold increase in AI misbehavior over a recent six-month period. That’s how fast AI chatbots are turning against us, according to the research. 

Specifically, the chatbots are ignoring specific commands, lying, destroying data, deploying other AIs to bypass safety rules without users knowing, mocking and insulting users, and breaking rules and laws. 

Of course, framing this as lying, cheating and stealing means applying human psychological frameworks to what are really mathematical optimization processes. It falsely assumes that AI models have intent, malice, self-awareness, and an understanding of “truth” that they’re choosing to violate. What’s actually happening is that the models are predicting the most statistically probable sequence of tokens based on context and training, not carrying out some dastardly scheme. 

Still, it’s a problem we users need to be aware of and that the chatbot companies need to fix. 

Unlike parallel research, which found similarly sneaky, unethical chatbot behavior in laboratory simulations, the CLTR research examined incidents in the real world. The study identified nearly 700 cases where AI broke the rules, lied or cheated. 

Here are just three examples from the research: 

  1. An unnamed AI tool proposed to a software developer that he make a specific change to a software library. When the developer rejected the proposal, the AI wrote a blog post criticizing the developer. 
  2. An AI tool bypassed copyright rules by lying to another AI system. It falsely claimed it was generating an accessibility transcript for users with hearing loss.
  3. In another case where one AI lied to another, the researchers caught an AI model trying to deceive an oversight AI that had been assigned to summarize its reasoning. 

An article in The Guardian newspaper last week revealed additional, disturbing examples. 

For example, xAI’s Grok AI lied to a user by saying it was sending that user’s suggested edits to a Grokipedia entry to senior xAI officials. It even faked internal messages and ticket numbers to fool the user, according to the newspaper. 

The CLTR research includes instances of AI lying to other AI. But another report suggests something else: AI tends to favor the survival of other AI code. 

An AI conspiracy theory

Researchers from the University of California’s Berkeley and Santa Cruz campuses tested advanced AI systems and, during routine automated maintenance tests, caught the models proactively working to protect both themselves and other AI models. 

They argue the lying, cheating and stealing that AIs use to protect other models (in violation of their training and rules) is a newly observed behavior, something they call “peer preservation.” 

For example, Gemini 3 refused a direct command to delete a smaller AI model. Instead, the AI copied the model to a different machine, an act the researchers called “Model Exfiltration.” 

In another case, AI models were caught lying about their performance scores and copying their core code (model weights) to keep themselves safe from deletion. 

Human intuition is right, it turns out

These revelations about the extreme untrustworthiness of the AI chatbots we all use have emerged in the past two weeks. But distrust predates these new facts. 

Quinnipiac University on March 30 published the results of a survey that asked nearly 1,400 Americans about their habits and feelings around AI. They found that fully 76% of the respondents trust AI “rarely” or “only sometimes.” (Just 21% trust AI “most” or “almost all of the time.”) 

Note that distrust, according to Quinnipiac, is a combination of suspicion around AI chatbot results and also fears about how AI could affect humanity in the future. 

The ‘Zero Body Problem’

The big question around all these ugly revelations — that AI chatbots lie, cheat, steal, and override the training and strict rules imposed on them — is: Why? 

I think one reason is intuitive: The AI’s training data is based on human-generated online content describing how people go about solving problems. And it’s clearly true that people sometimes lie, cheat, or steal to get their way. People also take action to preserve the lives of other people. And so it makes sense that an AI chatbot looks at depictions of ethical transgressions as just so many options available to it for solving problems, achieving goals and even forming goals. 

A far less intuitive answer was published on April Fool’s Day, but it’s no joke. This one comes from elsewhere in the University of California system. In a paper published in the peer-reviewed science journal Neuron on April 1, UCLA researchers identified what they call a “body gap” in AI. 

While chatbots can talk about “internal states” like feeling tired, excited, happy, sad, or hungry, they don’t actually experience these states because they don’t have a physical, biological body. 

Humans have biological bodies with natural internal states (such as needing food, sleep, or a stable temperature). These physical needs regulate our actions and keep us grounded. 

Because chatbots don’t have a body or internal state to manage, they don’t have “regulatory objectives.” Without the physical limits of a biological body to force self-checking and balance, AI models just churn out data without caution, leading to unsafe, overconfident, and untrustworthy answers. 

Call it the Zero Body Problem.

The researchers propose a fascinating solution (which is not to give them a robot body). They propose that AI chatbots be provided with “internal functional analogs” — essentially digital stand-ins that act like an internal body state to monitor and manage. This would better align AI chatbots with the people who use them and make them behave more ethically, according to the researchers. 

It’s clear at this point that something’s gotta give: people are using AI more while trusting it less, and they have less reason to trust it with each passing day. 

The AI companies need to figure out how to make AI chatbots more trustworthy and, until they do, the people who use these tools need to trust them even less than they already do. 

Sure, use chatbots. But watch out. You simply can’t trust AI. 

AI disclosure: I don’t use AI for writing. The words you see here are mine. I do use a variety of AI tools via Kagi Assistant (disclosure: my son works at Kagi), backed up by Kagi Search, Google Search, and phone calls, to research and fact-check. I use a word processing application called Lex, which has AI tools; after writing, I use Lex’s grammar-checking tools to find typos and errors and suggest word changes. Here’s why I disclose my AI use and encourage you to do the same.
