Tech Journal Now

AI clones: the good, the bad, and the ugly

News Room
Last updated: May 8, 2026 7:43 am

AI is capable of mimicking a real person. That much is beyond dispute, and in many applications the ethics of doing so are equally clear-cut. But increasingly, new applications are producing ethically murky results. 

The good

For example, the CEO of a company, or a politician, could choose to create a clone using AI tools, creating a chatbot plus an avatar — a digital twin — that can interact with people on their behalf. Silicon Valley is big on the idea: Meta’s Mark Zuckerberg and LinkedIn co-founder Reid Hoffman are working on, or have already created, digital twins of themselves. 

Cloned politicians include Pakistan’s Imran Khan, who used an authorized voice clone to campaign from prison, and New York City Mayor Eric Adams, who used voice-cloned robocalls to speak with constituents in languages like Mandarin and Yiddish.

This kind of use case is probably ethical — as long as the people interacting know that they’re dealing with a digital clone and not a real person. 

The bad

The flip side of ethical uses for AI-generated clones is non-consensual (and therefore unethical) ones. And of these, there are already many. For instance:

  • In 2019, the first widely documented case occurred when scammers used AI to mimic the voice and German accent of a parent company’s executive, successfully tricking the CEO of a UK energy firm into transferring €220,000 into a fraudulent account.
  • In 2023, an Arizona mother, Jennifer DeStefano, was targeted by extortionists who used an AI clone of her 15-year-old daughter’s voice to demand a $1 million ransom.
  • And in 2024, a finance worker at a multinational firm in Hong Kong was tricked into transferring $25 million after attending a video conference call featuring deepfake recreations of his CFO and several other colleagues. 

Other unethical, non-consensual uses for AI cloning include deepfake videos, where a celebrity’s face is superimposed onto a porn performer’s body. In all the above examples, the ethics are clear: this is all very wrong. 

But with China leading the way in the emergence of AI clones, the ethics are becoming far murkier. 

And the ugly

One emerging trend involves workers using specialized software to build digital versions of their bosses or colleagues. The most prominent project driving this trend is Colleague Skill, which was posted in late March by its creator, a 24-year-old Shanghai-based engineer named Zhou Tianyi. 

Colleague Skill and its forks and copycats, which tend to be open source, enable people to upload chat histories, emails, and internal documents to create a functional persona that mimics a specific coworker’s professional expertise and communication style. The technology stack includes tools like Claude, Kimi, ChatGPT, DeepSeek API, OCR (Tesseract), and sentiment analysis modules.

Colleague Skill uses a person’s past communications to build a talking replica of their personality. If a regular AI is a generalist that knows a little about everything, this tool acts like a specialized mask that forces the AI to behave like one specific individual. 

In other words, it produces a chatbot with the knowledge and patterns of speech of a real person. 
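Colleague Skill’s actual code isn’t described in detail in the article, but the “specialized mask” approach it outlines — steering a general-purpose model toward one person’s voice using their past messages — is commonly built as a persona system prompt with few-shot style examples. The sketch below is a minimal, hypothetical illustration of that idea; the function name, prompt wording, and catchphrase heuristic are my own assumptions, not the tool’s real implementation.

```python
from collections import Counter
import re

def build_persona_prompt(messages, name, n_examples=3):
    """Hypothetical sketch: turn a person's past messages into a
    system prompt that nudges an LLM toward their style.
    Not Colleague Skill's actual implementation."""
    # Crude proxy for "catchphrases": the person's most-used words.
    words = re.findall(r"[a-z']+", " ".join(messages).lower())
    common = [w for w, _ in Counter(words).most_common(5)]

    # A few verbatim messages serve as few-shot style examples.
    examples = "\n".join(f"- {m}" for m in messages[:n_examples])

    return (
        f"You are role-playing as {name}. Mimic their tone and phrasing.\n"
        f"Words they use often: {', '.join(common)}.\n"
        f"Example messages they have written:\n{examples}\n"
        "Stay in character when replying."
    )

# The resulting string would be passed as the system prompt to any
# chat-completion API (OpenAI, DeepSeek, etc.) alongside the user's query.
prompt = build_persona_prompt(
    ["Let's circle back on this tomorrow.",
     "Per my last email, the deadline is firm."],
    name="A. Colleague",
)
print(prompt)
```

In a real pipeline, the retrieval of relevant past messages (rather than a fixed handful) and the downstream model call would do most of the work; this sketch only shows the persona-masking step the article describes.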

Colleague Skill started as a satirical commentary on AI-driven layoffs. But some employees began using it in earnest to clone their colleagues. There are several stated reasons for doing so, including retaining institutional knowledge and having an instant sounding board to “discuss” plans and ideas with. 

A similar motivation is the use of AI to clone bosses, so employees can better predict how that boss might react to the employees’ work. 

In most of these instances, according to reports out of China, the creation of the boss-bot or colleague clone is nonconsensual. 

Is non-consensually basing a custom chatbot on a colleague or boss unethical? 

And then it got personal (and weird)

Tianyi, creator of Colleague Skill, later forked it into something called Ex-Partner Skill. The idea is to re-create a former partner with AI so the user can continue the relationship. 

It operates on the same technical engine but applies it to a much more personal part of life. Users upload photos, social posts, chat logs, and other content. The AI chatbot can then mimic the former partner’s tone, catchphrases, and subtle linguistic nuances, producing something that “truly sounds like them — speaks with their catchphrases, replies in their style, remembers the places you went together.”

This allows a person to simulate conversations with someone who is no longer in their life.

If Colleague Skill is in a grey area, Ex-Partner Skill is in a darker grey area. 

(Note: many of the original repositories for Ex-Partner Skill have been removed from public view in China or “sanitized” after regulatory pressure. But the framework reportedly continues to circulate in private developer circles, and similar tools are increasingly used for “digital resurrection.”)

Ethically, the concept exists on a wide spectrum somewhere between therapy at one end and revenge porn at the other. (It resembles revenge porn in the sense that “content” consensually made by two people for one purpose is later used by one of them, without the other’s consent, in a way the other might find objectionable.)

Or maybe it’s closer to the “deathbot” phenomenon, where an AI-generated simulation provides a fake version of the dearly departed. (In both cases, the user interacts with a digital twin of someone who is no longer present in their life.) In fact, some people in China are using Ex-Partner Skill as a deathbot for a deceased loved one. 

The lack of consent feels like an ethical lapse. But we don’t consider it unethical to think about, remember, imagine conversations with, or journal about ex-partners — or dead family members. 

Boosters of the Ex-Partner Skill idea say that conversations with digital exes are therapeutic. They point out that because it’s private, it’s not harassment or stalking or an invasion of privacy. Instead, they argue, it helps with personal reflection and emotional healing.

As for people who have died, according to Chinese media reports, some users say the tool gives them a sense of closure and allows them to say the things they wish they could have said to the real person. But is it really closure if one person is still obsessively trying to interact — or pretend to interact — with the other person?

It’s healthy to communicate. But it’s not communication when a person sits alone, talking to no one, sending messages the other person will never receive.

While ex-bots are a thing these days in China, the trend is showing up elsewhere. Some Character.AI users outside of China have created chatbots based on ex-partners, even though the company has changed its Terms of Service to explicitly ban the creation of bots using the likenesses of private individuals without their permission. 

The emergence of nonconsensual cloning of coworkers, bosses, and ex-partners is a new challenge to our sense of right and wrong, and yet another area where AI is forcing us to step up and figure out how to respond.

