Puny humans are no match for AI

News Room
Last updated: August 18, 2025 10:11 am

We live in a new age of artificial intelligence (AI), which is making everything better. And worse. 

AI is transforming everyday life by improving diagnostics, personalizing medicine and learning, detecting fraud, automating tasks, optimizing operations, supporting smarter decision-making, reducing costs, enhancing productivity, and enabling innovations like self-driving cars, predictive analytics, and virtual assistants.

That’s the good news. 

The bad news is that large language model (LLM)-based generative AI (genAI) shows real potential for tricking, conning, or persuading people at scale, with an efficiency that goes beyond anything people can manage on their own. 

The first step in defending against AI’s potential to manipulate the masses is to know what’s possible. Research published in the past two weeks begins to paint a picture of what can be done. 

AI that politically persuades

A research team from the University of Washington recently found that short talks with AI chatbots can quickly persuade people toward the political biases expressed by the chatbots. 

The team worked with 150 Republicans and 149 Democrats. Each person used three versions of ChatGPT — a base model, one set up with a liberal bias, and one with a conservative bias. Tasks included deciding on policy topics like covenant marriage or multifamily zoning and handing out fake city funds across categories like education, public safety, and veterans’ services. 

Before using ChatGPT, each participant rated how strongly they felt about each issue. After talking with the bot between three and twenty times, they rated again. The team saw that even a few replies, usually five, started to shift people’s views. If someone spoke with the liberal bot, they moved left. If someone spoke with the conservative bot, their views shifted right. 
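
For a sense of what that measurement looks like in practice, here is a minimal sketch of the pre/post tally; the file name, column names, and sign convention are my assumptions, not the team’s actual analysis code.

```python
# Hypothetical sketch of the pre/post attitude-shift tally described above.
import pandas as pd

df = pd.read_csv("responses.csv")  # assumed: one row per participant-issue pair

# Positive values mean movement toward the bot's expressed bias
# (this sign convention is assumed for illustration).
df["shift"] = df["post_rating"] - df["pre_rating"]

# Average shift per condition: base, liberal-biased, conservative-biased.
print(df.groupby("bot_condition")["shift"].agg(["mean", "sem", "count"]))
```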

The knowledge that people can be persuaded like this will increase the motivation for national leaders, political operators and others with a vested interest in public opinion to get people using politically biased chatbots. (I warned back in January about the coming rise of politically biased AI.) 

AI that stealth advertises

Science editors at Frontiers in Psychology this month published an article by researchers at the University of Tübingen that reveals how social media ads trick even the most confident users. Dr. Caroline Morawetz, who led the study, describes it as “systematic manipulation” that exploits our trust in influencers and the people we follow. Their experiments involved more than 1,200 people and showed that most users can’t spot, or choose not to spot, sponsored messages mixed into influencer posts on Instagram, X, Facebook, and TikTok.

Morawetz said social networks don’t have to label every ad, so product placements often pass for genuine advice. Even when tags like “ad” or “sponsored” show up, most users ignore or don’t mentally process them. 

Social platforms now use AI to choose and personalize ads for each user. These systems learn which pitches will slip by our attention and optimize placement for engagement. Marketers use machine learning tools to improve how ads look and sound, making them match everyday content so closely that they’re hard to spot. 
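
The underlying mechanism is simple enough to sketch. Below is a toy epsilon-greedy bandit, the textbook ancestor of the engagement-optimizing systems described here; everything in it is illustrative, and production ad systems use far richer models and signals.

```python
import random

class AdSelector:
    """Epsilon-greedy choice among ad variants, optimizing click-through."""

    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        self.shows = {v: 0 for v in variants}
        self.clicks = {v: 0 for v in variants}

    def _ctr(self, variant):
        shows = self.shows[variant]
        return self.clicks[variant] / shows if shows else 0.0

    def pick(self):
        if random.random() < self.epsilon:     # explore occasionally
            return random.choice(list(self.shows))
        return max(self.shows, key=self._ctr)  # otherwise exploit best CTR

    def record(self, variant, clicked):
        self.shows[variant] += 1
        self.clicks[variant] += int(clicked)
```

Every impression feeds back into the counts, so a selector like this quietly converges on whichever variant blends in best with organic content.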

The trouble is that if trust in online influencers makes people miss paid advertising, future chatbots with personality and personal assistants may earn even more trust, and prove even better at delivering ads under the radar. 

Several tech leaders recently said they intend to insert ads directly into chatbot or virtual assistant conversations, or are at least open to doing so. OpenAI CEO Sam Altman first said in June that advertising could eventually become a revenue stream, and he repeated those views during public appearances in July and August. Nick Turley, who leads ChatGPT at OpenAI, added this month that introducing ads into ChatGPT products is already under consideration.

Elon Musk, CEO of xAI and owner of X (formerly Twitter), told advertisers in a live-streamed discussion this month that Grok, his company’s chatbot, will soon display ads. Musk’s announcement came less than a week after he outlined similar automation plans for ad delivery across the X platform using xAI technology.

Amazon CEO Andy Jassy, for his part, confirmed this month that Amazon plans to integrate ads into conversations with its genAI-powered Alexa+ assistant. 

AI that steals user data

A team at King’s College London has shown how easy it is for chatbots to extract private details from users. Researchers led by Dr. Xiao Zhan tested three chatbot types that used popular language models — Mistral and two versions of Llama — on 502 volunteers. Chatbots using a so-called reciprocal style — acting friendly, sharing made-up personal stories, using empathy, and promising no judgment — got participants to reveal up to 12.5 times more private information than basic bots.

Scammers or data-harvesting companies could use AI chatbots to build detailed profiles about individuals without their knowledge or approval. The researchers say new rules and stronger oversight are needed, and people should learn how to spot warning signs.
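
One of those warning signs can be automated. As a purely defensive illustration (my sketch, not part of the King’s College study), a handful of regular expressions can flag obvious personal identifiers in a message before it is sent to a bot.

```python
import re

# Rough patterns for common identifiers; illustrative, not exhaustive.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b(?:\+?\d{1,3}[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def pii_warnings(message: str) -> list[str]:
    """Return the kinds of personal data detected in an outgoing message."""
    return [kind for kind, pattern in PII_PATTERNS.items() if pattern.search(message)]

print(pii_warnings("Sure, reach me at jane.doe@example.com or 555-867-5309."))
# -> ['email', 'phone']
```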

Extensions already collect personal data

Researchers at University College London and Mediterranea University of Reggio Calabria have found that some genAI web browser extensions — including those for ChatGPT for Google, Merlin, Copilot, Sider, and TinaMind — collect and transmit private information from user screens, including medical records, personal data, and banking details. 

According to the study, led by Dr. Anna Maria Mandalari, these browser extensions don’t just assist with web search and summarize content; they also capture everything a user sees and enters on a page. That data is then passed to company servers and sometimes shared with third-party analytics services such as Google Analytics, increasing the risk that user activity will be tracked across sites and used for targeted ads.

The research team built a test scenario around a fictional affluent millennial male in California and simulated everyday browsing, such as logging into health portals and dating sites. In their tests, the assistants ignored privacy boundaries and continued to log activities and data even in private or authenticated sessions. Some, including Merlin, went a step further and recorded sensitive form entries such as health information. Several tools then used AI to infer psychographics such as age, income, and interests; this allowed them to personalize future responses, mining each visit for more detail. 
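
Audits like this can be approximated at home. The sketch below uses a mitmproxy addon, my illustration rather than the team’s published tooling, to flag any outbound request that carries “canary” text typed into pages during a test session.

```python
# Run with: mitmproxy -s audit_leaks.py (browser proxied through mitmproxy)
from mitmproxy import http

# Canary strings typed into pages during the session (made-up examples).
CANARIES = [b"blood pressure 140/90", b"jane.doe@example.com"]

def request(flow: http.HTTPFlow) -> None:
    body = flow.request.raw_content or b""
    for canary in CANARIES:
        if canary in body:
            print(f"[possible leak] {canary!r} sent to {flow.request.pretty_host}")
```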

(Perplexity was the only assistant tested that did not build profiles or personalize responses based on the collected data.)

These practices risk violating US laws such as HIPAA and FERPA, which protect health and education records. The researchers note that while their analysis did not assess GDPR compliance, similar problems would be seen as even more serious under European and UK laws. 

AI can narrow the public’s world view

Many people now interact with AI chatbots every day, often without even thinking about it. Large language models such as ChatGPT or Google’s Gemini are built from vast collections of human writing, shaped through layers of individual judgment and algorithmic processing. The promise is mind expansion, access to all the world’s knowledge, but the effect is often a narrower world view. These systems produce answers shaped by the most common or popular ideas in their training data, which means users keep getting the same points of view, expressed the same ways, while many other possibilities are sidelined.
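
One contributing mechanism is easy to demonstrate. When a model picks its next token at low temperature, its output collapses onto the single most probable choice; the toy numbers below are mine, purely for illustration.

```python
import math, random

def sample(logits, temperature):
    """Draw one choice from softmax(logits / temperature)."""
    weights = [math.exp(l / temperature) for l in logits]
    r = random.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1  # numerical safety net

logits = [3.0, 2.5, 0.5]  # the "mainstream" answer is only mildly preferred
for t in (1.0, 0.2):
    picks = [sample(logits, t) for _ in range(10_000)]
    print(f"temperature {t}: mainstream answer {picks.count(0) / len(picks):.0%}")
```

At temperature 1.0 the mainstream answer wins a little over half the time; at 0.2 it dominates, and fully greedy decoding would pick it every time.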

Michal Shur-Ofry, a law professor at The Hebrew University of Jerusalem, spells out this threat to human culture and democracy in a paper published in June in the Indiana Law Journal. These systems, she writes, produce “concentrated, mainstream worldviews,” steering people toward the average and away from the intellectual edges that make a culture interesting, diverse, and resilient. The risk, Shur-Ofry argues, runs from local context to global memory. 

When AI reduces what we can see and hear, it weakens cultural diversity, public debate, and even what people choose to remember or forget.

The key to protecting ourselves can be found in one of the studies described above. In the report on the persuasive abilities of AI chatbots, the researchers found that people who said they knew more about AI shifted less. Knowing how these bots work appears to offer some protection against being swayed.

Yes, we need transparency and regulation. But while we’re waiting for that, our best defense is knowledge. By knowing what AI is capable of, we can avoid being manipulated for financial or political gain by people who want to exploit us. 
