Tech Journal Now

Beware of headlines touting impossible AI benefits, analysts warn

News Room
Last updated: March 31, 2026 4:07 pm
6 Min Read

It sounds like no big deal: researchers have found a way to reduce the computing requirements for one of the many steps involved in training an AI model to help robots manipulate simple geometric objects.

Yet such is the concern about the rising cost of powering data centers for AI applications that this one small and largely unremarkable finding prompted breathless headlines such as “100x Less Power: The Breakthrough That Could Solve AI’s Massive Energy Crisis.”

Don’t believe the hype

No one is disputing the researchers’ findings, but reports about them may be somewhat exaggerated: “The leap from the research conducted in the arXiv study to the conclusion in the associated news articles is the stuff of myth. It’s the kind of hype that Gartner warns clients to avoid,” said Gartner VP analyst Nader Henein.

The researchers, from the Human-Robot Interaction Lab at Tufts University in the US and the Center for Vision, Automation, and Control in Vienna, Austria, compared the training cost and performance of vision-language-action (VLA) models with that of a neuro-symbolic architecture using PDDL-based symbolic planning, reporting the results in a paper, “The Price Is Not Right: Neuro-Symbolic Methods Outperform VLAs on Structured Long-Horizon Manipulation Tasks with Significantly Lower Energy Consumption.” The paper has been accepted for presentation at the IEEE International Conference on Robotics and Automation.
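For readers unfamiliar with the term, a symbolic planner of the kind referenced here searches over hand-coded rules rather than running a neural network. The Python sketch below is a hypothetical illustration of that idea (it is not the researchers’ actual PDDL domain, and the block-stacking rules are invented for this example); the point is that such planning is ordinary graph search, which needs no GPU:

```python
from collections import deque

# Hypothetical sketch of PDDL-style symbolic planning (not the paper's
# actual domain). States are tuples of block stacks; the only action
# is a hand-coded "move top block" rule. Planning is plain BFS.

def successors(state):
    """Yield (action, next_state) pairs from hand-coded move rules."""
    for i, src in enumerate(state):
        if not src:
            continue                     # nothing to pick up here
        block = src[-1]                  # only the top block may move
        for j in range(len(state)):
            if i == j:
                continue
            nxt = [list(s) for s in state]
            nxt[i].pop()
            nxt[j].append(block)
            yield (f"move {block} from {i} to {j}",
                   tuple(tuple(s) for s in nxt))

def plan(start, goal):
    """Breadth-first search; returns a shortest list of actions, or None."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None

start = (("A", "B"), (), ())   # B sits on A in stack 0
goal  = ((), ("B", "A"), ())   # want A on B in stack 1
print(plan(start, goal))
```

Because every rule is written in advance by a human, the planner’s cost is a handful of CPU cycles per state expansion, which is the asymmetry behind the “100x less power” comparison, and also, as the analysts quoted below note, its limitation.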

Yuri Goryunov, who is the CIO for consulting firm Acceligence, also questioned whether the study’s energy-saving findings are applicable to broader problems in the enterprise.

“The ‘100x less power’ headline is misleading. What the researchers actually showed is that a rule-based system uses less energy than a neural model on a single puzzle. And it was in simulation, with the rules hand-coded by experts in advance,” Goryunov said. “That’s not a breakthrough. That’s a calculator beating a supercomputer at arithmetic.”

Goryunov argued that “the savings disappear the moment you hit real-world complexity. Disparate data sources and messy inputs, ambiguous situations without clear rule sets, or actually any domain where the rules aren’t already obvious. And someone still has to write all those rules.”

The researchers did not respond to a request for comment — but they likely wouldn’t disagree with Goryunov. In their conclusion, they state, “These results highlight important trade-offs between end-to-end foundation-model approaches and structured reasoning architectures. For manipulation tasks governed by explicit procedural constraints, incorporating symbolic structure can yield substantial advantages in reliability, data efficiency, and energy consumption.”

Some of these discussed hypothetical new approaches to AI do have potential, Goryunov said, specifically citing research work done by Google. “Google’s approach is to make the AI we’re already running dramatically cheaper and faster. Tufts’ approach is to replace it with something architecturally different for a narrow class of tasks. From an enterprise standpoint, there’s no contest. You can deploy Google’s findings tomorrow through your existing model providers. Tufts requires you to rewrite your architecture, hand-code your domain rules, and hope your problem looks like a puzzle.”

The benefits of short-termism

Nathan Marlor, the head of data and AI at Irish consulting firm Version 1, said that even though the Tufts research may not have immediate applicability to enterprise IT deployments, it could impact pricing negotiations with hyperscalers.

“For enterprise IT there’s nothing to do here. Nobody’s building PDDL planners in-house. But the cost angle matters if you’re watching AI compute bills climb and vendors keep telling you the answer is more GPUs. This is one more reason to push back on that,” Marlor said. “If hybrid architectures prove out more broadly, it shows up downstream as cheaper inference and lower cloud bills. But that’s on the platform and hyperscalers to figure out and not enterprise IT teams.”

Another consultant, Brian Levine, executive director of FormerGov, agreed that the Tufts report could color how IT views future AI pricing.

Enterprise IT executives “should absolutely track this space, not because they’ll deploy these models next quarter, but because the economics of AI are getting even more volatile. Enterprises need to stay flexible with their AI vendors,” Levine said. “This market can pivot on a dime. Locking yourself into a single hyperscaler’s stack or a single model architecture is a recipe for regret when breakthroughs like this start to commercialize.” Levine advocated staying flexible and avoiding long-term obligations. “This is a reason to avoid overcommitting to any one vendor’s roadmap. The ground under AI is shifting faster than most procurement cycles. The winners will be the CIOs and orgs that build for portability, negotiate for flexibility, and assume that today’s state of the art may look outdated sooner than anyone expects.”

