Tech Journal Now
Anthropic’s US gov’t lawsuit says federal action “unprecedented and unlawful”

News Room
Last updated: March 10, 2026 4:11 am

Anthropic on Monday fought back against the US federal government’s determination that it is a supply chain risk, suing the feds and arguing to a California federal judge that the government is being inconsistent and contradictory.

“The Constitution confers on Anthropic the right to express its views—both publicly and to the government—about the limitations of its own AI services and important issues of AI safety. The government does not have to agree with those views. Nor does it have to use Anthropic’s products,” the lawsuit filing said. “But the government may not employ the power of the State to punish or suppress Anthropic’s disfavored expression.”

The White House has used strong political terms to cast Anthropic as less than patriotic. A White House statement Monday referred to Anthropic as “a radical left, woke company,” and said, “our military will obey the United States Constitution [and] not any woke AI company’s terms of service.”

But Anthropic said that its resistance to two items in the government's contract (autonomous lethal warfare and mass surveillance of Americans) was entirely technical, based on Anthropic testing showing that "Claude cannot safely or reliably perform those functions."

“Anthropic has never tested Claude for those uses. Anthropic currently does not have confidence, for example, that Claude would function reliably or safely if used to support lethal autonomous warfare,” the lawsuit said. 

Unexplained inconsistency

The filing also argued that the government’s decision was “arbitrary, capricious, and an abuse of discretion” because “Anthropic had been one of the government’s most trusted partners until its views clashed with the Department’s.”

The filing added: “Until the Department [of Defense] raised this threat, no government official had ever raised a concern with Anthropic about potential supply chain vulnerabilities. On the contrary, the government has consistently provided the security clearances that are necessary for Anthropic’s personnel to perform classified work. Those clearances remain in place today. Moreover, in 2024 Anthropic became the first frontier AI lab to collaborate with the Department of Energy to evaluate an AI model in a Top Secret classified environment.”

The filing also said that the Department of Defense (DoD) “has recognized Claude’s capabilities as ‘exquisite.’ [DoD] suggested that Claude was so vital to our national defense that it needed to be commandeered under the Defense Production Act. And [Defense Secretary Pete Hegseth] has ordered that ‘Anthropic will continue to provide’ its services to the Department of War [a secondary name for the DoD] for up to six months. The ‘unexplained inconsistency’ between simultaneously designating Anthropic’s services a supply chain risk vulnerable to ‘sabotage’ or other ‘subversion’ by a foreign adversary while directing those services to be used for up to six months for national security purposes demonstrates the arbitrariness of the Secretary’s final decision.”

Analysts were divided on the implications for enterprise IT leaders, although most said the dispute would force politics into a technological decision.

Uncharted territory

“For Gartner clients, this falls under geopolitical tension, which factors into an organization’s purchasing priorities. In this case, it will likely hurt Anthropic with their government contracts even if the supply chain risk designation is quashed by the courts,” said Nader Henein, a Gartner VP analyst.

"On the other hand," he observed, "it may help them with non-US buyers who will view their stance as a reassuring sign. When it comes to the wider industry, European clients are paying close attention to the signatories of the EU AI Act code of conduct, which is still missing some notable names such as DeepSeek, xAI and Meta."

Cole Cioran, managing partner of the Canadian Public Sector at Info-Tech Research Group, added that the implications will likely go far beyond the courts.

“Anthropic’s challenge to the Pentagon’s supply-chain risk label is more than a legal dispute. It’s a shot that will echo around the world for as long as this is before the courts,” Cioran said. “The debate over how democratic nations will govern AI in the context of sovereignty, security, and ethics has needed a challenge like this to drive clearer standards.”

He pointed out that for countries like Canada, where digital sovereignty and responsible AI “sit at the center of national strategy,” this case becomes “a litmus test for principled leadership.” Anthropic CEO Dario Amodei’s decision to stand firm shows that Anthropic is prepared to defend its principles publicly, despite “an unprecedented national security designation” that could materially restrict its access to US defense markets.

Cioran suggested that this will eventually be a good thing for Anthropic.

“As the proceedings drag on, as they inevitably will, time becomes an asset for Anthropic rather than a liability. In geopolitics, the clock beats the gavel, just as the US vs Microsoft case transformed the company from an aggressive monopoly into a trusted partner. My prediction is that the longer this case runs, the more it will define what credibility looks like for AI vendors on the global stage,” Cioran said.

“This resilience will resonate with governments that require vendors to demonstrate their adherence to core values such as inclusive development practices, environmental protection, and ethical AI governance. However, before Amodei’s stand, vendors largely relied on asserting their own ethical standing,” he said. “Now that Anthropic has taken a stand, evaluators will know what evidence looks like.”

However, Acceligence CIO Yuri Goryunov said one interpretation of the government's position is that its resistance to Anthropic stems from not wanting to risk an AI system interfering with or second-guessing military personnel. But, he noted, if that were truly the concern, it would likely mean a ban on all vendors selling agentic or generative AI systems, because that risk exists for all of them.

“We are entering uncharted territory here, and this situation requires careful legal and technical assessment. Ultimately, this is about control—who possesses it, and how they exercise it. If a technology is designated a supply chain risk to national security because it is not aligned with US military objectives, several risks emerge,” Goryunov said. “The system might arbitrarily decide to disclose sensitive payment information to the public or an adversary if it determines that such an action would lead to a morally better outcome.”

Nonetheless, the Trump administration’s anti-regulatory advocates need to be consistent, said cybersecurity consultant Brian Levine, executive director of FormerGov, and a former federal prosecutor.

“We can’t have it both ways. If we don’t want heavy‑handed government regulation, then we need to support responsible self‑regulation. Otherwise, we’re sleepwalking into a technological dystopia of our own making,” Levine said. “For organizations, embedding safety constraints isn’t just the ethical choice—it’s the smart economic one. CIOs and CISOs should prioritize vendors that are willing to self‑regulate and they should also maintain backup providers in case sudden or arbitrary government actions disrupt access to their preferred AI platforms.”

And, Levine said, from a purely legal perspective, the government’s position doesn’t make much sense. The fact that Anthropic couldn’t agree to all of the contractual terms “in no way makes them a supply chain or national security risk.”

