Artificial intelligence company Anthropic filed two landmark federal lawsuits against the US government on Monday, alleging unlawful retaliation after it refused to allow its Claude AI model to be used for lethal autonomous weapons and mass surveillance of American citizens.
The dispute escalated rapidly after the Pentagon demanded in February that Anthropic remove all safety restrictions on Claude for "all lawful uses." Anthropic agreed to most terms but drew two firm lines: Claude would not be deployed to control autonomous weapons without human oversight, nor would it be used to conduct large-scale surveillance of US citizens.
Defense Secretary Pete Hegseth subsequently labelled Anthropic a "supply chain risk" — a designation typically reserved for companies with ties to China — effectively barring the company from all federal government contracts. President Trump separately ordered all government personnel to stop using Anthropic's products.
"These actions are unprecedented and unlawful," Anthropic said in its court filing. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech."
The White House dismissed the lawsuit, saying its goal was ensuring the military operates "not under any woke AI company's terms of service." Anthropic filed its suits in the US District Court for the Northern District of California and the US Court of Appeals for the DC Circuit.