Anthropic’s AI app, Claude, is surging to the top of global download charts — while the company wages a legal battle against the Pentagon for designating it a national security risk.
In a complaint filed Monday in the U.S. District Court for the Northern District of California, Anthropic claims the federal government launched an unprecedented campaign against the company after it stood by its safety restrictions. Anthropic says it doesn’t want its AI to be used for lethal autonomous warfare or mass surveillance of Americans.
“Anthropic brings this suit because the federal government has retaliated against it for expressing that principle,” the complaint states. “When Anthropic held fast to its judgment that Claude cannot safely or reliably be used for autonomous lethal warfare and mass surveillance of Americans, the President directed every federal agency to ‘IMMEDIATELY CEASE all use of Anthropic’s technology.'”
The fallout has been swift and wide-ranging. The General Services Administration terminated Anthropic’s government-wide contract. The Treasury Department, the Federal Housing Finance Agency, the State Department, and other government agencies announced they were cutting ties with the company.
Yet the controversy appears to have done little to dampen public enthusiasm for Anthropic’s products. If anything, users have grown more enthusiastic now that Anthropic is going head to head with the Trump administration.
The company says it is now adding more than one million new users globally each day, setting new signup records daily since the dispute erupted.
Claude currently holds the top spot on Apple’s App Store in 16 countries, surpassing both OpenAI’s ChatGPT and Google’s Gemini in more than 20 markets, according to data from AppFigures.
The lawsuit marks the culmination of mounting tensions between Anthropic and the Department of Defense, which the Trump administration calls the Department of War. The company had a major contract that made its generative AI systems the most used across the Pentagon.
That relationship unraveled when Defense Secretary Pete Hegseth pushed to dramatically expand AI’s role throughout the military and demanded unrestricted access to AI technologies. The effort required every AI company with Pentagon contracts to renegotiate its agreements.
But because Anthropic had become the military’s dominant AI provider — with Claude reportedly the only advanced model allowed to operate on classified systems — the company found itself at the center of a contentious standoff with Hegseth and Trump.
The breakdown was as much about clashing personalities as competing principles, according to the New York Times. Pentagon Chief Technology Officer Emil Michael, a former Uber executive, grew increasingly frustrated with Anthropic CEO Dario Amodei throughout weeks of negotiations.
As talks deteriorated, Michael began negotiating a fallback deal with OpenAI — a company whose CEO, Sam Altman, had been actively courting the Trump administration. Hours after the Pentagon’s deadline passed without a deal, Altman announced that OpenAI had reached an agreement with the Defense Department.
The lawsuit argues the government’s actions — including Trump’s directive ordering every federal agency to immediately stop using Anthropic’s AI, and Secretary Hegseth’s designation of the company as a supply chain risk — violate the First Amendment, as well as the Fifth Amendment’s due process protections, and the Administrative Procedure Act.
Anthropic’s filing notes that the supply chain risk label has historically been reserved for foreign companies believed to pose a threat to national security. It has never before been applied to an American firm. The company is asking the court to declare the government’s actions unlawful, and to issue a permanent injunction blocking their enforcement.
