SpaceWar.com - Your World At War
US Defense Dept gives Anthropic Friday deadline to drop AI curbs
Washington, United States, Feb 24, 2026 (AFP)
The US Defense Department has given AI company Anthropic until Friday to agree to unrestricted military use of its technology or face being forced to comply under emergency federal powers, a senior official said Tuesday.

Anthropic chief executive Dario Amodei met personally with Defense Secretary Pete Hegseth at the Pentagon on Tuesday, with the company saying he "expressed appreciation for the Department's work and thanked the Secretary for his service."

At the heart of the conflict is Anthropic's refusal to let its Claude models be used for the mass surveillance of US citizens or in fully autonomous weapons systems.

"We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do," the company said in a statement.

But after the meeting, the Pentagon delivered a stark ultimatum: agree to unrestricted military use of its technology by 5:01 pm (22:00 GMT) Friday or face being forced to comply under the Defense Production Act.

The Cold War-era law, last used during the Covid pandemic, grants the federal government sweeping powers to compel private industry to prioritize national security needs.

The Pentagon also threatened to label Anthropic a supply chain risk, a designation usually reserved for firms from adversary countries, which could severely damage the company's reputation and its ability to work with the US government.

The senior Pentagon official pushed back on the company's concerns, insisting the Defense Department had always operated within the law.

"Legality is the Pentagon's responsibility as the end user," the official said, adding that the department "has only given out lawful orders."

Officials also confirmed that an exchange regarding intercontinental ballistic missiles had taken place between Anthropic and the Pentagon, underscoring the sensitivity of the applications at the heart of the dispute.

The Pentagon confirmed that Elon Musk's Grok system had been cleared for use in a classified setting, while other contracted companies -- OpenAI and Google -- were described as close to similar clearances, piling competitive pressure on Anthropic to fall in line.

Anthropic was contracted alongside those companies last year to supply AI models for a range of military applications under a $200 million agreement.

Anthropic was founded by former OpenAI employees in 2021 on the premise that AI development should prioritize safety -- a philosophy that now puts it on a collision course with the Pentagon and the White House.

arp-wd/aha

All rights reserved. Copyright Agence France-Presse. Sections of the information displayed on this page (dispatches, photographs, logos) are protected by intellectual property rights owned by Agence France-Presse. As a consequence, you may not copy, reproduce, modify, transmit, publish, display or in any way commercially exploit any of the content of this section without the prior written consent of Agence France-Presse.