Pentagon-Anthropic AI Dispute Erupts Over Guardrails and National Security

The Pentagon-Anthropic AI dispute burst into public view Friday night, laying bare a high-stakes clash over artificial intelligence, national security and who gets to draw the ethical boundaries of modern warfare.

Hours after the Trump administration severed ties with the AI startup, Anthropic CEO Dario Amodei told CBS News in an exclusive interview that his company still wants to work with the military — but only if its red lines are respected.

“We are still interested in working with them as long as it is in line with our red lines,” Amodei said.

Red Lines in the Age of Algorithms

At the heart of the standoff is Anthropic’s demand for explicit safeguards preventing the military from using its Claude AI model for mass surveillance of Americans or for fully autonomous weapons.

The Pentagon insists it wants access to Claude for “all lawful purposes,” maintaining it has no interest in either application that raised Anthropic’s concerns.

The confrontation escalated when the military handed Anthropic a Friday-evening ultimatum: accept its terms or lose access to lucrative Defense Department contracts. The two sides remained far apart.

President Donald Trump then ordered federal agencies to “immediately” stop using Anthropic’s technology. Defense Secretary Pete Hegseth went further, labeling the firm a “supply chain risk” and directing military contractors to cut ties.

Anthropic is the only company whose AI model is deployed on the Pentagon’s classified networks.

“Our position is clear,” Amodei said. “We have these two red lines. We’ve had them from day one. We’re not going to move on those red lines.”

He added that if both sides can eventually “see things the same way,” an agreement could still be possible — arguing that collaboration would serve U.S. national security.