Surveillance Fears and Autonomous Weapons
Amodei described the restrictions as “narrow exceptions,” noting the company has no evidence the military has triggered them.
Still, he warned that AI could enable mass surveillance in ways not previously possible, especially if the government purchased commercial data and deployed advanced analytics. “Things may become possible with AI that weren’t possible before,” he said, cautioning that technology is advancing faster than the law.
Autonomous weapons present another flashpoint. In theory, AI systems could select and strike targets without human oversight. Amodei said Anthropic is not categorically opposed to such weapons, particularly if adversaries develop them first. But he stressed that reliability remains insufficient and that oversight is essential.
“We don’t want to sell something that we don’t think is reliable,” he said, adding that flawed systems could endanger U.S. personnel or civilians. And unlike with human decision-making in warfare, accountability for fully autonomous systems remains murky.
The Pentagon counters that federal law already prohibits mass domestic surveillance and that internal military policies restrict autonomous weapons. Therefore, officials argue, additional written AI-specific guardrails are unnecessary.
Pentagon Chief Technology Officer Emil Michael told CBS News, “At some level, you have to trust your military to do the right thing.” He added that the U.S. must remain prepared for adversaries such as China and said the department cannot promise in writing that it would limit its defensive capabilities.
Michael said the military offered written acknowledgments of existing legal and policy constraints. Anthropic contends those assurances were buried in legal language that left room to bypass the guardrails.
