On Monday, Anthropic filed a lawsuit against the Department of Defense over its designation as a supply chain risk. Hours later, nearly 40 employees from OpenAI and Google — including Jeff Dean, Google’s chief scientist and Gemini lead — filed an amicus brief in support of the suit, detailing their concerns about the Trump administration’s decision and the risks the technology poses.
The news follows a dramatic few weeks for Anthropic, in which the Trump administration labeled the company a supply chain risk — a designation typically reserved for foreign companies the government deems a threat to national security — after Anthropic held firm on two red lines for military use of its technology: domestic mass surveillance and fully autonomous weapons, meaning AI systems empowered to kill without any human involvement. Negotiations broke down, public insults followed, and other AI companies stepped in to sign contracts allowing “any lawful use” of their technology.
The supply chain risk designation not only bars Anthropic from military contracts; it also blacklists other companies that use Anthropic products in their work for the Pentagon, forcing them to uproot Claude if they want to keep their lucrative contracts. Claude was the first model cleared for classified intelligence work, however, and Anthropic’s tools are already deeply embedded in the Pentagon’s operations — so much so that just hours after Defense Secretary Pete Hegseth announced the designation, the U.S. military reportedly used Claude in the campaign that killed Iran’s supreme leader, Ayatollah Ali Khamenei.
The amicus brief argues that Anthropic’s supply chain risk designation “is improper retaliation that harms the public interest” and that the concerns behind Anthropic’s red lines “are real and require a response.” It also takes up the two red lines in turn, stating that “mass domestic surveillance powered by AI poses profound risks to democratic governance — even in responsible hands” and that “fully autonomous lethal weapons systems present risks that must also be addressed.”
The brief’s signatories described themselves as “engineers, researchers, scientists, and other professionals employed at U.S. frontier artificial intelligence laboratories.”
“We build, train, and study the large-scale AI systems that serve a wide range of users and deployments, including in the consequential domains of national security, law enforcement, and military operations,” the group wrote. “We submit this brief not as spokespeople for any single company, but in our individual capacities as professionals with direct knowledge of what these systems can and cannot do, and what is at stake when their deployment outpaces the legal and ethical frameworks designed to govern them.”
On the domestic mass surveillance front, the group noted that while data on American citizens is already collected everywhere, from surveillance cameras, geolocation tracking, social media posts, financial transactions, and more, “what does not yet exist is the AI layer that transforms this sprawling, fragmented data landscape into a unified, real-time surveillance apparatus.” Today, they wrote, these data streams are siloed; if AI were used to connect them, it could combine “face recognition data with location history, transaction records, social graphs, and behavioral patterns across hundreds of millions of people simultaneously.”
On lethal autonomous weapons specifically, the group said such systems can be unreliable in novel or ambiguous conditions that differ from the environments they were trained in, meaning they “cannot be trusted to identify targets with perfect accuracy, and they are incapable of making the subtle contextual tradeoffs between achieving an objective and accounting for collateral effects that a human can.” These systems’ potential to hallucinate, the group added, makes it essential that humans remain in the decision-making loop “before a lethal munition is launched at a human target,” especially since a system’s chain of reasoning is often unavailable to operators and unclear even to its developers.
“We are diverse in our politics and philosophies,” the group wrote, “but we are united in the conviction that today’s frontier AI systems present risks when deployed to enable domestic mass surveillance or the operation of autonomous lethal weapons systems without human oversight, and that those risks require some kind of guardrails, whether via technical safeguards or usage restrictions.”