Anthropic on Thursday announced Claude Gov, its product designed specifically for U.S. defense and intelligence agencies. The AI models have looser guardrails for government use and are trained to better analyze classified information.
The company said the models it announced “are already deployed by agencies at the highest level of U.S. national security,” and that access will be limited to government agencies handling classified information. Anthropic did not say how long the models had been in use.
Claude Gov models are designed specifically for government needs, like threat assessment and intelligence analysis, per Anthropic’s blog post. And although the company said they “underwent the same rigorous safety testing as all of our Claude models,” they behave differently in ways tailored to national security work. For example, they “refuse less when engaging with classified information” that’s fed into them, something consumer-facing Claude is trained to flag and avoid.
The Claude Gov models also have a greater understanding of documents and context within defense and intelligence, according to Anthropic, as well as better proficiency in languages and dialects relevant to national security.
Use of AI by government agencies has long been scrutinized because of its potential harms and ripple effects for minorities and vulnerable communities. There’s been a long list of wrongful arrests across multiple U.S. states due to police use of facial recognition, documented evidence of bias in predictive policing, and discrimination in government algorithms that assess welfare aid. For years, there’s also been an industry-wide controversy over large tech companies like Microsoft, Google and Amazon allowing the military — particularly in Israel — to use their AI products, with campaigns and public protests under the No Tech for Apartheid movement.
Anthropic’s usage policy specifically dictates that any user must “Not Create or Facilitate the Exchange of Illegal or Highly Regulated Weapons or Goods,” including using Anthropic’s products or services to “produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life.”
At least eleven months ago, the company said it had created a set of contractual exceptions to its usage policy that are “carefully calibrated to enable beneficial uses by carefully selected government agencies.” Certain activities — such as disinformation campaigns, the design or use of weapons, the construction of censorship systems, and malicious cyber operations — would remain prohibited. But Anthropic can decide to “tailor use restrictions to the mission and legal authorities of a government entity,” although it says it will aim to “balance enabling beneficial uses of our products and services with mitigating potential harms.”
Claude Gov is Anthropic’s answer to ChatGPT Gov, OpenAI’s product for U.S. government agencies, which it launched in January. It’s also part of a broader trend of AI giants and startups alike looking to bolster their businesses with government agencies, especially in an uncertain regulatory landscape.
When OpenAI announced ChatGPT Gov, the company said that within the past year, more than 90,000 employees of federal, state, and local governments had used its technology to translate documents, generate summaries, draft policy memos, write code, build applications, and more. Anthropic declined to share comparable figures or use cases, but the company is part of Palantir’s FedStart program, a SaaS offering for companies that want to deploy software for the federal government.
Scale AI, the AI giant that provides training data to industry leaders like OpenAI, Google, Microsoft, and Meta, signed a deal with the Department of Defense in March for a first-of-its-kind AI agent program for U.S. military planning. And since then, it’s expanded its business to world governments, recently inking a five-year deal with Qatar to provide automation tools for civil service, healthcare, transportation, and more.