Anthropic, the Pentagon, and Claude’s Split Personality

by Suzanne Nossel
Pages from the Anthropic website and the company's logos are displayed on a computer screen in New York on Thursday, Feb. 26, 2026. (Patrick Sison / AP)

Artificial intelligence (AI) seems almost omnipotent, but that does not mean it can be all things to all people. The capabilities of large language models like OpenAI’s ChatGPT and Anthropic’s Claude are breathtaking, and expanding in ways both dazzling and dangerous. But not even these chatbots’ staggering powers of reasoning can spare frontier model makers from tough choices. 

Anthropic’s ongoing dispute with the Pentagon has revealed tensions between the company’s self-conception and public positioning on the one hand, and the ubiquity and profits it chases on the other. As AI proliferates, such strains will intensify; if past is prologue, opportunity and expansion will win the day, sidelining ideals. Those who cheer Anthropic for standing on principle should also recognize that AI ethics and safety are hardly guaranteed and, whether consciously or not, may soon be shunted aside, even by their supposed champions.

For now, Anthropic and its cofounder and CEO Dario Amodei are being hailed as heroes. The company earned respect for refusing the Trump administration’s demand that it abandon contractual provisions ensuring its models would not be used to support killings without human involvement or in the mass surveillance of Americans. When Anthropic refused, the administration severed ties with the company, banning its use across the federal government and declaring it a “supply chain risk,” a move that forced other government contractors and partners to shun the company. The designation, previously reserved for enterprises tied to foreign adversaries, drastically raises the cost of what Amodei has described as a matter of “conscience.”

Immediately after the rift, Claude became the most-downloaded free app in Apple’s US App Store, reportedly adding a million new users a day. The company also won support from celebrities and civil libertarians. By contrast, when Anthropic’s chief competitor, OpenAI, swooped in to capture the former’s lost Pentagon business, the backlash was swift. OpenAI cofounder and CEO Sam Altman claimed that the company had secured the guarantees that had eluded Anthropic: according to Altman, OpenAI would restrict ChatGPT’s use in unmanned killings and mass surveillance through its own usage policies rather than codified contractual protections. After Amodei, in a leaked internal memo, accused OpenAI of disingenuous “safety theater,” the company suffered several setbacks: it lost a high-profile executive, received a protest letter from employees, and saw a nearly 300 percent spike in uninstalls of its platform.

Inescapable AI dilemmas 

At a surface level, corporate ethics seemed to win both in the court of public opinion and on the commercial playing field. The government’s rash and punitive approach, decried by other tech giants, has cast Anthropic as a valiant David fighting the Trump administration’s Goliath. But the truth is more complicated and less certain. Anthropic is a tech colossus, valued at $380 billion. Its business is mostly enterprise-driven rather than consumer-facing, meaning that the loss of Pentagon contracts, plus what could add up to billions of dollars in ancillary corporate business, could devastate the company. On March 9, Anthropic filed two lawsuits against the Trump administration, challenging its “supply chain risk” designation. Legal analysts are split on Anthropic’s chances of prevailing, given the judiciary’s customary deference to the executive branch on matters of national security. If Anthropic cannot get a court to block a measure that, by its own account, is doing the company “irreparable harm,” it is not clear that Amodei will hold out indefinitely against Pentagon demands at the potential expense of Anthropic’s position as an industry leader.

The controversy also lays bare deeper and inescapable dilemmas facing any AI company that aspires to strong ethics. Amodei has styled himself a champion of responsible AI, resigning from his position at OpenAI in late 2020 over safety concerns and cofounding Anthropic with the promise that it would avoid actions that were “inappropriate, dangerous, or harmful.” Just weeks before the Pentagon controversy, Anthropic released a new “constitution” for Claude, an ambitious 80-page document intended to guide the model on how to be safe, ethical, compliant, and helpful, but also caring, compassionate, and wise. Evoking a parent’s hopes for a child, the constitution reads, “we want Claude to have the values, knowledge, and wisdom necessary to behave in ways that are safe and beneficial across all circumstances.” Yet the constitution contains a consequential caveat: “operators may wish to alter these default behaviors … and we think Claude should generally accommodate this.”

Alongside his commitment to safety, Amodei has often stressed his patriotism, underscoring that he regards Anthropic’s work with the US government as a civic duty. In rejecting Secretary of Defense Pete Hegseth’s demands, Amodei wrote: “I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.” Amodei’s passion, coupled with carve-outs in company policies, paved the way for Anthropic to become a provider of choice to the Pentagon, with Claude used for a range of deadly functions otherwise proscribed by the company’s usage policy. While that policy would ordinarily preclude Claude’s use in the destruction of critical infrastructure, interference with the operation of military bases, or work on “weaponization and delivery processes for the deployment of weapons,” Anthropic’s complaint against the Pentagon explains that those limitations do not apply when Claude is serving the military, adding that in light of “the government’s unique needs and capabilities” the model is “less prone to refuse requests that would be prohibited in the civilian context.” The complaint maintains that Anthropic’s beliefs are “fully compatible with responsible use of Claude by the Department of War … including autonomously” for purposes such as offensive cyber and military operations.

The complaint’s description of Anthropic’s flexible approach to serving the US military seems to go well beyond the letter of the company’s more restrictive public-facing usage policy. That document sets out only a narrow exception to its enumerated rules, stating that “with carefully selected government entities, we may allow foreign intelligence analysis in accordance with applicable law,” before going on to affirm that “all other use restrictions in our Usage Policy, including those prohibiting use for disinformation campaigns, the design or use of weapons, censorship, domestic surveillance, and malicious cyber operations, remain.” The apparent tension between Anthropic’s written policy and its actual practice to date with the Pentagon points to the slippery quality of rules purporting to govern technologies with rapidly evolving capabilities and uses. 

Principle vs. pragmatism

The strain between Anthropic’s dedication to safety and its broad service to the military burst into public view earlier this year, when Anthropic employees voiced alarm after learning that Claude had been deployed in the US raid to capture Venezuelan strongman Nicolás Maduro. After decades of long wars, shadowy battles against terrorism, pitched public debates over the role of the US military and its weapons, and, further back, revelations of torture and black sites, it is hard to imagine that Anthropic did not foresee that embedding a highly versatile technology deep within the Pentagon would pose difficulties for any company that stakes its reputation on ethics, safety, care, and compassion. Anthropic is also a close partner of software company Palantir, which has come under fire for powering the US Immigration and Customs Enforcement (ICE) crackdown and other aggressive forms of law enforcement. While Palantir’s leadership staunchly defends such practices, Anthropic’s leaders could hardly have imagined that, with bedfellows like those, they could remain above the fray.

Despite the warning signs, Anthropic seemed caught off guard by the Pentagon’s insistence that the outer bounds of the law, rather than company policy, should dictate how Claude could be used. The company has equivocated on whether it objects to Claude’s potential use in unmanned lethal operations and mass surveillance on principled or merely prudential grounds. On the one hand, Anthropic argues in its complaint that it is being punished for holding to “core principles.” The company has brought a First Amendment claim arguing that the supply-chain risk designation is intended “to punish Anthropic for adhering to its views” and is an attempt to “silence its views on safe AI.” The company states bluntly that it is “unwilling to agree to Claude’s use for mass surveillance of Americans” because doing so would “pose unique risks for civil liberties.”

Yet elsewhere the complaint boasts that, even amid its clash with the company, the US military relied on Claude in its attack on Iran—an assault that some Democratic congressional leaders insist was illegal. Anthropic also indicates that its objection to the Pentagon’s demands is less a matter of principle than a pragmatic determination of what Claude can competently do at present, given technical constraints. The company asserts that its “clear-eyed, expertise-driven understanding of its own technology’s current limits” is being mischaracterized by the Defense Department as “purported ideological extremism.” In a late-February public statement, Amodei described the Pentagon’s desired uses as “simply outside the bounds of what today’s technology can safely and reliably do.” He added that mass surveillance systems pose “significantly greater potential” to make and amplify mistakes than traditional techniques, and that “frontier AI systems are simply not reliable enough to power fully autonomous weapons”—even as he also offered to work with the Pentagon to overcome such limitations.

It is thus unclear whether Anthropic is arguing that automated killings without human oversight and mass surveillance of Americans are fundamentally intolerable, or simply areas where Claude is not yet up to the task—the implication being that 12 or 18 months from now, an updated version of the model might willingly oblige. It is possible that Anthropic is seeking to preserve its options, allowing for interpretations of its refusal as either permanent or mutable. Anthropic implicitly acknowledges the two faces of Claude: one with the firm ethical constraints embodied in its constitution, and a second available to do just about anything the Pentagon asks, so long as it can do it well.

A familiar trajectory

Claude’s creators want their vaunted model to seem safe and gentle enough to serve as a trusted companion to vulnerable people, and also badass enough to abet a Pentagon mission that Hegseth has described as “death and destruction from the sky all day long.” While such Janus-like versatility might be possible for platforms styled purely as utilities, like Microsoft Word or Google Chrome, Claude is intentionally designed to project a “personality” and “character” aimed in part at making the model “more interesting to talk to.” Claude’s multiple personalities—from on-screen therapist to architect of “Epic Fury”—seem bound to clash, ending in disillusionment on some, if not all, sides.

Given the genuflections to the White House by many lawmakers, media executives, universities, and law firms, it is tempting to cheer on any powerful institution that challenges the administration’s overreach. But we should not rest easy relying on the inclinations of individual executives at private companies as a last line of defense against the risks of AI. History is rife with examples of well-intentioned technology executives who get caught up in marketplace incentives and find themselves responsible for shocking breaches of trust. Facebook was invented in a college dorm room to bring people together, but it came to rely on a data-harvesting business model that fueled the Cambridge Analytica privacy scandal. Google pledged not to do evil, only to be exposed for secretly trying to accommodate Chinese censors.

Amodei, of all people, knows this trajectory all too well, having witnessed his now-archrival Altman transform OpenAI from a humanitarian nonprofit into a sharp-elbowed, profit-hungry juggernaut. While supporters hope that Anthropic sticks to its guns, the push of cutthroat competition and the pull of boundless commercial opportunities have a way of winning out. As recently as February, Anthropic dropped a series of much-touted safety guarantees, saying the measures “did not make sense” given that competitors were “blazing ahead.”

In the absence of regulation or independent oversight, an individual moral compass is easily thrown off by the gravitational pull of industry leadership and executive superstardom. Anthropic the company was named after a concept that puts humans at the center of life in the universe. When it comes to the fight over the future of AI, the greatest power of machines may lie in their ability to exploit the human frailty embodied not only by their users, but by their creators.


The Chicago Council on Global Affairs is an independent, nonpartisan organization and does not take institutional positions. The views and opinions expressed in this commentary are solely those of the author.

About the Author
Lester Crown Senior Nonresident Fellow, US Foreign Policy and International Order
Suzanne Nossel, principal of Smart Power Strategies and a leading voice on free expression issues, stepped down at the end of 2024 as CEO of PEN America. She is the author of "Dare to Speak" (2020) and "Is Free Speech Under Threat" (2024) and serves on Meta's global oversight board. She previously held roles at Human Rights Watch, Amnesty International USA, and the State Department.