
Who Decides How America Uses AI in War?

As artificial intelligence becomes central to national security, experts grapple with a technology that remains unpredictable, unregulated, and increasingly powerful.

Curt Langlotz, Amy Zegart, Michele Elam, Jennifer King, and Russ Altman

March 30, 2026

AI is still an unpredictable, emerging technology. We don’t always know how these systems work, a challenge known as the black-box problem, and we can’t always trust that the outcomes are free from hallucinations, bias, and other issues.

Yet despite these limitations, AI is playing an increasingly prominent role in matters of national security, defense, and warfare. AI-powered systems have been deployed in Ukraine’s defense efforts and assisted the Pentagon in capturing former Venezuelan President Nicolás Maduro.

More recent headlines have raised critical questions about its use in the United States. Anthropic was reportedly excluded from government contracts amid a broader dispute over restrictions on how its AI systems could be used, including for domestic surveillance and fully autonomous weapons. The Pentagon subsequently scuttled its contract with Anthropic and instead contracted with OpenAI.

These developments leave Americans asking urgent questions. What is the appropriate use of AI in defense? What guardrails should constrain these powerful systems, who has the authority to establish those rules, and how can they be meaningfully enforced?

Here, five faculty and fellows from the Stanford Institute for Human-Centered AI and Stanford Data Science share their perspectives on the ethical use of AI in war and defense.

AI Companies Shouldn’t Dictate Defense Policy—but Neither Should They Be Punished for Their Values

Military use of AI is necessary for the U.S. to stay ahead of its adversaries, who no doubt will adopt these technologies. The current commercial AI models have different value systems and political orientations. And that’s a good thing—pluralism is important for AI models just as it is for citizens. The government should make AI procurement decisions based on which product provides the best value. And military strategy shouldn’t be determined by private companies. But the government shouldn’t threaten an AI company’s existence simply because the values of the company’s leaders or its AI models differ from those of government leaders.

The controversy relates in part to poorly defined terminology and outdated regulations that aren't keeping pace with the government's ability to acquire and analyze data. For example, the public's common understanding of prohibited "mass surveillance" is probably much broader than the federal regulatory interpretations of that term. A prohibition on mass surveillance might prevent the government from gathering data that tracks citizen behavior, but government analysis of similar personal data procured from commercial sources might be permitted.

There is a similar problem with the term “fully autonomous.” My experience in medicine (another life-and-death setting where AI is deployed) suggests the meaning of human-in-the-loop isn't always clear. Is a human meaningfully in the loop when a fighter pilot can invoke an autopilot that enables a jet to make split-second maneuvering and targeting decisions on its own? We need more discussion and debate about such important questions.

Our current regulations are being strained by recent advances in AI. A decade ago human analysts could sift through only a tiny fraction of all our gathered intelligence. Should the same rules apply now that AI can do the sifting work of thousands of human analysts? In medicine we acknowledge that AI performance can change dramatically when exposed to new sources of data. Shouldn't we ensure the fairness and accuracy of systems that could put individuals at risk of being falsely targeted?

The military integration of language models is a strategic necessity for the U.S. to stay ahead of adversaries who are already adopting these technologies. The government should prioritize "best value" in decision-making rather than outsource decisions to the proprietary logic of private companies. But we all benefit from a diverse landscape of models with different training architectures and optimization goals, so all potential suppliers should be allowed to compete. Pluralism is important for language models just as it is for citizens.

— Curt Langlotz, Professor of Radiology, Medicine, and Biomedical Data Science, Senior Associate Vice Provost for Research at Stanford University, and Senior Fellow at Stanford HAI

Five Things to Consider About the Anthropic-Pentagon Standoff

This isn’t just about AI ethics: AI guardrails matter, but so do ethical imperatives surrounding national defense, democratic accountability, and who gets to decide how military capabilities are used to defend the nation.

Unelected companies shouldn’t set defense policy: There is a serious ethical question about whether one company, elected by nobody, with its own normative agenda and global financial interests should dictate how the U.S. government carries out its most essential role—protecting the lives of its citizens.

Anthropic’s talk of “mass surveillance” is misleading fear-mongering: The Pentagon has never said it intends to violate laws or policies regarding mass surveillance of Americans. The Pentagon does not conduct general domestic policing. And its intelligence capabilities are directed abroad, at terrorists and other foreign adversaries. Note the word “foreign.” These activities are constrained by law, procedural safeguards, and oversight.

The moral high ground is a dangerous place: Anthropic CEO Dario Amodei has taken a hard line in favor of keeping humans in the loop for military uses of AI. That sounds reassuring but shouldn’t be. AI is already outperforming humans in some combat scenarios. Rigid constraints about humans in the loop could disadvantage the U.S. against AI-enabled adversaries—even for purely defensive actions like intercepting an incoming nuclear missile autonomously.

The precedent-setting risks are high: Allowing a defense contractor to dictate how its products are used sets a troubling precedent. It’s akin to a company making a component of the F-35 demanding a veto over how that aircraft is employed in war—on the theory that only it understands the dangers of its own handiwork. “Go ask Dario if that’s OK” is not a workable military doctrine.

— Amy Zegart, Morris Arnold and Nona Jean Cox Senior Fellow at the Hoover Institution and Stanford HAI Associate Director

Read Zegart’s essay, “You Should Have Moral Qualms about Anthropic's Qualms,” in Freedom Frequency.

Of CEOs and Secretaries of War: A Call for a Whole-of-Society Approach

 “The real problem is not whether machines think but whether men do.”

—B.F. Skinner

“We must learn to sit down together and talk about a little culture …”

—Sylvia Wynter

The dramatic public struggle between Anthropic and the Pentagon is a microcosm of many of the brewing debates over AI ethics, but in particular it made vivid for everyone the very high stakes of just who should control a technology’s design, development, and deployment, including its intended or (un)intended use. On the one hand, it seems self-evident that a government of elected officials—not a handful of corporate entities, whatever their ethical or philosophical views—should decide how, when, and where technologies as immensely powerful and consequential for humanity as AI are used.

But what does it mean when a government suggests it can claim a private company’s technology it wants as its own—either to nationalize it, as the Trump administration suggested it could and might do—or to seek to immolate it, designating it as a supply-chain security risk for both government and commercial use, as War Secretary Pete Hegseth did in his fit of pique at Anthropic CEO Dario Amodei? (The contradictions of claiming Claude is too dangerous to use while at the same time using it in the capture of former Venezuelan President Nicolás Maduro and in the war with Iran have not been lost on anyone.) 

As a result, many of my students interested in launching startups, in creating technologies, ask: If I create something that doesn’t happen to align with a current government’s policies or preferences, will the government simply take it from me or destroy me? Amodei did not want Claude used in ways he believes would violate civil rights and human rights—mass domestic surveillance and autonomous drones. My students think about mitigating unwanted secondary or unintended uses of their products, whether through design, through safeguards embedded in the tech itself, or through policy, but in those scenarios, they usually imagine bad actors or malicious intent. What to do, my students ask, if they feel their own government would reserve the right to violate human rights with the use of some technology they developed? Are they then, as creators, implicated in that (mis)use? It is a crisis that both Einstein and Oppenheimer struggled with over nuclear power.

Stanford HAI admirably asks for an Ethics & Society Review (ESR) from those applying for project funding at scale—it goes beyond the IRB, which takes into account only individual human subjects, in order to consider, more broadly and as thoroughly as one can, the anticipated societal impact of one’s work and what one will do to mitigate harms. There is a related argument over the ethics of general-purpose versus purpose-driven AI, of course, and some, like OpenAI CEO Sam Altman, have controversially claimed that no one can anticipate a general-use technology’s harms before a product has been released “into the wild,” but ESRs can be an important exercise in thinking more deeply about what makes for so-called responsible, transparent, trustworthy, and accountable AI beyond a simplistic ethics audit. I fully appreciate how problematic are those reassuring adjectives—which seem to mollify, by rhetorical fiat, consumer anxieties about irresponsible, opaque, deceptive, and unaccountable AI. Nonetheless, ESRs, created at the outset rather than the back end of a project, remain an important opportunity to think meaningfully about, maybe even take some ownership over, just what might happen to the things we put into the world for good or ill. I now require an expanded version of HAI’s ESR for my courses’ final projects.

But what, then, is the value of ESRs in the face of presidential power to define, on its own, what constitutes, for instance, “legal use” of AI? Or when deference to military priorities and national security imperatives requires conceding proprietary rights to one’s technology or compromising one’s ethical compass or moral values? These are obviously not new philosophical issues. But the fight between Anthropic and the Pentagon makes clear the real-world immediacy of such questions and the impatience of the Department of War with discussion: As Maureen Dowd recently put it in a New York Times op-ed, the Pentagon bum-rushed Anthropic with a choice: “Be extorted or blacklisted.” It is unsurprising that Trump did not consult Congress before declaring war on Iran. This administration’s adoption of Zuckerberg’s move-fast-and-break-things ethos, in which it acts seemingly unilaterally and at whatever breathless speed suits its whim, renders functionally impotent the legal and ethical deliberative processes so needed to make informed judgments about AI use.

Some argue those judgments must lie ultimately with the executive branch. But what to do when tech industry lobbyists have purchased (in mind-numbing amounts) such unprecedented access to and outsize influence on presidential decision-making about AI? It is naïve to think that regulation and governance will make the U.S. a loser in an international AI arms race, or that the industry mantras “regulation impedes innovation” and “governance castrates competition” are animated solely by a concern for some higher social good or belief in civilizational progress. After all, those narratives, many of which were incubated in industry marketing divisions, serve a for-profit motive that directly benefits from a rush to market and a push to deploy.

In that context, the deliberations and critical reflection that should rightly occur in a democratic society about the ethical design and uses of AI are artificially made to seem unacceptably “slow.” Yet deliberations of that importance and impact deserve a timetable at a human pace, not one in which we seem to be constantly whipped into a perpetual fast-forward. Moreover, deciding who decides what about AI should not be a presidential prerogative or belong solely to CEOs; it would ideally involve civil society, technologists, academe (including the humanities and social sciences as well as STEM fields), philanthropy, and all branches of government—a whole-of-society approach. Surely, all of us are stakeholders in such a profoundly transformational technology.

— Michele Elam, William Robertson Coe Professor of Humanities in the English Department at Stanford University and Stanford HAI Senior Fellow

Why Anthropic's Rejection of Pentagon Deal Reveals AI's Privacy Crisis

Anthropic’s recent refusal to meet the Pentagon’s demands was remarkable for many reasons, but the one that caught my attention was the company’s resistance to having its AI used for the “mass domestic surveillance of Americans.” This was likely not at the top of the list of reasons many would have predicted once the skirmish became public. But it is highly revealing of a topic that has yet to capture the public’s attention: AI’s power to obliterate our data privacy, whether at the hands of the government or of private companies.

The U.S. government’s ability to surveil its citizens through their internet activities was largely limited until the 9/11 terrorist attacks, when the National Security Agency secretly expanded its ability to surveil non-U.S. persons by tapping directly into the telecommunications networks that carry internet traffic. Until these actions were ruled unlawful by a federal judge, the NSA’s warrantless wiretapping of all U.S. internet communications, including those of U.S. citizens, was the federal government’s attempt to achieve “total information awareness” with the goal of preventing future foreign terrorist attacks. The NSA crossed the line with the wholesale collection of all U.S. internet communications, whether or not they involved a foreign target, violating the Fourth Amendment rights of U.S. citizens. Whistleblower Edward Snowden’s revelations exposed this program in 2013.

During the same era, however, popular trends made it possible for the government to acquire information about individuals that was once much harder to obtain: so-called “open-source intelligence” using commercially available data acquired through a combination of social media and public records, the public internet, and data produced by our use of websites and apps, often collected and sold without our explicit consent. This data is in turn purchased, aggregated, and sold by data brokers. The explosion of personal data available online made it possible for individuals to be profiled and tracked through their public movements online and offline. While the Fourth Amendment protects individuals from surveillance by the U.S. government without probable cause, open-source intelligence lowered the bar to learning more about specific persons without needing a warrant. It also made it possible for purchasers of such data to build inferences about both individuals and groups through social network membership, online behavioral data, and mobile location data.

Today, the public is largely unaware that one of the buyers of their commercial “surveillance” data is the federal government. Recent actions by ICE against protestors in Minnesota and other states, as well as targeted removals of immigrants by the agency, have demonstrated how the Trump administration is using this data. Senator Ron Wyden of Oregon recently introduced a bipartisan bill to limit the government’s purchase of brokered data for domestic intelligence.

How does this relate to Anthropic’s refusal of the Pentagon’s demands? As a privacy researcher, I have become increasingly concerned about the capabilities of large language models (LLMs) for violating our individual privacy as these tools have evolved. Learning that mass surveillance was one of the concerns that prevented this deal from being consummated was not exactly surprising, but it was revealing nonetheless. LLMs are trained on such massive amounts of data, including personal information on public and semi-public websites (such as social media), public records, and now, increasingly, the data we reveal through chatbots, that it would be surprising if these tools did not lower the bar for individual surveillance. If the public finds that there are limits to what they can discover about themselves or others through LLMs, it’s because developers have implemented guardrails to attempt to head off this type of usage. In short, LLMs have these capabilities, but what stands between us and their use for profiling and surveillance is the companies’ choice not to cross this line. Anthropic made this distinction, and not surprisingly OpenAI followed, given how unpopular the possibility proved to be with the public.

Of course, these actions have not prevented similar tools from being used for population-level and individual data gathering and surveillance. Palantir, which primarily provides data intelligence services for the government sector, has willingly contracted with the U.S. government to provide the type of services that the large commercial providers are reluctant to allow with their products. Companies with feet in both the consumer/commercial and government worlds, by contrast, have to answer to their consumer customers.

Anthropic’s decision to state that it would never allow its tools to be used to surveil the American public was the first explicit public confirmation that these tools can be put to uses that will upend individual privacy. The question remains whether we as a society are content to leave that discretion to the company alone, and whether we want the U.S. government to use these tools for surveillance purposes.

— Jennifer King, Privacy and Data Policy Fellow at Stanford HAI

Protecting Against AI-Designed Biological Threats

For academics who are not soldiers and who are not in charge of defense, engaging with AI and defense applications requires us to think beyond our usual expertise. My expertise is in biosecurity and biodefense. I design drugs—or more precisely, I create software that helps design drugs. But there have already been papers showing that this very same software that can design drugs can design toxins that can harm people.

We've long known that bad actors could misuse AI, but the knowledge needed to do so wasn't always universally available and easily accessible. Because of LLMs, that knowledge now is. And so we have to rethink fundamental questions: Do I always publish all of my latest and greatest algorithms for making biological matter, or do I need a new approach?

My answer so far remains yes, we should continue to publish. Even though these new algorithms and ideas are the latest and greatest now, AI moves so quickly that they will only get better, and you need to tell the world that this capability exists and will improve. Both good and bad actors will know this, but if you don't share it publicly, people won't be able to use it, think about it critically, test it, and ensure it works as well as you believe. Instead, you'll have a tool that only you and a few others have examined, and if you really need it someday, you won't know if it's ready.

However, this publication approach assumes that companies capable of manufacturing these materials are screening inputs before production. Most people in the biosecurity world recognize there's a significant firewall between designing something dangerous and actually synthesizing that material in a laboratory. Companies that synthesize proteins or DNA are now building internal screening systems to identify concerning sequences—if something looks like anthrax, they won't make it. The problem is that AI tools can obfuscate these signatures. We know what anthrax looks like, but we don't know all the permutations that AI could generate that remain anthrax-like while evading our detection tools. There's a substantial research effort underway to detect these Trojan-horse DNA sequences and proteins, which is critically important.

Beyond synthesis companies, another choke point involves specialized chemical reagents. Even if someone attempts synthesis in their garage, they'll need to order materials—and for the most part, highly specialized materials. People who sell chemicals and reagents need robust customer vetting processes. We may encourage vendors to maintain registries of who purchased what. This doesn't necessarily constrain purchases, but establishes accountability. This will likely be voluntary initially, and there could be bad actors who don't participate, but even then it creates a red flag. If someone refuses to participate, we can monitor them more closely than we might have otherwise.

We've also published work on data security levels, establishing four tiers: data that's safe to share anonymously, data that's fine to share openly but with tracking of recipients, data requiring credentials demonstrating legitimate research needs, and data that should remain confidential. These emerging best practices provide a framework for responsible development.

This is clearly a global challenge. Biosecurity threats—indeed, all weapons—don't respect political or geographic boundaries. The ideal solution would be a single global agreement, though we all know how difficult those are to achieve. Each region—the EU, Asia, North America, South America—should be addressing these challenges. I have faith that people in these regions who understand these technologies, at least among non-extremist groups, share an interest in establishing broad guidelines and guardrails. They can then apply pressure and intelligence to rogue actors through the mechanisms I've discussed: monitoring purchases and activities, conducting reconnaissance and intelligence operations, much as we do for nuclear weapons.

This is an area where regulation serves clear national interests. In biosecurity and biological applications specifically, minimum performance standards for companies would be valuable, and I think companies are interested in some regulation. Further, we need civilian control of these decisions, and policy statements about how we use these tools and when we never use them. We made policies about nuclear tests—we don’t have to be exceptionalist about AI in many cases. The nuclear battles of the ‘50s, ‘60s, and ‘70s give us a lot of precedent.

I think this might even be a bipartisan issue. Far be it from me to declare something bipartisan, but we all share an interest in not being poisoned or infected.

— Russ Altman, Kenneth Fong Professor of Bioengineering, Genetics, Medicine, Biomedical Data Science and (by courtesy) Computer Science, and Stanford HAI Associate Director