Of CEOs and Secretaries of War: A Call for a Whole-of-Society Approach
“The real problem is not whether machines think but whether men do.”
—B.F. Skinner
“We must learn to sit down together and talk about a little culture …”
—Sylvia Wynter
The dramatic public struggle between Anthropic and the Pentagon is a microcosm of many of the brewing debates over AI ethics, but in particular it made vivid for everyone the very high stakes of just who should control a technology’s design, development, and deployment, including its intended or (un)intended use. On the one hand, it seems self-evident that a government of elected officials—not a handful of corporate entities, whatever their ethical or philosophical views—should decide how, when, and where technologies as immensely powerful and consequential for humanity as AI are used.
But what does it mean when a government suggests it can claim as its own a private company’s technology it wants: either to nationalize it, as the Trump administration suggested it could and might do, or to seek to immolate it by designating it a supply-chain security risk for both government and commercial use, as War Secretary Pete Hegseth did in his fit of pique at Anthropic CEO Dario Amodei? (The contradictions of claiming Claude is too dangerous to use while at the same time using it in the capture of former Venezuelan President Nicolás Maduro and in the war with Iran have not been lost on anyone.)
As a result, many of my students interested in launching startups and creating technologies ask: If I create something that doesn’t happen to align with a current government’s policies or preferences, will the government simply take it from me or destroy me? Amodei did not want Claude used in ways he believes would violate civil rights and human rights, such as mass domestic surveillance and autonomous drones. My students think about mitigating unwanted secondary or unintended uses of their products, whether through design, through safeguards embedded in the technology itself, or through policy, but in those scenarios they usually imagine bad actors or malicious intent. What to do, my students ask, if they feel their own government would reserve the right to violate human rights with some technology they developed? Are they then, as creators, implicated in that (mis)use? It is a crisis both Einstein and Oppenheimer struggled with over nuclear power.
Stanford HAI admirably asks for an Ethics & Society Review (ESR) from those applying for project funding at scale. It goes beyond the IRB, which takes into account only individual human subjects, to consider, more broadly and as thoroughly as one can, the anticipated societal impact of one’s work and what one will do to mitigate harms. There is a related argument over the ethics of general-purpose versus purpose-driven AI, of course, and some, like OpenAI CEO Sam Altman, have controversially claimed that no one can anticipate the harms of a general-use technology before a product has been released “into the wild,” but ESRs can be an important exercise in thinking more deeply about what makes for so-called responsible, transparent, trustworthy, and accountable AI, beyond a simplistic ethics audit. I fully appreciate how problematic those reassuring adjectives are; they seem to mollify, by rhetorical fiat, consumer anxieties about irresponsible, opaque, deceptive, and unaccountable AI. Nonetheless, ESRs, created at the outset rather than the back end of a project, remain an important opportunity to think meaningfully about, maybe even take some ownership over, just what might happen to the things we put into the world for good or ill. I now require an expanded version of HAI’s ESR for my courses’ final projects.
But what, then, is the value of ESRs in the face of presidential power to define unilaterally what constitutes, for instance, “legal use” of AI? Or when deference to military priorities and national security imperatives requires conceding proprietary rights to one’s technology or compromising one’s ethical compass or moral values? These are obviously not new philosophical issues. But the fight between Anthropic and the Pentagon makes clear the real-world immediacy of such questions and the impatience of the Department of War with discussion. As Maureen Dowd recently put it in a New York Times op-ed, the Pentagon bum-rushed Anthropic with a choice: “Be extorted or blacklisted.” It is unsurprising that Trump did not consult Congress before declaring war on Iran. This administration’s adoption of Zuckerberg’s move-fast-and-break-things ethos, acting seemingly unilaterally and at whatever breathless speed suits its whim, renders functionally impotent the legal and ethical deliberative processes needed to make informed judgments about AI use.
Some argue those judgments must lie ultimately with the executive branch. But what to do when tech industry lobbyists have purchased, in mind-numbing amounts, such unprecedented access to and outsize influence on presidential decision-making about AI? It is naïve to believe that regulation and governance will make the U.S. the loser in an international AI arms race, or that the industry mantras “regulation impedes innovation” and “governance castrates competition” are animated solely by concern for some higher social good or belief in civilizational progress. After all, those narratives, many of which were incubated in industry marketing divisions, serve a for-profit motive that benefits directly from a rush to market and a push to deploy.
In that context, the deliberations and critical reflection that should rightly occur in a democratic society about the ethical design and uses of AI are artificially made to seem unacceptably “slow.” Yet deliberations of such importance and impact deserve a timetable at a human pace, not one in which we are constantly whipped into a perpetual fast-forward. Moreover, decisions about who decides what about AI should not be a presidential prerogative or belong solely to CEOs; they should ideally involve civil society, technologists, academe (the humanities and social sciences as well as STEM fields), philanthropy, and all branches of government: a whole-of-society approach. Surely all of us are stakeholders in so profoundly transformational a technology.
— Michele Elam, William Robertson Coe Professor of Humanities in the English Department at Stanford University and Stanford HAI Senior Fellow