A Legal Approach to “Affirmative Algorithms”

Date
November 09, 2020

Fixes for algorithmic bias could collide with the law. Two scholars propose a way forward.

As AI and predictive algorithms permeate ever more areas of decision making, from setting bail to evaluating job applications to making home loans, what happens when an algorithm arbitrarily discriminates against women, African-Americans, or other groups?  

It happens all the time.

Amazon famously discarded a resume-reviewing system because it penalized women, probably a legacy of gender-skewed hiring patterns. Similarly, an AI model used by courts to predict recidivism incorrectly labeled Black defendants as “high risk” at twice the rate of white defendants.

Unfortunately, warns Daniel Ho, the William Benjamin Scott and Luna M. Scott Professor of Law at Stanford University and associate director of the Stanford Institute for Human-Centered Artificial Intelligence, many of the proposed solutions to fix algorithmic bias are on a collision course with Supreme Court rulings on equal protection.

In a new paper by Ho and Alice Xiang, the head of Fairness, Transparency, and Accountability Research at the Partnership on AI, a research group that focuses on responsible AI, the authors warn that many of the strategies for increasing fairness clash directly with the high court’s push for “anti-classification.” That’s the principle of remaining “blind” toward categories like race, gender, or religion.

In one key case, the Supreme Court rejected the University of Michigan’s attempt to give a modest statistical boost to applicants from under-represented communities. In another, the high court ruled against the city of New Haven, Conn., which had thrown out the results of a test for firefighters because no African-American candidates would have been promoted. As Chief Justice John Roberts summed up the issue in another case, “The way to stop discrimination on the basis of race is to stop discriminating on the basis of race.”

That poses a big legal obstacle for fixing biased algorithms, note Ho and Xiang, because most of the strategies involve adjusting algorithms to produce fairer outcomes along racial or gender lines. Since many minority students may have less time and money for SAT coaching classes, for example, it could make sense to lower the relative weight of SAT scores in evaluating their college applications.

But because such adjustments would be deemed race-based classifications, the authors say, they risk being struck down in court.

“The adjustments to algorithmic systems come very close to the University of Michigan’s 20-point boost, which the Supreme Court rejected,” Ho says. “The machine learning community working on algorithmic fairness hasn’t had close exchanges with the legal community. But when you put the two together, you realize there’s a collision.”

Adds Xiang, “It was striking to see how much of the machine-learning literature is legally suspect. The court has taken a very strong anti-classification stance. If actions are motivated by race, even with the ostensible goal to promote fairness, it probably won’t fly.”

At the same time, the authors warn, the demand for “blindness” could make algorithmic bias even worse.

That’s because machine learning models pick up on all kinds of correlations or proxies for race that may have no real-world significance but become part of the decision process.

Amazon’s resume-reviewing model, for example, didn’t distinguish between male and female applicants. Instead, it “learned” that the company had hired very few engineers who had come from women’s colleges. As a result, it down-weighted applications that mentioned women’s colleges.
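
To make the proxy problem concrete, here is a minimal sketch in Python on synthetic data. It illustrates the general mechanism only; it is not a reconstruction of Amazon’s system, whose details are not public. The model is never shown a gender feature, yet the proxy feature it does see ends up with a negative weight:

```python
# Minimal illustrative sketch (synthetic data, not Amazon's actual system):
# a "blind" model with no gender feature still penalizes a proxy for gender.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
gender = rng.integers(0, 2, n)                            # 1 = woman; never shown to the model
womens_college = (gender == 1) & (rng.random(n) < 0.3)    # resume mentions a women's college
skill = rng.normal(0, 1, n)                               # legitimate signal

# Historical hiring decisions carried a bias against women.
hired = (skill + 1.0 * (gender == 0) + rng.normal(0, 1, n)) > 1.0

# The model sees only skill and the proxy feature.
X = np.column_stack([skill, womens_college.astype(float)])
model = LogisticRegression().fit(X, hired)

# The proxy gets a negative weight: applications mentioning a women's
# college are down-weighted, just as in the Amazon anecdote above.
print("weight on womens_college:", round(model.coef_[0][1], 2))
```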

“It’s hard to be fair if you’re not aware of an algorithm’s potential impact on different subgroups,” says Ho. “That’s why blindness can often be a significantly inferior solution to what machine learners call ‘fairness through awareness.’”
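
A correspondingly minimal sketch of “fairness through awareness,” again on made-up data and again only an illustration: equalizing selection rates requires knowing each applicant’s group, which is exactly the kind of explicit classification the case law scrutinizes.

```python
# Minimal illustrative sketch: "blindness" vs. "fairness through awareness"
# on made-up scores that are skewed against group 1.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)                   # protected attribute, 0 or 1
score = rng.normal(0, 1, n) - 0.5 * group       # historical scores penalize group 1

# Blind rule: a single threshold, applied without looking at group.
blind = score > 0.5
print("blind selection rates:",
      blind[group == 0].mean(), blind[group == 1].mean())

# Aware rule: a per-group threshold chosen so selection rates roughly match.
# Choosing it requires knowing each person's group -- the legal sticking point.
target = blind[group == 0].mean()
t1 = np.quantile(score[group == 1], 1 - target)
aware = np.where(group == 1, score > t1, score > 0.5)
print("aware selection rates:",
      aware[group == 0].mean(), aware[group == 1].mean())
```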

The good news is that Ho and Xiang see a possible solution to the legal morass.

A separate strand of affirmative action case law, tied to government contracting, has long permitted explicit racial and gender preferences. Federal, state, and local agencies create set-asides and bidding preferences for minority-owned contractors, and those preferences have passed legal muster because the agencies could document their own histories of discrimination.

Ho says that advocates for fairness could well justify race- or gender-based fixes on the basis of past discrimination. Like the Amazon system, most AI models are trained on historical data that may incorporate past patterns of discrimination.

What the law emphasizes, Ho says, is that the magnitude of the adjustment should track the evidence of historical discrimination. The government contracting precedent allows for explicit quantification, making fair machine learning more feasible.
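
Read in code terms, that requirement might look like the following sketch. It is a hypothetical illustration, not a method from the paper: measure the disparity in the historical data first, and let that measured gap, rather than an arbitrary constant, set the size of the corrective adjustment.

```python
# Hypothetical sketch (not the paper's method): tie the size of a corrective
# adjustment to the disparity actually measured in the historical data.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
group = rng.integers(0, 2, n)                     # 1 = historically disadvantaged group
quality = rng.normal(0, 1, n)
historical_score = quality - 0.4 * group          # documented historical penalty

# Quantify the documented disparity between the two groups.
gap = historical_score[group == 0].mean() - historical_score[group == 1].mean()
print(f"measured historical gap: {gap:.2f}")

# The remedy's magnitude tracks the evidence: the boost equals the measured
# gap, rather than a number picked out of thin air.
adjusted_score = historical_score + gap * (group == 1)
```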

Put another way, say Ho and Xiang, the jurisprudence of government contract law may offer an escape from the trap of blindness and a path toward fairness through awareness.

Contributor(s)
Edmund L. Andrews