The Challenges of Facial Recognition Technologies | Stanford HAI
April 23, 2020

REUTERS/Thomas Peter

As the technology expands, we must consider the legal and societal implications, says one scholar.

We look at our phones to unlock our screens, tag friends in photos on social media, even validate our identities at ATMs by staring into the camera. At the same time, these technologies can track us in public settings, identify our criminal histories, and gauge our reactions to advertising, all beyond our notice.

“Facial recognition technologies are widely deployed,” says Marietje Schaake, international policy fellow at Stanford’s Institute for Human-Centered Artificial Intelligence. “We don’t always know it is happening.” 

As the technology seeps into our everyday lives, Schaake, former Member of the European Parliament from the Netherlands and the international policy director at Stanford’s Cyber Policy Center, says we must consider questions around bias, data collection and ownership, and our rights to privacy. She discussed her views on these questions during a Stanford HAI seminar on March 20.

How It Works, Who Uses It

We all have unique facial features. Facial recognition algorithms analyze those features — say, the distance between our eyes — and convert them into a mathematical representation that can be compared against the representations of other faces. A system’s matching accuracy can be affected by anything from the size of its dataset to the variety and quality of the images it was trained on. 
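The comparison step described above can be sketched in code. The following is a minimal illustration, not any vendor’s actual pipeline: it assumes each face has already been reduced to a fixed-length numeric vector (an “embedding”), and it declares a match when two vectors fall within a tuned distance threshold.

```python
import math

def euclidean_distance(a, b):
    """Distance between two face embeddings (equal-length lists of floats)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(probe, candidate, threshold=0.6):
    """Declare a match when two embeddings are closer than the threshold.

    The threshold trades false accepts against false rejects; deployed
    systems tune it on a validation set rather than hard-coding it.
    """
    return euclidean_distance(probe, candidate) < threshold

def identify(probe, gallery, threshold=0.6):
    """Return the name of the closest gallery face, or None if nothing matches."""
    best_name, best_dist = None, float("inf")
    for name, embedding in gallery.items():
        d = euclidean_distance(probe, embedding)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist < threshold else None

# Toy 3-dimensional embeddings; real systems use 128 or more dimensions.
gallery = {"alice": [0.1, 0.9, 0.3], "bob": [0.8, 0.2, 0.5]}
print(identify([0.12, 0.88, 0.31], gallery))  # prints "alice"
```

Note that the quality issues the article raises live upstream of this sketch: if the model that produces the embeddings was trained on an unrepresentative dataset, the distances themselves are skewed, and no choice of threshold fixes that.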

The fast-growing industry — a $5 billion market projected to double by 2025 — was adopted early on by law enforcement: In 2016, 50% of Americans were already captured in law enforcement facial recognition databases, and law enforcement agencies in at least 10 EU countries also use the technology, Schaake says. 

But the software reaches beyond law enforcement to a wide variety of other industries:

  • Travel: U.S. customs relies on facial recognition for its biometric exit program, and 17 airports currently deploy the technology, with plans to scan 97% of departing passengers by 2024.

  • Finance: Some ATMs in Japan use facial recognition instead of cards and PINs, while Amazon Go scans customers to charge them virtually at its cashier-less stores.

  • Health care: Facial recognition software can identify patients and diagnose genetic conditions.

  • Education: Some public school systems in the U.S. are implementing the technology to track people who have been banned from campus or to record class attendance.

  • Military: The U.S. is experimenting with software that can identify people in the dark using heat signatures on the skin.

  • Social services: Some homeless shelters have experimented with the technology to help identify people seeking services who don’t have other forms of identification.

Legal, Societal, and Policy Implications

As the use and power of this technology continues to expand, it raises a number of serious questions, Schaake notes. 

First, what are the consequences for privacy and anonymity? Should people have a presumption of privacy when they attend sporting events, take a walk in a public park, visit a hospital, or pick up their children at school? 

“What one person might think is legitimate use, another person might think of as abusive or a blanket violation of rights,” she says.

Another issue of growing concern is bias: software trained predominantly on images of white men frequently misidentifies women and people of color. 

Questions also arise around proper oversight of the people creating and implementing the technology. If a police force’s outdated facial recognition software produces bad results, who’s responsible? Should private companies be held accountable if their technologies are used for nefarious ends?

Clearview AI, which scraped billions of photos online, claimed to be working exclusively with law enforcement agencies, but reports linked the company to clients ranging from Saudi Arabia and the United Arab Emirates to Walmart and the NBA. A technology that’s ostensibly “designed for research or medicine can be used elsewhere for repression,” Schaake says. It is essential to consider the context in which facial recognition systems are used. “There are not enough guardrails,” she says.

Legislative Challenges

The European Union was initially rumored to be moving toward a ban on facial recognition systems, but it backed away from that position. The picture in the United States is splintered, with responses varying by municipality: San Francisco, Oakland, and Somerville, Mass., are among the cities that have banned the technology’s use outright.

In Sweden, a municipality was fined for testing a facial recognition system to track attendance in schools, while a proposed bill in the New York state assembly would prohibit its use in education settings. 

Meanwhile, the U.S. Senate introduced the Commercial Facial Recognition Privacy Act, which would require businesses to obtain explicit permission from individuals before collecting their facial data or sharing it with third parties. Facial recognition systems could also be regulated in response to geopolitical concerns, for example through export controls or restrictions on foreign direct investment. 

As governments work on laws to protect their citizens, a number of court cases have already challenged the technology. Facebook settled a lawsuit in Illinois for $500 million for harvesting users’ photos without consent. The attorney general of Vermont sued Clearview AI for collecting photo information without permission, and the ACLU is suing several law enforcement agencies, including the Department of Homeland Security and Immigration and Customs Enforcement, for information about their use of facial recognition software. 

But even as these cases wend their way through the judicial system, the development of this technology continues to accelerate. “Will politics come forward and curb this technology first,” Schaake asks, “or will we see jurisprudence and court cases mapping out the playing field in which this technology can be used?”

Marietje Schaake spoke as a guest of the Stanford Institute for Human-Centered Artificial Intelligence. She is also teaching a spring course at Stanford on AI and the rule of law from a global perspective. To learn more about upcoming HAI speakers and events, sign up for the email newsletter or visit the events page.

Author
Shana Lynch