We look at our phones to unlock our screens, tag friends in photos on social media, even validate our identities at ATMs by staring into the camera. At the same time, these technologies can track us in public settings, identify our criminal histories, and gauge our reactions to advertising, all beyond our notice.
“Facial recognition technologies are widely deployed,” says Marietje Schaake, international policy fellow at Stanford’s Institute for Human-Centered Artificial Intelligence. “We don’t always know it is happening.”
As the technology seeps into our everyday lives, Schaake, former Member of the European Parliament from the Netherlands and the international policy director at Stanford’s Cyber Policy Center, says we must consider questions around bias, data collection and ownership, and our rights to privacy. She discussed her views on these questions during a Stanford HAI seminar on March 20.
How It Works, Who Uses It
We all have unique facial features. Facial recognition algorithms analyze these features — say, the distance between our eyes — and convert them into a numerical representation that can be compared against the representations of other faces. A system's matching accuracy can be affected by anything from the size of its dataset to the variety and quality of the images it was trained on.
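The comparison step can be sketched in a few lines. This is a minimal, hypothetical illustration assuming faces have already been reduced to numeric feature vectors; real systems use learned embeddings from deep neural networks, and the feature values and threshold below are invented for the example.

```python
import math

def match_score(features_a, features_b):
    """Euclidean distance between two feature vectors:
    a smaller distance means more similar faces."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(features_a, features_b)))

def is_match(features_a, features_b, threshold=0.3):
    """Declare a match when the distance falls below a tuned threshold.
    The threshold trades off false accepts against false rejects."""
    return match_score(features_a, features_b) < threshold

# Hypothetical normalized measurements (eye distance, nose width, jaw ratio, ...)
enrolled = [0.42, 0.31, 0.77, 0.55]
probe_same = [0.43, 0.30, 0.75, 0.56]   # new photo of the same person
probe_other = [0.21, 0.48, 0.60, 0.90]  # a different person

print(is_match(enrolled, probe_same))   # → True
print(is_match(enrolled, probe_other))  # → False
```

The choice of threshold is where dataset size and image quality matter: a system tuned on narrow training data may set a boundary that misclassifies faces it has rarely seen, which is one mechanism behind the bias concerns discussed below.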
The fast-growing industry — a $5 billion market projected to double by 2025 — was adopted early on by law enforcement: In 2016, 50% of Americans were captured in law enforcement facial recognition databases, which are also being used in at least 10 countries in the EU, Schaake says.
But the software reaches beyond law enforcement to a wide variety of other industries:
- Travel: U.S. customs relies on facial recognition for its biometric exit program, and 17 airports currently deploy the technology, with plans to scan 97% of departing passengers by 2024.
- Finance: Some ATMs in Japan use facial recognition instead of cards and PINs, while Amazon Go scans customers to charge them virtually at its cashier-less stores.
- Health care: Facial recognition software can identify patients and diagnose genetic conditions.
- Education: Some public school systems in the U.S. are implementing the technology to track people who have been banned from campus or to record class attendance.
- Military: The U.S. is experimenting with software that can identify people in the dark using heat signatures on the skin.
- Social services: Some homeless shelters have experimented with the technology to help identify people seeking services who don’t have other forms of identification.
Legal, Societal, and Policy Implications
As the use and power of this technology continues to expand, it raises a number of serious questions, Schaake notes.
First, what are the consequences for privacy and anonymity? Should people have a presumption of privacy when they attend sporting events, take a walk in a public park, visit a hospital, or pick up their children at school?
“What one person might think is legitimate use, another person might think of as abusive or a blanket violation of rights,” she says.
Another issue of growing concern is bias: software that is often trained primarily on images of white men frequently misidentifies women and people of color.
Questions also arise around proper oversight of the people creating and implementing the technology. If a police force’s outdated facial recognition software produces bad results, who’s responsible? Should private companies be held accountable if their technologies are used for nefarious ends?
Clearview AI, which scraped billions of photos online, claimed to be working exclusively with law enforcement agencies, but reports linked the company to clients ranging from Saudi Arabia and the United Arab Emirates to Walmart and the NBA. A technology that's ostensibly "designed for research or medicine can be used elsewhere for repression," Schaake says. It is essential to consider the context in which facial recognition systems are used. "There are not enough guardrails," she says.
The European Union was initially rumored to be moving toward a ban on facial recognition systems, but later backtracked. The picture in the United States is splintered, with responses varying by municipality: San Francisco, Oakland, and Somerville, Mass., are among the cities that have banned its use entirely.
In Sweden, a municipality was fined for testing a facial recognition system to track attendance in schools, while a proposed bill in the New York state assembly would prohibit its use in education settings.
Meanwhile, the U.S. Senate introduced the Commercial Facial Recognition Privacy Act, which would require businesses to obtain explicit permission from individuals before collecting their facial data or sharing it with third parties. Regulation of facial recognition systems could also emerge from geopolitical concerns, for example through export controls or restrictions on foreign direct investment.
As governments work on laws to protect their citizens, a number of court cases have already challenged the technology. Facebook settled a lawsuit in Illinois for $500 million for harvesting users' photos without consent. The attorney general of Vermont sued Clearview AI for collecting photo information without permission, and the ACLU is suing several law enforcement agencies, including the Department of Homeland Security and Immigration and Customs Enforcement, to obtain information about their use of facial recognition software.
But even as these cases wend their way through the judicial system, the development of this technology continues to accelerate. “Will politics come forward and curb this technology first,” Schaake asks, “or will we see jurisprudence and court cases mapping out the playing field in which this technology can be used?”
Marietje Schaake spoke as a guest of the Stanford Institute for Human-Centered Artificial Intelligence. She is also teaching a spring course at Stanford on AI and the rule of law from a global perspective. To learn more about upcoming HAI speakers and events, sign up for the email newsletter or visit the events page.