Should Self-driving Cars and Care-bots All Come with Black Box Recorders?

September 28, 2021

An international team of experts argues that autonomous systems of all kinds should learn from the aviation industry.

Every commercial airplane carries a “black box” that preserves a second-by-second history of everything that happens in the aircraft’s systems, along with the pilots’ actions. Those records have proved invaluable in figuring out the causes of crashes.

Why shouldn’t self-driving cars and robots have the same thing? It’s not a hypothetical question.

Federal transportation authorities are investigating a dozen crashes involving Tesla cars equipped with the company’s “Autopilot” system, which allows nearly hands-free driving. Eleven people died in those crashes, one of whom was struck by a Tesla while changing a tire on the side of a road. Yet every major carmaker is ramping up its automated driving technology. Walmart is partnering with Ford and Argo AI to test self-driving cars for home deliveries, and Lyft is teaming up with the same companies to test a fleet of robo-taxis.

Read the paper: Governing AI Safety through Independent Audits

But self-directing autonomous systems go well beyond cars, trucks, and robot welders on factory floors. Japanese nursing homes use “care-bots” to deliver meals, monitor patients, and even provide companionship. Walmart and other stores use robots to mop floors. At least a half-dozen companies now sell robot lawnmowers. (What could go wrong?)

And more daily interactions with autonomous systems may bring more risks. With those risks in mind, an international team of experts — academic researchers in robotics and artificial intelligence as well as industry developers, insurers, and government officials — has published a set of governance proposals to better anticipate problems and increase accountability. One of its core ideas: a black box for any autonomous system.
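
The paper describes the goal rather than a format, but the core mechanism of such a recorder, an append-only, time-stamped log of a system's inputs and decisions, is simple to sketch. Below is a minimal illustration in Python; the field names, the JSON-lines layout, and the hash chaining used to make the log tamper-evident are assumptions made for the example, not specifications from the paper.

```python
import hashlib
import json
import time

class BlackBoxRecorder:
    """Minimal append-only, time-stamped event log for an autonomous system.

    Each record stores the hash of the previous record, so any later edit
    or deletion breaks the chain and becomes detectable in an audit.
    (The hash chaining is an illustrative choice, not from the paper.)
    """

    def __init__(self, path: str):
        self.path = path
        self.prev_hash = "0" * 64  # sentinel value for the first record

    def record(self, source: str, event: str, data: dict) -> None:
        entry = {
            "timestamp": time.time(),  # wall-clock time of the event
            "source": source,          # e.g. "bumper_sensor", "planner"
            "event": event,
            "data": data,
            "prev_hash": self.prev_hash,
        }
        line = json.dumps(entry, sort_keys=True)
        self.prev_hash = hashlib.sha256(line.encode()).hexdigest()
        with open(self.path, "a") as f:
            f.write(line + "\n")

# Hypothetical usage for a robo-mower:
recorder = BlackBoxRecorder("mower_blackbox.jsonl")
recorder.record("bumper_sensor", "obstacle_detected", {"distance_m": 0.4})
recorder.record("planner", "emergency_stop", {"reason": "obstacle"})
```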

“When things go wrong right now, you get a lot of shoulder shrugs,” says Gregory Falco, a co-author who is an assistant professor of civil and systems engineering at Johns Hopkins University and a researcher at the Stanford Freeman Spogli Institute for International Studies. “This approach would help assess the risks in advance and create an audit trail to understand failures. The main goal is to create more accountability.”

The new proposals, published in Nature Machine Intelligence, focus on three principles: preparing prospective risk assessments before putting a system to work; creating an audit trail — including the black box — to analyze accidents when they occur; and promoting adherence to local and national regulations.

The authors don’t call for government mandates. Instead, they argue that key stakeholders — insurers, courts, customers — have a strong interest in pushing companies to adopt their approach. Insurers, for example, want to know as much as possible about potential risks before they provide coverage. (One of the paper’s co-authors is an executive with Swiss Re, the giant reinsurer.) Likewise, courts and attorneys need a data trail to determine who should or shouldn’t be held liable for an accident. Customers, of course, want to avoid unnecessary dangers.

Companies are already developing black boxes for self-driving vehicles, in part because the National Transportation Safety Board has alerted manufacturers about the kind of data it will need to investigate accidents. Falco and a colleague have mapped out one kind of black box for that industry.

But the safety issues now extend well beyond cars. If a recreational drone sliced through a power line and killed someone, there would be no black box record from which to unravel what happened. The same is true for a robo-mower that runs amok. Medical devices that use artificial intelligence, the authors argue, need to record time-stamped information on everything that happens while they’re in use.

The authors also argue that companies should be required to publicly disclose both their black box data and the information obtained through human interviews. Allowing independent analysts to study those records, they say, would enable crowd-sourced safety improvements that other manufacturers could incorporate in their own systems.

Falco argues that even relatively inexpensive consumer products, like robo-mowers, can and should have black box recorders. More broadly, the authors argue that companies and industries need to incorporate risk assessment at every stage of a product’s development and evolution.

“When you have an autonomous agent acting in the open environment, and that agent is being fed a whole lot of data to help it learn, someone needs to provide information for all the things that can go wrong,” he says. “What we’ve done is provide people with a road map for how to think about the risks and for creating a data trail to carry out post-mortems.”
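
To make the post-mortem half of that road map concrete, a companion sketch can verify such a log and return a time-ordered history for investigators. This assumes the hash-chained JSON-lines format from the recorder sketch above; it is an illustration, not a prescribed audit procedure.

```python
import hashlib
import json

def audit_blackbox(path: str) -> list[dict]:
    """Check the log's hash chain and return events in time order.

    Raises ValueError if any record was altered or removed, which is
    precisely the tamper evidence an accident investigator needs.
    """
    events = []
    prev_hash = "0" * 64  # must match the recorder's sentinel value
    with open(path) as f:
        for i, line in enumerate(f):
            entry = json.loads(line)
            if entry["prev_hash"] != prev_hash:
                raise ValueError(f"audit trail broken at record {i}")
            # Re-serialize exactly as the recorder did to recompute the hash.
            prev_hash = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
            events.append(entry)
    return sorted(events, key=lambda e: e["timestamp"])
```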

By Edmund L. Andrews

Related News

Struggling DNA Testing Firm 23andMe To Be Bought For $256m
BBC | Media Mention | May 19, 2025
Stanford HAI Policy Fellow Jennifer King speaks about the data privacy implications of 23andMe's purchase by Regeneron.

Closing the Digital Divide in AI
Shana Lynch | News | May 19, 2025
Large language models aren't effective for many languages. Scholars explain what's at stake for the approximately 5 billion people who don't speak English.

The Evolution of Safety: Stanford’s Mykel Kochenderfer Explores Responsible AI in High-Stakes Environments
Scott Hadly | News | May 9, 2025
As AI technologies rapidly evolve, Professor Kochenderfer leads the charge in developing effective validation mechanisms to ensure safety in autonomous systems like vehicles and drones.