In 2021, Stanford HAI researchers launched a new center focused on foundation models; created a new repository for open-source medical datasets; released cutting-edge research in robotics, cognitive science, and other fields; and shared their insights on explainable AI, mental health, and spycraft. Here are the year’s most-read stories.

1. Peter Norvig: Today’s Most Pressing Questions in AI Are Human-Centered

Stanford HAI hired renowned AI expert Peter Norvig to help build out our education programming, with a particular focus on diversity and inclusion. Here he discusses his experience in industry, what human-centered AI education looks like, and how to broaden access for students beyond Stanford.

2. State of AI in 10 Charts

When our AI Index was released in March 2021, we summarized the main findings in 10 easy-to-understand charts. Here’s what happened in 2020 in AI education, hiring, policy, investment, and other areas.

3. The Open-Source Movement Comes to Medical Datasets

In August, the Stanford Center for Artificial Intelligence in Medicine & Imaging launched a new free repository of medical imaging datasets in hopes of spurring crowd-sourced AI applications in health care.

4. Stanford Researchers Build $400 Self-Navigating Smart Cane

The white cane is a simple but crucial tool that helps people with visual impairments make their way through the world. Stanford researchers hope to improve on that tool with an affordable robotic cane that borrows techniques from autonomous vehicles to guide users safely and efficiently through their environments.

5. Re-Imagining Espionage in the Era of Artificial Intelligence

In the AI era, sophisticated intelligence can come from almost anywhere – armchair researchers, private technology companies, commercial satellites, and ordinary citizens who livestream on Facebook. But the U.S. intelligence community – a collection of 18 spy agencies across the government – is behind this technology curve, says national security expert Amy Zegart.

6. Introducing the Center for Research on Foundation Models (CRFM)

This summer, Stanford HAI launched the new Center for Research on Foundation Models, an initiative that brings together more than 175 researchers across 10+ departments at the university to understand and build a new type of technology that will power artificial intelligence systems in the future.

7. The 2021 AI Index: Major Growth Despite the Pandemic

What happened in AI in 2020? Technologists made significant strides in massive language and generative models; the United States saw its first-ever drop in AI hiring – a sign of a maturing industry – while hiring around the world increased; more dollars flowed to government use of AI than ever before; and colleges and universities offered twice as many AI courses as they did a few years ago. These were some of the biggest trends in the in-depth, interdisciplinary report published in March.

8. A Psychiatrist’s Perspective on Social Media Algorithms and Mental Health

Today there are over 3.78 billion social media users worldwide, each averaging 145 minutes of use per day. We’re beginning to see the harmful impact on mental health: loneliness, anxiety, fear of missing out, social comparison, and depression. Given this impact, how can we create empathetic design frameworks that foster compassion online?

9. How Bodies Get Smarts: Simulating the Evolution of Embodied Intelligence

A team of researchers at Stanford wondered: Does embodiment matter for the evolution of intelligence? And if so, how might computer scientists make use of embodiment to create smarter AIs? To answer these questions, they created a computer-simulated playground where arthropod-like agents dubbed “unimals” learn and are subjected to mutations and natural selection. Their findings suggest embodiment is key to the evolution of intelligence.

10. Should AI Models Be Explainable? That Depends. 

What do we mean when we say AI should be explainable? One Stanford scholar notes that there are different levels of interpretability when it comes to models. He advocates for clarity about the different types of interpretability and the contexts in which each is useful: “It is essential that model developers be clear about why an explanation is needed and what type of explanation is useful for a given situation.”

11. Future of Work: Beyond Bossware and Job-Killing Robots

The public conversation around AI’s impact on the labor market often revolves around the job-displacing or job-destroying potential of increasingly intelligent machines. Less attention is paid to another significant problem: the dehumanization of labor by companies that use what’s known as “bossware” – AI-based digital platforms or software programs that monitor employee performance and time on task. To discourage companies from both replacing jobs with machines and deploying bossware to supervise and control workers, we need to change the incentives at play.

12. A New Approach to Mitigating AI’s Negative Impact

Too often, we consider AI’s impact after the fact. This year, for the first time at Stanford, HAI worked with scholars on a new program that requires AI researchers to evaluate their proposals for potential negative societal impact before the proposals are green-lighted for funding. The Ethics and Society Review (ESR) requires researchers seeking funding to consider how their proposals might pose ethical and societal risks, to propose methods for mitigating those risks, and, if needed, to work with an interdisciplinary faculty panel to ensure those concerns are addressed before funding is received.

Stanford HAI's mission is to advance AI research, education, policy, and practice to improve the human condition.