
Automation and meaning, data labeling, future of work, carbon emissions, deep-fake detection: This year’s top research stories from the Stanford Institute for Human-Centered Artificial Intelligence crossed all disciplines. Here are the eight stories that resonated most with our readers. 

Humans in the Loop: The Design of Interactive AI Systems

In this thoughtful perspective on AI and human creativity, Stanford music professor Ge Wang explores why fully automating a pursuit such as, say, music-making neglects the meaning we derive from the process of creating it. Rather than removing human involvement from a task, what if we selectively include humans? The result: a process that harnesses the efficiency of intelligent automation while retaining a greater sense of meaning. 

Stanford Spin-Out Snorkel AI Solves a Major Data Problem

Even the most sophisticated companies and academic labs struggle with a key bottleneck in machine learning: the time it takes to hand-label large volumes of training data. A project originating in the Stanford AI Lab could remove that bottleneck by letting systems label the data themselves, combining many noisy programmatic heuristics instead of relying on one-by-one manual annotation.
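To make the idea concrete, here is a minimal weak-supervision sketch in the style of the open-source Snorkel library: a few hand-written labeling functions vote on each example, and a label model combines those noisy votes into training labels. The spam/ham task, heuristics, and data below are illustrative assumptions, not drawn from the article.

```python
# Minimal weak-supervision sketch using the open-source Snorkel library
# (pip install snorkel pandas). Task and heuristics are illustrative.
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, HAM, SPAM = -1, 0, 1

@labeling_function()
def lf_contains_link(x):
    # Heuristic: messages containing URLs are often spam.
    return SPAM if "http" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_short_message(x):
    # Heuristic: very short messages are usually benign.
    return HAM if len(x.text.split()) < 5 else ABSTAIN

df_train = pd.DataFrame({"text": [
    "Win a prize now http://spam.example",
    "See you at lunch",
    "Click http://phish.example to claim your reward",
]})

# Apply every labeling function to every example, producing a label matrix.
applier = PandasLFApplier(lfs=[lf_contains_link, lf_short_message])
L_train = applier.apply(df_train)

# The label model estimates how reliable each labeling function is and
# combines their votes into a single training label per example.
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train, n_epochs=100, seed=42)
print(label_model.predict(L_train))
```

The payoff: writing a handful of heuristics scales to millions of examples far faster than labeling each one by hand.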

AI’s Carbon Footprint Problem

Training an off-the-shelf AI language processing system produces about 1,400 pounds of carbon emissions – roughly the amount generated by flying one person round trip between New York and San Francisco. Now imagine building and training a system from scratch. “As machine learning systems become more ubiquitous and more resource intensive, they have the potential to significantly contribute to carbon emissions,” says Stanford PhD student Peter Henderson. “But you can’t solve a problem you can’t measure.” He developed a system to track machine learning’s carbon efficiency.
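The underlying accounting is straightforward: energy drawn by the hardware, multiplied by the carbon intensity of the local power grid. Here is a back-of-envelope sketch, with every constant an illustrative assumption rather than a figure from Henderson’s tracker:

```python
# Rough carbon estimate for a training run: hardware energy use times
# the grid's carbon intensity. All constants below are assumptions.

GPU_POWER_KW = 0.3          # assumed average draw per GPU (300 W)
NUM_GPUS = 8
TRAIN_HOURS = 72
PUE = 1.5                   # assumed datacenter power usage effectiveness
GRID_KG_CO2_PER_KWH = 0.4   # assumed grid carbon intensity

energy_kwh = GPU_POWER_KW * NUM_GPUS * TRAIN_HOURS * PUE
emissions_kg = energy_kwh * GRID_KG_CO2_PER_KWH
print(f"{energy_kwh:.0f} kWh -> {emissions_kg:.0f} kg CO2 "
      f"({emissions_kg * 2.20462:.0f} lb)")
```

Note how much the answer depends on the grid: the same run on a coal-heavy grid can emit several times more than on a hydro-powered one, which is why measurement matters.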

Using AI to Detect Seemingly Perfect Deep-Fake Videos

A new tool can recognize minute mismatches between the sounds people make and the shapes of their mouths. But Maneesh Agrawala, the Stanford computer science professor who developed it, says no long-term technology solution will solve the deep-fake problem. “As the technology to manipulate video gets better and better, the capability of technology to detect manipulation will get worse and worse,” he says. “We need to focus on nontechnical ways to identify and reduce disinformation and misinformation.”

How Work Will Change Following the Pandemic

COVID-19 has introduced a “new normal” in which many of us work remotely, and some of those changes are going to be permanent, warns Stanford Digital Economy Lab director Erik Brynjolfsson. He examined 950 occupations to understand which will be most affected by automation, and he surveyed 50,000 workers about the future of remote work. “After the pandemic, we’re going to have a new economy that has a lot more people doing remote work and a lot more people using machine learning,” Brynjolfsson said. “It makes sense for managers today to think about what kinds of skills they want for that economy of the future.”

Is GPT-3 Intelligent? A Directors’ Conversation with Oren Etzioni

In this Directors’ Conversation, HAI’s John Etchemendy interviews Allen Institute CEO Oren Etzioni about GPT-3 (impressive behavior, but not intelligence), the trolley problem (edge cases that distract from the values that matter most), artificial general intelligence (far, far from our current reality), adversarial versus cooperative AI, and other fascinating philosophical questions. 

A Fitness App with a Story to Tell: Can Narrative Keep Us Moving?

Getting healthy? You have your pick of digital tools – fitness bands, smartwatches, online social networks – to help. But those tools are often not enough to keep us on track. Part of the problem, says HAI Associate Director James Landay, is that they don’t keep us mentally engaged. His research explores building storytelling into a fitness app: as users meet their goals, they unlock the next chapter in an ongoing tale. The study is already showing positive results. 

How AI Systems Use Mad Libs to Teach Themselves Grammar

Machines can learn a lot about language by playing a fill-in-the-blank game reminiscent of “Mad Libs,” finds Christopher Manning, a Stanford professor of linguistics and of computer science. “As these models get bigger and more flexible, it turns out that they actually self-organize to discover and learn the structure of human language. It’s similar to what a human child does.” That has big implications for natural language processing, which is increasingly central to AI systems that answer questions, translate languages, help customers, and even review resumes.
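For readers who want to play the fill-in-the-blank game themselves, here is a minimal sketch using Hugging Face’s transformers library to query a masked language model. BERT is one widely available model of the family the article describes; the model choice and prompt are illustrative.

```python
# Fill-in-the-blank ("Mad Libs") probing of a masked language model,
# via Hugging Face's transformers library (pip install transformers torch).
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# The model returns its top candidate words for the [MASK] slot,
# each with a probability score.
for pred in unmasker("The chef [MASK] the soup before serving it."):
    print(f"{pred['token_str']:>10}  p={pred['score']:.3f}")
```

A model that reliably fills the blank with a past-tense verb here has, in effect, picked up facts about English syntax without ever being taught grammar rules explicitly.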

