Christine Baker
Stanford computer science professor James Landay has always focused on people: improving how they interact with computers and improving their well-being. With that focus, he helped lay the foundation for many human-centered computer interfaces that are now nearly universal.
Design tools he developed in the 1990s presaged modern online design programs such as Balsamiq, Figma, and Canva. UbiFit, a device he and his colleagues designed in the early 2000s, was a precursor to the Fitbit and Apple Watch. And his science-fiction-inspired personalized AI education system, called the Smart Primer, anticipated educational programming that’s only now becoming possible using large language models.
“James trailblazed focusing on people in computer science long before it was popular and long before it was easy,” says Michael Bernstein, associate professor of computer science at Stanford.
Landay’s cutting-edge research earned him this year’s Lifetime Research Award from the Association for Computing Machinery’s Special Interest Group on Computer-Human Interaction (ACM SIGCHI). Now the newly appointed co-director of the Stanford Institute for Human-Centered AI (HAI) is setting his sights on a fundamental need in today’s AI boom: defining human-centered AI.
“That’s truly what my career has been about: creating human-centered technology that happens to use AI,” says Landay, the Anand Rajaraman and Venky Harinarayan Professor in the School of Engineering.
As HAI co-director, Landay oversees the institute’s research portfolio, which has funneled more than $40 million to more than 300 Stanford scholars across disciplines, and he feels he’s in the perfect spot to build a community of researchers and support a valuable mission. “AI is going to impact everything in society: our schooling, our health, our environment, our politics, all of it,” he says. “And I believe HAI is in the right place to help guide AI toward having a positive impact rather than a negative one.”
The Origin Story
As a high school student in the early 1980s, Landay already knew he wanted to study computer science. He had also seen how unintuitive computers could be for users such as his attorney father, who, as an early adopter of technology, turned to his son for help. “Even then I had an interest in making it easier for people to get real power out of using a computer,” Landay says.
During his undergraduate years at the University of California, Berkeley, Landay turned that interest into a vocation. He studied end-user programming – helping people program computers without knowing a programming language. He envisioned people telling or visually showing computers what they wanted, and computers responding.
Early Career: Creating Interactive Design Tools
For his dissertation at Carnegie Mellon University, Landay created a design tool for computer interface designers called SILK (Sketching Interfaces Like Krazy). He was interested in sketching as a creative tool because research showed people became less innovative once they began designing on computers. “When a design is informal, people focus on the big-picture ideas, rather than the font and the colors and whether things are positioned precisely,” he says. Because SILK allowed people to sketch a graphical user interface design onto an electronic tablet and interact with those sketched yet functional pieces of the design (such as a scroll bar, button, or menu), it encouraged people to come up with new ideas, he says.
This work led to an early-career focus on interactive design tools. As a professor of electrical engineering and computer science at UC Berkeley – just as the internet began to take off – he created DENIM, a SILK-like program for sketching websites, as well as multimodal tools that let designers build user interfaces combining speech and gesture. In 1998, he co-founded NetRaker, a company that provided web-based systems for evaluating and improving the design and navigability of websites; it was acquired by Keynote Systems in 2004.
The impact of this early work has been enormous, Bernstein says. “James’ user-centered methodology for developing interactive design tools has become the dominant method for designing interactive systems of all kinds.”
Making Tools to Change Behavior
At the University of Washington, where he was a professor of computer science and engineering from 2003 to 2013, Landay dove into designing interfaces aimed at helping individuals change their behavior, such as following through on short- or long-term goals around exercise or skill development. These interfaces took advantage of machine learning and sensing capabilities that were just becoming available at the time, Landay says. For example, his lab designed UbiFit, a pager-sized sensing device paired with a user interface that ran on the screens of early smartphones. It tracked a person’s exercise and displayed a garden that grew flowers as they met their exercise goals; once goals were met, butterflies would appear in the garden.
This precursor to the Fitbit and Apple Watch also predated the use of GPS on smartphones, so UbiFit and other systems at the Intel lab he directed instead mapped a person’s location relative to Wi-Fi access points and used machine learning to determine what they were doing, such as running, cycling, or using a stair climber. “This work was groundbreaking in terms of using AI,” Landay says. And the open-source location-aware software his team created at Intel Labs, where he served as a laboratory director from 2003 to 2006, became the basis for similar software used by early smartphones.
Useful Ubiquity
A third major thread in Landay’s work has been ubiquitous computing – embedding computer technology into everyday objects. For example, while at Intel Labs, his team developed an ambient health care system so older adults could more easily age in place. Using an RFID bracelet, the team gathered data about a person’s movements and interactions with RFID-labeled household items. They used that data to train a machine learning model to recognize a person’s activities. For example, if someone was using the toothpaste and toothbrush in the bathroom, they were likely brushing their teeth. Under Landay’s leadership, the Intel lab also developed and tested a digital display for caregivers showing a picture of the older family member with icons around the edge to represent whether they had taken their medications, eaten meals, gotten exercise, and had social interactions. The families who tested this prototype wanted a productized version right away, he says.
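To make that inference step concrete, here is a deliberately simplified sketch, not the Intel lab’s actual system: the object names, activity labels, and the overlap-matching rule below are invented stand-ins for the machine learning model that mapped a person’s interactions with RFID-tagged items to a likely activity.

```python
# Hypothetical sketch, not the Intel lab's system: infer a household activity from
# the set of RFID-tagged objects a person has recently touched. The real work
# trained a machine learning model on sensor data; here a tiny hand-labeled set
# of examples and a simple overlap score stand in for that model.

TRAINING_EXAMPLES = [
    ({"toothbrush", "toothpaste"}, "brushing teeth"),
    ({"kettle", "mug", "tea tin"}, "making tea"),
    ({"pill organizer", "water glass"}, "taking medication"),
    ({"frying pan", "spatula", "egg carton"}, "cooking breakfast"),
]


def infer_activity(touched: set[str]) -> str:
    """Return the activity whose example objects best overlap the touched objects."""

    def overlap(example_objects: set[str]) -> float:
        union = example_objects | touched
        return len(example_objects & touched) / len(union) if union else 0.0

    best_objects, best_activity = max(TRAINING_EXAMPLES, key=lambda ex: overlap(ex[0]))
    return best_activity if overlap(best_objects) > 0 else "unknown"


if __name__ == "__main__":
    # An RFID bracelet reports recently touched tags; toothpaste plus toothbrush
    # suggests the person is most likely brushing their teeth.
    print(infer_activity({"toothbrush", "toothpaste"}))  # brushing teeth
    print(infer_activity({"kettle", "mug"}))             # making tea
    print(infer_activity({"garden hose"}))               # unknown
```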
The Future of the World
A few years before joining Stanford in 2014, Landay spent a sabbatical mulling his next research efforts. He decided to focus on three areas that he felt were most important to the future of the world: health, education, and the environment.
“The odds that your research actually has a big impact in the real world is quite small,” Landay says, “but I felt that if I got lucky and did have an impact, at least I should be doing something important.”
In the health arena, Landay’s work on UbiFit evolved into an app called “Who is Zuki?” that uses narrative to keep people engaged with exercise. In collaboration with Sarah Billington, a professor of civil and environmental engineering; HAI co-director and computer scientist Fei-Fei Li; and School of Medicine professor Arnold Milstein, Landay has launched a new Ambient Intelligence for Health lab. The lab studies whether emerging technologies can monitor people in a privacy-preserving way, allowing older adults to remain in their homes longer while also helping upskill the personal health aides who often care for them.
Also in collaboration with Billington, he continues working with ubiquitous computing as part of a project to develop healthier hybrid digital-physical spaces. Their interdisciplinary team uses an AI sensing platform to learn how people are affected by a building’s features such as natural or artificial light; natural wood or artificial laminated furniture; or images in the workplace that show diversity. The team also evaluates whether dynamically changing the environment with lighting, sound, or large interactive displays can reduce people’s stress levels or increase creativity, physical activity, and a sense of belonging. The team is currently collaborating with other researchers in the Department of Computer Science and at the Stanford School of Medicine to bring these ideas to hospital and senior living settings.
Landay’s education work has focused on developing a Smart Primer that uses narrative to help young children learn. The idea came from Neal Stephenson’s science-fiction novel The Diamond Age, about a young girl with an electronic book that teaches her as it tells a story customized to her life and her world. Creating and personalizing a Smart Primer outside of fiction requires AI that can understand what a user already knows, doesn’t know, and should learn next. With the advent of generative AI, it’s now becoming possible to create personalized narrative as well as imagery that is specific to a particular student’s situation, Landay says. “I’ve been trying to design and build the Smart Primer for the last 10 years and I see this as taking another 5 to 10 years to develop this deeply personalized narrative capability.”
Defining Human-Centered AI
Landay defines human-centered artificial intelligence in a recent HAI seminar.
As one of HAI’s founders, Landay insisted that “human-centered” be part of the institute’s name. The term is borrowed from his own field of study, human-computer interaction, which commonly goes by human-centered computing. But after HAI launched five years ago, Landay noticed that “human-centered” meant different things to different people. At first, he thought it could be valuable for people to use the term in ways that made sense to them. But as time passed he realized HAI would be more effective if it built consensus around what distinguishes human-centered AI projects from other AI projects. For example, it isn’t actually human-centered for an AI researcher to simply pick a topic or application area that seems human-centered, such as health, without also considering how the work will impact doctors, patients, and society at large. Nor is it human-centered for a sociologist to study an algorithm’s impact on workers only after it has already been deployed in the world.
In recent talks and an upcoming paper, Landay outlines a definition of human-centered AI. Designers of AI need to – from the get-go – consider the needs of users, the needs and concerns of communities of people impacted by the AI’s use, and the needs and concerns of society at large. “I’m trying to create a design process that integrates across those three levels,” Landay says. “If we succeed in this, we’re less likely to produce AI systems that have a negative impact on society.”
Building a Community
Core to Landay’s human-centered innovations are the humans. Over the years he has focused on building communities of students, postdocs, and collaborators working in human-computer interaction.
“I’m a people person, so I love to help develop people, whether it’s my graduate students or the staff I work closely with at HAI,” he says. “I like to help people reach beyond what they think is their own potential and develop their careers.”
“James has always said that being a PhD advisor is a lifelong commitment, and he has lived up to that,” says Scott Saponas, Landay’s advisee at the University of Washington from 2004 to 2010, now senior director of the Biomedical Computing and Medical Experiences and Design groups at Microsoft. “He is one of only a few people who I know would pick up the phone if I called at 3 a.m. and needed advice on any topic – even 14 years after I graduated. Few people make that level of commitment to their students.”
And then there’s his ability to build communities of researchers working together toward a common goal, most notably at the University of Washington, where he created DUB – Design. Use. Build. – an interdisciplinary grassroots alliance of scholars focused on improving people’s experiences with technology. That work arguably transformed the University of Washington’s human-computer interaction program into one of the top two in the world, Landay says. “It brought interest to the area, brought new hiring across four departments, and now it’s a community of over a hundred people working in the field at the top level.”
HAI is also taking advantage of Landay’s knack for building communities of researchers. Under his leadership, the institute has supported more than 400 Stanford scholars across disciplines with seed grants, community workshops, and more. “I’ve always been excited by helping an organization of researchers achieve more than they could on their own,” he says.