Hari Subramonyam: Making Human-Centered AI a Reality By Studying the Teams That Make It

As a HAI Faculty Fellow, Subramonyam hopes to bring together multidisciplinary experts to work on specific human-centered AI projects.


For his PhD dissertation at the University of Michigan, Hari Subramonyam observed human-computer interaction designers and AI engineers working together. He wanted to study how interactive, collaborative design processes might produce more human-centered AI systems. He also prototyped a tool that user-interface designers of AI systems can use to test how well their designs will perform for diverse people and across a variety of use contexts. And in separate but symbiotic work in the field of education, he developed a tool that augments students’ comprehension of complex text by automatically diagramming information as they highlight it.

Now, as he starts his five-year appointment as a HAI Faculty Fellow associated with the Graduate School of Education and the HCI Group at Stanford, Subramonyam wants to incorporate additional multidisciplinary perspectives into the design of specific AI systems. By understanding the way diverse researchers function in these design microcosms, he hopes to better understand how to foster the design of high-quality human-centered AI systems generally.

What is your vision for developing human-centered AI and how do you plan to go about it?

Whether we like it or not, each of us uses at least one or two AI systems every day. AI might autocomplete our words and sentences in emails and texts, answer our spoken questions to Alexa or Siri, drive our car, or make important hiring and criminal justice decisions that affect us. 

And although we have a high-level understanding of what we want from AI and what we don’t want – automation that costs jobs and discriminatory hiring decisions, for example – we don’t know how to operationalize these values and needs for people. That’s because the issues at play involve a complex network of diverse academic communities with boundaries between them.

So in my work, we’re bringing these communities together to discover how they can and should collaborate to realize these values and this vision for human-centered AI. 

Until now, I’ve been focusing on collaboration between designers and engineers, but at Stanford I’d like to bring in educators, psychologists, policy experts, ethicists, and others. We all need to be able to work together to build human-centered AI.

Why do you focus your work around solving specific AI problems?

To me, there is a symbiotic relationship between building AI systems and understanding the challenges of building them. When you take a 1,000-foot view of human-centered AI, you miss out on the low-level details about the kinds of decisions that go into realizing these systems. So, the approach that I took during my PhD research and that I plan to pursue here at Stanford is to tackle very specific problems. And I’ve seen that it’s only by designing and building specific systems that you come to understand what kinds of design decisions people need to make and what kinds of tools might support their needs.

Why does human-centered user-interface design need to be more collaborative when AI is involved?

When people first started building software systems for personal use in a non-AI context, they typically developed the technology first and then turned to making it work for people. In recent years, as designers gained a better understanding of human-computer interaction, that process flipped. The pipeline now proceeds from exploring people’s needs to designing the system that we all see, touch, and interact with, and finally to building the system itself.

In the AI context, however, neither the tech-first nor the human-first pipeline is appropriate. AIs themselves need to be designed with the human experience in mind. Consider unlocking your smartphone with a password vs. with facial recognition that uses AI. The user experience for the password is entirely in the designer’s control. But when it comes to an AI-based experience that uses facial recognition, the designer has much less control. They cannot specify that it should work in low light, or that it must work for people who have different skin tones, wear glasses, or have facial hair. Those are things that the AI must be trained to do. As a result, control shifts from the designer (who is used to doing human-centered design) to the engineer, who now also needs to think about what it means to implement these models and AI components in a human-centered way.

In the AI context, the interaction between designers and engineers must therefore shift from one of coordination and handoff to one involving collaboration among people with different kinds of expertise – engineers who are trained to think in terms of abstractions, models, functions, and variables, and designers who are trained to think in terms of people and behavior.

We need to figure out the lingua franca for putting these communities together in collaboration to ensure we get what we want from human-centered AI.

What have you learned about enabling collaboration between user-interface designers and AI engineers?

By bringing designers and engineers into the lab to observe how they collaborate on a specific project, and by interviewing people across many, many organizations that build AI products, I’ve learned a lot about how to enable collaboration between designers and engineers, including how to enable knowledge sharing and how to enable negotiation about design.

This research led me to develop a co-design process in which neither the AI nor the user experience comes first. Instead, both things happen in parallel: The designer and engineer work together to think about the data needed to train a machine learning system, how to label the data, the desired behavior for the AI, and how to implement the system so that it does what it is supposed to do. They talk about potential uncertainties, failures, and errors and how to fix them. And through this collaborative process, they align both the user experience and the AI with the needs of the people for whom they are designing the system.

Out of this work, my colleagues and I also developed and tested a tool called ProtoAI that helps designers consider how their design will respond to different kinds of user input data and different model outputs. Existing user-interface prototyping tools don’t work with actual data. They only allow designers to think about such things as where a button should go. But in the ProtoAI workflow, designers can use the models the engineers are building and simulate how their designs hold up in response to diverse inputs and diverse users. Because one of the strengths of AI systems is their potential for personalization and customization, designers need to understand the contexts in which the system will fail.
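To make that workflow concrete, here is a minimal sketch in Python of the kind of data-driven prototyping described above. Everything in it is hypothetical rather than part of ProtoAI itself: the face_unlock_model stand-in, the sample conditions, and the confidence thresholds are placeholders. The point is only that the designer’s interface logic is exercised against actual model outputs across diverse inputs instead of a static mockup.

```python
# Hypothetical sketch: exercise a face-unlock UI design against model outputs
# for a range of user conditions, rather than against a static mockup.
from dataclasses import dataclass


@dataclass
class TestCase:
    description: str   # e.g., lighting, skin tone, glasses, facial hair
    confidence: float  # simulated model confidence for this input


def face_unlock_model(case: TestCase) -> float:
    """Stand-in for the engineers' real model; returns a match confidence."""
    return case.confidence


def design_response(confidence: float) -> str:
    """The designer's interface logic: what the user sees for a given output."""
    if confidence >= 0.90:
        return "unlock"
    if confidence >= 0.60:
        return "retry with guidance (e.g., move to better lighting)"
    return "fall back to passcode"


# Diverse inputs the design has to hold up against.
cases = [
    TestCase("well lit, no accessories", 0.97),
    TestCase("low light", 0.55),
    TestCase("darker skin tone, low light", 0.48),
    TestCase("wearing glasses", 0.82),
    TestCase("heavy facial hair", 0.71),
]

for case in cases:
    outcome = design_response(face_unlock_model(case))
    print(f"{case.description:30s} -> {outcome}")
```

A table of outcomes like this surfaces the failure modes (low light, underrepresented skin tones) to the designer before the interface ships.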

You’ve also developed a tool (texSketch) for intelligence augmentation that automatically diagrams complex ideas in a text as the reader highlights them. Where do you plan to go with this work in the future?

When it comes to intelligence augmentation, AI systems could completely automate learning. For example, you can take a photo of a math equation and an AI tool can solve it for you. But when you do that, you kind of miss the point of helping people learn. So, I am interested in these domains where we try to balance what AI does with what people should be doing – where there’s a balance of control between automation and augmentation of human effort. Going forward, I plan to continue working on the diagramming and visualization work that I did with texSketch because learning through diagrams and visual representation is very powerful.
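As a rough illustration of the diagramming idea, not texSketch’s actual implementation, the sketch below turns a reader’s highlights into a small concept graph. It assumes the networkx library, and the subject-relation-object triples are hand-supplied stand-ins for what an automatic extraction step would produce from the highlighted text.

```python
# Hypothetical sketch: build a concept diagram from a reader's highlights.
# The triples below stand in for an automatic extraction step.
import networkx as nx

highlights = [
    ("greenhouse gases", "trap", "heat"),
    ("heat", "raises", "ocean temperature"),
    ("ocean temperature", "drives", "coral bleaching"),
]

graph = nx.DiGraph()
for subject, relation, obj in highlights:
    graph.add_edge(subject, obj, label=relation)

# Print the diagram as an edge list; a real tool would render it visually
# alongside the text as the reader highlights new passages.
for subject, obj, data in graph.edges(data=True):
    print(f"{subject} --{data['label']}--> {obj}")
```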

But I’m also interested in a new area, which is learning in the wild – i.e., outside the classroom. Science communication on the web, particularly about current events such as climate change and vaccines, has produced a lot of misinformation and misunderstanding. As part of my time at HAI, I want to see how we can support this process of learning in the wild: How can we take what we know about classroom learning and online learning and apply it to informal science communication, and what kinds of tools might people need so that they’re able to assess scientific claims and make sense of contrasting evidence? There are potentially a lot of web-based tools we could create. For example, imagine texSketch, but on the web, so that as you’re reading multiple articles it helps you make sense of causal relationships between different things.

This is an area of research that I am actively pursuing. And in this context, collaboration will be a major feature. In addition to designers and engineers, we will need to collaborate with educators, psychologists, and policy experts, to name a few. This is where my other work becomes symbiotic with my work on how to build human-centered AI.
