AI in the Loop: Humans Must Remain in Charge

Stanford HAI’s upcoming conference challenges attendees to rethink AI systems with a “human in the loop” and consider a future where people remain at the center of decision making.


When AI practitioners use the phrase “human in the loop,” it conveys a model that runs with minimal influence from a person – sure, it was designed and tested by a human and may occasionally consult a human before making a final decision, but essentially the technology doesn’t need us.

Instead, Stanford HAI faculty argue, practitioners should focus on AI in the loop, where humans remain in charge. Artificial intelligence should augment humans and serve as a useful tool in our work, but control should remain in human hands.

That’s the theme of the Stanford HAI Fall Conference, which takes place Nov. 15, 2022, on the Stanford campus and online.

“There’s a lot of talk about closed-loop AI, where the technology is so good that it can fly the plane or drive the car or make the diagnosis,” says HAI Associate Director Russ Altman, a professor of bioengineering, genetics, medicine, and biomedical data science, who co-hosts the event with James Landay, professor of computer science and HAI Vice Director and Faculty Director of Research. “But a better default would be a doctor who has a useful AI assistant or a driver who gets a lot of help from their car to drive safely. Humans should not be replaced or have agency taken away.”

During the one-day conference, speakers will examine AI design, the relationship between communities and organizations and AI, and human-centered AI health care, with keynote presentations by Jodi Forlizzi of the Human-Computer Interaction Institute at Carnegie Mellon University and Genevieve Bell, director of the School of Cybernetics and 3A Institute at the Australian National University. The day will also include a poster session and creative AI demonstrations.

Here, Altman and Landay explain what attendees can expect from the day:

How should we think about finding the right balance between human control and AI technologies?

Altman: The default should be human control, and we should use autonomous AI systems only rarely under special circumstances. More importantly, all AI systems should be designed for augmenting and assisting humans – and with human impacts at the forefront.

What is at risk if we get this balance wrong?

Altman: The credibility of AI systems and their builders could be in jeopardy if these systems arise, proliferate, and do not contribute to human welfare. Companies’ profit motive is not a sufficient reason to move from human control to AI control. Just as for humans, AI systems should be held to the highest standards and not given a “pass” for decisions that are detrimental, dangerous, or contribute to degradation of the human experience.

What are some of the discussions or speakers you’re most excited about?

Altman: I’m personally excited for our second session, which addresses the role of communities in setting expectations for AI systems, monitoring them, and assuring that they have positive impacts on people’s lives.

Where do you expect to hear disagreement from your speakers?

Altman: Can “human-centered” be defined as a clear set of design principles, so that engineers know how to implement human-centered AI systems, or is it fuzzier, something that must be evaluated on a case-by-case basis?

Landay: Or do we need an entirely new design process that ensures human-centered concerns are properly considered?

Who should attend this conference?

Altman: The conversation should appeal to a wide audience, but in particular I hope to see AI system designers, as well as people in communities where AI is expected to be used who are worried about whether it will have a positive effect on their community.

Landay: We would also like to see AI experts and practitioners, so that they can learn what types of interdisciplinary teams they will need to partner with to build truly human-centered AI applications and systems.

What do you hope people take away from this discussion?

Altman: I think developers will take away ideas about how to create guidelines for evaluating whether our systems remain human-centered. Industry professionals will be challenged to consider how to run a business that uses AI to augment its mission while making sure it respects their core values. And the general public will be better informed to ask: Are AI system builders thinking about the human impacts in a responsible manner that leads to confidence, or does society need to intervene to stop AI systems from being built and used inappropriately?

Register or learn more about the 2022 HAI Fall Conference on AI in the Loop: Humans in Charge.

Stanford HAI's mission is to advance AI research, education, policy, and practice to improve the human condition.
