
Designing Decision-Making Algorithms in an Uncertain World

Stanford researcher’s new book will help designers of intelligent systems find the right algorithm for the task at hand.

[Image: illustration of many gray lines taking many paths against a pink background.]

The new book Algorithms for Decision Making helps developers find the best approach for complex problems. | Artem Pohrebniak

Anyone setting out to design an intelligent system for making decisions in the face of an uncertain and ever-changing world might want to begin by reading Algorithms for Decision Making, a new book by Mykel Kochenderfer, associate professor of aeronautics and astronautics at Stanford University and director of the Stanford Intelligent Systems Laboratory (SISL), and his colleagues, Tim A. Wheeler and Kyle H. Wray. 

Decision-making algorithms ingest problem-relevant information from the environment and produce an action. Think of an AI algorithm that takes in patient vital signs and outputs a diagnosis, or a stock-trading system that synthesizes daily market prices and suggests which stocks to buy.
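To make that observation-in, action-out loop concrete, here is a minimal sketch in Python of such an interface. The class names, fields, and thresholds are illustrative assumptions only, not anything from the book and certainly not a real clinical rule.

```python
from abc import ABC, abstractmethod
from typing import Any

class DecisionAgent(ABC):
    """Minimal interface: map an observation of the environment to an action."""

    @abstractmethod
    def act(self, observation: Any) -> Any:
        """Choose an action given the latest problem-relevant information."""

class ThresholdDiagnosisAgent(DecisionAgent):
    """Toy example: flag a patient for follow-up when heart rate and temperature
    both exceed illustrative thresholds. A placeholder decision rule, not medical advice."""

    def act(self, observation: dict) -> str:
        if observation["heart_rate"] > 100 and observation["temp_c"] > 38.0:
            return "refer for further testing"
        return "no action"

agent = ThresholdDiagnosisAgent()
print(agent.act({"heart_rate": 112, "temp_c": 38.6}))  # refer for further testing
```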

But building an agent for a highly uncertain environment is a challenge for any developer. In this new book, Kochenderfer and his co-authors recommend approaches suited to different kinds of problems. For example, if a designer knows that there’s a particular type of uncertainty in the environment, or that a source of input, such as a sensor, is imperfect, they can turn to the relevant part of the book, which outlines a number of algorithms they might use.

Here, Kochenderfer discusses the value of computer-based algorithmic decision making and the key themes of the book.

When do computer algorithms make better decisions than humans?

Humans are not very good at reasoning about low probability events or about complex scenarios where many things are happening at once. That’s where a computational approach can bring a tremendous amount of value to a decision-making process. It reduces the burden on the human designer to anticipate all the possible scenarios.

For example, a human designing a self-driving car cannot anticipate every possible driving scenario or the kinds of sensor failures and errors that might arise when things go wrong. 

A related example — we have studied aircraft collision avoidance systems that make decisions using a process called dynamic programming that can reason about very low probability events, such as unexpected maneuvers, and optimize for the best possible course of action given the various sources of uncertainty. Rigorous analyses showed that these collision avoidance systems are both safer and more efficient than something that a team of humans could have produced on their own. 
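As a rough illustration of the dynamic-programming idea (not the actual collision avoidance logic), the sketch below runs value iteration on a tiny made-up model in which a rare collision outcome carries a large cost. All states, probabilities, and costs are invented for illustration.

```python
import numpy as np

# Toy Markov decision process: states are separations ("safe", "close", "collision");
# actions are "maintain" or "climb". Probabilities and costs are invented for
# illustration; they are not the certified system's model.
states = ["safe", "close", "collision"]
actions = ["maintain", "climb"]

# T[a][s][s'] = probability of moving from s to s' under action a
T = {
    "maintain": np.array([[0.98, 0.02, 0.00],
                          [0.10, 0.85, 0.05],    # small chance of collision if we do nothing
                          [0.00, 0.00, 1.00]]),
    "climb":    np.array([[0.90, 0.10, 0.00],
                          [0.60, 0.399, 0.001],  # climbing makes collision very unlikely
                          [0.00, 0.00, 1.00]]),
}
# Immediate reward: collisions are catastrophic, climbing carries a small alert cost
R = {"maintain": np.array([0.0, 0.0, -1000.0]),
     "climb":    np.array([-1.0, -1.0, -1000.0])}

gamma = 0.95
V = np.zeros(len(states))
for _ in range(200):  # value iteration: repeated Bellman backups
    Q = np.array([R[a] + gamma * T[a] @ V for a in actions])
    V = Q.max(axis=0)

policy = {s: actions[i] for s, i in zip(states, Q.argmax(axis=0))}
print(policy)  # e.g. climb when "close", even though the collision probability is tiny
```

Even though the collision transition has probability 0.001, its large cost dominates the expected value, so the computed policy alerts in the "close" state, which is the kind of low-probability reasoning described above.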

Why is decision making under uncertainty a particular focus of the book?

Most of the decisions we make in our lives are based on imperfect information. When we make a medical decision, for example, we know that diagnostic tests might be imperfect — there can be false positives or false negatives; and when we’re building robots, the sensor systems might fail in some way; or a self-driving car might encounter occlusions in the environment, such as a van blocking our ability to see a pedestrian.

We’re just inherently uncertain about the state of the world and sometimes that uncertainty is a significant factor. So, we want to address problems with uncertainty head on, and that’s a key aspect of decision making that this book is trying to address. We want to help people build decision-making algorithms that can take imperfect information and make decisions that achieve an objective or set of objectives.

And when we talk about uncertainty, that includes uncertainty about the effects of our own actions, uncertainty about the state of the environment, uncertainty about how others might respond to our actions, and uncertainty in our conception or “model” of how the world works.

And the book breaks these forms of uncertainty down to their essence — to very simple computations. And it turns out that computers can do these computations pretty easily using multiplication and addition of potentially small numbers.
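As a sketch of the kind of arithmetic involved, here is a discrete Bayesian belief update for an imperfect sensor: multiply each hypothesis by the likelihood of what was observed, add the products up, and divide. The scenario and probabilities are illustrative assumptions, not calibrated values or code from the book.

```python
# Belief over whether a pedestrian is present, updated from an imperfect sensor.
prior = {"pedestrian": 0.01, "clear": 0.99}

# P(observation | state): the sensor sometimes misses and sometimes false-alarms
likelihood = {
    "detect":    {"pedestrian": 0.90, "clear": 0.05},
    "no_detect": {"pedestrian": 0.10, "clear": 0.95},
}

def update(belief, observation):
    # Multiply each state's prior probability by the likelihood of the observation ...
    unnormalized = {s: likelihood[observation][s] * p for s, p in belief.items()}
    # ... then add the products and divide so the belief sums to one.
    total = sum(unnormalized.values())
    return {s: v / total for s, v in unnormalized.items()}

belief = update(prior, "detect")
print(belief)  # roughly {'pedestrian': 0.154, 'clear': 0.846}
```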

How does time play a role in algorithmic decision making?

Time is critical. We generally need to reason about the effects of our actions over an extended time window, including keeping track of the recent past as well as making predictions about the future. Most real-world problems don’t involve single-shot solutions.

In the book we start off by introducing probability theory and utility theory in single-shot contexts so that the reader builds a solid understanding in this simpler setting. But we then move on to sequential problems, because decision makers typically want to reach a goal that requires a series of actions. For example, in a medical context, doctors don’t make a single decision and that’s it. They hopefully have a long-lasting relationship with the patient, so we don’t want an algorithm to greedily make what appears to be the best decision in the moment. We need it to reason about the future.
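The toy comparison below shows why greedy and lookahead choices can differ; the options, rewards, and future values are invented purely for illustration.

```python
# Two toy treatment options over a two-step horizon.
immediate_reward = {"quick_fix": 10.0, "slow_treatment": 2.0}
# Assumed expected value of the state each action leads to (future outcomes)
future_value = {"quick_fix": 0.0, "slow_treatment": 15.0}

greedy_choice = max(immediate_reward, key=immediate_reward.get)
lookahead_choice = max(immediate_reward,
                       key=lambda a: immediate_reward[a] + future_value[a])

print(greedy_choice)     # 'quick_fix': best right now
print(lookahead_choice)  # 'slow_treatment': best once the future is considered
```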

How have various disciplines contributed to the field of algorithmic decision making?

Many different communities inspired the content of this book. There’s not only AI, which has traditionally been a subfield of computer science, but also operations research, control theory, psychology, neuroscience and economics. All of these fields have contributed to the concepts in the book in a major way.

In fact, economics is the first one that comes to mind. In the 1940s, John von Neumann and Oskar Morgenstern published a book called Theory of Games and Economic Behavior that sets forth a set of axioms about rational preferences. These are properties that we just accept, such as if I prefer apples to bananas, and bananas to cookies, then I’d better prefer apples over cookies. And their work gave support for the idea of utility theory, which says that so long as you have these rational preferences, you can assign utilities — numeric values — to different outcomes. And that allows you to make the problem of decision making under uncertainty well defined: You need only choose the action that maximizes your expected utility. That’s the maximum expected utility principle, and that principle, which comes from economics, underlies the entire book.
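A minimal sketch of the maximum expected utility principle: weight each outcome's utility by its probability and pick the action with the highest total. The scenario, probabilities, and utilities below are made-up assumptions for illustration.

```python
# Carry an umbrella or not, given a belief about the weather.
p_outcome = {"rain": 0.3, "sun": 0.7}
utility = {
    ("umbrella", "rain"): 5, ("umbrella", "sun"): 3,
    ("no_umbrella", "rain"): -10, ("no_umbrella", "sun"): 8,
}

def expected_utility(action):
    # Sum of probability-weighted utilities over possible outcomes
    return sum(p * utility[(action, outcome)] for outcome, p in p_outcome.items())

best = max(["umbrella", "no_umbrella"], key=expected_utility)
print(best, expected_utility(best))  # umbrella: 0.3*5 + 0.7*3 = 3.6 beats 0.3*(-10) + 0.7*8 = 2.6
```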

How are you seeing algorithmic decision making benefiting or harming society?

The book highlights several examples of beneficial deployment of algorithms. For example, because of aircraft collision avoidance systems, we will have safer and more efficient air transportation. In the financial sector, algorithmic decision making can help people invest their resources so that they can have a sustainable level of consumption across their lifetime. And medical decision support systems can help promote safer, better medical care.

There are also aspirational kinds of research that have not yet been deployed, such as figuring out how to fight wildfires. Firefighting resources are finite and there’s a lot of uncertainty about exactly how a fire will develop depending on the wind, the vegetation, the terrain and so forth. Algorithms that account for these uncertainties could help us fight fire more effectively and safely.

On the other hand, there are some potential pitfalls. If these systems are deployed without proper validation, there can be a risk to life, and there could be unfairness and bias. So, a major focus of our research is to not just build systems that are worthy of our trust, but to come up with methodologies to validate that they will behave as expected or as desired when deployed in the real world. We want to proactively make sure that these systems are safe and that they have the desired societal impact. And because we want to understand potential issues with our systems well before they are deployed, this book includes a chapter that talks about validation — a topic that is actually worthy of an entire book that we’re currently writing titled Algorithms for Validation.
