The possibility that AI will automate most cognitive labor is worth taking seriously. How should we adapt to this transformation? I start from the perspective, articulated in the essay “AI as normal technology”, that the true bottlenecks lie downstream of capabilities and that AI’s impacts will unfold gradually over decades. If this is true, there are major gaps in our current evidence infrastructure, because it over-emphasizes the capability layer.
The AI Index, currently in its ninth year, tracks, collates, distills, and visualizes data relating to artificial intelligence.
Strategic stability exists when neither side thinks it can improve its strategic outcome by striking first.
The stochastic multi-armed bandit (MAB) is a benchmark model for decision-making under uncertainty. In the classical MAB setting, a decision maker sequentially chooses between a set of alternatives ("arms"), and earns a reward upon each choice. The decision maker's goal is to ensure these rewards are as high as possible over their decision horizon. MABs are used in a wide range of applications, from Internet advertising to healthcare.
It is well known that high-performing MAB algorithms must balance "exploration", i.e., learning about relatively unknown arms, against "exploitation", i.e., leveraging arms that have already been seen to perform reasonably well. Unfortunately, due to practical constraints, fairness requirements, and ethical considerations, active exploration may not be possible in some domains. For example, in healthcare, "exploration" may involve using an untested treatment on a prospective patient, but ethical considerations may preclude such use without appropriate safeguards.
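To make the trade-off concrete, here is a minimal sketch of one standard way to balance exploration and exploitation: the epsilon-greedy rule, with Bernoulli rewards. This is an illustrative textbook construction, not code from the talk or paper; the function name and parameters are my own.

```python
import random

def epsilon_greedy(true_means, horizon, epsilon=0.1, seed=0):
    """With probability epsilon pull a uniformly random arm (explore);
    otherwise pull the arm with the highest empirical mean (exploit).
    Rewards are Bernoulli draws from each arm's true mean."""
    rng = random.Random(seed)
    k = len(true_means)
    counts, sums = [0] * k, [0.0] * k
    total = 0.0
    for _ in range(horizon):
        if rng.random() < epsilon:
            arm = rng.randrange(k)  # explore: any arm, uniformly
        else:
            # exploit: arm with the best empirical mean (unseen arms count as 0)
            means = [sums[i] / counts[i] if counts[i] else 0.0 for i in range(k)]
            arm = max(range(k), key=lambda i: means[i])
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total
```

Setting `epsilon=0` recovers a purely exploitative policy, which is the regime the talk examines.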
Surprisingly, a body of recent research has suggested that in many practical regimes of interest, MAB algorithms that focus solely on exploitation (i.e., always choosing the empirically best arm) -- known as "greedy" algorithms -- can in fact perform quite well, due to exploration that happens for "free" during the run of the algorithm. In this talk we describe this phenomenon; highlight its emergence in particular in MAB problems with large numbers of arms, as well as in a range of other settings; and suggest directions for future investigation.
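The greedy policy described above can be sketched as follows. This is a minimal illustration of the general idea with Bernoulli rewards, not the authors' implementation; the function name, tie-breaking rule, and the choice to initialize unseen arms optimistically are assumptions made here for a self-contained example.

```python
import random

def greedy_bandit(true_means, horizon, seed=0):
    """Purely exploitative ('greedy') bandit: always pull the arm with the
    highest empirical mean reward. Unseen arms default to +inf, so each arm
    is sampled at least once before pure exploitation takes over; ties are
    broken uniformly at random."""
    rng = random.Random(seed)
    k = len(true_means)
    counts, sums = [0] * k, [0.0] * k
    total = 0.0
    for _ in range(horizon):
        means = [sums[i] / counts[i] if counts[i] else float("inf")
                 for i in range(k)]
        best = max(means)
        arm = rng.choice([i for i in range(k) if means[i] == best])
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total

# With many arms, the initial pass over the arms acts as "free" exploration:
# some arm with a high true mean is likely to get a lucky start and be
# exploited thereafter, even with no explicit exploration bonus.
random.seed(1)
many_arms = [random.random() for _ in range(50)]
print(greedy_bandit(many_arms, horizon=2000))
```

With only a handful of arms, the same policy can lock onto a suboptimal arm forever, which is why the large-number-of-arms regime highlighted in the talk is the interesting one.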
Joint work with Nima Hamidi, Ramesh Johari, and Khashayar Khosravi.
Read "When 'Greedy' is Good" here
Mohsen Bayati, Associate Professor of Operations, Information and Technology at The Graduate School of Business and, by courtesy, of Electrical Engineering