AI+Science: Accelerating Discovery is an interdisciplinary conference bringing together researchers across physics, mathematics, chemistry, biology, neuroscience, and more to examine how AI is reshaping scientific discovery. Experts will separate hype from reality, spotlighting where AI is already enabling genuine breakthroughs and where its limits and risks remain.

The stochastic multi-armed bandit (MAB) is a benchmark model for decision-making under uncertainty. In the classical MAB setting, a decision maker sequentially chooses between a set of alternatives ("arms"), and earns a reward upon each choice. The decision maker's goal is to ensure these rewards are as high as possible over their decision horizon. MABs are used in a wide range of applications, from Internet advertising to healthcare.
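The classical MAB setting described above can be sketched in a few lines of code. The following is a minimal, illustrative model (the class and names are ours, not from the talk) in which each arm pays a Bernoulli reward with an unknown mean:

```python
import random

class BernoulliBandit:
    """A stochastic multi-armed bandit with Bernoulli rewards.

    Arm i pays reward 1 with probability means[i], and 0 otherwise.
    Illustrative sketch; not the specific model used in the paper.
    """

    def __init__(self, means, seed=0):
        self.means = list(means)
        self.rng = random.Random(seed)

    @property
    def n_arms(self):
        return len(self.means)

    def pull(self, arm):
        """Choose an arm and observe a stochastic reward."""
        return 1 if self.rng.random() < self.means[arm] else 0
```

A decision maker interacts with such an environment by repeatedly calling `pull` and trying to accumulate as much reward as possible without knowing `means` in advance.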
It is well known that high-performing MAB algorithms must balance "exploration", i.e., learning about relatively unknown arms, against "exploitation", i.e., leveraging arms that have already been seen to perform reasonably well. Unfortunately, due to practical constraints, fairness requirements, and ethical considerations, actively exploring may not be possible in some domains. For example, in healthcare, "exploration" may involve using an untested treatment on a prospective patient, but ethical considerations may preclude such use without appropriate safeguards.
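A standard way to balance exploration against exploitation is the epsilon-greedy heuristic: with a small probability, try a random arm; otherwise play the empirically best one. A minimal sketch (the function and its `pull` argument are illustrative, not from the talk):

```python
import random

def epsilon_greedy(pull, n_arms, horizon, epsilon=0.1, seed=0):
    """Epsilon-greedy play over `horizon` rounds.

    `pull(arm)` is a caller-supplied function returning a stochastic
    reward. With probability `epsilon` we explore a uniformly random
    arm; otherwise we exploit the arm with the highest empirical mean.
    Returns the total reward collected.
    """
    rng = random.Random(seed)
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    total = 0.0
    for t in range(horizon):
        if t < n_arms:
            arm = t                               # pull each arm once first
        elif rng.random() < epsilon:
            arm = rng.randrange(n_arms)           # explore
        else:
            means = [s / c for s, c in zip(sums, counts)]
            arm = means.index(max(means))         # exploit
        r = pull(arm)
        counts[arm] += 1
        sums[arm] += r
        total += r
    return total
```

Setting `epsilon=0` recovers the purely greedy algorithm discussed below, which never deliberately explores after its initial pulls.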
Surprisingly, a body of recent research has suggested that in many practical regimes of interest, algorithms for MAB problems that focus solely on exploitation (i.e., choosing the empirically best arm) -- known as "greedy" algorithms -- can in fact perform quite well, due to exploration that happens for "free" during the run of the algorithm. In this talk we describe this phenomenon; highlight its emergence in particular in MAB problems with large numbers of arms, as well as in a range of other settings; and suggest directions for future investigation.
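The "free exploration" intuition can be illustrated with a short simulation (our construction, not the paper's exact setup): with many arms whose means are drawn from a prior, a purely greedy algorithm gets pushed off an arm whenever early draws from it are unlucky, so it implicitly samples many arms before settling on a good one.

```python
import random

def greedy_many_arms(n_arms, horizon, seed=0):
    """Purely greedy play on a many-armed Bernoulli bandit.

    Arm means are drawn uniformly from [0, 1]. Each arm is pulled once,
    after which the empirically best arm is always exploited. Unlucky
    early draws lower an arm's estimate and shift play to other arms,
    yielding exploration "for free". Illustrative sketch only.
    Returns (average reward per round, best arm mean).
    """
    rng = random.Random(seed)
    means = [rng.random() for _ in range(n_arms)]
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    total = 0.0
    for t in range(horizon):
        if t < n_arms:
            arm = t                               # one initial pull per arm
        else:
            est = [s / c for s, c in zip(sums, counts)]
            arm = est.index(max(est))             # pure exploitation
        r = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += r
        total += r
    return total / horizon, max(means)
```

With many arms, at least one arm near the top of the prior tends to survive its early pulls, so the greedy average often lands close to the best arm's mean despite never exploring deliberately.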
Joint work with Nima Hamidi, Ramesh Johari, and Khashayar Khosravi.
Read "When 'Greedy' is Good" here
Mohsen Bayati, Associate Professor of Operations, Information and Technology at the Graduate School of Business and, by courtesy, of Electrical Engineering