HAI’s policy briefs take cutting-edge, policy-relevant AI/ML academic research produced by Stanford faculty and transform it into digestible briefs for time-strained policymakers and staff.
While machine learning applications in healthcare continue to shape patient-care experiences and medical outcomes, discriminatory AI decision-making remains a serious concern. The issue is especially pronounced in clinical settings, where individuals’ well-being and physical safety are on the line and where medical professionals face life-or-death decisions every day. So far, the conversation about measuring algorithmic fairness in healthcare has focused on fairness itself and has not fully accounted for how fairness techniques could affect clinical predictive models, which are often derived from large clinical datasets. This brief seeks to ground the debate in evidence and suggests the best way forward for developing fairer ML tools in clinical settings.
As the development and adoption of AI-enabled healthcare continue to accelerate, regulators and researchers are beginning to confront oversight gaps in the clinical evaluation process that could yield negative consequences for patient health if left unchecked. Since 2015, the United States Food and Drug Administration (FDA) has evaluated and cleared more than 100 AI-based medical devices using a fairly rudimentary evaluation process, one in dire need of improvement because it has not been adapted to address the unique concerns surrounding AI. This brief examines that evaluation process and analyzes how devices were evaluated before approval.
The number of non-military satellites in orbit is growing rapidly, and these satellites offer unprecedented access to imagery that can help measure sustainable development outcomes. AI-powered tools can extract and assess important information from satellite imagery, such as agricultural productivity, urban population density, and rural economic activity, making them an intriguing and valuable addition to the sustainable development toolkit. This brief discusses how AI models can map satellite image inputs to sustainable development outcomes, their potential and future applications, and the limitations of such an approach and ways to respond to them.
AI is being deployed for a range of tasks across the medical system, from patient face-scanning to early-stage cancer detection. At the same time, however, AI systems drawing conclusions about demographic information could seriously exacerbate disparities in the medical system, and this is especially true with race. Left unexamined and unchecked, algorithms that assess patients’ racial identity, whether accurately or not, could worsen long-standing inequities in the quality of, cost of, and access to care.
The American criminal legal system is rife with—and perpetuates—inequality. These discrimination problems across racial, socioeconomic, and other lines are well-documented, but studying the problem is still a resource-intensive process. Technology may be able to relieve some of this burden. In this brief, we propose using machine learning to analyze decision-making in the criminal legal system. The aim is not to predict human behavior or replace human decision-making, but to better understand the factors that led to past decisions in the hopes of facilitating increased fairness and consistency in how criminal law is applied. We call it the “Recon Approach.”
One of the most promising uses of artificial intelligence is in radiology, the medical specialization that uses imaging technology to diagnose and treat disease. AI holds great promise to improve traditional medical imaging methods like CT, MRI, and X-ray by offering computational capabilities that process images with greater speed and accuracy, automatically recognizing complex patterns to assess a patient’s health. This sophisticated software needs more robust evaluation methods to reduce risk to the patient, to establish trust, and to ensure wider adoption.
Despite the emergence of new machine learning technologies capable of diagnosing diseases, understanding speech, or recognizing images, the enormous economic potential of many digital goods and services remains largely untapped. In this brief, scholars propose a set of policy recommendations that could increase productivity growth, make the U.S. more competitive, and reduce income inequality.
The U.S. Intelligence Community faces a moment of reckoning and AI lies at the heart of it. Since 9/11, America’s intelligence agencies have become hardwired to fight terrorism. Today’s threat landscape, however, is changing dramatically, with a resurgence of great power competition and the rise of cyber threats enabling states and non-state actors to spy, steal, disrupt, destroy, and deceive across vast distances — all without firing a shot.
Natural language processing for mental health monitoring is an emerging use of AI poised to disrupt the landscape of the health care industry. As the profusion of social media platforms allows people to share their thoughts and feelings with the world, users’ posts and reactions extend the scope of medical screening methods for psychological disorders such as depression. Users are already being marketed to with sophistication based on these behaviors — why not leverage these technologies for public health?
Facial recognition technologies have grown in sophistication and adoption throughout American society. Significant anxieties around the technology have emerged—including privacy concerns, worries about surveillance in both public and private settings, and the perpetuation of racial bias.
Popular culture has envisioned societies of intelligent machines for generations, with Alan Turing notably foreseeing the need for a test to distinguish machines from humans in 1950. Now, advances in artificial intelligence promise to make creating convincing fake multimedia content, such as video, images, or audio, relatively easy for many. Unfortunately, this will include sophisticated bots with supercharged self-improvement abilities, capable of generating more dynamic fakes than anything seen before.
Social media platforms break traditional barriers of distance and time between people and present unique challenges in calculating the precise value of the transactions and interactions they enable. In the case of a company like Facebook, each layer of connections creates value and attracts additional users to the platform. The compounding nature of this phenomenon gives platforms significant market power. In the face of growing scrutiny from policymakers, the media, and the public, regulators are now considering a number of proposals to ensure that platforms do not abuse their market power and that the economic benefits of their networks are more equitably distributed.
With advances in AI, researchers can now train computer algorithms to interpret medical images—often with accuracy comparable to physicians. Yet a survey of medical research shows that these algorithms rely on datasets that lack population diversity and could introduce bias into the understanding of a patient’s health condition.