HAI’s policy briefs take cutting-edge, policy-relevant AI/ML academic research produced by Stanford faculty and transform it into digestible briefs for time-strained policymakers and staff.
One of the most promising uses of artificial intelligence is in radiology, the medical specialization that uses imaging technology to diagnose and treat disease. AI holds great promise to improve traditional medical imaging methods like CT, MRI, and X-ray by offering computational capabilities that process images with greater speed and accuracy, automatically recognizing complex patterns to assess a patient’s health. This sophisticated software needs more robust evaluation methods to reduce risk to the patient, to establish trust, and to ensure wider adoption.
Despite the emergence of new machine learning technologies capable of diagnosing diseases, understanding speech, or recognizing images, the enormous economic potential of many digital goods and services remains largely untapped. In this brief, scholars propose a set of policy recommendations that could increase productivity growth, make the U.S. more competitive, and reduce income inequality.
The U.S. Intelligence Community faces a moment of reckoning and AI lies at the heart of it. Since 9/11, America’s intelligence agencies have become hardwired to fight terrorism. Today’s threat landscape, however, is changing dramatically, with a resurgence of great power competition and the rise of cyber threats enabling states and non-state actors to spy, steal, disrupt, destroy, and deceive across vast distances — all without firing a shot.
Natural language processing for mental health monitoring is an emerging use of AI poised to disrupt the landscape of the health care industry. As social media platforms allow people to share their thoughts and feelings with the world, users’ posts and reactions extend the scope of medical screening methods for psychological disorders such as depression. Users are already being marketed to with sophistication based on these behaviors — why not leverage these technologies for public health?
Facial recognition technologies have grown in sophistication and adoption throughout American society. Significant anxieties around the technology have emerged, including privacy concerns, worries about surveillance in both public and private settings, and the perpetuation of racial bias.
Popular culture has envisioned societies of intelligent machines for generations, with Alan Turing notably foreseeing the need for a test to distinguish machines from humans in 1950. Now, advances in artificial intelligence promise to make creating convincing fake multimedia content, such as video, images, or audio, relatively easy for many. Unfortunately, this will include sophisticated bots with supercharged self-improvement abilities, capable of generating more dynamic fakes than anything seen before.
Social media platforms break traditional barriers of distance and time between people and present unique challenges in calculating the precise value of the transactions and interactions they enable. In the case of a company like Facebook, each layer of connections creates value and attracts additional users to the platform. The compounding nature of this phenomenon gives platforms significant market power. In the face of growing scrutiny from policymakers, the media, and the public, regulators are now considering a number of proposals to ensure that platforms do not abuse their market power and that the economic benefits of their networks are more equitably distributed.
With advances in AI, researchers can now train computer algorithms to interpret medical images – often with accuracy comparable to that of physicians. Yet a survey of medical research shows that these algorithms rely on datasets that lack population diversity and could introduce bias into the understanding of a patient’s health condition.