To develop equitable and trustworthy technology, we must understand how AI performs in practice, and guide and shape the way AI interacts with humans, their vital social structures and institutions, and the international order.
Artificial intelligence and machine learning are poorly understood, even within academic and research communities. The media portray a world of robots run amok; new applications and milestones are often described as “machines beating humans”; and influential public figures warn of job losses and other dire consequences. While some of these concerns are legitimate, misleading narratives too often distract from the pressing issues society is likely to confront as AI systems become commonplace.
Scholarly research is needed to measure and manage a host of critical issues, including the extent to which algorithms introduce, compound, or mitigate business risk or bias; the “responsibility gap” between decisions made by machines and those made by people; the use of AI for surveillance, population control, and waging war; and the impact of AI on industry structure, labor markets, economic growth, and trade among nations. This research will inform engagement with industry, government, and civil society to guide AI’s development toward beneficial ends.