An inclusive, multi-stakeholder approach to guiding AI is key to building public trust and realizing the technology’s promise to broadly benefit humanity, without compromising human interests or destabilizing society in the process.
In the opening plenary session of the fall 2019 conference on AI Ethics, Policy & Governance, held by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), Susan Athey, Associate Director of HAI and the Economics of Technology Professor at Stanford Graduate School of Business, and Erik Brynjolfsson, a professor at MIT Sloan, framed the tension being felt around the world: AI’s promise vs. its peril, and what can be done to resolve it.
“We have right now the most powerful technologies that humans have ever had,” Brynjolfsson observed. Athey, who co-hosted the conference along with fellow HAI Associate Director Rob Reich, a Stanford professor of political science, pointed out a number of important areas where AI-based innovations promise to improve our quality of life, including transportation, education, healthcare, food and worker retraining.
“More powerful technologies, by definition, means we have more power to change the world” and as a consequence, “values matter more than before,” said Brynjolfsson. But shared prosperity and other desired outcomes are not guaranteed, nor is the safeguarding of human rights and other democratic ideals. “It’s time to have hard conversations about what we want to use these awesome, powerful tools for and, ultimately… what kind of world we want to create,” he said.
Stanford HAI is at the forefront of a nascent movement advocating a more inclusive, human-centered approach to guiding the technology and mitigating its inevitable misuses. The goal of human-centered AI is to give diverse stakeholders a voice in the design, development and governance of AI through frameworks that prioritize interdisciplinary collaboration. The speakers and panels at HAI’s fall conference reflected some of that diversity.
A dystopian scenario has already become manifest in a rising tide of data hacks, AI-enabled disinformation campaigns and deepfakes, along with revelations about indiscriminate data brokers and racial bias in AI technologies such as facial recognition. All of this has undermined public trust in AI and raised fundamental questions about accountability, democratic freedoms and human rights, according to Marietje Schaake, who represented the Netherlands as a member of the European Parliament from 2009 to 2019 and is now HAI’s International Policy Fellow (and affiliated with Stanford’s new Cyber Policy Center).
HAI Co-Directors Fei-Fei Li and John Etchemendy have called for a national AI vision and strategy, proposing that the U.S. government build a new AI research ecosystem in partnership with educational, research and civil society organizations, with an investment of at least $120 billion over 10 years. The effort would include a national research cloud to provide high-value data and high-performance computing for public-interest research.
Big data, smarter algorithms
The call for a bold new direction in the stewardship of AI comes as algorithmic decision-making accelerates change in virtually every aspect of our lives and a new, more robust generation of the technology comes to the fore.
At the HAI conference, Eric Schmidt, former CEO and executive chairman of Google, explained how AI is rapidly evolving beyond simply being able to detect patterns in data and recognize objects. New mathematical techniques allow AI to mine sophisticated insights from datasets with ever-greater efficiency, significantly enhancing its diagnostic power. Early detection of lung and breast cancer, and the ability to predict heart attacks and strokes from retina scans, are among the many benefits. Advances in medical applications of AI, he said, “will save, or help, millions of people over the next five or 10 years as it deploys.” (Schmidt currently serves as technical advisor to Google parent company Alphabet Inc. and is a member of the company’s board. He is also on the advisory council at HAI.)
Alphabet’s “AI first” strategy has helped propel the company into the top tier of tech giants, where data is the coin of the realm. Data is the raw material of AI, and the fact that a relatively small number of companies with massive computing resources sit atop the biggest stores of it has raised concerns about the concentration of market power and a data regime that reinforces Big Tech’s existing hierarchies and dampens innovation.
Schmidt told the HAI audience, however, that AI innovation was becoming less dependent on having access to massive data storehouses. “To me, the most interesting trend is not that data is becoming more valuable, but rather that algorithms are being developed that need less data. So in other words, if you think data is the new oil, your oil may become less valuable over time, as research is showing us that we can learn on much smaller datasets.”
He also pointed out that Alphabet has created an open-source AI library called TensorFlow. “We’re doing this because we’re trying to stimulate the industry,” he said. “We want you to build these solutions and identify these new opportunities.”
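Schmidt’s two points, data-efficient learning and open tooling, meet in practice in transfer learning: instead of training a model from scratch on a massive dataset, practitioners adapt one that has already been pretrained. The sketch below is a hedged illustration of that idea using TensorFlow’s Keras API, not anything shown at the conference; the dataset, class count and hyperparameters are illustrative placeholders.

```python
# A minimal transfer-learning sketch in TensorFlow/Keras.
# Assumptions: a hypothetical small labeled image dataset with
# NUM_CLASSES categories; all names here are illustrative.
import tensorflow as tf

NUM_CLASSES = 5

# Start from an ImageNet-pretrained backbone so most of the
# "learning" has already happened on someone else's big dataset.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,
    weights="imagenet",
)
base.trainable = False  # freeze the pretrained features

# Only this small classification head is trained from scratch.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# train_images: (N, 224, 224, 3) floats preprocessed for MobileNetV2;
# train_labels: (N,) integer class ids. With the backbone frozen,
# a few hundred examples per class can be enough.
# model.fit(train_images, train_labels, epochs=5)
```

Freezing the pretrained backbone is what makes a small dataset sufficient: only the final layer’s weights have to be learned from the new data, one concrete sense in which “your oil may become less valuable over time.”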
But such gestures are unlikely to blunt the growing consensus that AI needs a more assertive and coordinated regulatory approach, one that ensures the technology evolves in ways that are transparent, safe and beneficial to humanity.
Schaake shared the floor with Schmidt for the conference’s panel on Regulating Big Tech. “I believe that if we want to preserve democracy,” she said, “we need to democratize the way in which we govern technology itself.”
In her view, while “the full impact of the massive use of tech platforms and AI remains largely unknown,” it has become abundantly clear that policymaking and democratic decision-making have not kept pace with technological advances in the private sector.
AI engineers, she related, are excited that algorithms are developing the capability to self-modify through endless iterations, sometimes with unpredictable results. “I can understand that excitement, but we can only know what the unintended outcomes are when we know what was intended in the first place,” she said. The lack of transparency into algorithmic black boxes, and “the idea that companies can take over more and more vital functions [in society] without having accountability towards the public, I think, is unsustainable.”
A failure of self-regulation
“Trade secrets and other intellectual property protections cannot be the perpetual shield against meaningful access to information and oversight,” Schaake added. Big tech companies have largely “failed” at self-regulation, while the trope that politicians, who supposedly “don’t know anything about technology,” will unwittingly “stifle innovation” with new laws and regulations has become hackneyed. “This is about preserving principles, standards and values no matter what technological disruption” may come.
Schaake, who wants to see governments become more innovative and iterative in how they regulate digital technology, cited recent legislation enacted by the state of California and the city of San Francisco — “interestingly, very close to where these technologies are developed.” She was referring specifically to San Francisco’s ban on the use of facial recognition by government agencies and California’s passage of Assembly Bill 5, requiring companies operating gig-economy platforms to provide labor protections for their workers.
Stanford HAI’s Etchemendy questioned Schaake about the risk of premature legislation and criticized San Francisco’s blanket ban on facial recognition as too blunt an instrument, one that forecloses beneficial uses of the technology. On the same stage later in the day, Reid Hoffman, co-founder of LinkedIn and a partner at the venture firm Greylock Partners, said, “I can think of lots of places where facial recognition is a really good thing.” He offered as examples tracking down a bioterrorist in airports or trying to stop “a 12-monkey scenario,” a reference to the 1995 sci-fi film 12 Monkeys, in which a gang known as the Army of the Twelve Monkeys is suspected of causing a pandemic that leads to a post-apocalyptic future. (Hoffman also serves on HAI’s advisory council.)
In conversation with Hoover Institution Senior Fellow Amy Zegart, Hoffman shared the HAI conference stage with his former LinkedIn colleague Dhanurjay “DJ” Patil, who went on to serve as Chief Data Scientist of the United States in the Office of Science and Technology Policy during the presidency of Barack Obama and is currently Head of Technology for Devoted Health.
In discussing ethics and values tradeoffs in technological decision making, they recalled a formative episode at LinkedIn when a hedge fund sought to acquire the company’s data for the purpose of gaining insight into how companies were performing based on people changing jobs. They decided against it, as they believed it would not have benefitted LinkedIn’s individual users.
“One of the big lessons I took away from that experience was that if you don’t have ethics and liberal arts as part of the core training in your undergraduate curriculum,” Patil said, “you’re at a disadvantage.” A mutual grounding in philosophy enabled Hoffman and Patil to glean “deeper insight of what you should be doing as stewards of the data.”
Blitzscaling — Hoffman’s term for the imperative of moving quickly to gain advantage in the world of startups — inevitably involves tension between values and a race to growth and profits. Technology ethics in this context, he explained, is a subtle and consistent balancing act that involves steering towards having “society as your customer” rather than striving for “a 0% chance” of causing harm to society.
“Ethics is where you drive to, not necessarily always hitting the brakes,” he said.
As the conversations at the HAI fall conference made clear, many people in government and civil society want stronger controls on AI and a firm fence around what is and isn’t permitted, while many tech companies, especially those with billions of dollars at stake, would prefer to decide for themselves where their moral boundaries lie. HAI wants to give all of these groups a forum to debate these issues so that the best outcomes can emerge, rather than letting distance and frustration grow on both sides, a situation from which society will most definitely not benefit.