Where Is Artificial Intelligence Taking Humanity?
The popular fascination with artificial intelligence—both its promise and its perils—drew a sell-out crowd to Stanford’s Memorial Auditorium to hear two of the leading thinkers on the subject: Professor Fei-Fei Li, the co-director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI) and professor of computer science at Stanford University, and Yuval Noah Harari, the best-selling author and professor of history at Hebrew University. The event was co-hosted by the Stanford Humanities Center, the McCoy Family Center for Ethics in Society, and Stanford HAI.
Their 90-minute conversation, moderated by Wired editor Nicholas Thompson, revealed two sharply different ways of looking at the future of artificial intelligence. At the same time, both Li and Harari seemed to build their arguments on a common foundation, sharing a deep enthusiasm for the benefits that AI can bring to the lives of ordinary people as well as profound concerns about how the powerful technology could be misused.
AI for Humanity
“This is such a fertile ground,” said Li, explaining what attracted her to the subject. “When it comes to healthcare decisions, financial decisions, legal decisions—there are so many scenarios where this technology can be potentially positively useful.”
True to the vision for the new Stanford HAI institute — which opened this March as an interdisciplinary, global hub for research, discussion and development of artificial intelligence — the discussion brought together two academics from very different specialties to explore the question of how humanity should prepare for AI’s future.
Harari, a historian by training, now specializes in tackling macro-historical questions such as “What is the relationship between history and biology?” He pointed out the natural tendency of historians and philosophers to think on a very different timescale than technologists such as Li, who are focused on solving problems for the current generation.
Harari focused on the impact of AI abilities that likely won’t be feasible for decades, if not centuries, which often made the historian sound simultaneously more optimistic and more pessimistic than Li about the impact of AI on humankind. “There are enormously beneficial things that AI can do for us, especially when it gets linked with biology,” Harari said. Even in the not-too-distant future, he predicted, “We are about to get the best healthcare in history—and the cheapest—available for billions of people on their cell phones.”
The Perils of AI
Harari, who has gained a reputation as a cautionary voice over the perils of AI, quickly added that such astounding advances are certain to come with an array of unintended consequences. “My job as a historian, as a philosopher, as a social critic, is to point out the dangers in that.”
In particular, Harari is concerned about the interplay of AI and biological sciences, especially neurosciences, that will go far beyond improving healthcare. “What you get is the ability to hack humans,” he warned. “To create an algorithm that understands me better than I understand myself—and can therefore manipulate me, enhance me or replace me.”
Harari summarized his two greatest concerns as a pair of “potential dystopias.” In the first, today’s liberal democracies could become overrun with machines that know enough about ordinary humans to exert a subtle and pernicious control over their lives, including where they work and study, whom they date, and whom they vote for. The second possibility, darker still, is the potential for future AI technology to fall into the hands of totalitarian regimes that would use it to create a police state able to monitor and control all aspects of daily life, 24 hours a day.
Love and Fire
Harari asked provocatively: “Is there anything human that is un-hackable?”
“The first word that came to my mind is ‘love,’” said Li. “Is love hackable?”
After a brief discussion on the nature of love, Li cited her own academic training to tamp down expectations that machines would achieve true consciousness — let alone experience their own feelings. “I do want to make sure that we recognize that we are very, very, very far from that. This technology is still very nascent,” she said. “Part of the concern I have about today’s AI is the super-hyping of its capabilities.”
Li countered those fears, acknowledging the potential dangers but noting that history is full of powerful advances, fire among them, that could have led to similar catastrophes in the hands of would-be totalitarians. Her own concerns about today’s technology tended toward the less dire but more concrete: how to make sure the AI that exists today can be made to best serve humanity.
The Need for Diversity in AI
“Machine learning system bias is a real thing,” she said. “I wake up every day worried about the diversity, inclusion issue in AI.” Issues of privacy and the potential to displace workers from their jobs are also pressing, she added.
“So absolutely we need to be concerned, and because of that we need to expand the study, the research, and the development of policies and the dialogue of AI beyond just the [computer] code and the products, into these human realms.”
One interesting exchange involved the near-term evolution of AI assistants such as Amazon’s Alexa and Apple’s Siri. Harari advocated the development of similar assistants built not to serve the goals of for-profit corporations but to owe their fundamental loyalty to individual users. Li agreed, arguing that one key to the development of human-centered AI will be keeping much of the research in academia, which is better suited to focusing on the needs of ordinary citizens as the technology develops.
Asked to advise the audience on how to prepare for a world in which AI plays an increasingly central role, Li recommended students study the subject through a variety of traditional disciplines. That fit well with the theme of the evening’s conversation. Said Li: “We’ve opened the dialogue between the humanist and the technologist. I want to see more of that.”