Ensuring America’s Innovation with Artificial Intelligence

The Hoover Institution’s Condoleezza Rice and HAI Co-Director Fei-Fei Li discuss AI ethics, technology advances, and the talent pipeline.

Maintaining the United States' frontrunner status relies on mastery of technology hardware, software, and processes, says HAI's Fei-Fei Li. 

One describes herself as a “failed piano major from Birmingham, Alabama.”

The other came to the U.S. from China at age 15, then ran a dry-cleaning business on weekends to support her family while attending Princeton.

Today, that failed pianist, Condoleezza Rice, is the incoming director of Stanford’s Hoover Institution after serving as U.S. Secretary of State under President George W. Bush, while Fei-Fei Li co-leads Stanford’s Institute for Human-Centered Artificial Intelligence (HAI). 

On June 30, Rice interviewed Li about the future of AI in an event co-sponsored by HAI and Hoover. Their wide-ranging discussion covered everything from AI’s ethical issues to its ability to mimic human emotion and the STEM talent pipeline.

Here are highlights from the conversation.

Rice: Many laypeople aren't sure what AI is. They imagine a science-fiction scenario with machines taking over the world. What are the benefits you see for these technologies?

Li: AI is simply modern technology based on computer science and math. It's about enabling computers to learn from data and make smart predictions and inferences based on patterns. AI is already everywhere: When we deposit checks at the bank, an AI-based system reads them; AI powers the recommendations we get for streaming shows and online shopping; new cars use AI systems to keep us in our lanes and detect cars in our blind spots.

I want to dispel two important myths about AI. The first is that AI is omnipotent. It's not; it can be powerful, but it solves specific problems and shouldn't be treated as all-capable. The second is that AI is only for a small group of hackers or tech-savvy scientists. They may develop it, but it's a "horizontal" technology requiring participation from everyone, from scientists to artists to policymakers.

Rice: What was the inspiration to start HAI?

Li: AI is transforming life and society and affects the most pressing human issues – individual rights, community welfare, ethics. We need a multi-stakeholder approach to stay ahead of technology. HAI does that through three functions: guiding and forecasting the human impact of AI in collaboration with other institutions, including Stanford’s business, law, and medical schools; designing and creating AI applications to empower people; and doing basic research on AI-based technologies.

Rice: It’s said that humans are often better at knowledge than wisdom, such as using atom-splitting breakthroughs to design the A-bomb. In the same way, AI can lead to enormous ethical issues, such as discrimination in health care. Can you comment?

Li: Technology has always been a double-edged sword.

Innovation can be used for good or bad. So we need to put in guardrails based on ethical responsibilities. For example, machine learning can help to understand COVID-19 epidemiology, discover drugs and vaccines for the disease, and alleviate clinician fatigue by monitoring patient status. And the government is already using AI applications for things like border control.

But everything needs to be assessed for potential bias. We have to bake ethics into every step. For instance, if we train a machine to recognize skin-disease symptoms but use only data from one skin tone, it won't generalize beyond that. Or consider whether speech-based health care applications will understand people with accents.

At the same time, AI can also help call out bias, such as detecting policing bias through analysis of body-camera conversations. And it can deliver help without bias, such as AI-based reminders for health care workers to wash their hands.

Rice: To keep U.S. national security strong, it’s critical to maintain our global technology lead. People worry about China’s potential to surpass the U.S. here, in part because machine learning requires lots of data and China doesn’t have the U.S.’s data-related privacy concerns. Is that a problem?

Li: I’m confident in U.S. science leadership based on our strong ecosystem of collaborative university, industry, federal, and other institutions. Maintaining our frontrunner status relies on mastery of complex technology hardware, software, and processes. But while data is a “first-class citizen” in today’s AI research, it’s only one driver. Still, HAI is promoting America’s data-building capability and democratization of data, such as helping with recent legislation to create a national research cloud.

Rice: Where does emotional acuity fit into the future of AI? How different are machines and humans?

Li: Machines have potential here. For example, researchers are developing algorithms that mimic babies' curiosity, drawing on the theory of mind. As for how likely AI is to think and feel like humans: we are always out-innovating ourselves, but we need to recognize the ethical implications. Remember: We may be able to get machines to think more like humans, but there are no independent machine values – only human values.

Rice: HAI ideally will involve the entire campus in its multi-stakeholder approach. Those of us not trained in AI can still be part of the conversation. How can we get the most out of the institute and AI more generally, to bring humans forward?

Li: We need everyone.

On one hand, AI-related jobs are the fastest-growing of all STEM occupations, and the supply of graduates with these skills isn't keeping up. We need to fill that pipeline with diverse talent across gender, race, economic status, and other dimensions. We helped start the AI4ALL nonprofit initiative, which encourages high school students nationwide, especially from underserved groups, to participate in AI.

But in general, America’s strength is our people. The more people who participate in AI, from every academic and professional domain, the better the technology will be and the stronger we will become.


