Meet Fei-Fei Li, the recipient of the National Geographic Further Award
Scientist Fei-Fei Li specializes in artificial intelligence, but humanity is at the heart of all of her work: How can AI benefit people from all walks of life and ultimately lead us toward a planet in balance?
Fei-Fei Li’s commitment to using artificial intelligence for good is the through line of her long résumé: She works as the Denning Family Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence, an institute dedicated to advancing interdisciplinary scholarship in AI research, education, policy and practice in ways that benefit humanity. She is also a co-founder and chairperson of the nonprofit organization AI4ALL, which is focused on increasing diversity, inclusion and accessibility in AI education. She additionally serves as a co-director and co–principal investigator at the Stanford Vision and Learning Lab, where she works with students and colleagues worldwide to build intelligent algorithms that let computers and robots “see and think” like we do, and conducts cognitive and neuroimaging experiments to discover what we can learn from our own brains on the subject.
For her uniquely innovative, timely and impactful work, Li was selected as the 2019 recipient of the National Geographic Further Award, which recognizes a leader pushing the boundaries of his or her field. The Further Award honors Li’s insistence that now is the time, more than ever before, to “harness our creativity as well as our humanity.” Inspired by Li’s fascination with big questions (“What is life? What is human life? What is intelligence?”), we set out to learn a little more about what draws her to artificial intelligence. Her answers to our questions, accompanied by a photo essay from National Geographic photographer Philip Montgomery, are below.
When do you remember first becoming aware of AI? What drew you to that field?
Since I was a child, I’ve been very curious about science. I loved watching the stars and thinking about the origin of the universe. That led me to become a physics major at Princeton University when I went to college. Physics was a discipline that enabled me to ask the kinds of big questions that I love so much. Where does the universe come from? What are the fundamental laws of the physical world? What are the stars for?
But something really interesting happened when I started reading the writings of the big physicists like Einstein and Schrödinger. I noticed that toward the end of their academic or intellectual lives, they also pondered questions about life itself, as if curiosity about the physical world had led them to curiosity about the living world. Like them, I became very interested in the question of life and in foundational questions: What is life? What is human life? What is intelligence?
What was the inspiration for AI4ALL?
The inspiration for AI4ALL dates back to my experience as a young girl in science and math classes. I faced many teachers who didn’t expect girls to excel in those classes, and I had to defy that kind of bias. So early in my career, as an advisor to students and an early-career professor at Stanford, I tried to be helpful: I served as a faculty host of a women’s club in computing, and all that.
But the real turning point came around 2012-2013. That is around the time that AI went through a transformative change, both because of my own work with ImageNet and because of the deep learning revolution that came with the maturation of computing hardware. We started to see this technology move from a lab experiment to a transformative, society-changing force. When that happened, the public conversation around AI started to heat up as well. I kept hearing about anxiety over technology turning evil and people worrying about killer robots.
While that was stirring up a sense of crisis in the public, I was living another crisis: the lack of representation in my field. For a long time I was the only woman on the faculty in the Stanford AI Lab. I was the director, and yet I was the only woman faculty member, and in the AI graduate student population, women were hovering around 10 percent. Our undergraduate population in computer science was slightly better, around 30 percent, but the attrition rate was really high, and by the time you reach the professor stage, you just don’t see many women. Even worse are the numbers for underrepresented minorities.
So, I was looking at these two crises: the killer robot crisis and the lack-of-representation crisis. I think the epiphany hit when I realized there is a deep, philosophical, human connection between these two crises: Our technology is not independent of human values. It represents the values of the humans behind the design, development and application of the technology. These humans have a critical and direct say in what this technology is about. So, if we’re worried about killer robots, we should really be worried about the creators of the technology. We want the creators of this technology to represent our values and represent our shared humanity.
You’re sort of in a sandwich generation, caring not only for your children but also for your parents. Does your personal life contribute to your work in any way?
I think it’s funny that one of my research passions over the past seven years has been taking this technology into healthcare applications, especially critical health situations like ICU patients or our aging seniors. Because while that’s been happening in my professional life, my parents-in-law are joining that group, and I have also been taking care of my mother, who hasn’t been in very good health for the past two decades. Last summer I spent weeks and weeks in the hospital with her. That kind of demand of life, of personal responsibility, is definitely challenging.
But I do want to say, and I’m not saying this just for the sake of saying it, that I’m thankful and feel very lucky to have that kind of demanding life, because, as a scientist and a leader who is part of this AI transformation, that kind of life experience and responsibility grounds me in the important questions and issues of technology for the benefit of humanity. No matter what kind of fancy gadget AI might enable, it is so personally important to me that this technology benefits human lives, not just for convenience, but for well-being, for dignity, for community, for society. I cannot pick apart whether it’s the scientific realization or my life’s challenges that have informed me on these issues.
How do you see your work contributing to a planet in balance?
I think AI and machine learning are technologies that can contribute greatly to our environment and our ecosystem. Whether we’re using drones to map deforested areas, looking at water contamination, tracking endangered animals or optimizing energy use in factories and homes, there are endless possibilities and applications of AI to help our earth and our environment. So I really want to inspire people, especially technologists, to think of their work in a human-centered way and to invite the humanists, the social scientists, the policymakers, the artists and the journalists to participate in the development and deployment of this technology.
This interview has been edited and condensed for clarity. See more photo essays from the National Geographic Awards on the @InsideNatGeo YouTube channel.