Too often we focus on artificial intelligence replacing jobs. But consider the possibilities of AI augmenting our jobs: teachers whose AI assistant can immediately assess a student’s learning and help them adjust the lesson; doctors who use the technology to manage a tricky diagnosis they might not be familiar with; artists who push the boundaries of their medium and redefine “art.”
“When we think of ‘augment,’ we mean not only improving our capability but also the quality of our life,” says Russ Altman, associate director of the Stanford Institute for Human-Centered Artificial Intelligence. “Our belief at HAI is that some of our most significant global challenges can be addressed through AI technologies that augment humans rather than replace them.”
To explore this potential, Stanford HAI is bringing together scholars, artists, educators, and health experts for its spring conference, “Intelligence Augmentation: AI Empowering People to Solve Global Challenges.” Hosted by Altman, a physician and bioengineer, and HAI associate director and computer scientist James Landay, the virtual day-long event will take place on March 25 and include discussions on immersive technologies for caregiving and healthy child development, amplifying artists through AI, and technology to super-power teachers.
Here Altman and Landay discuss the main themes of the event, who should attend, and what participants will learn from the day. (Learn more and register here.)
What are the most exciting ways you’re seeing AI augment these sectors now?
Altman: There’s been some work in having AI generate a painting or music, but the more intriguing work involves humans collaborating with AI. It raises questions about how humans and AI should interact in the arts, and about the nature of creativity. As an audience member, does it change my perception of a performance to know that it was augmented with a computer program?
At the conference, we will hear from practicing musicians like Hilary Hahn, who is going to talk to us about playing an instrument with a computational partner, and Rashaad Newsome, an artist who’s exploring issues of race and gender at the intersection of art and AI.
It’s just fascinating for me to see these artists mold these new technologies to their own uses and goals. And they don’t just accept these technologies as a given. If they don’t think that they’re expanding the possibilities of their art, they will reject AI. That is an important lesson for everybody in any industry.
Where are we seeing augmentation in education?
Altman: We have a shortage of teachers in the world. Augmented education is going to be part of the future, but it can’t replace tried-and-true teaching methods. Young children need eye contact. They need to see emotions. As an educator, I don’t want to give up that special relationship with students, but I’m happy to have it be made better and more efficient. Imagine a skilled teacher who wants to motivate students and is using AI to enhance and increase the performance of the learners. Can the AI pull up images at the right time? Can the AI ask quiz questions in preparation for a lecture to get students fired up about the topic?
Landay: AI can also help where the current school system is failing some kids: motivation. There is a lot of talent that never gets motivated in school, and the potential that a good education might bring is lost, both to that person and to society at large. AI-augmented tutors for use at home can focus on making learning fun, engaging, and personalized. This can lead to motivation and interest in learning that might carry over to a student’s interaction in the traditional classroom.
Our speakers are people who focus on education first, then AI. Daniel Schwartz is dean of the Stanford Graduate School of Education. Chris Piech is a colleague at Stanford who’s done very thoughtful work in scaling up computing education using AI. Candace Thille was at Stanford and is now at Amazon, thinking about ways technology can be used at scale to improve training in the workforce. And Amy Ogan from Carnegie Mellon University will discuss advances in educational technology.
What about in health care?
Altman: Like arts and education, there’s a major opportunity for nurses, doctors, and health care providers to make use of intelligent assistance.
We’re going to hear Dennis Wall of Stanford Medicine talk about his work in autism, where AI is being used to help develop treatments. He uses Google Glass to enable an autistic child to look at someone expressing emotions and be cued in to which emotions are being expressed. It helps them recognize when someone is happy, or angry, or confused, and to learn to do this task better on their own. Suchi Saria at Johns Hopkins has been doing very important work, recently looking at AI and data-sharing privacy: how to respect patients’ personal data while also making it available for training systems that will help doctors not miss a diagnosis. Eric Horvitz joins us from Microsoft, a huge company that may surprise people with how deeply it’s interested in health care.
Landay: And Deborah Estrin of Cornell Tech is working on how to use wearables, sensing, and machine learning to make a positive impact on our personal health. She coined the term “small data” as a reaction to big data, focusing on personal data and how it can be used in making health decisions.
Is there a case that augmentation could go too far in any of these areas?
Landay: Definitely. Say you decide, hey, we can get rid of some teachers because we have these great smart tutors. That would be a misuse of AI. That’s why we want to focus on how these technologies augment teachers and health care providers rather than replace them.
Altman: The practitioner must have a vote in how the augmentation is going. This use of AI gets out of control when deciders who are not the providers — not the artists, teachers, or nurses — determine how far to push the augmentation. The person whose work or capabilities are being extended and impacted by AI has to be one of the primary voices.
Landay: Another important element is how you pick the problems to augment. Russ was describing Dennis Wall’s work in tools to support autistic kids. In a lot of those situations, those kids don’t have the care provider in their everyday lives at home. They only see them when they go to see the specialist or an occasional drop-in visit by a care provider. It’s all that in-between time where they’re not getting any services except what their parents can provide. These kinds of augmentation tools are allowing parents to collect data that would otherwise be lost as well as to help train their kids in ways that they don’t have the time and expertise to do themselves. If you choose the problems where there aren’t accessible services, you’re less likely to replace someone’s current job.
Who should attend this conference?
Landay: First, the researchers who are interested in these areas. Also, professionals in these industries or other fields who are wondering what might be coming down the line in their work, or who are in a position to influence how AI shapes their fields. They could use this as an opportunity to think about the good ways that augmentation could go.
Altman: Also, government regulators. Regulators who are looking at AI-critical industries like education and health care might want to see what some of the issues are technically and scientifically.
How will students be involved?
Landay: We asked students in each of these areas to talk about their research for a minute in a lightning round, with links to a website or a paper where people can find out more. We hope to expose people to some of the great student research going on here at Stanford and attract potential students who may want to join us at Stanford in the future.
Altman: Also, students are intensely concerned with issues of justice, diversity, equity, and inclusion, and that is a very important feature of augmentation. They will be addressing these topics in their presentations.
What do you hope people leave with?
Landay: I would hope that people leave with a more nuanced understanding of how AI will be used in our future: how it can improve the quality of people’s work and personal lives by taking on specific tasks, rather than thinking that AI is purely about replacing humans with machines. I think that’s the key message.
Altman: The title is “empowering people to solve global challenges.” We hope people will see that some of these challenges are going to need AI, but shouldn’t be only AI. This technology should be in the service of people solving these big problems.
Stanford HAI's mission is to advance AI research, education, policy and practice to improve the human condition.