As AI Shifts Jobs, How Do We Prepare the Workforce?
Too often our vision of the future of work involves robots and AI replacing our livelihoods. Instead, says James Manyika, senior partner at McKinsey & Co. and the chairman of the McKinsey Global Institute, artificial intelligence will more likely change our jobs, taking on key components that are better suited to automation.
So how do workers, employers, and policymakers prepare for this future?
Manyika joined HAI co-director and computer scientist Fei-Fei Li and Mary Kay Henry, who helms the 2 million-member labor union SEIU, for HAI's latest Directors' Conversations, a video series discussing top trends in the AI field with leading experts.
In this conversation, the three discuss who will be most impacted by AI, the need to incentivize reskilling at scale, the importance of having every stakeholder brought to the table at the design stage of new technologies, and the lessons learned from these leaders' work on the California Commission on the Future of Work.
Full transcript:
Fei-Fei Li: Hello, everyone, and welcome to Directors' Conversations by the Stanford Institute for Human-Centered AI, or HAI. My name is Fei-Fei Li. I'm a professor of computer science and the standing co-director of HAI at Stanford University. I'm just so thrilled and honored that joining me today are two very distinguished guests and friends of Stanford HAI.
Mary Kay Henry is the International President of the Service Employees International Union. She leads nearly two million American workers, from janitors, to healthcare workers, to teachers. Just recently, she was named one of Time's most influential people of 2020. Also joining me today is my dear friend James Manyika. James holds a PhD in AI from Oxford University, and now is a senior partner at McKinsey & Company and the Chairman of the McKinsey Global Institute. Under his leadership, the McKinsey Global Institute produces cutting-edge research on technology and digitization, the future of work, productivity and competitiveness, and globalization. Thank you so much, James and Mary Kay, for joining me today. What an honor.
Let me just start off our conversation with one question, and hopefully this will be a very dynamic and organic dialogue among the three of us. Today, we're going to focus on talking about AI and the future of work, and to be perfectly honest, when these two terms are put together, there's often a lot of fear, right? The word “replace” comes to the mind of many, including the public and the media, and the image of robots replacing us, replacing workers, tends to be prominent as well.
As a technologist myself, I tend to work on technology that's actually on the flip side, that is more about complementing people and assisting people, especially in my own work in healthcare, where AI, in the form of smart sensors and algorithms, is trying to give doctors, clinicians, and nurses an extra hand or an extra pair of eyes to help make patient care better. But I want to just ask both of you: when it comes to AI and the suite of technologies related to data, machine learning, and decision making, who do you think are the most impacted, and what are the opportunities and challenges we see about AI in work from their vantage points? Maybe we'll start with Mary Kay, and then James.
Mary Kay Henry: Thanks, Fei-Fei. For all of the service and care workers that I represent and I'm in relationship with, their key issue is how they are involved in the introduction of artificial intelligence, or any kind of technology, in their work, because they think they have a lot of ideas about how to use it to complement their work. Or, if it's going to replace the work they do, they'd love to understand how government, business, and working people can join together to unleash their work into something else.
So for me, the answer to your question is that the service and care workers in the U.S. economy have many things impacting them at this moment: the global health pandemic, the deepening economic crisis. But really, it's built on a structural foundation in the economy and democracy that has undervalued this work since the beginning of time. We know that 70% of the U.S. workforce is in the service sector, and one in four jobs are care jobs. Those jobs are poverty jobs, in most cases. There are two million black, brown, and immigrant women doing home care work in this nation. It's the fastest growing job. I think those workers would love to understand how technology and artificial intelligence could actually elevate the work they do caring for the nation's elderly and people with disabilities.
But right now, they have no access. There are structural barriers to their work. There's a lack of a regular schedule. They have to hustle for hours every week, and from one week to the next, they don't really know what their schedule is, which makes it impossible for them to engage in any kind of ongoing education. The other thing that has an impact is that they're excluded from the ability to join together, form a union, and bargain for a better life. That also keeps them from being able to have a dialogue with their employers about how artificial intelligence and technology get introduced into their work.
So for me, the biggest impact is on the parts of the economy that have been excluded for a very long time, and have been underpaid. That's what I hope our conversation today will focus on: how do we unleash the 44% of the U.S. workforce that's paid $18,000 a year? Those workers are living below the poverty line, but in a lot of cases are working 80 to 100 hours a week. They would love to figure out how technology could enhance their work.
Fei-Fei Li: Yes. Wow. James?
James Manyika: Thank you for the question, Fei-Fei. There's obviously a lot of concern about technology and its impact on jobs and work, and I think some of the research that we've done, and others have done too, points to the following. If you look at jobs, each occupation, whether it's what a welder does, what a teacher does, what a doctor does, is made up of constituent activities; no one job is fully one thing. The Bureau of Labor Statistics tracks over 800 occupation categories in the United States. We've looked at all the constituent activities that people do in those occupations, and we've tried to understand the impact of AI and its related technologies on each of those activities.
The pattern that emerges when you look at that and factor in the economics is the following. There will be jobs that will grow, and other ones that will be created. So that's a good thing; think of that as jobs gained. There will be jobs that will be lost, partly because technology will be able to do the various activities involved in those jobs. Third, there will be jobs that will be changed. The jobs-changed part reflects the fact that while the job is still there, technology will complement some of its activities. If you put all three pieces together, jobs gained, jobs lost, and jobs changed, the net of the jobs lost and jobs gained is actually a positive. So there's actually more work. In fact, the share of jobs and occupations that can be fully automated in terms of all their constituent activities is actually relatively small, at least for the next several decades.
The bigger issue is, in fact, the jobs-changed question. This is where technology that complements what workers do, what people do, becomes critically important. I think Mary Kay is correct that if, in fact, workers can be part of that process as we think through the jobs changed, that will actually lead to very, very good outcomes for workers. As I said, the jobs changed is the biggest part. What I always tell people when we have this conversation is: don't worry about a jobless future. That's not coming for many, many decades. What we should think about is how we manage the transitions and adaptations as we help workers cope with this.
Let me come back and start to close in, I think, a little bit on what Mary Kay was describing. If I'm saying don't worry about a jobless future, there are some things that we have to solve. Let me describe at least four of them. The first one is, we do have to solve the skills question because as jobs change, we're going to need to make sure that workers can actually adapt, learn skills, be able to work alongside machines, or move into occupations that are actually growing. So the skills question is, in fact, a real thing for us all to work on.
The second question is how do we help workers transition from declining occupations to the occupations that are growing? This is where policy and other mechanisms are really, really important to make sure we support the workers, we have the safety nets and the benefit models, and transition supports to actually help workers transition. That's the second thing.
The third thing that I'd highlight is, in fact, what Mary Kay raised, which is the wage question, because one of the challenges that we've got here is that some of the hardest occupations to automate, and the ones that are going to grow, tend to be in sectors like care work, as Mary Kay described. We need real people to do that work. They tend to be teachers, they tend to be all these occupations that are really, really important and fundamentally human. The challenge with our labor market systems is that those tend not to be some of the best paid jobs in the economy. So even if there is work, we have to think about how do we support living wages for people doing that work to be able to live. So the wage question is actually a fundamentally important one.
The fourth and final thing that we need to solve for is how do we actually redesign work, because what happens is, the workplace actually changes as we bring in technology to the workforce. In fact, it's one of the things that Mary Kay and I, and others, we've been talking about, which is how do we think about data in the workplace? How do we think about redesigning the work itself? By the way, if we didn't think these questions about redesigning work were urgent, we only have to pay attention to what's happened with COVID right now.
Fei-Fei Li: Exactly.
James Manyika: Some of the workers who actually have to show up for work are exposed physically, and we're now having to think about how do we redesign the workplace to make it safe for them to do, quite frankly, the essential work that they have to do, quite often on all our behalf.
Fei-Fei Li: Yeah. No, this is incredibly useful framing. Thank you, James. Just to share with our audience: if you haven't read James' reports on the future of jobs, please do. These are some of the most insightful reports, and I just continue to learn from your incredibly rigorous, data-driven way of doing this research. It's so important.
Mary Kay Henry: I agree, Fei-Fei, I agree. Cut through all the mythology and fear-
James Manyika: You're both very kind.
Mary Kay Henry: And get the facts. Get the facts.
Fei-Fei Li: Yes. Okay, so we have a lot to talk about, and, Mary Kay, I have to say that you are really a world leader of care workers. I'm just one person who is taking care of aging parents, and really getting the help of care workers. I cannot agree more with you that these jobs are particularly harsh, yet they're so essential and important. As an AI person who works in healthcare and senior care as part of my research, the number one thing I recognize is we cannot replace these humans. They are such a critical part of this equation.
There are several things you both said that are really worth diving into: the skilling, the policy guardrails, the wage issue. One thing I really want to bring to the forefront, which Mary Kay said, is access to the conversation. The humble goal of HAI, as part of this global platform of AI conversation, is actually to create those multi-stakeholder opportunities. All three of us have been in so many conversations about tech, and work, and jobs, and we still don't hear enough from these workers. Mary Kay, give us some idea, or even one or two important examples, of how you can nudge this multi-stakeholder conversation to involve or include these workers.
Mary Kay Henry: Well, in the U.S., the one example that I celebrate all the time for care workers is the Affordable Care Act. When it was first passed in 2010, it created government money to incent innovation, which created multi-stakeholder tables of the home care workers, the employers, and the local government, both public health and health officials, to think together about how technology could be used to upskill the home care providers' role with elders, but also connect them to primary care physicians so we could reduce hospitalization costs and visits. The home care provider is an early warning system if they are trained and oriented in how to use technology to communicate with the primary care physician.
The six-month study proved that we could reduce hospitalization for elders by 30%, which reduces infection, is better for the elder, and made the home care providers understand their skill and worth at a different level. They were doing it intuitively in the old system, but there was no way for them to talk about what they needed to deal with weight, blood pressure, skin color, all the things that both the smartphone device and the iPad then gave them access to.
Then, Fei-Fei, the other thing we've done in the U.S. labor movement is to look around the world. Other countries have actually created mechanisms that don't exist in the U.S. This pilot existed, but then the government money went away, and there was no way to continue the lifelong learning for the home care providers, and all the technology evaporated because it was no longer financially supported. But in Sweden, there is an ethos between government, employers, and working people, because they have a bargaining system that gives everybody a seat at the table. They've made a decision to protect workers, not jobs. So workers can understand that there will be lots of change, but that the government and employers have a commitment to the retraining. It deals with the transition and the wage points that James just raised.
The truck drivers in Sweden are working with engineers to design and tune up the driverless vehicles. Truck drivers understand that their jobs are going to be replaced at the same wages and benefits in whatever new jobs the economy creates as Sweden goes to carbon-free emissions and meets all the other needs in the country. So truck drivers are training the autonomous AI, working together with engineers, and then going into middle schools and helping children understand that there are going to be other opportunities for them besides truck driving, rewiring the next generation. I think that's an incredible example of how there's a global commitment.
James Manyika: Mary Kay, I think those are terrific examples. In fact, one of the things that even AI researchers — and I know, Fei-Fei, you're involved in some of this, or people working in robotics, for example — realize is that there are certain aspects of training where it actually is very helpful to actually do that with workers alongside the machines. So, often when you're training things like ... How do you think about robotic manipulation, for example? Some of that training can actually be done by the machines. I remember the work, for example, that Rodney Brooks is doing with his Baxter robots, many of which are actually used, by the way.
One of the things that's interesting in this COVID moment is that robotic mechanisms have actually been quite important in doing things like improving testing. I'm involved, for example, with the Broad Institute of MIT. If you go into their labs, into their genomics labs, a lot of the reagent placing is actually done by machines and robots. Quite often, when machines are trying to learn how to place and locate things in a space and in an environment, it's very helpful to learn that from real human beings, to work alongside human beings.
I think there are lots of examples, both in the research and in involving workers in the process of designing such machines and systems. Also, quite often workers come up with creative ideas, additional things that the robotics researchers and engineers haven't thought about, that have to do with ways to learn. So there's a co-learning aspect to this. I think that's another way in which workers can be part of the process.
Fei-Fei Li: I cannot agree more. If you allow me to geek out for 30 seconds on what James has said.
What James is describing is imitation learning in robotic training; especially for reinforcement learning, this is a big part of how robots are trained. What I actually see goes beyond imitation learning, where the goal of training is still to automate. There is also the lifelong collaboration between machines and workers, and humans, where no matter how much training you do for either robots or AI agents, you still need that human in the loop to provide the cognitive and emotional skills that machines will not have in any foreseeable future.
Back to Mary Kay's example: I'm also deeply involved in senior care research using AI smart sensors, and I was just imagining the beautiful scenario you were describing, Mary Kay, with home care workers. They're so essential as human emotional support, as the complex cognitive and labor support for our seniors. But we can have the iPads and smartphones, we can have the smart sensors, to provide useful data analytics and reminders: pill time, questionable sleep pattern changes, frequent toilet runs that might indicate urinary tract infections. All of this can become part of that human care and AI agency loop.
One thing I also want to add to complement both of you is that the multi-stakeholder approach needs to come in, just like James said, not only at the end, at the application stage, but at the design stage. For example, in healthcare we recognized that by looping in these workers, from clinicians to caretakers, we start to understand deeply the privacy concerns, the ethical concerns. When we started working with the nurses at Stanford Hospital on hand hygiene practice, the initial reaction some researchers shared with us was, "You'll never be allowed to do this because there is a privacy infringement, or at least a concern, for the nurses."
But as we looped in the nurses, understood their concerns, and shared technology that does not infringe on privacy, we realized these people are incredible. They are creative, they're supportive, they want what's best for the patients, and they become our biggest allies and partners in this. So I just cannot agree more with both of you that there is a lack of culture in tech right now for involving more of our workers, especially service workers, from the early stage of tech design all the way to the application stage. I'm so, so happy the two of you are both pushing this and pushing us to do that more.
James Manyika: Fei-Fei, you're doing it, too. One of the questions I have for you is, I've just been struck by how you've thought about constructing HAI because at least when I see these issues, they involve roboticists and AI scientists, they involve economists, they should involve ethicists, they should involve people who think about policy. I think HAI in some ways has attempted to do something relatively unique here, which is to try and bring all these pieces together. How do you think about that? Because I think ... Is that something that could be replicated in industry and the private sector? How do you think about that, bringing all the pieces together?
Fei-Fei Li: Well, James, I'm just inspired by you and also by Mary Kay's work. I remember, James, we had early conversations that even predated HAI. We both come from a technology background. I stayed in academia and AI research longer, and my industry experience at Google during my sabbatical really showed me that there is such a deep human impact of this technology. It's not enough to recognize, oh, tech is changing the world. It really is important to recognize how we want tech to change the world. The how question, even the why question, is so important. It cannot just be for the money or power. It shouldn't be.
That's why, with encouragement from people like you, James, we came back to Stanford and realized that, as an academic, higher education platform, we don't answer to Wall Street. We answer to thought leadership, to educating the next generation of students, and it's our opportunity and responsibility to create a different way of doing tech, talking about tech, and thinking about the impact of tech. We are still experimental, James and Mary Kay. To be honest, before I think about scaling up this model, as James has this audacious plan for us, we need to do a good job at Stanford to show that this human-centered approach to AI research, to AI helping social science research and humanities research, and to education and policy outreach, would work. Part of this is why both of you are our advisors, nudging us to do the right thing.
I also want to get to the skilling part. Both of you talked about skilling workers, and this is music to my ears as an educator: we not only have to make the tech work for our workers, we also have to empower our workers and continue to help them adapt. Can you both talk about this? James, you talked about the changing nature of jobs, which involves skilling, and, Mary Kay, you talked about the challenge that, given the nature of the work, it's hard for these workers even to go back to education or get involved in part-time education. What are the challenges and opportunities you want to talk about in terms of skilling our workers, especially connected to AI and tech?
James Manyika: First of all, I think the skilling challenge is huge, and the opportunity is very large. But the challenge with the skilling question, at least in the current environment, and this is even before COVID, is that most of the great examples you find of re-skilling, or skilling, or where companies and others have put in place mechanisms to do that, tend to be relatively small scale. So, I've found myself, every time people say, "Hey, we've got a great skilling or re-skilling learning example," my first question is how many people are going through that, because often it's a great example, but the numbers tend to be small when, in fact, we're going to need to skill very large numbers of people to adapt to this.
The reason I highlight this issue, Fei-Fei and Mary Kay, is because of one of the imbalances that I find, particularly from a policy standpoint, and even in terms of what employers can do. From a policy standpoint, we've put in place so many incentives for companies to invest in capital and to invest in R&D, all of which are very, very important and necessary for the economy. But we haven't done nearly as much to create incentives and mechanisms to invest in human capital. This is one of the gaps that I think our policy mechanisms are going to have to solve for. Part of the reason that's important is that the companies where workers really need this re-skilling and adaptation tend to be in the service sectors that Mary Kay highlighted, and their workers tend to be relatively middle or low wage workers. So many of those companies aren't as incented to invest in the skilling as much as they could. So I think policy could play a role. I'm not here to prescribe policy, but I think there's a policy gap there.
I also think that we've overemphasized and confused skilling with education. That's something we're going to need to think through, and I'm actually curious to hear what Mary Kay thinks about this, because quite often there are a lot of skills that workers in the service sector in particular have, and could have, that aren't about getting more and more degrees, getting PhDs. It's not about that. It's really practical skills that enable them to do their jobs better, adapt, and earn higher wages, and we don't focus on that quite as much. So I think this is one of the gaps that I see, and I'm actually curious, Mary Kay, because in some ways, many of the workers you represent are the ones who should have opportunities to re-skill, who should have opportunities and support systems to help them do that. I'm not quite sure we're doing anywhere near enough, either policy-wise or on the employer side, where we need it the most.
Mary Kay Henry: For me, Fei-Fei, we can't think about it employer by employer. I think that's one of the things the three of us have learned on the Future of Work Commission: the four million people doing fast food work in this country who are living in poverty and have irregular schedules, but have dreams of doing other kinds of jobs, have no access in the current system, whether supported by government or their employer, to engage in any lifelong learning. I remember standing in the street with one fast food worker during a strike, when a store manager came out and said, "I'm going to replace you with robots, so I don't care about your strike." The worker said back, "I'd love a chance at designing the robot, because you trained me on the grill. I've done the job for 15 years, and I actually have figured out ways to beat the clock." Because there's a clock on what has to happen by when. The manager was just stunned.
I remember this exchange like it was yesterday, and it was in 2012. To me, it was just like a whisper. Imagine a system where the companies, the government, and the workers together thought about how do we unlock the four million people that are stuck in those jobs, and actually make them good entry level jobs for youth, and then train the current workforce to do other work that's emerging in the future. Because my lived experience is, people do want to be unlocked and have been blocked, essentially.
On home care work, Fei-Fei, it's actually different. I find that most home care workers love that work, and they just want that work enhanced. They want it better paid, and they love the idea of upskilling so that they could do a better job in the way you described, with AI and technology together actually enhancing the care that they provide. So I think it's both. I was also looking at New Zealand. And Governor Newsom just set the goal for electric vehicles in California, and that's going to create a huge shift in work. Imagine if we were creating access for the seven million people in California who are trapped in jobs and would otherwise have no access to electric vehicle design, maintenance, care, all the jobs that will be created. The skills question has to be thought about more systemically, because I don't think employers by themselves can fix the depth of the problem if we really want to elevate the future of work.
Fei-Fei Li: Yeah, I hear you loud and clear, Mary Kay. I just love this way of thinking, advocating for involving the workers in the design. First, I can tell you, no robot is near the capability of flipping burgers and making them delicious and beautiful. We're very far from that. But in so much work ... In my own work, as I just said earlier, the work is better. The technical work becomes better. Eventually, the product becomes better when these workers are involved in the design and we have a way to combine the strengths of tech and human workers. I want to say you ... Oh, go ahead, James.
James Manyika: I'm sorry. I was going to piggyback on something that Mary Kay said, which I think is very, very important. One of the things we've learned, both from looking at California and, I think, everywhere else, is that we can be relatively assured of the categories of work that are going to grow in the future, partly because of demography and aging, partly because we know the reality of climate and climate change is coming. So we already know that care work is going to matter and is going to grow as a category. We already know that work related to renewables, and technology, and climate adaptation is coming. So we already know many of these things.
Given that, we could actually start to prepare and to plan for that, and start to re-skill people for that, start to redesign this work to make it attractive, and interesting, and well-paid. So we can already plan for that, and we should. I think many of these workers have tended to be people we don't pay as much attention to, we don't listen to, we don't involve them as much. I think many of them are represented by Mary Kay, thankfully, so they are lucky in that regard. But I think we already know what those categories are.
At the same time, COVID has reminded us, painfully, by the way, so we don't even need to wait until the future to know that we're going to need those roles. COVID is a wake-up call to say these are the most vulnerable workers, these are the most exposed workers, so how do we involve them in redesigning the work that they do, and start to do the skill building today?
Fei-Fei Li: Yeah. So, Mary Kay and James, you both alluded to something that our audience probably doesn't know about: the Future of Work Commission. I do have to say I'm talking to my two bosses, quote, unquote. Mary Kay and James, from 2019 to 2020, have been the co-chairs of California Governor Newsom's Future of Work Commission, and it truly is one of the biggest honors of my life to serve as a member of your commission. This commission brings together leaders from technology, labor, business, and education to help California's governor and leadership think about long-term economic changes and growth. Since we have the two co-chairs in this conversation, I really do have to ask you: we could talk forever, but given the time, what is the biggest learning point for you coming out of this, or wrapping up this work with the commission, for the largest economy in our country, the state of California, which is also one of the largest in the world?
James Manyika: Well, I think a couple of things. You're correct. California, if it were a country, would actually be the fifth largest economy in the world, so it's a big place. I think we've learned a few things. By the way, if anybody is interested, these are public documents that anybody can access. The work we've done so far has identified about a dozen really challenging issues for work and workers in California. I won't go through all 12, but I'll highlight some of them.
First of all, we do have a challenge in that there's a lot of poverty-wage work going on in the state, as in the wider economy. We have to do something about that. We have, as everybody knows, an issue with inequality. We have to do something about that. We also have challenges to do with the fact that workers now face a whole range of what we discussed in the commission as work-adjacent issues. The cost of housing is very high, they often have very long commutes, and so forth. We also know that, quite frankly, we don't have a 21st century benefits and safety net system.
It's quite striking that this was a big issue in the commission before COVID, and COVID, again, just reminded us. Because if workers' benefit and support models, whether it's for healthcare or any of these things, are tied to employers, and then they are out of work, well, it won't work. Right? So we really don't have a 21st century benefits and safety net system to support workers. Those are just some of the issues, and Mary Kay should add to them. But we're starting to think about what we do with all of this. It could be some bold initiatives that a state like California, with its pioneering heritage and spirit, can take on, that can, quite frankly, innovate, and perhaps even lead and show others how these issues can be tackled.
Mary Kay Henry: Yeah. For me, the biggest ... I really appreciate James' frame, and I have to admit to you both, in the last 12 months, the biggest learning point for me is that things I never regarded as part of the future of work are actually about the future of work. James and you, Fei-Fei, helped me with these two learning points. One is that my lifelong mission to end poverty-wage work in the nation is actually connected to the future of work, and frankly, I hadn't connected it before this commission.
And two, that there are people in the tech sector, like you, Fei-Fei, who are leaders in making workers part of the way in which AI and technology are being introduced into employment. You are an advocate for involving workers in the design, the training, and the implementation phases, and I didn't know that before the Future of Work Commission. Both points made me incredibly hopeful that civil society, academics, philanthropy, employers, working people's advocates, and government can actually come together as multiple stakeholders and think about how we unleash human capital to make an economy that works for everybody, reduces the level of inequality, and allows every community, black, brown, Asian, white, all across California, the opportunity to thrive. That, for me, was the biggest learning, because of the multiple-stakeholder commission that the governor appointed.
Fei-Fei Li: Yeah, so well said, Mary Kay. Your commission, together with James, was really an embodiment of multi-stakeholder conversation. I vividly remember session one, when we heard from workers from warehouses, from the service industry, and that was just ... It was just so powerful to involve them. I know we're running out of time.
I still hope we can fit in one question that is self-centered: it's about Stanford HAI. Both of you are advisors to us, and again, we aspire to be a global hub of this kind of research and conversation, but I want you to actually push us, to nudge us.
What do you want us to do? What do you want to get out of Stanford HAI's efforts from the technologists? We have tremendous technologists on campus. You name it, right? Machine learning, deep learning, robotics, healthcare, economics, and all that, and thought leaders. What do you want from researchers like this? What do you think we should do with the companies we work with? Some of them are leading tech companies that are changing the landscape of jobs. How about the policymakers we also work so closely with, to bring them into the world of technology, but also to help brainstorm, as James said, policy recommendations? What are the things you want HAI to do on these fronts?
James Manyika: Mary Kay, you want to go first?
Mary Kay Henry: Sure. I would just like to push on your vision, Fei-Fei, of how care workers get involved in the research and design. How can care workers shape the introduction of AI and technology into senior care work in the nation? I just think it would be incredibly exciting to have a collaboration on the front end, and could we encourage the tech employers that HAI works with, and the employers in that sector, to consider how workers could be part of the design and the solution? I think that would be incredibly exciting.
James Manyika: I think I agree with that, and let me build on it in the following way, Fei-Fei. First of all, I would like HAI to fully realize its ambition to be a multidisciplinary entity and take on these issues in a truly multidisciplinary way. It's quite easy to do research in AI by itself. It's quite easy to do research in economics. It's quite easy to do policy research. It's quite easy to do ethics research. Stanford already does that. Many universities already do that. But to put all those things together, have a point of view that's integrated, and in some ways almost function like a commission where you've got everybody at the table working together, I think that is unique to HAI. I'd like to see you fully realize that, number one.
Number two, while you're doing cutting-edge work on all of those things, also do some real-world work. In other words, are there some initiatives or pilots on which you can collaborate with people like Mary Kay, with workers' groups, with companies, to actually think about these work issues in this multidisciplinary way in the real world, in addition, of course, to the academic research that you will undoubtedly do amazing things with? That'll be the second thing.
I think the third thing is something you've already started to do, which is engage in the national dialogue, because one of the things that you are uniquely able to do, by virtue of physically where you are, is attract and engage companies, the most important ones working in this space. You have an extraordinary heritage at Stanford University. So you can engage the whole country on how to think about these issues. I wouldn't shy away from being bold in that way, and would encourage you to ... Fei-Fei, you're a remarkable leader. You're one of the AI pioneers by any stretch, by anybody's standards, and I think you can do that.
Finally, the other thing ... Sorry, we'll add one more. You asked for this, Fei-Fei. You asked us.
Fei-Fei Li: Yes.
James Manyika: We're happy to give you assignments.
Fei-Fei Li: I love your list, James.
Mary Kay Henry: Me, too. Me, too.
Fei-Fei Li: I'm taking notes.
James Manyika: Here's the last one, which is that AI itself has been an incredibly non-diverse field. Non-diverse in the sense that it's very rare to find women; there aren't that many. There aren't that many people of color. It also hasn't always looked at work and workers who are considered low wage; to the extent that AI research has been done with workers, it's been with radiologists and doctors, not as much with people on the front lines. So I would encourage diversity in the ways that I just described, in how you approach these issues, and who you involve, and who you recruit as you build out your networks and ecosystems.
Fei-Fei Li: Yes. That's just so beautifully said. I hope in the near future we can invite you back. It's just so incredible to ... and what an honor to be in conversation with the two of you. For the audience out there, if you're interested in this conversation and more conversations with leaders like Mary Kay, James, and others in different fields related to AI, please visit Stanford HAI's website or subscribe to our HAI YouTube channel to follow these Directors' Conversations. Thank you. Thank you, Mary Kay, thank you, James. Thank you, everyone, for listening. We'll see you next time. Bye.
Watch more Directors' Conversations.
Stanford HAI's mission is to advance AI research, education, policy and practice to improve the human condition. Learn more.