In one vision of the workplace of the future, organizations will rapidly convene online to take on projects far more complex than today’s gig-economy work, as technology configures and directs the collective talents of remote workers, only to disband them once the job is finished. The question confronting policymakers is how to ensure that such a future is a positive one for workers.
Stanford’s Michael Bernstein, an associate professor of computer science and a member of the university’s Human-Computer Interaction (HCI) Group, laid out both the promise and the perils in a September presentation to the California Future of Work Commission. Established by an executive order of Governor Gavin Newsom, the commission — a public-private partnership between the California Labor and Workforce Development Agency and the Institute for the Future — is tasked with making recommendations to help California leaders shape the laws, policies and norms governing how technological developments affecting workers play out, with the overarching aim of promoting shared prosperity for all Californians. Considering that Silicon Valley is ground zero for technological transformations that are shaping the future of work, and that the state has taken the lead in developing a body of law governing the gig economy, the commission’s final recommendations, due by year-end 2020, could reverberate widely.
The commission comprises a broad spectrum of voices, from labor and academia to CEOs and policymakers. Fei-Fei Li, the Sequoia Professor in Stanford’s Computer Science Department and co-director of the university’s Institute for Human-Centered Artificial Intelligence (Stanford HAI), is a member of the commission. Melissa Valentine, an assistant professor in Stanford’s Management Science and Engineering Department and co-director of the Center for Work, Technology & Organization, and Susan Athey, the Economics of Technology Professor at Stanford Graduate School of Business, will be among the presenters at upcoming commission sessions. (Valentine has collaborated extensively with Bernstein on how computation will impact the future of organizations.)
On one level, Bernstein sees the emergence of rapidly assembled organizations and new collaboration tools as exciting, even transformational. But he emphasizes that a serious problem goes along with it, one that code can’t solve. It’s not so much about the future of work as about the future of workers, who could find themselves bereft of the safeguards provided by traditional organizations and subject to wage arbitrage enabled by technology that offers up experts in a click, on the cheap.
Bernstein spoke candidly about the conundrum in his presentation to the Future of Work Commission, stating that he’s increasingly uncomfortable creating infrastructure to power new forms of organization unless improvements for workers are assured. “To make this concrete,” he said, “we have to ask, ‘Would you be happy if your own kid joined this project-based workforce?’ I think for many of us, the answer right now would probably be a firm ‘no.’ ”
What might need to happen for computation to point the way to a positive future for workers? In his presentation, Bernstein cited Dynamo, a project developed to enable collective action among online workers. Providing a space to gather and organize without oversight by management, it was adopted by workers on Amazon’s Mechanical Turk platform who perform the data-labeling necessary for machine-learning algorithms to identify objects. (Bernstein’s collaborators on Dynamo included Lilly Irani at UC San Diego, Niloufar Salehi at UC Berkeley, Mechanical Turk workers Kristy Milland and Clickhappier, along with many other students and workers.)
Bernstein and his colleagues also have helped build software for convening groups of online workers into guilds, whose members can collectively certify each other as being high quality — sending signals to the platforms where they offer their services that can help drive higher wages. They’ve also developed systems for skill acquisition through micro-internships, as well as platform analytics, to help chart a positive path forward for workers.
“But all of this is simply not enough,” Bernstein told the commission. “For every new pro-social project I do, I read 20 headlines about a tech platform exploiting or disenfranchising people.”
A key issue confronting Stanford HAI and the commission concerns equity, according to Li: Can AI be a positive, equalizing force, or will it exacerbate inequalities? She described hearing from a panel of workers at the commission’s first session. Some had experienced displacement; others struggled with different issues. Eric Guillen, a former Amazon warehouse picker, described job-related injuries and discomfort with the technology used to monitor workers. The monitoring in the warehouse made him feel expendable, like a number, she related.
After the panel, Li spoke with Guillen about his experiences and about using technology not to infringe on privacy but to improve safety and working conditions. “After that experience, he was encouraged to think that there’s another side of technology,” she said.
The anxiety that automation generates is real and important to acknowledge, says Li, “as we see evidence that jobs will be shifted.” But the effects of technological change on work are not predetermined or inevitable and are taking place in the context of a rise in income inequality caused by structural factors that have developed over several decades.
For Li, data is at the core of social equity because it is shaping the economy and driving the profitability of Big Tech. While much attention has been paid to Big Data’s impact on privacy and its potential for inadvertently introducing bias into systems on which society depends, the bigger question, according to Li, is whether the economic benefits are being fairly shared with the workers and consumers from whom the gold is extracted.
“This is where government needs to think hard about its role,” she explains, “because it’s hard to imagine corporations that are benefitting from this will come up with their own regulations. Government should have a strong role to play.”
Stanford’s Center for Advanced Study in the Behavioral Sciences (CASBS) has engaged with many of the same issues before the commission. Its project on the Future of Work and Workers has morphed into a successor initiative, Creating a New Moral Political Economy, jointly led by Margaret Levi, director of CASBS, and Federica Carugati, a program director at the center. Levi, who also serves on the advisory committee of Stanford HAI, says that in order for the Future of Work Commission to achieve its goals, “we have to think about the whole political and economic framework under which we are currently operating, and which I believe needs some fundamental change as has happened periodically throughout the history of American capitalist democracy.” In addition to revisiting rules and regulations governing private industry, she says policymakers would be wise to consider a civilian conservation corps for building infrastructure and undertaking educational and even art projects, “so people would have meaning and dignity and things to do even if work has transformed and you don’t have a life-long career.”
Li believes that one way to move policy in the right direction is to develop and show the positive impact of transformative technology. She cites intensive care unit (ICU) clinicians and nurses who are overworked. A privacy-protected smart sensor could help them ensure patient safety, and the same genre of technology can be enormously helpful to humans who look after an exploding U.S. population of seniors — for instance, assessing early behavioral changes that can lead to falling, dementia or malnutrition, all critical factors impacting quality of life and independent living for seniors.
Bernstein says we can learn from models of the gig economy and do better. “My exhortation to the committee is to say, ‘What does this world need to look like for it to be a positive future for workers?’ Because I think we know what it looks like when it’s not.”