
Future of Work: Beyond Bossware and Job-Killing Robots

To encourage a human-centered workplace, we must rethink AI-driven automation, bossware, labor taxes, and corporate R&D.

Image: CCTV camera in an open-plan office

Bossware can track our movements at our offices and across our computer screens. Companies may save money, but at what cost to human dignity? 

The public conversation around AI’s impact on the labor market often revolves around the job-displacing or job-destroying potential of increasingly intelligent machines. The wonky economic phrase for the phenomenon is “technological unemployment.” Less attention is paid to another significant problem: the dehumanization of labor by companies that use what’s known as “bossware” — AI-based digital platforms or software programs that monitor employee performance and time on task.

To discourage companies from both replacing jobs with machines and deploying bossware to supervise and control workers, we need to change the incentives at play, says Rob Reich, professor of political science in the Stanford School of Humanities and Sciences, director of the McCoy Family Center for Ethics in Society, and associate director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI).

“It’s a question of steering ourselves toward a future in which automation augments our work lives rather than replaces human beings or transforms the workplace into a surveillance panopticon,” Reich says. Reich recently shared his thoughts on these topics in response to an online Boston Review forum hosted by Daron Acemoglu of MIT.

To promote the automation we want and discourage the automation we don’t want, Reich says we need to increase awareness of bossware, include impacted workers in the product development lifecycle, and ensure product design reflects a wider range of values beyond the commercial desire to increase efficiency. Additionally, we must provide economic incentives to support labor over capital and boost federal investment in AI research at universities to help stem the brain drain to industry, where profit motives often lead to negative consequences such as job displacement.

“It’s up to us to create a world where financial reward and social esteem lie with companies that augment rather than displace human labor,” Reich says. 

Increase Awareness of Bossware

From cameras that automatically track employees’ attention to software monitoring whether employees are off task, bossware is often in place before employees are aware of it. And the pandemic has made it worse as we’ve rapidly adapted to remote tools that have bossware features built in — without any deliberation about whether we wanted those features in the first place, Reich says.

“The first key to addressing the bossware problem is awareness,” Reich says. “The introduction of bossware should be seen as something that’s done through a consensual practice, rather than at the discretion of the employer alone.”

Beyond awareness, researchers and policymakers need to get a handle on the ways employers use bossware to shift some of their business risks to their employees. For example, employers have historically borne the risk of inefficiencies such as paying staff during shifts when there are few customers. By using automated AI-based scheduling practices that assign work shifts based on demand, employers save money but essentially shift their risk to workers who can no longer expect a predictable or reliable schedule.
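To make the risk shift concrete, here is a minimal sketch of the demand-based scheduling logic described above. It is a simple heuristic rather than any vendor's actual AI system, and the function name, wage, and demand figures are hypothetical.

```python
# Hypothetical sketch of demand-driven shift assignment, illustrating how an
# automated scheduler passes slow-period risk on to workers. Names and numbers
# are invented for illustration; this is not any vendor's actual system.

HOURLY_WAGE = 15.0

def assign_shift_hours(forecast_customers_per_hour, max_shift_hours=8,
                       customers_per_worker_hour=10):
    """Schedule only as many paid hours as forecast demand supports."""
    needed_hours = forecast_customers_per_hour / customers_per_worker_hour
    return min(max_shift_hours, round(needed_hours))

# Under a fixed schedule, the employer pays for a full 8-hour shift regardless
# of demand; under demand-based scheduling, a slow day means fewer paid hours.
for forecast in (90, 40, 15):  # busy, average, slow day (customers/hour)
    hours = assign_shift_hours(forecast)
    print(f"forecast {forecast:>3}/hr -> {hours} paid hours "
          f"(${hours * HOURLY_WAGE:.2f} instead of ${8 * HOURLY_WAGE:.2f})")
```

On a slow day the worker, not the employer, absorbs the lost income, which is the risk transfer Reich highlights.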

Reich is also concerned that bossware threatens privacy and can undermine human dignity. “Do we want to have a workplace in which employers know exactly how long we leave our desks to use the restroom, or an experience of work in which sending a personal email on your work computer is keystroke logged and deducted from your hourly pay, or in which your performance evaluations are dependent upon your maximal time on task with no sense of trust or collaboration?” he asks. “It gets to the heart of what it means to be a human being in a work environment.”

Privilege Labor over Capital Investment in Machines

Policymakers should directly incentivize investment in human-augmentative AI rather than AI that will replace jobs, Reich says. And such human-augmentative options do exist.

But policymakers should also take some bold moves to support labor over capital. For example, Reich supports an idea proposed by Acemoglu and others including Stanford Digital Economy Lab Director Erik Brynjolfsson: Decrease payroll taxes and increase taxes on capital investment so that companies are less inclined to purchase labor-replacing machinery to supplant workers.

Currently, the tax on human labor is approximately 25%, Reich says, while software and computer equipment are subject to only a 5% tax. As a result, the economic incentives favor replacing humans with machines whenever feasible. By changing these incentives to favor labor over machines, policymakers would go a long way toward shifting the impact of AI on workers, Reich says.
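A back-of-the-envelope comparison shows how that gap plays out. The sketch below uses the approximate rates Reich cites; the $50,000 base cost is an invented figure purely for illustration.

```python
# Rough comparison of the tax wedge Reich describes: roughly a 25% effective
# tax on labor vs. about 5% on software/equipment. The base cost is hypothetical.

BASE_COST = 50_000     # pre-tax annual cost of either option (illustrative)
LABOR_TAX = 0.25       # approximate effective tax rate on human labor
CAPITAL_TAX = 0.05     # approximate effective tax rate on software/equipment

labor_total = BASE_COST * (1 + LABOR_TAX)      # all-in cost of a worker
capital_total = BASE_COST * (1 + CAPITAL_TAX)  # all-in cost of a machine

print(f"Worker:  ${labor_total:,.0f}")     # $62,500
print(f"Machine: ${capital_total:,.0f}")   # $52,500
print(f"Tax advantage of automating: ${labor_total - capital_total:,.0f} per year")
```

Even when a worker and a machine cost the same before taxes, the tax code makes the machine thousands of dollars cheaper each year, which is the incentive Reich and Acemoglu want to reverse.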

“These are the kinds of bigger policy questions that need to be confronted and updated so that there’s a thumb on the scale of investing in AI and machinery that complements human workers rather than displaces them,” he says.

Invest in Academic AI Research

If recent history is any guide, Reich says, when industry serves as the primary site of research and development for AI and automation, it will tend to develop profit-maximizing robots and machines that take over human jobs. By contrast, in a university environment, the frontier of AI research and development is not harnessed to a commercial incentive or to a set of investors who are seeking short-term, profit-maximizing returns. “Academic researchers have the freedom to imagine human-augmenting forms of automation and to steer our technological future in a direction quite different from what we might expect from a strictly commercial environment,” he says.

To shift the AI frontier to academia, policymakers might start by funding the National Research Cloud so that universities across the country have access to essential infrastructure for cutting-edge research. In addition, the federal government should fund the creation and sharing of training data.

“These would be the kinds of undertakings that the federal government could pursue, and would comprise a classic example of public infrastructure that can produce extraordinary social benefits,” Reich says.

