Professor James Landay, second from left, during a panel discussion in Davos. (Photo: Chris Cooper)

At the 2024 World Economic Forum in Davos, Switzerland, talk of AI dominated panel discussions and coffee conversations among executives, economists, academics, NGO representatives, and government officials. Leaders from OpenAI, Google’s DeepMind, Microsoft, Meta, and more made appearances, while attendees discussed the technology’s massive impact on the nature of work, business strategy, and productivity.

Stanford HAI Vice Director James Landay attended the week’s activities, sitting on a dozen panels with corporate executives from Fortune 500 companies as well as non-profit leaders. “AI was the dominant topic,” he said. “So many companies were selling AI or implementing it.”

Here are his six main insights on AI from the week’s conversations:

AI FOMO

The fear of missing out ran strong among attendees, Landay said. Last year panels focused on AI experimentation; this year, on AI implementation.

“There was a little fear-mongering of ‘don’t be left behind,’ but I would take that with a grain of salt,” Landay said. “Companies are fine if they’re still trying to get it right.” 

Good AI is complex: It takes high-quality, clean data; fine-tuning of foundation models; and a thoughtful, responsible rollout. “Many companies aren’t in a position to use AI in this way yet,” he said.

In one panel, Landay suggested leaders try a bottom-up approach: Let employees use generative AI tools in-house and explore potential uses for the technology. “Employees are going to be the ones that come up with great use cases that companies may want to implement in a bigger way. Some companies aren’t even allowing this experimentation yet, though their employees are doing it on their own time.”

Real AI Risks

Fewer conversations at Davos this year focused on superhuman or sentient AI run amok, Landay said. But conversations didn’t focus enough on the real and current risks of AI, which he refers to as the four “Ds”: Deepfakes, disinformation, discrimination, and (potential) displacement of jobs. 

Deepfakes are already creeping into everyday life - consider the fake “Biden” robocalls in New Hampshire - and generative video and audio continue to get better. Similarly, disinformation may influence voters in a major election year. Disinformation campaigns that once required hundreds of people can now be created and distributed by two people with AI, he noted.

And while discrimination in AI is not new - these systems have been shown to discriminate by race, gender, age, and more - we’re still not much closer to fixing many of these harmful issues, he said.

Finally, AI might not eliminate all jobs, Landay said, but he anticipates significant displacement: “The gains and losses are not going to be distributed evenly.”

These four real risks need a human-centered approach, he cautioned: “AI systems impact more than just the direct user. They impact the broader community and have societal impact. If we focus on these side effects from the start and design with those larger groups in mind, we have a better chance of creating AI systems that have a positive impact.”

Building Trust in AI

At Davos, the concept of trust played into both panels and dinner discussions. How do we restore trust in organizations? How do we trust AI? A major failing of AI is that few tools and companies accept and act on feedback. “If a system makes a mistake and I can’t correct that mistake or get feedback from the company, then I may not trust them in the future,” Landay said.

Academia Must Play a Role

Today only the wealthiest, biggest companies or nations build AI foundation models. They decide how to build them, for whom, and with what incentives. We do not even know what data these models are trained on.

“Academia needs to be a player here, as a neutral ground to recognize some of these issues and develop systems in a different way,” Landay said. “Academia is also an interdisciplinary player—we have experts in law, medicine, history, social sciences, computer science, art, and design, coming together to ask questions, rather than tech companies focused primarily on a profit motive. We need academia and non-government organizations to have a say and play in this game, and question this power dynamic.”

Companies Rethink Product Development

AI challenges companies in a way that other products have not. In prior years, companies might push out an AI tool, only to discover later that it discriminates against one group of people. This year Landay heard more executives discuss AI teams that include ethics and design experts from the start, with much more involved processes in place before release. “A couple of companies really stood out to me as, hey, they’re thinking about this genuinely,” he said. “People seemed open to learning more about how they could do better because I think they don’t want the negative blowback if they do it poorly.”

Regulation: A Mixed Bag

At any gathering of capitalism’s who’s who, regulation sounds like a curse. And of course plenty of attendees worried about how new EU regulation might stifle innovation or entrench the biggest players. But Landay heard many people speak highly of efforts to limit this growing technology. “A lot of people just don’t know how to do it well so that the regulations will be able to adapt, be useful, and not out of date every time AI progresses.”

