Davos 2024: Six Takeaways on the AI Conversation at WEF

Date
January 30, 2024
Topics
Economy, Markets
Finance, Business
Chris Cooper

Stanford HAI leader James Landay joined executives and policymakers in Switzerland for the major economic summit.

At the 2024 World Economic Forum in Davos, Switzerland, talk of AI dominated panel discussions and coffee conversations among executives, economists, academics, NGO representatives, and government officials. Leaders from OpenAI, Google’s DeepMind, Microsoft, Meta, and more made appearances, while attendees discussed the technology’s massive impact on the nature of work, business strategy, and productivity.

Stanford HAI Vice Director James Landay attended the week’s activities, sitting on a dozen panels with corporate executives from Fortune 500 companies as well as non-profit leaders. “AI was the dominant topic,” he said. “So many companies were selling AI or implementing it.”

Here are his six main insights on AI from the week’s conversations:

AI FOMO

The fear of missing out ran strong among attendees, Landay said. Last year's panels focused on AI experimentation; this year's, on AI implementation.

“There was a little fear-mongering of ‘don’t be left behind,’ but I would take that with a grain of salt,” Landay said. “Companies are fine if they’re still trying to get it right.” 

Good AI is complex: It takes high-quality, clean data; fine-tuning of foundation models; and a thoughtful, responsible rollout. “Many companies aren’t in a position to use AI in this way yet.”

In one panel, Landay suggested leaders try a bottom-up approach: Let employees use generative AI tools in-house and explore potential uses for the technology. “Employees are going to be the ones that come up with great use cases that companies may want to implement in a bigger way. Some companies aren’t even allowing this experimentation yet, though their employees are doing it on their own time.”

Real AI Risks

Fewer conversations at Davos this year focused on superhuman or sentient AI run amok, Landay said. But conversations didn’t focus enough on the real and current risks of AI, which he refers to as the four “Ds”: Deepfakes, disinformation, discrimination, and (potential) displacement of jobs. 

Deepfakes are already creeping into everyday life (consider the fake “Biden” robocalls in New Hampshire), and generative video and audio continue to improve. Similarly, disinformation may influence voters in a major election year. Disinformation campaigns that previously required hundreds of people can now be created and distributed by two people and AI, he noted.

And while discrimination in AI is not new—these systems have been shown to discriminate across demographics of race, gender, age, and more—we’re still not much closer to fixing many of these harms, he said.

Finally, AI might not eliminate all jobs, but Landay anticipates large-scale displacement: “The gains and losses are not going to be distributed evenly,” he said.

These four real risks need a human-centered approach, he cautioned: “AI systems impact more than just the direct user. They impact the broader community and have societal impact. If we focus on these side effects from the start and design with those larger groups in mind, we have a better chance of creating AI systems that have a positive impact.”

Building Trust in AI

At Davos, the concept of trust played into both panels and dinner discussions. How do we restore trust in organizations? How do we trust AI? A major failing of AI is that few tools and companies accept and act on feedback. “If a system makes a mistake and I can’t correct that mistake or get feedback from the company, then I may not trust them in the future,” Landay said.

Academia Must Play a Role

Today only the wealthiest, biggest companies or nations build AI foundation models. They decide how to build them, for whom, and with what incentives. We do not even know what data these models are trained on.

“Academia needs to be a player here, as a neutral ground to recognize some of these issues and develop systems in a different way,” Landay said. “Academia is also an interdisciplinary player—we have experts in law, medicine, history, social sciences, computer science, art, and design, coming together to ask questions, rather than tech companies focused primarily on a profit motive. We need academia and non-government organizations to have a say and play in this game, and question this power dynamic.”

Companies Rethink Product Development

AI challenges companies in a way that other products have not. In prior years, companies might push out an AI tool to discover later that it discriminates against one group of people. This year Landay heard more executives discuss AI teams that include ethics and design experts at the start, with much more involved processes in place before release. “A couple of companies really stood out to me as, hey, they’re thinking about this genuinely,” he said. “People seemed open to learning more about how they could do better because I think they don’t want the negative blowback if they do it poorly.”

Regulation: A Mixed Bag

At any gathering of capitalism’s who’s who, regulation sounds like a curse. And of course plenty of attendees worried about how new EU regulation might stifle innovation or entrench the biggest players. But Landay heard many people speak highly of efforts to limit this growing technology. “A lot of people just don’t know how to do it well so that the regulations will be able to adapt, be useful, and not out of date every time AI progresses.”

Stanford HAI’s mission is to advance AI research, education, policy and practice to improve the human condition. Learn more. 

Author
Shana Lynch
