
What Davos Said About AI This Year

Date: January 28, 2026
Topics: Economy, Markets
Featuring James Landay and Vanessa Parli

World leaders focused on ROI over hype this year, discussing sovereign AI, open ecosystems, and workplace change.

Artificial intelligence dominated discussions at this year’s World Economic Forum in Davos—from closed-door roundtables on governance to high-profile panels on business transformation. Stanford HAI Co-Director James Landay and Vanessa Parli, HAI managing director of programs and external engagement, joined the weeklong summit, contributing to conversations across keynotes, panels, and stakeholder meetings.

Throughout the week, they heard a consistent theme: enthusiasm for AI remains high, but the tone has shifted from hype to effective real-world deployment. Leaders are pressing for tangible impact and clearer responsibility. Geopolitics also loomed large, shaping debates about “sovereign AI,” open ecosystems, and the risks of overreliance on any single country or company.

Below, Landay and Parli share what they heard—and what they emphasized—in this conversation on Davos and the ideas that dominated the AI agenda.

What was the mood around AI at Davos this year?

James Landay: People are still optimistic—but they’re more realistic. Compared to past years, it felt like fewer people were saying, “Experiment with everything.” More were saying, “We need AI to deliver actual returns now.” The hype is still there, but there’s more pressure to show what’s working.

Vanessa Parli: We still heard a lot of excitement about what can be done with AI, but I was also happy to hear industry leaders ask: How can we make responsible AI a business case? This is important at a time when public trust in AI, especially in Western countries, is quite low.

What “outside AI” factor shaped conversations the most?

Landay: President Trump’s presence and comments—particularly around Greenland—cast a bit of a shadow. People were genuinely asking, “What is going on?” That fed directly into a conversation that was already building: sovereign AI and what happens if partners (or markets) aren’t reliable.

“Sovereign AI” came up a lot. What did people mean by it?

Landay: It meant different things to different people. In broad strokes, many were talking about countries wanting more control over their AI future—often in response to geopolitical uncertainty and the dominance of major tech companies.

What I tried to emphasize is: first, define the goals. Our HAI policy team has been analyzing this, and one helpful framing is that countries often pursue sovereignty to protect things like national security, economic security and prosperity, cultural values, and other national resilience goals. Countries can then choose where to focus in the “AI stack,” such as compute (GPUs, data centers), data, models, applications, and talent. Different countries emphasize different layers depending on their goals.

Parli: HAI is currently working on research to help define the different angles of AI sovereignty and the benefits of each — describing what it means, what are the components, how different approaches are beneficial. Creating insights grounded in research is essential for countries to make good decisions.

Do you agree with the “build our own model” version of sovereign AI?

Landay: Not as the only option. A lot of the debate assumes sovereignty means: “We control everything, so we must build our own models.” I argued there’s another path: open source—building shared capability internationally so no single company or country controls it.

Parli: HAI generally leans toward an open ecosystem — open data and open models would help build transparency and trust in the technology and accelerate innovation. If any country is going to see the true benefits of the technology, the users need to trust it. 

Is HAI doing anything concrete in that direction?

Landay: Yes. We’ve announced an MOU with ETH Zürich and EPFL—our first partners in a broader, global effort to collaborate on open models and related work. And we’re in conversations with other governments and research centers as well.

How were people thinking about AI and the changing nature of work?

Landay: The ROI of AI came up a lot. I heard less about worker replacement and much more about worker augmentation and shifting work processes. People were asking: If I have AI, how do the work process and its design change? How do roles change? What new products could we develop? Someone gave a great example: If AI makes loan decisions in five minutes instead of five days, how does that change your product and what’s possible for customers?

In prior years, we heard so much hype that if we didn’t get into AI quickly, we’d be toast. This year, people are starting to realize, well, we need to do it, but we need to do it smartly or it won’t lead to real benefits. 

Where is the pinch for workers?

Landay: At one talk, I heard an executive who runs a major business unit at one of the big tech companies say he was charged with growing his part of the business by something like $40 billion over the next three to five years, with no increase in head count. So while they’re not laying people off, they’re not hiring either; they will use AI to improve everyone’s personal productivity. The main takeaway is that, while this type of company may avoid mass layoffs, there won’t be a ton of new jobs ahead. If I were a new grad, I’d be concerned. In the long run, though, I and others are still bullish on AI creating more new jobs.

A lot of people were talking about AI agents. Your thoughts?

Landay: “Agents” came up in two ways: practical implementation inside companies (which is already happening), and a more expansive vision of many independent agents negotiating information and money across the open internet. I’m more cautious on the latter—especially when personal or financial data is involved. There’s important research and infrastructure still needed before that becomes something people will broadly trust.

In your own panel discussions, what did you emphasize?

Parli: I reminded people that while there is a lot of opportunity for AI, it’s not guaranteed, and we need to think critically about how it is designed and deployed. You need many voices in the conversation to ensure that AI benefits everybody.

Landay: If we want AI to be successful and socially beneficial, we need three things, and none of them alone is sufficient:

  • A design process that accounts for community and society in addition to being user-centered — this is what I consider human-centered AI.

  • Ethics education and professional norms for the people building these systems.

  • Regulation, policy, and law, because some actors will cut corners or cheat—and society needs mechanisms to respond, like we do in other industries.

All three matter. And even with all three, there will still be problems—we need realistic expectations and the ability to respond when things go wrong.

Authors
  • Shana Lynch

Related News

How AI Shook The World In 2025 And What Comes Next
CNN Business
Dec 30, 2025
Media Mention

HAI Co-Director James Landay and HAI Senior Fellow Erik Brynjolfsson discuss the impacts of AI in 2025 and the future of AI in 2026.

Stanford Researchers: AI Reality Check Imminent
Forbes
Dec 23, 2025
Media Mention

Shana Lynch, HAI Head of Content and Associate Director of Communications, pointed out that the “era of AI evangelism is giving way to an era of AI evaluation” in her AI predictions piece, in which she interviewed several Stanford AI experts on their expectations for AI’s impact in 2026.

Centaurs, Canaries and J-Curves: Pitfalls and Productivity Potential of AI
Newsweek
Dec 18, 2025
Media Mention

Erik Brynjolfsson, HAI Senior Fellow and Director of the Stanford Digital Economy Lab, advocates for treating humans as an end and not just a means to an end, emphasizing augmentation of human tasks for true economic gains.
