HAI at Davos: Key Insights on AI From the World Economic Forum

From top left, clockwise: Yejin Choi, Erik Brynjolfsson, Andrew Ng, and James Landay at the Stanford HAI Davos reception; Yejin Choi; Alex (Sandy) Pentland; James Landay. | John Werner
Stanford HAI faculty joined world leaders and policymakers to discuss AI trends, shifting U.S. policy, the future of diversity efforts, and more.
At the 2025 World Economic Forum in Davos, Stanford HAI-affiliated faculty members engaged in critical discussions on the evolving landscape of artificial intelligence. From participating in high-profile panels to private conversations with global leaders, they observed a growing interest in how the new Trump administration will shape AI policy, alongside broader concerns about AI’s societal impact and geopolitical implications. While past discussions often centered on the race toward artificial general intelligence, this year’s conversations reflected a notable shift toward “small AI” — specialized models designed for targeted applications.
Attending this year were HAI Co-Director James Landay, Stanford Digital Economy Lab Director Erik Brynjolfsson, HAI Center Fellow Alex (Sandy) Pentland, HAI Senior Fellow Yejin Choi, and HAI Affiliate Faculty Andrew Ng.
Here Landay, Pentland, and Brynjolfsson offer their key takeaways, revealing how policymakers, business leaders, and researchers are navigating these emerging trends and shaping the future of AI.
James Landay: Professor of Computer Science and the Anand Rajaraman and Venky Harinarayan Professor in the School of Engineering; Co-Director, Stanford HAI
It was clear this year at Davos that AI has finally risen to the top. It was ubiquitous, and more people are realizing how much AI will reshape society. Last year, the message was more, hey, you've got to get AI in your business right now or you'll lose out to your competitors. This year, more people focused on societal impact and the digital divide: between poor and rich people in the United States, and between global majority countries and the West, along with language divides, imperialism, and the culture and ontology embedded in foundation models.
That last point was a pressing issue — the need for large foundation models that reflect global cultural diversity. Currently, AI models are predominantly built by U.S. companies with an emphasis on English and Western content. That creates cultural biases and even mistranslations. We need culturally and linguistically diverse training data in AI development.
Another recurring theme I heard was the need for standardized definitions in AI. Establishing common definitions for terms like AGI (artificial general intelligence), and even AI itself, could lead to more productive discussions.
There was obviously a lot of concern about AI's use of energy and water, but I emphasized to people I spoke with that this is likely a short-term concern. There's so much incentive for technology companies to optimize energy use at both inference time and training time. Companies don't make money if every query costs too much energy to fulfill. DeepSeek was getting in people's heads during Davos, but really, the company just applied several known clever optimizations (and developed some new ones) for both training-time and inference-time compute. There's a lot of skepticism over exactly how much money and how many GPUs were required, but it's clear that they were able to train really great models much more efficiently. And these are techniques everyone else can now use, because the models are open source and the methods are described in published technical reports. So that, again, leads to way more efficiency.
The new Trump administration was also weighing on people's minds. One panel discussed DEI and the coming pressure on organizations to rename their programs. Some people are pushing back hard against that; one leader said that diverse teams make his employees and company better. It was good to see some leaders who are going to fight against this stuff. And given the American election, there's a lot of fear about what is going to happen. Europeans asked me, do you expect AI to go hog wild in the U.S.? And I said, well, no one was expecting any regulation in the U.S. anyway, even if Kamala Harris had won. The only thing likely to pass in such a split U.S. Congress is something tied to competition or national security with China. For me, the big question is which of the executive orders on AI will remain in effect. Which tech executives will influence the administration? This much uncertainty makes business hard and investment risky.
Alex (Sandy) Pentland: Toshiba Professor of Media Arts and Sciences; Professor of Information Technology; Media Lab Entrepreneurship Program Director, MIT Sloan School of Management; Center Fellow, Stanford HAI; Faculty Lead of Digital Platforms and Society at the Digital Economy Lab
Davos this year (my 18th) had an air of seriousness I've not seen in more than a decade. People seemed to be girding themselves for challenges (geopolitical tensions, tariff wars, real wars, climate disasters) by shifting focus from idealistic goals and dreams to making social systems work and social contracts sustainable.
The most important shifts in emphasis I saw were from medicine to health, from new legal rights to providing everyone access to justice, and from scary claims of AGI to small AI that does particular jobs and does them really well. The most interesting ideas were around using AI to help individuals: protecting them from fraud and scams, providing the broader context needed to identify misinformation, and supplying the incentives and information needed to make healthy choices.
A major theme I think is emerging is what you might call the third way: not the U.S. and EU, not China, but the way of India, Eastern Africa, the Middle East, and the Indo-Pacific. These are middle-income countries, no longer poor, with sophisticated technical populations, and they are busy deploying digital technologies everywhere, including all but the most cutting-edge AI. I think the new models for healthy longevity, digital trade and finance, and AI-enabled education may well emerge there. I'm looking forward to seeing this future because, despite today's problems, it is possible that a new and better world is emerging.
Erik Brynjolfsson: Director, Stanford Digital Economy Lab; Jerry Yang and Akiko Yamazaki Professor and Senior Fellow, Stanford HAI; Ralph Landau Senior Fellow, Stanford Institute for Economic Policy Research
The big story in Davos this year was, of course, AI everywhere.
But I also sensed a change in tone: companies weren't just excited about raw capabilities; they were looking for real business value. And a lot of them have been disappointed because productivity gains aren't coming as fast as AI's progress on benchmarks like math exams.
I think it's a healthy pivot to start focusing more on identifying the specific tasks where AI can be helpful. Ultimately that will lead not only to more business value, but also to more productivity, better healthcare, a cleaner environment, and a more prosperous society.