What transformative shifts in AI can we expect in 2025? According to leading experts from the Stanford Institute for Human-Centered AI, one major trend is the rise of collaborative AI systems in which multiple specialized agents work together, with humans providing high-level guidance. These setups envision AI teams tackling complex problems in health, education, and finance. Other scholars expect new approaches to human-AI collaboration and heightened pressure on developers to prove AI’s real-world benefits.

Additionally, scholars warn, generative AI will likely fuel more sophisticated scams at the same time that already limited U.S. regulation may grow weaker.

Read these predictions and more from scholars across computer science, medicine, policy, and education.

Agents of AI, Less Regulation

We’re really going to see AI agents – we saw this a bit in research, but now we’re seeing it a lot in both research and industry. Essentially, how do you put these agents together to actually do things for you? And we’re now seeing interfaces. For example, Anthropic offers an API through which Claude can actually operate your computer to do things like put a meeting on your calendar or help you buy a plane ticket. Obviously, this has some risks – if agents are actually able to use your computer, they can do damage, and they can make mistakes. So there are concerns about how this is done. But while the first versions of these kinds of tools weren’t successful, we’re now seeing more potential.

The other big story, I think, is what I would call asymptoting. Large models are improving more slowly. They are being released at a slower rate than before, and while they’re still impressive on certain benchmarks, on many tasks their gains have shrunk. Is that because they’ve already used so much data, and maybe synthetic data hasn’t worked out as well as some people thought it might? In fact, some of the newer models might even be worse at some tasks. Now, some people ask: is it AI winter? These models can do a lot of useful things, so I don’t think it’s a winter. But there may be people drawing straight lines up and to the right – AIs taking over the world or being able to do everything – and I think it’s going to take a lot more time until somebody makes major breakthroughs in architecture.

Finally, with the new Trump administration, I would expect less regulation of AI in the United States. We already didn’t have a lot of regulation, but the Biden administration’s Executive Order set guidelines for much of the U.S. government, which has impact because the government is such a big customer of technology. I would expect the Trump administration to roll some of that back. That doesn’t mean there won’t be AI policy or regulation – we’re just going to see it from other players, like the EU, or as a patchwork of state regulation.

James Landay, HAI Co-Director, Professor of Computer Science and the Anand Rajaraman and Venky Harinarayan Professor in the School of Engineering

More Scams, Less Consumer Protection

We will continue to see the misuse of generative AI to aid in committing scams, especially audio deepfakes of people’s voices. I predict that the incoming administration will take a lighter hand than the current administration in protecting the public from these scams. If the Federal Trade Commission takes a back seat, then state attorneys general will have a bigger role to play in consumer protection. Banks and other financial institutions, as well as providers of phone, email, and internet services, should step up their efforts to educate their customers about these scams. In particular, they (as well as government agencies) should ensure they are providing resources in languages other than English, since scams do not target English speakers exclusively.

Riana Pfefferkorn, HAI Policy Fellow

‘General Contractor’ LLMs

We will start seeing complex problem-solving systems composed of multiple AI systems that talk to each other. For example, picture a series of large language models, each with specific (fine-tuned) expertise, combining to solve problems. In some cases they may negotiate with one another; in other cases they will hand off tasks to “expert LLMs” that return answers. And so there will be a kind of “general contractor” LLM that deals with the human customer and subcontracts some of its problem solving to other agents with relevant expertise. These systems may arise first in complex simulations, health decision-making, financial arrangements, or educational programs.
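The “general contractor” pattern described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration only: the expert agents are stubbed as canned Python functions and the task decomposition is simple keyword matching, whereas a real system would back both the contractor and each expert with LLM calls. All names here are assumptions, not any particular product’s API.

```python
# Hypothetical sketch of the "general contractor" pattern: a coordinating
# agent splits a customer request into subtasks, routes each to an expert
# agent by domain, and merges the answers. Experts are stubbed functions
# standing in for fine-tuned "expert LLMs".

from typing import Callable, Dict, List, Tuple

# "Expert LLMs", keyed by domain of expertise (stubs for illustration).
EXPERTS: Dict[str, Callable[[str], str]] = {
    "finance": lambda task: f"[finance expert] plan for: {task}",
    "health":  lambda task: f"[health expert] guidance on: {task}",
    "legal":   lambda task: f"[legal expert] review of: {task}",
}

def decompose(request: str) -> List[Tuple[str, str]]:
    """Split a customer request into (domain, subtask) pairs.
    A real contractor LLM would do this decomposition itself; here we
    use simple keyword matching as a stand-in."""
    keywords = {"loan": "finance", "diet": "health", "contract": "legal"}
    subtasks = [(domain, request) for word, domain in keywords.items()
                if word in request.lower()]
    return subtasks or [("finance", request)]  # fallback expert

def general_contractor(request: str) -> str:
    """Face the human customer, subcontract to experts, merge answers."""
    answers = [EXPERTS[domain](task) for domain, task in decompose(request)]
    return "\n".join(answers)

print(general_contractor("Review my loan contract"))
```

Running the example routes the request to both the finance and legal “experts,” showing how one customer-facing agent can fan a problem out to several specialists and assemble their responses.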

Russ Altman, HAI Associate Director, the Kenneth Fong Professor in the School of Engineering, and Professor of Bioengineering, of Genetics, of Medicine, of Biomedical Data Science, and (by courtesy) of Computer Science

Healthy Skepticism in Education AI

I expect to see more focus on multimodal AI models in education, including in processing speech and images. We’ll likely also see new education-specific or fine-tuned models, and with all of this, an increased skepticism and interest in gathering evidence on what actually works – what really helps students learn better and teachers educate more effectively.

Dorottya (Dora) Demszky, Assistant Professor of Education and (by courtesy) of Computer Science

Defining Value from GenAI

Given the rapid development of the technology and the massive capital outlays behind it, developers of these technologies will be under pressure to define and verify their presumed benefits. In healthcare, the focus on evaluating clinical benefits will sharpen (which we recently wrote about in Nature), and we will have to devise ways of thinking that go beyond the narrow efficiency or productivity lens we currently use.

Shared and transparent benchmarking, along the lines of what our Center for Research on Foundation Models’ HELM project does, will become mainstream, so that informed decisions can be made about the claimed benefits of using generative AI in healthcare.

Nigam Shah, Professor of Medicine and of Biomedical Data Science at Stanford Medicine, and Chief Data Scientist for Stanford Health Care

AI Agents Work Together

In 2025, we will see a significant shift from relying on individual AI models to using systems where multiple AI agents of diverse expertise work together. As an example, we recently introduced the Virtual Lab, where a professor AI agent leads a team of AI scientist agents (e.g., AI chemist, AI biologist) to tackle challenging, open-ended research, with a human researcher providing high-level feedback. By leveraging the multidisciplinary expertise of different agents, the Virtual Lab successfully designed new nanobodies that we validated as effective binders to recent SARS-CoV-2 variants. Looking ahead, I predict that many high-impact applications will use such teams of AI agents, which are more reliable and effective than a single model. I’m particularly excited about the potential of hybrid collaborative teams where a human leads a group of diverse AI agents.

James Zou, Associate Professor of Biomedical Data Science and, by courtesy, of Computer Science and of Electrical Engineering

Rethinking Human-AI Collaboration 

We will see an emerging paradigm of research around how humans work together with AI agents. Identifying the best ways for humans and AI to achieve collective intelligence will become increasingly important. Currently, AI systems are evaluated mostly on their ability to operate autonomously; we will see more evaluation benchmarks and environments that take human-AI interaction and collaboration into account. As we progress with AI, we will also continue to see a large body of work on risk assessment, which remains far behind AI capability research. Widely adopted LLM- and VLM-based systems will not only inherit the risks of traditional AI systems but also amplify some of them and introduce new ones.

Diyi Yang, Assistant Professor of Computer Science
