
This past year marked major advances in generative AI, as terms like ChatGPT and Bard became household names. Companies poured major investments into AI startups (Microsoft’s $10 billion in OpenAI and Amazon’s $4 billion in Anthropic, to name just two), while leading AI researchers and CEOs debated the likelihood of AGI in headlines. Meanwhile, policymakers started getting serious about AI regulation: the EU put forth the most comprehensive set of policies governing the technology yet, and the Biden Administration published an Executive Order detailing 150 requirements for federal agencies.

Have we reached peak AI? No, say several Stanford scholars. Expect bigger and more multimodal models, exciting new capabilities, and more conversations about how we want to use and regulate this technology.

Here are seven predictions from faculty and senior fellows at Stanford HAI. 

White-Collar Work Shifts

I expect mass adoption by companies that will start delivering some of the productivity benefits we've been hoping for, for a long time. It's going to affect knowledge workers, people who have been largely spared by much of the computer revolution of the past 30 years. Creative workers, lawyers, finance professionals and more are going to see their jobs change quite a bit this year. If we embrace it, it should make our jobs better and allow us to do new things we couldn't have done before. Rarely will it completely automate any job; mostly it's going to augment and extend what we can do.

Erik Brynjolfsson, Director, Stanford Digital Economy Lab; Jerry Yang and Akiko Yamazaki Professor and Senior Fellow, Stanford HAI; Ralph Landau Senior Fellow, Stanford Institute for Economic Policy Research 

Deepfake Proliferation

I expect to see big new multimodal models, particularly in video generation, so we’ll also have to be more vigilant about serious deepfakes: videos in which people “say” things that they never said. Consumers need to be aware of that, and voters need to be aware of it. We're also going to see legislation. The EU is settling into its final position for enacting widespread AI rules. There's back and forth about whether that will affect the big American tech companies and their models, but it will come down very soon in 2024. For the U.S., we’re probably not going to see major regulation; Congress is not going to pass much legislation going into an election year. We will see more startups and other companies like OpenAI releasing the next, larger models, and we'll see new capabilities. We'll still see a lot of controversy over “Is this AGI?” and “What is AGI?” I think people shouldn't be worried about AI taking over the world. That's all hype. But we should be worried about the harms that are happening now: disinformation and deepfakes. We’ll certainly see more of that in 2024.

James Landay, Anand Rajaraman and Venky Harinarayan Professor, School of Engineering, Professor of Computer Science, Stanford University; Vice-Director and Faculty Director of Research, Stanford HAI

GPU Shortage

I’m worried about a global shortage of GPUs, the special processors on which much of AI runs. The big companies (and more of them all the time) are trying to bring AI capabilities in-house, and there is a bit of a run on GPUs. Only a few companies make these (NVIDIA is the major one), and they may be at capacity. This is a competitiveness issue for those companies, but also for entire countries that don’t want to miss out on AI innovations.

This will create huge pressure not only for increased GPU production, but also for innovators to come up with hardware solutions that are cheaper and easier to make and use. There is a lot of work in electrical engineering at Stanford and other places on low-power alternatives to current GPUs. Some of my colleagues, including Kunle Olukotun and Chris Re, are putting together an effort in this area, and one of Stanford HAI’s Hoffman-Yee projects is focused on this direction as well. That work is still far from mass availability and market readiness, but there will be huge pressure to accelerate such efforts in order to democratize access to AI technologies.

Russ Altman, Kenneth Fong Professor and Professor of Bioengineering, of Genetics, of Medicine, of Biomedical Data Science, and Stanford HAI Senior Fellow

More Helpful Agents

I'm looking for two things. One of them is the rise of agents that can connect to other services and actually do things for you. 2023 was the year of being able to chat with an AI.

Multiple companies launched something, but the interaction was always the same: you type something in and it types something back. In 2024, we'll see agents get things done for you: make reservations, plan a trip, connect to other services.

Additionally, I think we'll take steps toward multimedia, though that will take more than just one year.

So far we've seen a big focus on language models, and then image models. At some point, we're going to have enough processing power to do videos as well. That'll be really interesting, because what we're training on now is all very intentional. People write down in pages and paragraphs what they think is interesting and important. Photos are taken when somebody points the camera and clicks the shutter because they think something is happening.

With video, some of it will be like that: people make movies that tell stories in the same way that text does. But there are also cameras that are on 24/7, capturing what happens just as it happens, without any filtering and without any intentionality. AI models haven't had that kind of data before. Those models will just have a better understanding of everything.

Peter Norvig, Distinguished Education Fellow at Stanford HAI

Hopes for U.S. Regulation

AI policy will be worth watching in 2024. 2023 saw the most progress to date. In July, Congress introduced the bipartisan, bicameral CREATE AI Act to give students and researchers access to AI resources, data, and tools. It garnered widespread support because it promises to broaden access to AI development. Then in late October, President Biden signed an Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence that demonstrates his administration’s commitment not only to foster a vibrant AI ecosystem but also to harness and govern the technology. I hope that in 2024 we'll see Congress act: pass legislation like the CREATE AI Act, adhere to the elements called for by the new EO, and invest more in the public sector to ensure America’s leadership in creating AI technology steeped in the values we stand for.

Fei-Fei Li, Sequoia Professor in the Computer Science Department and Co-Director of Stanford HAI

Asking Big Questions, Applying New Policies

One of my hopes for 2024 is that we have the wherewithal to continue asking the hard questions, the critical questions, about what we want from artificial intelligence in our lives, in our communities, in education, in our society. I don't think we've ever seen a year quite like this one. More and more kinds of generative AI technology are going to embed and entrench themselves in our work, play, and communication. How does this year make us feel about ourselves?

I think we need to give ourselves the time and space to articulate what we think is permissible and where we should put the limits. One of the first realizations regarding this current generation of AI came back in February 2023, when the academic journal publisher Springer Publishing issued a statement saying that large language models can be used in drafting articles but will not be permitted as coauthors on any publication. The rationale they cited, and I think this is so important, is accountability. That doesn't mean Springer is locked into this forevermore. But that's what is so critical: putting something out there in earnest, understanding what your rationales are, and saying this is where we are right now with the way we understand it, and that in the future we may add more nuance to these policies. I think institutions and organizations must take that perspective and try to put guidelines down on a page in 2024.

Ge Wang, Associate Professor in the Center for Computer Research in Music and Acoustics (CCRMA) and Stanford HAI Senior Fellow

Companies Will Navigate Complicated Regulations

Much of the focus on AI regulation in 2023 was on the AI Act across the pond in the EU. By mid-2024, however, two U.S. states, California and Colorado, will have adopted regulations addressing automated decision-making in the context of consumer privacy. While these regulations are limited to AI systems that are trained on or collect individuals’ personal information, both offer consumers the right to opt out of the use of AI in systems that have significant impacts, such as hiring or insurance. Companies are going to have to start thinking about what it means on the ground when customers exercise those rights, particularly en masse. What happens if you are a large company using AI to assist with your hiring process, and even hundreds of applicants request an opt-out? Do humans have to review those resumes? Does that guarantee a different, or better, process than what the AI was delivering? We’re only just starting to grapple with these questions.

Jennifer King, Stanford HAI Privacy and Data Policy Fellow

