
Foundation models are the centerpiece of the modern AI ecosystem, leading to rapid innovation, deployment, and adoption of powerful AI systems. For some foundation models, developers release “model weights,” enabling almost anyone to fully use, scrutinize, and customize the model. These open foundation models, such as Stable Diffusion 2, BLOOM, Pythia, Llama 2, and Falcon, have emerged as critical to the industry. They have diversified and expanded the market for foundation models, offering important advantages in terms of access, transparency, and scientific discovery. However, they may also introduce risks and contribute to harm if they are adopted at scale, which has led some policymakers to call for harsh restrictions. 

The question of whether governments should regulate open foundation models, and how, has become central to ongoing negotiations over AI regulation. For example, potential exemptions for open foundation models remain a key issue as policymakers in Brussels finalize the EU AI Act, Europe’s flagship AI legislation and the world’s first comprehensive AI regulation. In the United States, members of Congress have asked Meta executives to explain the company’s decision to release model weights for LLaMA, arguing that the release was dangerous.

To address some of the important issues raised by elected officials, Princeton’s Center for Information Technology Policy and Stanford’s Center for Research on Foundation Models within Stanford HAI convened experts in artificial intelligence, open-source technologies, policy, and law for a workshop on the responsible development and use of open foundation models. The workshop featured three sessions exploring principles, practices, and policy for open foundation models, beginning with a keynote from Joelle Pineau, VP of AI Research at Meta and Professor at McGill University.

You can watch the full workshop here.

How Do Well-Resourced Developers Approach Open Foundation Models?

In her keynote, Pineau outlined Meta’s approach to open foundation models and highlighted the company’s efforts to promote a culture of open science in this area. Pineau kicked off the workshop by detailing the rapid increase in the size and capabilities of foundation models, noting that the immense scale of these technologies implies greater obligations for responsible development. She described the history of PyTorch, an open-source machine learning framework, as a powerful example of how open science can increase innovation. In Pineau’s words:

“There was really the vision that by sharing [PyTorch] with as broad a community as possible, we accelerate the development and the quality of the work. And that’s really my core hypothesis in pursuing this; it really comes down to the fact that when you open up the doors to this technology to a much more large and diverse community, you really accelerate your ability to make progress, not just on the technical components, [but also] on the usability components.”

Pineau forcefully made the case that open foundation models accelerate innovation, arguing that openness makes developers hold themselves to a higher standard of excellence due to outside scrutiny, that collaboration leads to better and faster solutions, and that transparency builds trust in scientific discoveries. She added that responsible innovation requires that companies “aim for extreme transparency to encourage participation and trust.” 

Pineau walked through Meta’s release process for several high-profile open foundation models, such as Segment Anything, OPT-175B, and LLaMA, identifying the guardrails Meta adopted to encourage responsible use. She stated that Meta begins every AI project with the intention to open-source each component, but is sometimes unable to do so, either because the research idea is unsuccessful or because of other concerns.

First Principles for Responsible Development

The first panel featured a conversation as well as presentations by Mitchell Baker, CEO of Mozilla, William Isaac, Senior Staff Research Scientist at Google DeepMind, Rumman Chowdhury, CEO of Humane Intelligence, and Peter Henderson, Assistant Professor at Princeton.

Baker described the progress Mozilla has made in normalizing open-source development and demonstrating that open source can improve security and provide collective value by facilitating community development. She said that liability for harm is a thorny issue that has yet to be resolved even for simpler cases, such as closed-source software.

In addition to weighing in on the question of liability, Henderson underscored the importance of open-source tools to defend against potential harms from language models, such as environmental damage and self-harm. He suggested that government entities can adopt open foundation models more easily because such models can be inspected more thoroughly; government services built on them can in turn benefit the public.

Chowdhury drew on her experience organizing the Generative Red Team Challenge at DEF CON to describe risks such as fraud that may stem from open foundation models, some of which were stress-tested during the challenge. She explained that most foundation models are open and that the open-source community has often done a good job of “public enforcement” related to risky practices.

Like other speakers, Isaac noted that there is not a binary between open and closed foundation models but a gradient along which different organizations decide to release their AI systems. He highlighted a tradeoff between highly open forms of release that can reduce the steerability of the foundation model and highly closed forms of release that can reduce the knowledge the broader community gains as a function of the release. Isaac added that there may be negative effects from releasing open foundation models, stating “What are the downstream impacts? It is not costless to release highly capable systems out into the public. There are always going to be tradeoffs.”

Best Practices for Responsible Development 

The second panel focused on best practices and included Yacine Jernite, ML & Society Lead at Hugging Face, Stella Biderman, Executive Director of EleutherAI, Melanie Kambadur, Senior Research Engineering Manager at Meta, and Zico Kolter, Associate Professor at Carnegie Mellon University.

Jernite laid out Hugging Face’s approach to responsible AI, which includes facilitating carbon tracking and other forms of evaluation, adopting ethical charters, and using Responsible AI Licenses to restrict certain use cases. He contended that openness is a requirement of responsible development of foundation models as it is the only way to provide meaningful transparency, recourse, inclusion, and data subject rights. 

In describing EleutherAI’s work on responsible and open foundation models, Biderman emphasized the ways in which openness promotes inclusion of nontraditional researchers. She concluded: “I don’t think it’s essential for everyone to be fully transparent about everything. But I do think it’s important to have transparency and have radical transparency, at least by some people, because there’s a lot of research, there’s a lot of important and pressing questions from a policy perspective that you really need transparency to answer.”

Kambadur outlined Meta’s release process for Llama 2, one of its latest and most capable open foundation models, which included a number of steps to promote responsible use. For instance, Meta invested significantly in red teaming, released a responsible use guide alongside the model, and used techniques such as supervised fine-tuning and reinforcement learning from human feedback to bring Llama 2 on par with leading closed models in terms of safety. Kambadur called for the open-source community to release more artifacts that will accelerate research on safety, such as benchmarks for safety-aligned models and safety reward models.

Kolter summarized his research on how adversarial attacks can evade foundation models’ safety filters, concluding that there is no guarantee that closed foundation models are more secure than open foundation models. He expressed the view that foundation models, whether open or closed, have certain security vulnerabilities that are “unpatchable bugs,” meaning that foundation models should be treated only as tools and that humans must be kept in the loop.

Policy Considerations

The final panel addressed AI policy and featured Stefano Maffulli, Executive Director of the Open Source Initiative, Peter Cihon, Senior Policy Manager at GitHub, Cori Zarek, Deputy Administrator of the U.S. Digital Service, and Daniel Ho, Professor at Stanford University.

Maffulli compared the current situation to the early history of open source, when it was initially unclear what legal regime should apply to software. He made clear that while incumbents may resist open-source development, openness is a powerful engine for innovation that has helped catalyze technological transformation in the 25 years since the definition of open source was solidified.

Cihon differentiated between open-source software policy and AI policy, stating that the analogy is limited by the fact that the vast capabilities of foundation models imply greater responsibility for developers. He delved into the ways the AI Act could affect open foundation models and proposed that the EU adopt a tiered regulatory approach under which all open developers conduct risk assessments and provide technical documentation, while only open models above a certain threshold are subject to additional requirements.

Speaking from the White House, Zarek gave a timeline of the U.S. federal government’s use of open-source software, which is now extremely common across federal agencies. She also noted that the federal government plays an important role in the open-source ecosystem, both by providing regulatory guidance and by acting as a major purchaser of software, which is subject to specific procurement rules.

Ho summarized the current legislative debate concerning regulation of foundation models, including the push for registration and licensing. He began by saying the “question we should be asking as a community is whether regulation is necessary given the marginal risk of foundation models relative to the counterfactual. And then for this workshop the even sharper version of the question is whether there’s something unique that needs to be done with open foundation models.” He recommended that regulation of foundation models include a form of adverse event reporting to increase understanding of the distribution of emergent risks in addition to measures that encourage auditing and red teaming.


Kevin Klyman is a researcher at the Stanford Center for Research on Foundation Models, a center within Stanford HAI, and an M.A. candidate at Stanford University.

