
Responses to NTIA's Request for Comment on AI Accountability Policy

Researchers from the Stanford Center for Research on Foundation Models (CRFM), part of Stanford HAI, and Princeton University’s Center for Information Technology Policy (CITP) offered the following responses to the Request for Comment (RFC) by the National Telecommunications and Information Administration (NTIA) on AI accountability policy.


Response from Stanford CRFM, HAI, and Princeton CITP

Rishi Bommasani, Sayash Kapoor, Daniel Zhang, Arvind Narayanan, Percy Liang

Download the full brief


This response centers on foundation models (FMs), which constitute a broad paradigm shift in AI. Foundation models require substantial data and compute, and they provide striking capabilities that power countless downstream products and services. The researchers argue that pervasive opacity compromises accountability for foundation models: the models and the surrounding ecosystem are insufficiently transparent, and recent evidence shows this transparency is deteriorating further. Without sufficient transparency, the federal government and industry cannot implement meaningful accountability mechanisms, as we cannot govern what we cannot see. The submission recommends that the federal government:

  • Invest in digital supply chain monitoring for foundation models
  • Invest in public evaluations of foundation models
  • Incentivize research on guardrails for open-source models

Response from Jennifer King

Jennifer King

Download the full brief


This response focuses on data protection, data accountability, and privacy mechanisms to ensure AI accountability. The researcher argues that there is an urgent need for comprehensive federal privacy legislation and regulation of AI and data practices. Individual privacy rights and sectoral approaches are insufficient to restrain the large-scale data collection required for AI. Accountability mechanisms focused on data provenance, quality, consent, and transparency are needed to address concerns with AI datasets. Greater public access to models, data, and computing resources would enable researchers and advocates to develop and test such mechanisms. Without legal guardrails and accountability, the expansion of data collection for AI threatens to intensify privacy harms and erode consumer trust.
