About
In this affinity group, we will investigate the human-centered governance of AI. Governance is crucial in shaping the direction of AI research, the realization of its beneficial impacts, and the mitigation of its harms. While discussions of what ethical and responsible AI entails have become increasingly popular, there is also a pressing need for deliberation on how governance itself should be structured and implemented so that it is effective, proactive, and inclusive.
Specifically, we will study and engage with the different stakeholders involved in AI governance (e.g., international governing leaders, tech entrepreneurs, engineers, ethics nonprofits, users, domain specialists, and educators). We will also seek to understand the components of the governance toolkit (e.g., private and public regulations, funding, policies, laws, human rights doctrines, economic incentives, technical risk assessment measures, and enterprise software for governance).
Through discussions, speaker events, and outreach, we will bring together disciplines such as computer science, management, international relations, and social science. We will examine the technical challenges AI poses for governance, and compare and evaluate existing governance frameworks. Valuing diverse perspectives, we aim to host panels with speakers from a range of institutions, geographic regions, and AI application areas. Lastly, we hope to create opportunities for Stanford students and Bay Area residents to explore the intersection of novel innovations in AI governance with their career aspirations and the public sector.