How policymakers can best regulate AI to balance innovation with public interests and human rights.
At the Stanford Institute for Human-Centered Artificial Intelligence, our vision for the future is guided by our commitment to promoting human-centered uses of AI, ensuring that humanity benefits from the technology and that those benefits are broadly shared. Over the past five years, a growing community of faculty, researchers, students, and others has been instrumental in carrying out this commitment.
Stanford HAI joined global leaders to discuss the balance between AI innovation and safety and explore future policy paths.
This brief examines the barriers to independent AI evaluation and proposes safe harbors to protect good-faith third-party research.
Concerns over the societal impacts of generative AI are prompting a flurry of regulatory and policy responses globally, with initiatives underway in the EU, China, Brazil, Japan, Singapore, the UK, and the US, as well as through multilateral organizations such as the UN, OECD, the African Union, and ASEAN. Proposals range widely, reflecting competing political philosophies.
Fei-Fei Li, Co-Director of Stanford HAI, outlines “three fundamental principles for the future of AI policymaking” ahead of the AI Action Summit in Paris.