Rohini Kosoglu Joins Stanford HAI as Policy Fellow
For the past two decades, Rohini Kosoglu has operated at the highest echelons of government, working across the aisle to find consensus and shape policy on initiatives as wide-ranging as the Affordable Care Act, the AI Bill of Rights, and the American Rescue Plan. Most recently, she was deputy assistant to the president and domestic policy advisor to the vice president, after serving as a chief of staff in the United States Senate.
Kosoglu now brings her bipartisan experience in technology, health care, and economic policymaking to Stanford, where her new role as a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) seems a natural fit, she says.
“The Stanford community pairs innovation with a keen focus on how to better the human condition,” says Kosoglu. “I was attracted to the world-class talent here and their desire to translate policy into better outcomes for people’s lives, which has been the focus of my career.”
Kosoglu also serves as director of public policy and political affairs at the Stanford Byers Center for Biodesign and is a venture partner at Fusion Fund, which focuses on early-stage technology and health care investments. At HAI, she hopes to increase collaboration between Stanford and the federal government to promote the safe and equitable development of new AI technology. Here, she discusses that work, the advice she’d give lawmakers, and the challenge of enacting AI guardrails.
What do you want to accomplish as a HAI policy fellow?
HAI is in the position of being able to guide our leaders when it comes to AI, and I want to make sure the institute’s positions are translated well and taken seriously. I’ll be working as a conduit between the two coasts, providing HAI with strategic policy advice, expanding its network, and informing its research through bilateral feedback between Stanford and the federal government. I’ll also engage in coalition building to ensure that HAI has an inclusive voice in its human-centered approach. One thing in particular I’ll be looking at is helping to prioritize the flood of requests coming into HAI, in order to consider which will advance its long-term goals. I’ve been amazed at the work I’ve seen at this institute and how deeply everyone here wants to get this right for the people of this country. I want to be part of that.
You’ve worked on a wide range of policy issues, from broadband access to maternal health. Why are you now putting a sharper focus on AI?
I want to be involved in the AI space because there are so many conversations happening right now that are going to affect the lives of millions of people. I’ve had the amazing opportunity to work across different policy sectors, where I’ve learned best practices and how to think about unintended consequences. I want to help translate some of that between Silicon Valley and Washington. That would include issues such as how we can do things properly when it comes to artificial intelligence, why we need to be mindful about driving innovation while also understanding that individuals will be affected by AI, and how we can make sure everyone is included in this process. That means making it a priority to continue bringing voices to the table that represent different experiences so that we can reduce bias and harm.
What are the challenges in crafting laws and policy around AI?
I think everyone should be skeptical that we will ever figure out an easy way in Congress to legislate actual guardrails for the technology itself, because it’s moving so fast and has the potential to affect every sector of our daily lives. But there may be consensus around potential harms. For example, if AI is being used to determine your credit score or whether you get a mortgage, if there’s no human being involved, and if the person affected doesn’t know why the decision was made, the government might need to legislate safeguards against that situation because real harm is being done. Laws may be enacted around the actual usage of the technology rather than the technology itself.
Is artificial intelligence a partisan issue for legislators?
No. There’s a lot of bipartisan interest in AI; all these members have constituents, and for the most part no one wants to see a country in which people are being harmed by some force that they can’t see. I believe lawmakers are trying to understand how we can balance people’s rights under the Constitution while also making sure people are protected as much as possible from the unintended and potentially harmful consequences of this technology. For me, the process of finding consensus is so important. I’ve worked on legislation on behalf of Democrats, and I’ve loved working closely with Republicans as well. Negotiating with those who have completely different lived experiences and worldviews is enormously challenging but also fulfilling.
What advice on AI would you give Congress today?
We’re in the learning stages of all this, so I’d tell them it’s important to begin having serious conversations. One is: How do we attract the best talent to help inform the public sector, and how do we eliminate barriers to recruiting those experts? Second: How do we continue to collaborate with the private sector in a way that allows us to drive innovation while also protecting people along the way?
What would you like to see happen in the AI space in the next year?
I’d like to see a continued concerted effort — and a ramping up — of the public and private sectors working together, particularly around the building out of frameworks that focus solely on the consumer. The role of the government is to protect and strengthen Americans and their families, but there is so much drive in the private sector to stay ahead of the competition that most of its conversations revolve around what it can do, versus the things the government might be worried about. So whether the technology involves facial recognition, criminal sentencing, or generative AI, we want developers thinking on the front end about who’s in their vision, what that vision looks like, and whether we have the safety of those people in mind. It’s not an easy task to ask companies to do that, but it’s my hope that we’ll see additional build-out of these frameworks for companies to reference.
What advice do you have for young people interested in a career in public policy?
I’d tell them to focus on accumulating experiences, not titles, because when you look back at your career, what you’ll remember are those experiences. I started my career — literally — in the mailroom, and I thought that for someone who looked like me, my dream of working in a senior position in Washington could be a 40-year journey. Then I took an interview with the woman who is now the vice president, because I could recognize myself in her and I wanted her to be successful. I joined her as deputy chief of staff, then became her chief of staff, and later her domestic policy advisor in the White House. None of that was part of any deliberate plan, but they were some of many amazing experiences.
In your limited free time, what do you enjoy doing?
I have three sons, ages 3, 7, and 10, and my husband and I know we are on borrowed time with these little guys because the time flies by so fast. I also couldn’t do any of this without the support of siblings and parents who live close by. Family takes up the majority of my spare time, but I’ve also been fortunate over my career to meet a number of friends and mentors who are intellectually curious, and I love sitting down with them and just talking about some of the most interesting topics in society — both serious and completely unserious. Spending time with the people that I care about is probably the most important thing to me outside of work.