
How Can We Better Regulate AI Companies?

Legal scholar Sonia Katyal discusses why trade secrets law shouldn't protect AI, government’s role in regulation, and potential edits to Section 230.

Slide showing images of John Etchemendy and Sonia Katyal

A very small number of technology companies hold vast troves of data on us and are creating novel AI tools with that information. But how much accountability can we expect from private industry when it comes to AI?

When U.C. Berkeley legal expert Sonia Katyal first considered that question several years ago, she felt optimistic that companies could successfully regulate themselves. But then came the 2016 U.S. election.

“It became more and more clear that companies had really underestimated the risk of disinformation and the risk in which filtering systems were in many ways magnifying destructive voices,” she says.

In this Directors’ Conversation, Katyal discusses this growing concern, ways private industry could better regulate itself, and the role of government in regulation, in particular what to expect from the Biden administration. She also explains how trade secrets law has protected AI companies at the expense of the public, and how technology can play an important role in fostering inclusivity for the LGBTQ community.  


Full transcript:

John Etchemendy: Hello, and welcome to Directors’ Conversations with Stanford HAI. Joining me today is Dr. Sonia Katyal, a legal scholar and co-director of the Berkeley Center for Law and Technology. Sonia’s specialty is the fascinating intersection of technology, intellectual property law, and civil rights. Most recently she’s been examining private companies’ accountability in the age of AI. So, Sonia, thank you for joining us. I’ve been looking forward to this conversation.

Sonia Katyal: Thank you. It’s really a pleasure to be here and to also just see the wonderful things that are happening at Stanford.

John Etchemendy: So let me start by saying that your work covers some really intriguing areas of the law: IP and civil rights. Now, frankly, it’s not obvious to me how artificial intelligence really fits into that intersection. So can you explain how your work intersects with AI?

Sonia Katyal: Sure. So I was a student who went to law school right around the time that the internet was just beginning. And at the time that the internet was just beginning, it was like a free-flowing world where we weren’t really sure how regulation was going to attach to issues of technology. We weren’t really sure how issues of civil rights were going to map on to issues of technology. And as a practicing lawyer, and as someone who eventually went into academia, I devoted the bulk of my study to thinking about how concepts of privacy, freedom of speech, due process and equality could map on to the issues that were emerging around the internet.

And I spent a number of years doing that by really focusing on looking at the lens of property law and how structures of intellectual property, so trademark law, copyright law, trade secret law, patent law, how all those structures both included and excluded particular communities. And that was something that really defined my work for the bulk of my career. In the last few years, one thing that I’ve become even more concerned about is the advent of artificial intelligence, because as excited as we are about the possibility of using massive amounts of data and the power of computing to solve social problems, I’m also very concerned about the way in which data is often not representative of who we are as a people. And so as data feeds into the structures that create artificial intelligence, I’m finding that I’m asking myself the same question, who is left in and who is left out?

And as we’re also in a world where there’s more and more consolidation among private industries, there’s a very small number of very large tech companies that hold the vast troves of data that surround us. I think that I’ve become more concerned, less with property and more with structures of information ownership generally, and how those structures of information ownership are going to be deployed in this new world of artificial intelligence. And it’s exciting, but it’s also daunting to think about those possibilities.

John Etchemendy: I’d like to talk about your forthcoming paper examining private accountability in the age of AI. So I gather the question is, can private companies hold themselves accountable, or is there a need for regulatory solutions of some sort? So what options do you consider in the paper, and what is your conclusion?

Sonia Katyal: I started working on the paper in about 2015, 2016 when these issues were just really coming to the horizon. And I started thinking about all of the amazing possibilities that AI engenders, just in terms of health, in terms of all of these different kinds of industries that I think AI really has the power to revolutionize, particularly when it comes to ways in which big data can be used to really harness the public interest. So I felt pretty optimistic when I started writing the paper and then the 2016 election happened. And then several months after that, it became more and more clear that companies had really underestimated the risk of disinformation and the risk in which filtering systems and systems that operated online were in many ways magnifying destructive voices and the huge privacy implications of that work.

So in many ways, the paper was motivated by identifying the ways in which data can be biased by different kinds of issues. So the first part of the paper really talks about different ways in which data can be biased, both in terms of our cognitive biases, but also in terms of statistical biases, when you have data that is insufficiently representative of a large population. So those are the issues that occupy the first part of the paper.

And then the latter part of the paper really turns to grappling with the question of how much accountability we can expect from private industry when it comes to AI. And this is where you really see a shift in the paper, because originally the paper was fairly optimistic about the ability of private companies to think broadly about the implications of the technologies that they were creating. After the 2016 election, though, I think it’s a really open question how much we should expect companies to self-regulate, particularly when we’re faced with tremendous implications for privacy, tremendous implications for freedom of speech, and the possibility that companies will allow disinformation to spread because it benefits their business models.

John Etchemendy: So, Sonia, can you give me some specific examples on, how can a company hold itself accountable?

Sonia Katyal: This is a great question. So one of the things that I look at is the possibility of requiring companies to do detailed impact statements on the range of ways in which their AI can impact particular communities. And so, here we’re really focusing on the question of how impact statements can be drawn up in a way that takes into account how minorities can be disparately impacted by AI. So thinking about racial minorities, the disabled, particular age classes, sexuality, gender discrimination, really thinking about how AI-driven systems can be designed in a way to be inclusive by identifying the particular impacts on communities that may be affected by AI. That’s the first thing.

The second thing is to really think about asking individual engineers, as well as individuals within companies, to undertake codes of conduct, and those codes of conduct can be written in such a way as to require them to think about ethical principles. That can be a way to build a culture of accountability within a company.

And another way in which we could imagine companies being more responsive to the concerns that are being raised with respect to AI and bias, is by looking to Europe as an example. So in the European systems that regulate AI, there’s a pretty detailed system that is undertaken by any company that desires to engage in automated processing of decision-making, and it requires companies to offer more detailed forms of explanation to their users, if a decision is reached that impacts them. It requires companies to process their decisions within one aspect of the company, but also have advocates for the public in another section, who can then exert oversight over the automated processing of data and the decisions that are derived from those systems.

So by building in separation, right, we can actually create a little bit more of a culture of accountability. And the nice thing is that we’ve actually done this before, right? So in the passage of the Sarbanes-Oxley Act, which was passed in 2002 after concerns about financial fraud within companies, you have these compliance regimes that were developed; we have privacy compliance regimes. So we can think about using those structures to think more broadly about ending bias and discrimination through the use of AI structures as well.

John Etchemendy: Wow. One thing that I just have to ask you is, how optimistic or pessimistic are you about government regulation? Should there be more of it? Is it going to be effective? Is government very good at regulating this industry? What are your thoughts there?

Sonia Katyal: This is a great question. And this is a question that in many ways has really shifted just in the last few months. So during the Trump administration, we saw a lot of interest, as you point out, John, in terms of regulating big tech, where it deleteriously affected the Trump family or the Trump reputation. So lots of attention paid to regulating, or re-regulating Section 230. But in this context, we did see a much more hands-off approach. I think that’s going to shift in the next few years. And I think the reason for why that’s going to shift is because more and more companies and government agencies are becoming concerned with the opacity that surrounds AI, the fact that AI-driven systems can have huge impacts on racial minorities, on inclusion with respect to credit, housing, employment, and these are areas that are regulated.

So I think that we’ll probably see more appetite for regulation and oversight, and it may take the form in the next few years, of doing a more thorough needs assessment. It may take the form of figuring out where we can peer into these opaque systems, where intellectual property principles can give way to public interest principles. Those are the kinds of questions that I think are going to occupy our time in the next year or so.

But it is also important to note that certainly at the state level, we have seen a lot of regulations starting with respect to AI. So you have cities banning the use of facial recognition technologies, you have cities really exploring ways to make government-driven decisions more accountable. You have an onset of lawsuits where individuals, particularly teachers, are challenging some of the automated systems that are governing their teaching. So I do think that as more and more of these lawsuits increase, we’ll probably see more and more attention paid to regulation.

John Etchemendy: So, I have one specific question I want to ask you. It seems to me clear that Section 230 is destined to be re-examined and on both sides of the aisle I think there’s this pressure. And I, for one, am at a loss to know what I would do if I were king of the world and I were re-writing Section 230. I saw one article advocating that, just get rid of it and hold the companies accountable for everything that is posted on their sites. And this person said this would make them like magazines or publications. And the thought of that... That actually just kills the industry. We have online magazines and what that would do is just collapse the social media industry into something like an online magazine. So what are your thoughts? What would you do if you could re-write that section?

Sonia Katyal: That is a great question and that is a really tough question, John. So probably what I would do is... And again, I think that this comes out of where my work really comes from, which is out of a commitment to civil rights, including freedom of expression, but also recognizing how things like freedom of expression have a real impact for particular minorities. And I was really affected by the recent recognition by Twitter that it has created an unhealthy climate. And the fact that it has taken years for a well-founded admission of the unhealthiness of the climate of social media is a testament to the fact that it actually should be the domain of regulators to push private industry to think about better ways to regulate these things. We have now seen the results of hands-off regulation, and it’s not a great state of affairs.

John Etchemendy: Twitter’s interesting, because I have to admit that I’m of two minds about Twitter’s shutting down Trump’s account. It was nice, the silence was wonderful, but they allow the Ayatollah to tweet “Death to America,” or whatever, and have not regulated other state leaders. And it’s not clear that what they did was right. It’s also not clear whether what they did may have actually saved Trump. The people around Trump obviously could not control his speech, so Jack Dorsey did. He shut down that speech. What are your thoughts about that specific action?

Sonia Katyal: Yeah, that’s a great question. I think for me, the way that I tend to think about that is slightly different than the way that I might think about other forms of speech because of the immediacy of the danger of violence. I think that’s what was really motivating, and you see it in the events that surrounded January 6, is that if you don’t tell communities the truth, they really will be motivated to violence. And so I totally supported Dorsey’s decision to pull back because of the immediacy of that violence.

Now, I agree that there are lots of examples where we actually should be equally concerned about the immediate possibility of violence. And maybe it does mean that we have to be more thoughtful about these kinds of things. One thing that I often think about in the LGBT context is how often educational videos that are basically designed to support individuals in the trans community, or LGBT pride videos, get censored by social media companies because they’re considered to be sensitive content, when in fact they provide tremendous benefits to members of the LGBT community. But companies can be very aggressive about regulating that content while being far less aggressive about regulating the hate speech that is often directed at members of the LGBT community.

So I think there are lots of examples where we see a real disconnect in terms of how social media responds to threatening expression. I would like to see more regulation, not necessarily regulation from government, but more regulation in the hands of private companies, where they’re more thoughtful about how to encourage healthier and more respectful commentary. So using the power of being free from First Amendment regulations to actually exercise more regulation of speech on their platforms could be a really wonderful thing. And we have seen examples where Twitter has done that. I just wish that they were doing it more aggressively, particularly when hate speech is concerned.

John Etchemendy: Yeah. I appreciate everything you say. It’s interesting when you think of one person, one company making these decisions. Unless we can give very, very clear guidelines, it’s not clear that we want that to be subject to a potentially capricious decision on the part of whoever the censors, if you will, are. So, we could talk about this for another hour, I’m sure.

Sonia Katyal: And you are in fact entirely correct that... I do not envy the individuals who are put in those positions. And I agree that putting this decision in the hands of one person at Twitter raises lots of different issues, and maybe the next generation of speech advocates will be able to figure out better ways of dealing with a system that has now, in many ways, gotten effectively out of control, I would say.

John Etchemendy: So actually, let me ask you to talk about... One of the things that you focus on is whistleblower protections. In recent years, we’ve seen a lot of government whistleblowers who, despite whistleblower protections, have not really been protected. So, first talk about whistleblower protections and the importance of them, and then I’d like to know whether the recent events have given you pause.

Sonia Katyal: So the whistleblower part of the paper is really the end part of the paper. And that is the part of the paper where I identify the risk of relying too heavily on private accountability. That, as we are creating systems where we just rely on private industry to self-regulate, which was largely the story during the Trump administration, it became necessary for us to hope that people would come forward and disclose some of the concerns that they had with the technologies that they were creating. And it turns out that in 2016, Congress passed a law called The Defend Trade Secrets Act.

And the thing that’s really unique about that particular area of law is that, in as much as it protected trade secrecy, which is largely the body of law that protects and governs AI, it also provided a small provision of protection to allow individual employees to come forward and identify any issue where they felt that the law might be being violated. And if they did that, disclose their concerns to an officer of the court, basically any lawyer, they could expect some level of immunity from being sued for trade secret misappropriation later on. It’s a really revolutionary provision, and it’s a provision that is directly attributable to my colleague, Peter Menell’s work, who happens to also be married to a very prominent whistleblower.

And so this provision made its way into this protection of the Defend Trade Secrets Act. And I thought about it as a potential glimmer of light to allow people to feel some degree of safety, should they decide to come forward and identify concerns about the AI that they were developing and whether or not it had legal implications. And I’ll also admit it was really driven by learning the stories of whistleblowers from previous generations, the story of Karen Silkwood, other kinds of whistleblowers who have come forward.

Now, John, you are completely right to say that there are lots of examples where whistleblowers have not been able to receive full immunity or full protection and have faced a tremendous degree of obstacles, both in coming forward as well as in terms of their own public reputation. But this is still an area of law that is relatively new. And if anything, the last few months have revealed to us the importance of empowering individuals within industry. So you have Google employees starting their own union. You have the very prominent firing of leading researchers at Google who were coming forward to express their concerns about various AI-driven systems.

And this is a conversation that needs to happen about how companies need to undertake their own level of protection for whistleblowing and be accountable to their employees as well. So there is a lot of hope, I think in the next few years, although, obviously there’s many things to be concerned about.

John Etchemendy: Okay. So you talked about the next few years, and we have a new administration that is now up and running, or getting up and running, and it will presumably have a very different approach to technology legislation and AI regulation than the Trump administration. So it is actually interesting which way it’s going to go. Toward the end of the Trump administration, there was an awful lot of talk about regulating the big tech companies, which was unusual given that it was coming from the Republican side. But, what is your prognostication about what’s going to happen in the next few years under President Biden?

Sonia Katyal: Yeah, that’s a great question, and it’s a question that I think is on the minds of so many of us who are just really excited and looking forward to the possibilities that this new administration brings. One thing that I really do see is a possibility for us to return to the Obama-era concerns, where a lot of different kinds of reporting and analysis went into studying how AI was affecting particular populations. So during the Obama era, you had the drawing up of the Consumer Privacy Bill of Rights that would help consumers, empowering them when it came to their data, when it came to particular concerns about how AI might impact them in credit or housing or employment.

So I do see a possibility of a very strong return to those principles and really putting the consumer at the heart of these initiatives. So no longer thinking about civil rights concerns and consumer concerns as separate, but actually identifying how AI opens up the possibility for us to bring those two concerns together. So that’s the Obama picture, right, where I think that there will be a theme where we’ll go back and pick up on those issues.

There are two big things that I think I would add to suggest that the Biden administration may even go further. And the first is I think that the last year, with the advent of concerns over race, the Black Lives Matter movement, concerns over the need for anti-racist initiatives embedded throughout government, really opens up the possibility of a very strong focus on racial discrimination through the lens of technology. So thinking about how AI-driven systems can leave out racial minorities is something that I expect the new administration and the new hires actually will really heavily focus on, reaching out to racial minorities and figuring out how we can make technology more responsive and more inclusive. That would be the first thing.

The second thing that I also see happening is a real focus on overcoming what is known as the digital divide. So thinking about the need for us to build an actual infrastructure plan that includes broadband for rural communities, that includes broadband for underserved communities. These are two things that I think the Obama administration had been less responsive on, and we can expect the Biden administration to be much more thoughtful about. So thinking about all of the different ways in which government agencies can be responsive.

John Etchemendy: Yeah. That’s really terrific. And so if you were in charge, you would say, first focus on the first and then on broadband access?

Sonia Katyal: I think that I probably would put them both at the same level although-

John Etchemendy: Start with both of them.

Sonia Katyal: Yeah, but I would also say that, thinking about the lens of looking at minorities is something that we could really broaden to not just think about race, but to think about gender, to think about age, to think about class, sexual orientation, gender identity, disability. So all of those kinds of constituencies are constituencies that I think deserve to be included in the promise of new technology. And so thinking about how we can actually ask that of our government, because our government is now in a position to actually see those constituencies, I have to say, it’s a very exciting set of possibilities.

John Etchemendy: Interesting. Yeah. So, let me change to another of your articles. A while back, you wrote an article on the paradox of source code secrecy. Now I’m a logician and I’m interested in what that paradox actually was. Could you say a little bit more about why source code raises AI related concerns?

Sonia Katyal: So, this article is an article that really did in fact attempt to bridge the existing gap between thinking about intellectual property and thinking about artificial intelligence. And one of the big problems that has come up in the context of artificial intelligence is the fact that algorithmic structures, AI-driven structures, are often protected through the legal lens of trade secrecy. So what often happens in the criminal justice system is, you will have a defendant who is convicted because of the results that stem from a particular technology. And when their lawyers try to examine that technology to show that it may be prone to errors in some way, they find themselves stymied by trade secret protections that are operating even in the criminal justice context.

So the article was driven by a desire to recognize that this trend towards trade secrecy is actually a trend that has been put into motion because of, to some extent, the failures of other areas of intellectual property to protect software. So back in the 1970s, when software was first coming onto the horizon, we used trade secret law in the context of contractual agreements, because software was very individualized at the time. As it became more and more mass-market, we started turning to areas of copyright protection and patent protection to protect software and source code.

And while that was really, really promising, because both copyright and patent law are really oriented towards a desire to share that information with the public... So if you file for a patent, you have to share your technology with the office and with the public in order to receive this very high level of protection, and even copyright law is oriented towards publishing your results with the public. But as time has unfolded, both of those structures, copyright and patent law, have in some ways fallen by the wayside. We had a golden age of business method patents, but it led to a number of patents being granted that were of very low quality, with very low levels of inventiveness. And this really, in many ways, muddied the waters of innovation. So we’re back to thinking about trade secrecy as the solution, and that’s a problem.

John Etchemendy: So I’m curious. Are there tweaks that could be made to the patent laws or to the copyright laws that would make it more applicable or usable in the software context?

Sonia Katyal: So, the article takes the position that the age of thinking about software patenting has passed us by, and that trade secret law is now, I think, the default engine that most companies rely on to protect their AI. So, what we want to do is figure out ways, not necessarily to weaken the structures of trade secrecy, but to allow for greater inquiries in cases of public interest. So the article ends by making a bunch of different recommendations, primarily in terms of reforming the court system to allow for more adversarial scrutiny over algorithms and AI-driven systems.

So courts have been doing this actually in the context of intellectual property litigation for decades. There are protective orders, there are all sorts of ways in which we can allow for greater public interest scrutiny. But we just haven’t done so in the criminal justice context, and we haven’t really started to even think about how to build in due process when we think about structures of AI.

So the end result of the article is really driven by a desire to say, okay, if we’re in a world where trade secrecy is the default protection, then let’s keep in mind that much of the source code that we’re protecting isn’t actually that unique to a particular company and doesn’t actually deserve trade secret protection. So let’s figure out ways to allow for more inquiry and for more integration of public interest concerns.

John Etchemendy: I see that you’re also the Chair of the LGBTQ citizenship cluster at Berkeley. And first, I don’t know what that is, and I’m interested in how your interests there relate to your interest in technology and law.

Sonia Katyal: So as much as I cut my teeth on the world of intellectual property and information law, another aspect of the work that I do has always been motivated by thinking about civil rights through a broad lens. Originally, it was motivated because I was one of the few South Asian LGBT identified law professors in the country. I was really motivated by thinking about how to build structures of law that were more inclusive to lesbian and gay communities, bisexual, transgender, all sorts of communities that can be thought of as under the LGBTQ umbrella.

And that work was actually the work that inspired the first law review article that I wrote, which was really thinking about how to build better structures of protection. And so that work has continued, often in parallel with the work that I’ve been doing in intellectual property. I do find that there are lots of intersections between them, and in the last few years, I’ve seen a very clear intersection in terms of the way in which the LGBTQ community has interacted with technology.

And what we see is that, while the world of tech represents an incredible host of opportunities for inclusivity among the LGBTQ community, because you have all sorts of communities connecting through social media and learning how to express themselves through YouTube, there are huge amounts of education for young teens that are struggling with their identity, and there are lots of communities that they can tap into now online, there are also ways in which technology continues to operate in a very binary world. So one of the areas that I’ve really been focused on lately is how AI structures often use a binary system that focuses only on the polarities of male and female, and by doing so, basically erases communities of transgender and non-binary individuals.

And so some of the work that I’ve been doing brings those two areas together by saying, this is a very clear indication of how technology could be designed to be more inclusive. And yet for decades, these technologies have been developed without a moment of thought to the large communities of trans and non-binary individuals. And it’s becoming a community that is larger and larger and deserves and demands social recognition. We’ve seen this last year where the Supreme Court issued a number of decisions that were fairly protective of transgender, non-binary, LGBT identified individuals.

So I think that actually, again, this is a world where we can actually be optimistic. I think that the tech world, at least when it comes to LGBTQ equality, has been a trailblazer. Apple was one of the first companies to offer domestic partner protections. Facebook offered 50 different terms for one’s gender identity and LGBTQ identity. The tech world has been very much at the forefront of thinking about how to be more inclusive. So we’re waiting for a moment where we can have better and more inclusive conversations at the AI level as well.

John Etchemendy: So, Sonia, I can’t tell you how much I’ve enjoyed talking to you today, and I’m delighted you could share your work with us. I hope we can eventually get you down post-pandemic, get you down in person to the Institute.

Sonia Katyal: Well, thank you so much. I have to say that the work that is happening at Stanford is... There is a rivalry between Berkeley and Stanford, but now, when it comes to AI, you guys are really doing wonderful, wonderful things and having really important conversations. And I’m excited to be a part of those whenever they can unfold again.

John Etchemendy: Thank you for saying that, Sonia. I also want to thank our audience for listening in, and if you would like to learn more, we’re going to share links to Sonia’s most recent work wherever you’re listening to this discussion. So you can also find other conversations with leading AI experts on our website at hai.stanford.edu, that’s hai.stanford.edu, or at our YouTube channel. Sonia, thank you again and I look forward to our next conversation.

Sonia Katyal: Thank you. It’s been a real honor. Thank you so much.
