This Stanford Ph.D. candidate and polyglot looks for the common language in pursuing AI for social good.
In this Students of AI series, we ask Stanford students what drew them to this field, their hopes and fears for the technology’s future, and what inspires them.
Meet Sharon Zhou, Ph.D. Computer Science 2021:
My first love was grammar: Latin speaks to me, as do other languages I learned, like French, Greek, and Mandarin. I went to Harvard to study classics, and because I liked science too, I also wanted to find something quantitative. Learning computer science for the first time felt like learning a new language, but I wasn't fully drawn in until I took a course on user experience. In fact, my friends in high school used to make fun of how I used technology; I used to break phones. On the first day of this user experience course, however, the professor told us, 'It's never the user's fault.' That opened up a whole new world: realizing that the technology problems I'd had were not my fault, and that I could design computer systems that didn't make people feel stupid.
Since I enjoyed both classics and computer science, I wanted to pursue a double major, the first of its kind at Harvard. After graduation, I eventually went to Google as a product manager, but people there didn't seem hungry enough; they weren't pushing the boundaries of anything novel. So I applied to the Stanford Ph.D. program and started exploring. I knew that whatever I did would have to serve a positive social good, and that I'd use my classics background to reflect on systemic problems in society. One thing I've worked on is the Black Lives Matter Privacy Tool, which tries to block out the faces in photographs of protestors so they're not pursued for arrest. The team I led was ad hoc, and we weren't supported by anyone. But it's important for the world, so we kept pushing and made it happen.
I'm also interested in generative models. Synthetic examples from generative models can augment the limited and biased real data we have access to. Lack of diversity is a big problem in health care, and a generative model can create more data representing rare cancer patients and ethnic minorities to make up for it; that's one avenue I'm pushing on. In addition to medicine, I also care a lot about climate change. One project here uses generative models to let people see the effects of climate change viscerally, by showing how any street-view image would look undergoing a flood.
I'd like to show the world that working in AI doesn't mean you have to be working on surveillance, or something that's going to be dystopian down the road. You can try to stop that, and even use the same technology to combat itself. But what worries me is that people who care are leaving the field. What it boils down to, I think, is that there are many people who like machines more than humans, and that will shape what they create, right? But there's no way we can stay in separate silos and work successfully on the hardest problems in society; we won't figure things out alone. We have to think about people and the threads that tie us together. We have to speak each other's languages, because that's what it's all about: being able to collaborate and work together on some of the really important problems facing us today. Per aspera ad astra — through hardships to the stars — together.
— Story as told to Beth Jensen.