Sharon Zhou: We Can’t Solve Society’s Problems While Working in Silos | Stanford HAI

Date
January 04, 2021
Topics
Design, Human-Computer Interaction
Natural Language Processing
Machine Learning

This Stanford Ph.D. candidate and polyglot looks for the common language in pursuing AI for social good.

In this Students of AI series, we ask Stanford students what drew them to this field, their hopes and fears for the technology’s future, and what inspires them.

Meet Sharon Zhou, Ph.D. Computer Science 2021:

My first love was grammar: Latin speaks to me, as do other languages I learned, like French, Greek, and Mandarin. I went to Harvard to study classics, and because I liked science too, I also wanted to find something quantitative. Learning computer science for the first time felt like learning a new language, but I wasn’t fully hooked until I took a course on user experience. In fact, my friends in high school used to make fun of me for the way I used technology; I used to break phones. On the first day of that user experience course, however, the professor told us, ‘It’s never the user’s fault.’ This was a whole new world, I thought: to realize that the technology problems I’d had were not my fault, and that I could design computer systems that didn’t make people feel stupid.

Since I enjoyed both the classics and computer science, I wanted to pursue a double major, and it was the first of its kind at Harvard. After graduation, I eventually went to Google as a product manager, but people didn’t seem hungry enough; they weren’t pushing the boundaries of anything novel. So I applied to the Stanford PhD program and started exploring. I knew that whatever I did would have to serve a positive social good, and that I’d use my classics background to reflect on systematic problems in society. One thing I’ve worked on is the Black Lives Matter Privacy Tool, which tries to block out the faces in photographs of protestors so they’re not pursued for arrest. The team I led was ad hoc, and we weren’t supported by anyone. But the work is important for the world, so we kept pushing and made it happen.

I’m also interested in generative models. Generative models can synthesize new examples to supplement the limited and biased real data we have access to. Limited data is a big problem in health care, but a generative model can create more data on rare cancer patients and ethnic minorities to make up for the lack of diversity, and this is one avenue I’m pushing on. In addition to medicine, I also care a lot about climate change. One project here uses generative models to let people see the effects of climate change viscerally, by showing any street view image undergoing a flood.

I’d like to show the world that working in AI doesn’t mean you have to be working on surveillance, or something that’s going to be dystopian down the road. You can try and stop that, and even use the same technology to combat itself. But what worries me is that people who care are leaving the field. What it boils down to, I think, is there are many people who like machines more than humans, and that will drive a different type of thing they create, right? But there’s no way we can be in different silos and work successfully on the hardest problems in society; we won’t figure things out alone. We have to think about people and the threads that tie us together. We have to speak each other’s languages, because that’s what it’s all about: to be able to collaborate and work together on some of the really important problems facing us today. Per aspera ad astra — through hardships to the stars — together.

— Story as told to Beth Jensen.


Contributor(s)
Sharon Zhou