Sharon Zhou: We Can’t Solve Society’s Problems While Working in Silos | Stanford HAI

Date
January 04, 2021
Topics
Design, Human-Computer Interaction
Natural Language Processing
Machine Learning

This Stanford Ph.D. candidate and polyglot looks for the common language in pursuing AI for social good.

In this Students of AI series, we ask Stanford students what drew them to this field, their hopes and fears for the technology’s future, and what inspires them.

Meet Sharon Zhou, Ph.D. Computer Science 2021:

My first love was grammar: Latin speaks to me, as do the other languages I've learned, like French, Greek, and Mandarin. I went to Harvard to study classics, and because I liked science too, I also wanted to find something quantitative. Learning computer science for the first time felt like learning a new language, but I wasn't completely hooked until I took a course on user experience. In fact, my friends in high school used to make fun of me for how I used technology; I used to break phones. On the first day of this user experience course, however, the professor told us, "It's never the user's fault." This was a whole new world, I thought: to realize the technology problems I'd had were not my fault, and that I could design computer systems that didn't make people feel stupid.

Since I enjoyed both the classics and computer science, I wanted to pursue a double major, and it was the first of its kind at Harvard. After graduation, I eventually went to Google as a product manager, but people there didn't seem hungry enough; they weren't pushing the boundaries of anything novel. So I applied to the Stanford PhD program, and I started exploring. I knew that whatever I did would have to serve a positive social good, and I'd use my classics background to reflect on systemic problems in society. One thing I've worked on is the Black Lives Matter Privacy Tool, which tries to block out the faces in photographs of protestors so they're not pursued for arrest. The team I led was ad hoc, and we weren't supported by anyone. But it's important for the world, so we kept pushing and made it happen.

I'm also interested in generative models. Synthetic examples from generative models can augment the limited and biased real data we have access to. Data scarcity is a big problem in health care, but a generative model can create more data on rare cancer patients and ethnic minorities to make up for the lack of diversity, and this is one avenue I'm pushing on. In addition to medicine, I also care a lot about climate change. One project here uses generative models to let people see the effects of climate change viscerally, by showing any street-view image undergoing a flood.

I’d like to show the world that working in AI doesn’t mean you have to be working on surveillance, or something that’s going to be dystopian down the road. You can try and stop that, and even use the same technology to combat itself. But what worries me is that people who care are leaving the field. What it boils down to, I think, is there are many people who like machines more than humans, and that will drive a different type of thing they create, right? But there’s no way we can be in different silos and work successfully on the hardest problems in society; we won’t figure things out alone. We have to think about people and the threads that tie us together. We have to speak each other’s languages, because that’s what it’s all about: to be able to collaborate and work together on some of the really important problems facing us today. Per aspera ad astra — through hardships to the stars — together.

— Story as told to Beth Jensen.


Contributor(s)
Sharon Zhou