HAI Weekly Seminar with Garance Burke - Steering Journalism Towards Data Science | Stanford HAI
Seminar

Status: Past
Date: Friday, February 21, 2020, 11:00 AM - 12:00 PM PST
Abstract: Algorithmic tools are transforming our daily lives, but journalism is still playing catch-up. As in other times of global transition, news consumers are anxious that artificial intelligence will overtake human abilities, and they question whether these systems will take our jobs, amplify racial bias, or compromise our privacy. As one of the few technically trained data journalists, I can see that most newsrooms lack the training to understand how algorithms work, let alone how they are deployed to guide crucial decisions in hiring, banking, criminal justice, and medicine. And the rapidly expanding field of algorithmic accountability reporting has yet to be codified in simple terms that most reporters can understand. Naturally, this leads to questions: How can we ensure that reporters ask the right questions? How can a larger group of journalists access work examining the technology's impacts on society? How can we encourage nuanced journalism about AI that accurately reflects the state of the science? As an inaugural 2020 Institute for Human-Centered Artificial Intelligence-John S. Knight Journalism fellow, I am developing a new set of journalistic best practices to provide reporters and editors with scientifically rigorous standards for algorithmic accountability reporting.

Bio: Garance Burke is an investigative journalist who applies her training in statistical analysis to reveal vital truths in the public interest. Often driven by data, her work for The Associated Press on topics ranging from immigration to cybersecurity has helped shape presidential elections, inspire congressional hearings, and spark federal investigations. As an inaugural 2020 Institute for Human-Centered Artificial Intelligence-John S. Knight Journalism fellow, she is deepening her data science skills to draft standards that will help train more reporters to produce deeper stories about the algorithmic systems they encounter on their beats.
In 2019, her stories were honored as a finalist for the Pulitzer Prize in national reporting and the Anthony Shadid Award for Journalism Ethics, and received the Robert F. Kennedy Journalism Award and the National Press Club Award for Diplomatic Correspondence. Burke began her career at the Mexican financial newspaper El Financiero, then worked in Mexico City for The Washington Post and The Boston Globe. She received dual master’s degrees from the University of California, Berkeley’s Goldman School of Public Policy and Graduate School of Journalism, where she has taught as a lecturer in basic data journalism. 

Related Events

Arvind Narayanan | Adapting to the Transformation of Knowledge Work
May 18, 2026, 12:00 PM - 1:00 PM

The possibility that AI will automate most cognitive labor is worth taking seriously. How should we adapt to this transformation? I start from the perspective, articulated in the essay “AI as normal technology”, that the true bottlenecks lie downstream of capabilities and that AI’s impacts will unfold gradually over decades. If this is true, there are major gaps in our current evidence infrastructure, because it over-emphasizes the capability layer.


Inside the 2026 AI Index Report
Seminar
May 20, 2026, 12:00 PM - 1:15 PM

The AI Index, currently in its ninth year, tracks, collates, distills, and visualizes data relating to artificial intelligence.


Eyck Freymann | AI and Strategic Stability: A Framework for U.S.–China Technology Competition
Seminar
May 27, 2026, 12:00 PM - 1:15 PM

Strategic stability exists when neither side thinks it can improve its strategic outcome by striking first.
