Kilian Pohl | Stanford HAI


Kilian Pohl

Associate Professor (Research), Psychiatry and Behavioral Sciences

External Bio

My lab's research focuses on computational neuroscience aimed at identifying biomedical phenotypes that improve the mechanistic understanding, diagnosis, and treatment of neuropsychiatric disorders.


Latest Related to Kilian Pohl

Research

Multi-Label, Multi-Domain Learning Identifies Compounding Effects of HIV and Cognitive Impairment

Jiequan Zhang, Qingyu Zhao, Ehsan Adeli, Adolf Pfefferbaum, Edith Sullivan, Robert Paul, Victor Valcour, Kilian Pohl
Mar 20


Research

Deep Parametric Mixtures for Modeling the Functional Connectome

Nicolas Honnorat, Adolf Pfefferbaum, Edith Sullivan, Kilian Pohl
Dec 23


Research

Inpainting Cropped Diffusion MRI using Deep Generative Models

Rafi Ayub, Qingyu Zhao, M.J. Meloy, Edith Sullivan, Adolf Pfefferbaum, Ehsan Adeli, Kilian Pohl
Dec 12


All Related

Representation Learning with Statistical Independence to Mitigate Bias
Ehsan Adeli, Qingyu Zhao, Adolf Pfefferbaum, Edith Sullivan, Fei-Fei Li, Juan Carlos Niebles, Kilian Pohl
Dec 03, 2020
Research

The presence of bias (in datasets or tasks) is inarguably one of the most critical challenges in machine learning and has fueled pivotal debates in recent years. Such challenges range from spurious associations between variables in medical studies to racial bias in gender- or face-recognition systems. Controlling for all types of bias at the dataset curation stage is cumbersome and sometimes impossible. The alternative is to use the available data and build models that incorporate fair representation learning. In this paper, we propose such a model based on adversarial training with two competing objectives: learn features that have (1) maximum discriminative power with respect to the task and (2) minimal statistical mean dependence on the protected (bias) variable(s). Our approach incorporates a new adversarial loss function that encourages a vanishing correlation between the bias and the learned features. We apply the method to synthetic data, medical images (containing task bias), and a dataset for gender classification (containing dataset bias). Our results show that the features learned by our method not only yield superior prediction performance but are also unbiased.
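
To make the paper's two competing objectives concrete, the sketch below shows one way such adversarial de-biasing can be wired up in PyTorch: a bias-prediction head is trained to recover the protected variable from the learned features, while the encoder and task head are trained to solve the task and simultaneously drive the correlation between that bias prediction and the true protected variable toward zero. This is an illustrative reconstruction from the abstract only, not the authors' released implementation; the layer sizes, the squared-Pearson surrogate for "statistical mean dependence," and the weight lam are assumptions.

    # Illustrative sketch of adversarial bias mitigation via a correlation
    # penalty (reconstructed from the abstract; not the authors' code).
    # Layer sizes, the squared-Pearson surrogate, and `lam` are assumptions.
    import torch
    import torch.nn as nn

    def squared_pearson(u, v, eps=1e-8):
        """Squared Pearson correlation between two 1-D tensors."""
        u = u - u.mean()
        v = v - v.mean()
        return (u * v).sum().pow(2) / (u.pow(2).sum() * v.pow(2).sum() + eps)

    encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 16))
    task_head = nn.Linear(16, 2)   # predicts the task label
    bias_head = nn.Linear(16, 1)   # adversary: predicts the protected variable

    opt_main = torch.optim.Adam(
        list(encoder.parameters()) + list(task_head.parameters()), lr=1e-3)
    opt_adv = torch.optim.Adam(bias_head.parameters(), lr=1e-3)
    ce = nn.CrossEntropyLoss()
    lam = 1.0  # weight of the de-biasing term (assumed)

    def train_step(x, y, b):
        # (1) Adversary step: learn to recover the protected variable b from
        # frozen features by maximizing its correlation with the prediction.
        with torch.no_grad():
            z = encoder(x)
        adv_loss = -squared_pearson(bias_head(z).squeeze(-1), b)
        opt_adv.zero_grad()
        adv_loss.backward()
        opt_adv.step()

        # (2) Main step: keep task accuracy high while pushing the bias
        # prediction toward zero correlation with the true protected variable.
        z = encoder(x)
        task_loss = ce(task_head(z), y)
        debias_loss = squared_pearson(bias_head(z).squeeze(-1), b)
        loss = task_loss + lam * debias_loss
        opt_main.zero_grad()
        loss.backward()
        opt_main.step()
        return task_loss.item(), debias_loss.item()

Alternating the two steps on each mini-batch pits the bias predictor against the encoder, which is the usual training recipe for adversarial fairness objectives of this kind.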


Machine Learning
Research