
AI May Help Ensure the Medical Privacy of Adolescent Patients

Date
October 10, 2022
Topics
Healthcare
Machine Learning

Natural language processing assists pediatricians in drawing the fine line between a teen’s right to privacy and a parent’s right to know.

Two laws, one new, the other on the books since the 1980s, have complicated the lives of pediatricians. The federal government’s 21st Century Cures Act, among its many mandates, requires that physicians nationwide make available to patients their complete electronic medical records. With the click of a mouse, all personal health information, test results, prescribed medications, and clinical notes must be accessible digitally for patient review.

Meanwhile, a confidentiality law in California simultaneously demands that pediatricians protect the privacy of their adolescent patients. That is, by law they must not divulge to parents certain details about their dependent child’s mental health, sexual history, drug use, and other confidential information.

“Most states in the country have some form of confidentiality laws, so there is a conflict between the full disclosure in the federal law and the privacy protections of the state laws,” says Keith Morse, a pediatrician who leads the Translational Data Science at Packard (TDaP) program. TDaP is specifically charged with exploring the rapidly emerging applications of machine learning to clinical care at Lucile Packard Children’s Hospital.

What qualifies as confidential for a given patient is left up to the doctor’s interpretation, and therein lies a problem. Doctors often disagree about such matters, and now those subtle distinctions carry not only medical consequences but legal consequences as well.

Into that maw have leapt Morse and a multidisciplinary team of medical doctors, bioinformatics experts, and computer scientists from Stanford University, who have tapped artificial intelligence to help pediatricians flag potentially confidential information in their clinical notes for closer review and ensure compliance with both laws. The authors introduced their algorithm in a paper submitted for publication in a peer-reviewed journal, and it is already in use by pediatricians in the Stanford Children’s Health community.

The fundamental issue is trust.

“We want the patients, in this case teens, to feel that they can be fully open and honest with their doctors without worrying that their parents might learn personal details they would rather their parents not know,” Morse says. “But the parents have an interest in their child’s medical history, too.”

The algorithm does not redact confidential information from the medical record, but rather uses natural language processing to compare clinical note content against a database to “flag” phrases, sentences, and paragraphs that it thinks might be at risk of violating confidentiality laws. The algorithm then calculates a “risk score” to help physicians make their confidentiality determinations, improve compliance with the applicable laws, and expedite the manual review process.
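
The article does not describe the model behind this flagging, but the behavior it describes, comparing note content against a reference of confidential topics and assigning a risk score to each span, can be sketched roughly as follows. Everything here is a hypothetical stand-in: the CONFIDENTIAL_TERMS list, the flag_note function, and the 0.5 threshold are illustrative, not the study’s implementation, and in the real system a learned natural language processing model would replace the simple keyword lookup.

import re
from dataclasses import dataclass

# Toy reference list standing in for the database of confidential topics.
CONFIDENTIAL_TERMS = {
    "contraception": 0.9,
    "depression screen": 0.7,
    "substance use": 0.8,
    "sexually active": 0.9,
}

@dataclass
class FlaggedSpan:
    sentence: str
    risk_score: float

def flag_note(note_text: str, threshold: float = 0.5) -> list[FlaggedSpan]:
    """Return sentences whose estimated confidentiality risk meets the threshold."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", note_text.strip()):
        hits = [w for term, w in CONFIDENTIAL_TERMS.items() if term in sentence.lower()]
        risk = max(hits, default=0.0)  # crude stand-in for a learned model's score
        if risk >= threshold:
            flagged.append(FlaggedSpan(sentence, risk))
    return flagged

if __name__ == "__main__":
    note = ("Patient reports good sleep. Discussed substance use and referred "
            "to counseling. Vaccinations are up to date.")
    for span in flag_note(note):
        print(f"{span.risk_score:.2f}  {span.sentence}")

The key point the sketch captures is that nothing is removed from the record; the output is a set of flagged spans with scores that a physician then reviews.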

“It’s a very fine needle we have to thread here,” adds Naveed Rabbani, an engineer turned pediatrician at Stanford Medicine and lead author of the study. “Artificial intelligence can help make that key determination between what is confidential and what is necessary to provide the patient the very best medical care we can offer.”

Training the System

To build their algorithm and dataset, the researchers enlisted a team of five physicians to evaluate a randomly selected trove of clinical notes from 1,200 adolescent patients ages 12 through 17. The physicians, who had been specifically trained in the nuances of California’s adolescent confidentiality laws, annotated the dataset based on that knowledge. In their analysis, the researchers estimate that slightly more than 20 percent of clinical notes for adolescents contain confidential information, indicating the issue is worthy of concern.

The 1,200 records were distributed evenly among the five doctors, with some overlap. Not all doctors agree on what constitutes “confidential” information, and the overlap helped the AI research team identify the interpretive gray areas.
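
The article does not say how the team quantified agreement on the overlapping notes, but a common choice for this kind of two-annotator overlap is a chance-corrected statistic such as Cohen’s kappa. The sketch below, with invented labels, is illustrative only and is not drawn from the study.

from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators (1 = confidential, 0 = not)."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(labels_a) | set(labels_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# Two physicians labeling the same ten overlapping notes (invented data).
doc_a = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
doc_b = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
print(f"kappa = {cohen_kappa(doc_a, doc_b):.2f}")  # ~0.58 on this toy example

A kappa well below 1 on the overlapping notes is exactly the signal that interpretive gray areas exist, which the team could then examine directly.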

[Illustration: example text from a medical record]

The algorithm then evaluated new, never-before-seen clinical notes, calculating a risk score for potential concerns and flagging content for the doctor’s review.

“The biggest surprise to me in this research was just how hard it is to determine confidentiality,” Rabbani says. “The best care and patient disclosure are sometimes contradictory goals. That’s where AI really helps.”

The Algorithm’s Future

The researchers anticipate that such an algorithm, if it were to achieve wider use, would work in real time as the doctor is entering notes, flagging content for immediate categorization as the notes are being transcribed.

With potential conflicts flagged and graded by confidentiality risk, the physician can quickly make an informed determination and either leave the note open to both parent and child or isolate confidential information in a special note type that appears only when the minor patient reviews the electronic health record.
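
As a rough illustration of that anticipated workflow, the sketch below routes physician-confirmed sections of a note into a restricted visibility setting. The NoteSection class, the visibility labels, and the 0.5 cutoff are all hypothetical; no particular electronic health record interface is implied.

from dataclasses import dataclass, field

@dataclass
class NoteSection:
    text: str
    risk_score: float
    visibility: str = "parent_and_patient"  # default: open to both

@dataclass
class ClinicalNote:
    sections: list = field(default_factory=list)

def route_flagged_sections(note, physician_confirms, threshold=0.5):
    """Restrict the flagged sections that the physician confirms as confidential."""
    for section in note.sections:
        if section.risk_score >= threshold and physician_confirms(section):
            section.visibility = "patient_only"

# Example: a callback standing in for the physician's review decision.
note = ClinicalNote([
    NoteSection("Vaccinations up to date.", risk_score=0.1),
    NoteSection("Discussed contraception options.", risk_score=0.9),
])
route_flagged_sections(note, physician_confirms=lambda section: True)
print([(s.text, s.visibility) for s in note.sections])

The design choice the article emphasizes carries through here: the algorithm only proposes, and the physician makes the final call on each flagged section.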

At this point, however, the resulting algorithm is a proof-of-concept prototype that demonstrates both AI’s skill at discerning nuances in written language and its ability to expedite the physician’s annotation process while helping physicians abide by legal confidentiality requirements.

One impediment to wider use is that the dataset was based on clinical notes culled from the Stanford medical community. The researchers anticipate that, to be reliable universally, future versions would need to be trained on notes from many other institutions, as note-taking style varies from hospital to hospital, just as confidentiality laws vary from state to state.

On the upside, they believe their results indicate their AI methods might be transferable to other areas of medicine, notably reproductive health in light of the recent ruling overturning abortion protections, but also adult histories of HIV infection, sickle cell anemia, and substance use, among others.

Stanford HAI's mission is to advance AI research, education, policy, and practice to improve the human condition.

Contributor(s)
Andrew Myers
