What is a Neural Network? | Stanford HAI
What is a Neural Network?

A Neural Network is a computational model inspired by the structure of the human brain, consisting of interconnected layers of artificial "neurons" that process and transmit information. Each neuron receives inputs, computes a weighted sum of them, applies an activation function, and passes the result to neurons in the next layer. Neural Networks are the foundation of deep learning and power many modern AI applications, from facial recognition to voice assistants to self-driving cars.
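The weighted-sum-and-activation step described above can be sketched in a few lines of Python with NumPy. This is an illustrative toy, not code from HAI or any particular framework; the layer sizes and random weights are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common activation function: keeps positive values, zeros out negatives.
    return np.maximum(0.0, x)

# A tiny two-layer network: 3 inputs -> 4 hidden neurons -> 1 output.
W1 = rng.normal(size=(3, 4))   # weights, input -> hidden
b1 = np.zeros(4)               # biases, hidden layer
W2 = rng.normal(size=(4, 1))   # weights, hidden -> output
b2 = np.zeros(1)

def forward(x):
    # Each layer: weighted sum of the inputs, then an activation function.
    hidden = relu(x @ W1 + b1)
    # The result is passed on to the next layer (no activation on the output here).
    return hidden @ W2 + b2

x = np.array([0.5, -1.0, 2.0])  # one example input vector
y = forward(x)
print(y.shape)  # -> (1,): a single output value
```

In a real network these weights would be learned from data (for example by gradient descent), and the layers would be far larger, but the per-neuron computation is exactly this.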


Neural Networks mentioned at Stanford HAI

Explore Similar Terms:

Deep Learning | Transformer | Weights

From Brain to Machine: The Unexpected Journey of Neural Networks
Katharine Miller | Nov 18 | News

How early cognitive research funded by the NSF paved the way for today’s AI breakthroughs—and how AI is now inspiring new understandings of the human mind.

Deciphering the Feature Representation of Deep Neural Networks for High-Performance AI
Tauhidul Islam, Lei Xing | Aug 01 | Research

AI driven by deep learning is transforming many aspects of science and technology. The enormous success of deep learning stems from its unique capability of extracting essential features from Big Data for decision-making. However, the feature extraction and hidden representations in deep neural networks (DNNs) remain inexplicable, primarily because of lack of technical tools to comprehend and interrogate the feature space data. The main hurdle here is that the feature data are often noisy in nature, complex in structure, and huge in size and dimensionality, making it intractable for existing techniques to analyze the data reliably. In this work, we develop a computational framework named contrastive feature analysis (CFA) to facilitate the exploration of the DNN feature space and improve the performance of AI. By utilizing the interaction relations among the features and incorporating a novel data-driven kernel formation strategy into the feature analysis pipeline, CFA mitigates the limitations of traditional approaches and provides an urgently needed solution for the analysis of feature space data. The technique allows feature data exploration in unsupervised, semi-supervised and supervised formats to address different needs of downstream applications. The potential of CFA and its applications for pruning of neural network architectures are demonstrated using several state-of-the-art networks and well-annotated datasets across different disciplines.

How Artificial Neural Networks Help Us Understand Neural Networks in the Human Brain
Andrew Myers | Jul 27 | News

Experts from psychology, neuroscience, and AI settle a seemingly intractable historical debate in neuroscience — opening a world of possibilities for using AI to study the brain.

Neural Networks Help Us Understand How the Brain Recognizes Numbers
Grace Huckins | Jul 13 | News

New research using artificial intelligence suggests that number sense in humans may be learned, rather than innate. This tool may help us understand mathematical disabilities.

Parallelization Techniques for Verifying Neural Networks
Haoze Wu, Alex Ozdemir, Aleksandar Zeljić, Kyle Julian, Ahmed Irfan, Divya Gopinath, Sadjad Fouladi, Guy Katz, Corina Pasareanu, Clark Barrett | Dec 10 | Research
State of the Art on Neural Rendering
A. Tewari, O. Fried, J. Thies, V. Sitzmann, S. Lombardi, K. Sunkavalli, R. Martin-Brualla, T. Simon, J. Saragih, M. Nießner, R. Pandey, S. Fanello, G. Wetzstein, J.-Y. Zhu, C. Theobalt, M. Agrawala, E. Shechtman, D. B. Goldman, M. Zollhöfer | Nov 27 | Research