An Optimization Algorithm is a method that helps a machine learning model improve by adjusting its parameters to reduce its errors. It works during training by repeatedly making small updates that increase the model's accuracy.
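The idea above can be sketched with gradient descent, the most widely used optimization algorithm in machine learning: each step nudges a parameter in the direction that reduces the error. This is a minimal illustrative sketch; the function names and values (`learning_rate`, `steps`, the example error function) are assumptions for the demo, not from the glossary entry.

```python
def gradient_descent(start, gradient, learning_rate=0.1, steps=100):
    """Repeatedly move a parameter opposite its gradient to shrink the error."""
    x = start
    for _ in range(steps):
        x = x - learning_rate * gradient(x)  # small update toward lower error
    return x

# Minimize the error function f(x) = (x - 3)**2, whose gradient is 2 * (x - 3).
# The "setting" x starts at 0 and is repeatedly adjusted toward the best value, 3.
best = gradient_descent(start=0.0, gradient=lambda x: 2 * (x - 3))
```

Each update is small, but over many training steps the parameter converges to the value that minimizes the error, which is exactly what happens (at much larger scale) when a neural network is trained.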

How early cognitive research funded by the NSF paved the way for today’s AI breakthroughs—and how AI is now inspiring new understandings of the human mind.

AI driven by deep learning is transforming many aspects of science and technology. The enormous success of deep learning stems from its unique capability of extracting essential features from Big Data for decision-making. However, the feature extraction and hidden representations in deep neural networks (DNNs) remain inexplicable, primarily because of a lack of technical tools to comprehend and interrogate the feature space data. The main hurdle here is that the feature data are often noisy in nature, complex in structure, and huge in size and dimensionality, making it intractable for existing techniques to analyze the data reliably. In this work, we develop a computational framework named contrastive feature analysis (CFA) to facilitate the exploration of the DNN feature space and improve the performance of AI. By utilizing the interaction relations among the features and incorporating a novel data-driven kernel formation strategy into the feature analysis pipeline, CFA mitigates the limitations of traditional approaches and provides an urgently needed solution for the analysis of feature space data. The technique allows feature data exploration in unsupervised, semi-supervised and supervised formats to address different needs of downstream applications. The potential of CFA and its applications for pruning of neural network architectures are demonstrated using several state-of-the-art networks and well-annotated datasets across different disciplines.

Experts from psychology, neuroscience, and AI settle a seemingly intractable historical debate in neuroscience — opening a world of possibilities for using AI to study the brain.


New research using artificial intelligence suggests that number sense in humans may be learned, rather than innate. This tool may help us understand mathematical disabilities.

Parallelization Techniques for Verifying Neural Networks