HAI Weekly Seminar with Subutai Ahmad - Sparsity in the neocortex, and its implications for machine learning
Most deep learning networks today rely on dense representations. This is in stark contrast to our brains, which are extremely sparse.
In this talk, Subutai will first discuss what is known about the sparsity of activations and connectivity in the neocortex. He will also summarize new experimental data around active dendrites, branch-specific plasticity, and structural plasticity, each of which has surprising implications for how we think about sparsity. In the second half of the talk, Subutai will discuss how these insights from the brain can be applied to practical machine learning applications. He will show how sparse representations can give rise to improved robustness, continuous learning, powerful unsupervised learning rules, and improved computational efficiency.
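As a rough illustration of the contrast between dense and sparse representations, the sketch below enforces activation sparsity with a simple k-winners-take-all rule: only the k largest units stay active and the rest are zeroed, loosely analogous to the small fraction of neocortical neurons active at any moment. This is a generic illustrative example, not the specific method presented in the talk; the function name `k_winners` and the toy values are assumptions for demonstration.

```python
import numpy as np

def k_winners(x, k):
    """Keep the k largest activations and zero the rest.

    A simple way to enforce a sparse activity pattern in an
    artificial neural network layer (illustrative only).
    """
    out = np.zeros_like(x)
    top = np.argpartition(x, -k)[-k:]  # indices of the k largest values
    out[top] = x[top]
    return out

# A dense activation vector: every unit carries some value.
dense = np.array([0.1, 0.9, 0.3, 0.7, 0.05, 0.6])

# A sparse version: only the 2 strongest units remain active.
sparse = k_winners(dense, k=2)
```

Here `sparse` keeps only the activations 0.9 and 0.7, with all other entries set to zero; increasing the layer size while holding k fixed makes the representation progressively sparser, the regime the talk argues the neocortex operates in.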