
Unlocking New Frontiers: AI and the Sciences

At Stanford HAI’s recent fall conference, scholars showed how artificial intelligence is opening up new approaches to scientific discovery.


Photo: Christine Baker

From top left, clockwise: Aditi Sheshadri, an assistant professor in Stanford's Earth System Science department; MIT PhD student Pratyusha Sharma; and computer scientist Alex Rives speak at the recent New Horizons in Generative AI conference at Stanford HAI. 

In the ever-evolving landscape of artificial intelligence, Stanford HAI’s fall conference "New Horizons in Generative AI: Science, Creativity, and Society" illuminated the profound impact of AI on scientific exploration. While generative AI for vision and language has garnered public attention, the conference delved deeper, spotlighting the broad spectrum of generative AI research, from applications in the sciences and creative disciplines to societal implications.

The first session of the day focused on how AI provides new windows into the natural world to enhance human understanding. At the forefront were speakers pushing the boundaries of what AI can achieve in their respective fields: Aditi Sheshadri, an assistant professor in Stanford's Earth System Science department, guided the audience through the intricate realm of climate modeling, using AI to navigate the uncertainties inherent in climate projections. MIT PhD student Pratyusha Sharma highlighted Project CETI, research that uses machine learning to decode the language of sperm whales. And computer scientist Alex Rives explored the language of proteins using cutting-edge language models. By treating protein sequences as linguistic codes, Rives demonstrated how AI, particularly transformers, can decode the intricate structures and functions encoded within these sequences.

These speakers emphasized the transformative power of AI in reshaping our understanding of science and the world around us. Watch the full session here, or read the highlights below. 

Modeling Earth’s Climate 

Sheshadri, an assistant professor in Stanford's Earth System Science department with a background in both machine learning and atmospheric sciences, highlighted the uncertainty in climate projections and the limitations of current climate models.

In recent research, she focused on the specific problem of atmospheric gravity waves, which play a significant role in Earth's climate. These waves, generated by processes like storms and air movement over mountains, are challenging to model because of their multiscale nature: they can range from a meter to 100 kilometers in size. Current climate models struggle to accurately represent these waves, leading to uncertainties in climate projections.

Sheshadri presented two approaches her research team has taken to improve atmospheric gravity wave modeling. The first involves replacing the traditional parameterization with a neural network called WaveNet. This AI-based model demonstrated promising results in simulating atmospheric gravity waves.
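To make the idea concrete, here is a minimal sketch of what a neural-network gravity wave parameterization might look like in PyTorch. The architecture, layer sizes, and inputs (wind and temperature profiles for a single atmospheric column) are illustrative assumptions, not the published WaveNet design.

```python
import torch
import torch.nn as nn

N_LEVELS = 40  # number of vertical model levels (assumed)

class GravityWaveNet(nn.Module):
    """Maps one atmospheric column to the gravity wave drag at each level."""
    def __init__(self, n_levels=N_LEVELS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_levels, 256),  # wind + temperature profiles
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, n_levels),      # predicted drag per level
        )

    def forward(self, wind, temperature):
        x = torch.cat([wind, temperature], dim=-1)
        return self.net(x)

model = GravityWaveNet()
wind = torch.randn(8, N_LEVELS)          # a batch of 8 columns (toy data)
temperature = torch.randn(8, N_LEVELS)
drag = model(wind, temperature)          # shape (8, N_LEVELS)
```

In a climate model, a network like this would be called once per column per time step in place of the hand-tuned parameterization, after being trained to reproduce a trusted reference.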

The second approach retains the physics-based representation of atmospheric gravity waves but incorporates uncertainty quantification using AI-based tools. Sheshadri explained how ensemble-based inversion and Gaussian process emulators help calibrate and quantify uncertainties in the parameters governing atmospheric gravity wave processes.
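The emulator idea can be sketched with scikit-learn: fit a Gaussian process to a handful of expensive parameterization runs, then query it cheaply, with uncertainty estimates, across parameter space. The toy one-parameter "parameterization" below is a stand-in, not the actual gravity wave scheme.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_parameterization(theta):
    # Toy stand-in: a scalar summary statistic of a simulation run.
    return np.sin(3 * theta) + 0.5 * theta

rng = np.random.default_rng(0)
theta_train = rng.uniform(0, 2, size=(12, 1))  # sampled parameter values
y_train = expensive_parameterization(theta_train).ravel()

# Fit the Gaussian process emulator to the 12 expensive runs.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
gp.fit(theta_train, y_train)

# The emulator returns a prediction *and* an uncertainty estimate at any
# parameter value, which is what makes it useful for calibration.
theta_query = np.linspace(0, 2, 50).reshape(-1, 1)
mean, std = gp.predict(theta_query, return_std=True)
```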

Sheshadri concluded her talk by introducing DataWave, a collaborative gravity wave research project that integrates AI, climate models, and observational data for a comprehensive approach to climate research.

Decoding the Language of Whales

Sperm whales are known for their complex social structures and communication using clicks. Presenting new research from Project CETI, Sharma showed how machine learning techniques and data collected in the Caribbean could help us understand and decode this marine mammal’s communication.

Collecting data on sperm whales is no easy feat: they live deep in the ocean, often at depths of complete darkness. The team used tagging technology, affixed to whales when they surfaced, to record whale sounds and behaviors, although that data, Sharma said, was limited.

Her team’s research revealed that sperm whale communication is more complex than previously thought. Instead of a fixed set of codas, they identified a combinatorial coding system with four features (tempo, rhythm, ornamentation, and rubato) that can be independently varied, resulting in a more expressive communication system. Sharma showcased visualizations, such as the Exchange plot, to illustrate the variability and structure within whale conversations.
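One way to picture the combinatorial code: each coda is described by its value along four independent axes, so the size of the repertoire is the product of the number of options on each axis. The sketch below makes that explicit; the discretizations chosen here are placeholders, not the study's actual feature values.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Coda:
    tempo: int           # overall duration class of the click train
    rhythm: int          # pattern of inter-click intervals
    ornamentation: bool  # presence of an extra "ornament" click
    rubato: int          # gradual stretching/shrinking across codas

# Because the four features vary independently, the repertoire is combinatorial:
tempos, rhythms, ornaments, rubatos = range(5), range(18), (False, True), range(3)
repertoire = [Coda(t, r, o, u)
              for t, r, o, u in product(tempos, rhythms, ornaments, rubatos)]
print(len(repertoire))  # 5 * 18 * 2 * 3 = 540 distinct combinations
```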

Sharma also touched upon the use of predictive models to understand the structure of whale calls. She found that the models' predictions improved with increasing input context size, evidence of long-range dependencies in the call structure. She also saw predictions improve as the model's expressivity increased.
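The measurement behind this finding can be illustrated as follows: hold an autoregressive model fixed and evaluate its next-coda prediction loss while varying how many preceding codas it sees. Everything below (the vocabulary size, the tiny GRU, the random stand-in data) is a hypothetical setup showing the procedure, not Project CETI's actual models or data.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB = 32                               # number of discrete coda types (assumed)
data = torch.randint(0, VOCAB, (1000,))  # stand-in for a recorded coda sequence

class TinyCodaLM(nn.Module):
    """Minimal GRU language model over coda symbols."""
    def __init__(self, vocab=VOCAB, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.out(h)

model = TinyCodaLM().eval()
loss_fn = nn.CrossEntropyLoss()

# Evaluate next-coda prediction loss as a function of context length k;
# lower loss at larger k is evidence of long-range dependencies.
for k in (1, 4, 16, 64):
    windows = data.unfold(0, k + 1, 1)    # sliding windows of k+1 codas
    ctx, target = windows[:, :-1], windows[:, -1]
    with torch.no_grad():
        logits = model(ctx)[:, -1, :]     # prediction after seeing k codas
    print(k, loss_fn(logits, target).item())
```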

“This will perhaps get us closer to understanding the meanings of the sounds of whales and maybe even allow us to communicate back with them at some point,” Sharma said. “We hope that algorithms and approaches we develop in the course of this project will empower us to better understand the other species that we share the planet with.”

Unraveling Protein Structures

In the final talk of the session, computer scientist and entrepreneur Rives explored the application of language models in the field of biology, specifically focusing on proteins and their sequences. Rives, who holds bachelor's degrees in philosophy and biology from Yale and a PhD in computer science from NYU, made impactful contributions to protein sequence modeling during his tenure at Meta.

Proteins, encoded by sequences of amino acids, play a crucial role in various biological functions, from cancer treatment to plastic degradation and carbon fixation. The challenge lies in understanding the language of these sequences. Despite having vast databases of protein sequences, our understanding of their functions and structures is limited. 

Rives presented the idea of treating protein sequences as a language and applying language models, similar to those used in natural language processing, to decode and extract information.

Rives and his team trained transformer models on a large database of evolutionarily diverse proteins to see how well the models could capture biological information. They discovered that certain attention heads within the model correlated with the 3D structure of proteins, providing an interpretable representation of protein folding.
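Rives's team released these models in the open-source fair-esm package; the sketch below follows its documented usage for extracting per-residue embeddings and the attention-derived contact map. The specific checkpoint (ESM-2, 650M parameters) and the toy sequence are arbitrary choices here.

```python
import torch
import esm

# Load a pretrained protein language model and its tokenizer.
model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
batch_converter = alphabet.get_batch_converter()
model.eval()

data = [("toy_protein", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")]
_, _, batch_tokens = batch_converter(data)

with torch.no_grad():
    results = model(batch_tokens, repr_layers=[33], return_contacts=True)

embeddings = results["representations"][33]  # per-residue embeddings
contacts = results["contacts"]               # residue-residue contact map
```

The contact map is derived from the model's attention heads, the same correlation between attention and 3D structure that Rives described.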

Scaling up the models improved the accuracy of protein structure prediction. The team introduced a model called "ESMFold," which demonstrated state-of-the-art results in protein structure prediction, rivaling the accuracy of existing methods like AlphaFold. This approach, which directly predicts protein structure from sequence, is faster and more efficient than traditional methods that involve searching across evolutionary databases.
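ESMFold is distributed through the same fair-esm package; here is a minimal sketch following its documented interface (the toy sequence is arbitrary, and in practice the model is large and expects a GPU):

```python
import torch
import esm

# Load ESMFold and predict a structure directly from a single sequence.
model = esm.pretrained.esmfold_v1()
model = model.eval()

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # toy input

with torch.no_grad():
    pdb_string = model.infer_pdb(sequence)     # structure as PDB-format text

with open("prediction.pdb", "w") as f:
    f.write(pdb_string)
```

Run at scale, this single-pass prediction is what makes folding entire databases feasible.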

One notable application of ESMFold is the rapid folding of an entire database of metagenomic proteins, providing a comprehensive survey of their structures. 

Rives also touched upon the potential for these language models to be used generatively, designing new proteins by predicting their structures from given sequences. Experimental results suggested that the models could generalize well, even generating novel proteins not observed in natural evolution.

“New Horizons in Generative AI: Science, Creativity, and Society” took place on Oct. 24, 2023, at Stanford University. Learn more about these speakers and watch the full conference.
