Sara Keen, Senior Researcher in Behavioral Ecology and AI, reflects on how machine learning can help us find the signals in the noise and decipher meaning from the world of invisible information that surrounds us.
This blog provides an overview of ESP's technical roadmap, offering a path forward for our proposed and ongoing scientific work, which focuses on using AI to decode non-human animal communication.
Machine learning (ML) has proven to be a powerful tool for learning latent representations of data (LeCun et al., 2015). For example, learned representations of human languages have enabled translation between different languages without the use of dictionaries (Artetxe et al., 2017; Lample et al., 2017). More recently, ML has been able to generate realistic images based on text descriptions (DALL-E 2), as well as predict and imitate the words we speak (AudioLM).
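To make the idea of dictionary-free translation concrete, here is a minimal sketch of the alignment step at the heart of approaches like those of Artetxe et al. (2017) and Lample et al. (2017): an orthogonal mapping is learned between two embedding spaces so that words from one language land near their counterparts in the other. The toy embedding matrices and the assumption of pre-matched rows are stand-ins for illustration; the actual methods learn the matching itself through self-learning or adversarial training.

```python
# Illustrative sketch of cross-lingual embedding alignment (toy data only).
import numpy as np

rng = np.random.default_rng(0)
d = 300
X = rng.normal(size=(1000, d))   # toy source-language word embeddings (one row per word)
Y = rng.normal(size=(1000, d))   # toy target-language embeddings, row i assumed matched to row i

# Orthogonal Procrustes: find the rotation M minimizing ||X M - Y||_F
U, _, Vt = np.linalg.svd(X.T @ Y)
M = U @ Vt

def normalize(A):
    return A / np.linalg.norm(A, axis=1, keepdims=True)

# Map source vectors into the target space and retrieve nearest neighbours by cosine similarity
mapped = normalize(X @ M)
translations = np.argmax(mapped @ normalize(Y).T, axis=1)  # index of the nearest target word
```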
We hypothesize that ML models can provide new insights into non-human communication, and believe that discoveries fueled by these new techniques have the potential to transform our relationship with the rest of nature. We also expect that many of the techniques we develop will be useful tools for processing data in applied conservation settings (Tuia et al., 2022).
Whispering whales, turtle hatchlings communicating with each other through their shells, coral larvae that can hear and gravitate toward their home reefs…
Director of Development and External Affairs Jane Lawton reflects on the insights shared by Dr. Karen Bakker, Dr. Ari Friedlaender, and Aza Raskin at ESP's October 25 panel discussion on interspecies communication in San Francisco: how sophisticated the communication systems of other species are, and how an explosion of new bioacoustic technologies and AI has put us on the cusp of dramatically expanding our understanding of those communication systems, while exposing how little we, as human beings, currently know.
Katie Zacarian, CEO, Earth Species Project, reflects on the ESP journey to receiving a National Geographic Explorer grant. The project we have received funding for is focused on developing machine learning models to interpret animal motion data gathered by powerful animal-borne sensors, and it represents an important step on the journey toward decoding animal communication.
Senior AI research scientist Benjamin Hoffman is working on self-supervised methods for interpreting data collected from animal-borne tags, known as bio-loggers. Using bio-loggers, scientists are able to record an animal's motions, as well as audio and video footage from the animal's perspective. However, these data are often difficult to interpret, and the volume is far too large to analyze by hand. One solution is to use self-supervised learning to discover repeated behavioral patterns in these data.
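As an illustration of what this can look like in practice (not ESP's actual pipeline), one common self-supervised recipe is to slice the motion stream into fixed-length windows, train a model on a pretext task such as reconstruction, and then cluster the learned embeddings to surface repeated behavioral motifs. The accelerometer signal, window length, and model sizes below are placeholders.

```python
# Illustrative sketch: self-supervised motif discovery in bio-logger data (synthetic signal).
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
signal = rng.normal(size=(100_000, 3)).astype(np.float32)  # placeholder tri-axial accelerometer stream

# 1. Slice into fixed-length windows (e.g., 2 s at 50 Hz -> 100 samples per window).
win = 100
windows = signal[: len(signal) // win * win].reshape(-1, win * 3)
x = torch.from_numpy(windows)

# 2. Self-supervised pretext task: reconstruct each window through a small bottleneck.
model = nn.Sequential(
    nn.Linear(win * 3, 64), nn.ReLU(),
    nn.Linear(64, 16),                      # 16-dimensional embedding
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, win * 3),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(20):                         # a few passes, for illustration only
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()
    opt.step()

# 3. Embed each window and cluster embeddings to find candidate behavioral motifs.
with torch.no_grad():
    embeddings = model[:3](x).numpy()       # output of the bottleneck layer
labels = KMeans(n_clusters=8, n_init=10).fit_predict(embeddings)
```

Each cluster label can then be inspected against the synchronized audio or video to see whether it corresponds to a recognizable behavior such as diving, foraging, or resting.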
In December 2021, we published our first scientific paper in the peer-reviewed journal Scientific Reports; it has already received multiple citations. The publication focused on automatic source separation, enabling researchers to more easily distinguish between animal vocalizations when more than one animal is vocalizing at the same time. The research was the outcome of a close collaboration with marine biologist Dr. Laela Sayigh, who provided a dataset of bottlenose dolphin signature whistles for our project.
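For readers unfamiliar with source separation, the sketch below shows the general mask-based recipe that many neural separators follow: predict one mask per source over the mixture's spectrogram, apply each mask, and invert back to audio. It is not the model from our paper; the mask-predicting "network" here is a stand-in that returns uniform masks, and the recording is random noise.

```python
# Conceptual sketch of mask-based source separation (placeholder data and masks).
import numpy as np
from scipy.signal import stft, istft

fs = 48_000
mixture = np.random.default_rng(0).normal(size=fs * 2)   # placeholder 2-second recording

# Mixture spectrogram
f, t, Z = stft(mixture, fs=fs, nperseg=1024)

def predict_masks(mag, n_sources=2):
    """Stand-in for a trained separation network: returns one mask per source."""
    return [np.full_like(mag, 1.0 / n_sources) for _ in range(n_sources)]

masks = predict_masks(np.abs(Z))

# Apply each mask to the complex spectrogram and invert to recover per-animal audio
separated = [istft(m * Z, fs=fs, nperseg=1024)[1] for m in masks]
```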