What We Do

Gelada monkeys groom each other. Photo by Rieke T-bo.

Research projects

Wild Griffon vulture (Gyps fulvus) at Gamla Nature Reserve, Israel
Active

Self-Supervised Ethogram Discovery

Senior AI Research Scientist Benjamin Hoffman is working on self-supervised methods for interpreting data collected from animal-borne tags, known as bio-loggers. Using bio-loggers, scientists can record an animal's motion, as well as audio and video footage from the animal's perspective. However, these data are often difficult to interpret, and their volume is typically far too large to analyze by hand. A solution is to use self-supervised learning to discover repeated behavioral patterns in the data. This will allow behavioral ecologists to rapidly analyze their recordings and measure how an individual's behavior is affected by external factors, such as human disturbance or communication signals from other individuals. Ben is working closely with our partners, Dr. Christian Rutz and Dr. Ari Friedlaender, to source datasets from their labs and from the wider ethology research community.
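As a rough illustration of the underlying idea, the sketch below clusters fixed-length windows of accelerometer data to surface repeated behavioral states. It is a minimal stand-in, not the project's actual pipeline: a real system would cluster learned self-supervised embeddings rather than raw windows, and the signal, window size, and cluster count here are invented for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for a tri-axial accelerometer trace (25 Hz, 10 minutes).
acc = rng.standard_normal((25 * 600, 3)).astype(np.float32)

win, hop = 50, 25  # 2-second windows with 50% overlap
windows = np.stack([acc[i:i + win].ravel()
                    for i in range(0, len(acc) - win, hop)])

# Each cluster id is a candidate behavioral state in the ethogram.
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(windows)
print(labels[:20])  # sequence of putative states over time
```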

Read More
African elephants
Active

Benchmark of Animal Sounds (BEANS)

Machine learning is increasingly used to process and analyze bioacoustic data in ways that support research projects. However, ongoing challenges include the diversity of species being studied and the small size of available datasets compared with those for human language. Machine learning approaches have also tended to be highly task-specific, making it difficult to compare results across species and models. Senior AI Research Scientist Masato Hagiwara has developed a benchmark for bioacoustics tasks that measures how well a model performs across a diverse set of species for which comparatively little data exists. This mirrors the standard benchmarks that have been developed for human vision and language. The benchmark supports the fair comparison and development of ML algorithms and models that perform well across diverse, data-scarce species; lowers barriers to entry by open-sourcing preprocessed datasets in a common format, infrastructure code, and baseline implementations; and encourages data and model sharing by providing a common platform for researchers in the biology and machine learning communities. The benchmark and code are open source, in the hope of establishing a new standard dataset for ML-based bioacoustic research.
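The sketch below illustrates the benchmark's core pattern: one shared evaluation loop run across many small, diverse tasks. The task names and loader are hypothetical placeholders, not the actual BEANS API; see the GitHub repository linked below for the real datasets and baselines.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def load_task(name, rng):
    """Hypothetical stand-in for one preprocessed benchmark task:
    fixed-size acoustic feature vectors plus class labels."""
    X = rng.standard_normal((200, 64))
    y = rng.integers(0, 4, size=200)
    return X, y

rng = np.random.default_rng(0)
for task in ["task_a", "task_b", "task_c"]:  # placeholder task names
    X, y = load_task(task, rng)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(task, round(accuracy_score(y_te, clf.predict(X_te)), 3))
```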

Read the full paper on arXiv

Open source data and code are available on our GitHub

Read More
Carrion crow in the Käfertaler Wald, Baden-Württemberg, Germany. Photo by Andreas Eichler.
Active

Crow Vocal Repertoires

Senior AI Research Scientist Dr. Benjamin Hoffman is working with Professor Christian Rutz and his collaborators to map the vocal repertoires of two crow species. The first, the Hawaiian crow, is notable for its natural ability to use foraging tools as well as its precarious conservation status: the species sadly became extinct in the wild in 2002 and currently survives only in captivity. We are investigating how its vocal repertoire has changed over time in two captive breeding populations, to inform ongoing reintroduction efforts. The second, the carrion crow, is abundant across its European range but has attracted attention for its unusually plastic social behavior, with groups in some populations breeding cooperatively. We are analyzing field recordings to understand the role of acoustic communication in group coordination. Mapping vocal repertoires can help uncover cultural and behavioral complexity, which in some cases has important implications for planning effective conservation strategies.
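One hedged sketch of how a repertoire might be sized, assuming per-call acoustic features are available: fit Gaussian mixtures with increasing numbers of components and keep the count with the lowest BIC. This is an illustrative approach on synthetic features, not necessarily the method used in the project.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for per-call acoustic feature vectors (e.g., averaged MFCCs)
# drawn from three synthetic call types.
features = np.vstack([rng.normal(loc=c, size=(40, 12))
                      for c in (0.0, 3.0, -3.0)])

# Model selection: the component count with the lowest BIC is a
# rough estimate of the number of distinct call types.
best_k, best_bic = None, np.inf
for k in range(1, 9):
    gmm = GaussianMixture(n_components=k, covariance_type="diag",
                          random_state=0).fit(features)
    bic = gmm.bic(features)
    if bic < best_bic:
        best_k, best_bic = k, bic
print("estimated number of call types:", best_k)
```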

Read More
Photo by Todd Cravens on Unsplash
Active

Generative Vocalization Experiment

Playbacks are a common technique for studying animal vocalizations: stimuli, usually recorded calls, are presented experimentally to animals to build an understanding of their physiological and cognitive abilities. With current playback tools, however, biologists have limited ability to manipulate vocalizations in ways that would establish or change their meaning, which constrains the exploratory power of these experiments. Senior AI Research Scientist Jen-Yu Liu is exploring whether AI models can be trained to generate new vocalizations tailored to a particular research question or task. Concurrently, Jen-Yu is working on denoising, which is critical for providing clean audio recordings to train ML models. He is currently working with datasets from a number of bird species, including the common chiffchaff, as well as humpback whales, in partnership with Dr. Michelle Fournet. Giving researchers the ability to make semantic edits to vocalizations will greatly expand the exploratory and explanatory power of bioacoustics research, and is an important step on our Roadmap to Decode.
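As a hedged illustration of the denoising step only (not the generative model), the sketch below applies simple spectral gating with SciPy, suppressing time-frequency bins that fall below an estimated noise floor. The signal, threshold, and STFT settings are invented for the example.

```python
import numpy as np
from scipy.signal import stft, istft

rng = np.random.default_rng(0)
sr = 16000
t = np.arange(sr * 2) / sr
clean = np.sin(2 * np.pi * 2000 * t)               # stand-in "vocalization"
noisy = clean + 0.5 * rng.standard_normal(t.size)  # add broadband noise

# Gate: keep only time-frequency bins well above the per-band noise floor.
f, frames, Z = stft(noisy, fs=sr, nperseg=512)
noise_floor = np.median(np.abs(Z), axis=1, keepdims=True)
mask = (np.abs(Z) > 2.0 * noise_floor).astype(float)
_, denoised = istft(Z * mask, fs=sr, nperseg=512)
```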

Read More
Two sibling gelada monkeys, Simien Mountains National Park, Ethiopia. Photo by Marc Guitard.
Complete

Solving the cocktail-party problem

In December 2021, we published our first scientific paper in the peer-reviewed journal Scientific Reports; it has already attracted multiple citations. The publication focused on automatic source separation, enabling researchers to more easily distinguish between animal vocalizations when more than one animal is vocalizing at the same time. The research was the outcome of a close collaboration with marine biologist Dr. Laela Sayigh, who provided a dataset of bottlenose dolphin signature whistles for the project.
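As a toy illustration of the cocktail-party problem itself (the published work uses a deep learning approach, not this classical baseline), independent component analysis can unmix two overlapping signals recorded on two channels:

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 1, 8000)
s1 = np.sin(2 * np.pi * 440 * t)           # stand-in for whistle 1
s2 = np.sign(np.sin(2 * np.pi * 320 * t))  # stand-in for whistle 2
S = np.c_[s1, s2]

A = np.array([[1.0, 0.6], [0.4, 1.0]])  # unknown mixing in the real problem
X = S @ A.T                             # two overlapping "recordings"

# Recover statistically independent source estimates from the mixture.
recovered = FastICA(n_components=2, random_state=0).fit_transform(X)
```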

Read More