This experiment is part of the AI for Interspecies Communication Challenge Grant.

How accurate is lyrebird vocal mimicry?

$2,870 raised of $2,844 goal (100%)
Successfully funded on 12/02/23

About This Project

Lyrebirds are some of the world’s best vocal mimics and can accurately copy dozens of species in their Australian rainforest homes. In this project, we will use sound recordings from the forest to understand just how accurate lyrebird mimicry is. Our goal is to develop machine learning methods for distinguishing lyrebirds from the species they imitate. These methods are foundational for understanding how expert vocal mimics interact with other species in their shared acoustic space in the wild.


What is the context of this research?

Lyrebirds typically imitate over 30 species in their environment. Some species they imitate perfectly: in playback studies, the grey shrike-thrush does not distinguish between its own species' calls and lyrebirds' imitations of those calls. But some species they imitate imperfectly, showing clear biases: lyrebirds cannot hit the full bandwidth of the eastern whipbird's calls and do not hold its sustained monotone notes as long. Lyrebirds also creatively improvise upon other species' songs, injecting novel variation into the forest soundscape. Scientists are only just beginning to understand what lyrebirds use their incredible vocal abilities for.

What is the significance of this project?

If we want to understand what lyrebird vocal mimicry is used for, we first need strong quantitative tools for comparing the vocalizations of animals in a shared acoustic space. Lyrebird mimicry is an ideal study system for developing these tools because lyrebirds imitate a wide variety of species with varying levels of accuracy. One application of these tools is conservation: passive acoustic monitoring is frequently used to conduct remote censuses, but it has difficulty correctly classifying species that use vocal mimicry. Another is interspecies communication: these tools will help distinguish between two species within the same acoustic space and quantify fine-grained similarities and differences in their acoustic signals.
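To give a flavor of what "quantifying fine-grained similarity" between two vocalizations could look like, here is a minimal sketch, not the team's actual method: it compares two recordings by the cosine similarity of their time-averaged magnitude spectra. All function names (`spectrogram`, `similarity`) and the toy pure-tone "calls" are hypothetical, chosen only for illustration.

```python
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Magnitude spectrogram via a plain short-time FFT (rectangular windows)."""
    frames = [signal[i:i + n_fft] for i in range(0, len(signal) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def similarity(sig_a, sig_b):
    """Cosine similarity between time-averaged spectra of two equal-rate signals."""
    sa = spectrogram(sig_a).mean(axis=0)
    sb = spectrogram(sig_b).mean(axis=0)
    return float(sa @ sb / (np.linalg.norm(sa) * np.linalg.norm(sb) + 1e-12))

# Toy example at 8 kHz: a "model species" call and two candidate imitations.
t = np.linspace(0, 1, 8000, endpoint=False)
call = np.sin(2 * np.pi * 440 * t)        # stand-in for a model species' call
imitation = np.sin(2 * np.pi * 440 * t)   # a "perfect" imitation
off_pitch = np.sin(2 * np.pi * 880 * t)   # an imitation an octave off

print(similarity(call, imitation))  # close to 1.0
print(similarity(call, off_pitch))  # much lower
```

A real analysis would of course use field recordings and richer features, but the idea is the same: a continuous similarity score lets "perfect" and "biased" imitations sit on one scale instead of a binary match/no-match.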

What are the goals of the project?

Our main goal for this project is to design and collect a large data set of lyrebird vocalizations and the species they imitate in their natural environment. A key component of this project is that we will work with collaborators in machine learning from the start to ensure that the type of data we collect is tailored to the computational methods we plan to use. This data set will be used to 1) map out the shared acoustic space of lyrebirds and the species they imitate, 2) place these species within this space to understand where their signals overlap and where they differ (indicating vocal similarity and distinctiveness) and 3) understand how the variation in these signals is structured over different scales (e.g. single vocalizations as well as longer sequences).
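As one hypothetical illustration of goal 1), "mapping out the shared acoustic space": extract a spectral feature vector per call, then project all calls (lyrebird imitations and model-species calls alike) into two dimensions with PCA, so that overlap between species shows up as mixed clusters. This is a sketch with synthetic toy calls, not the project's chosen pipeline; `spectral_features`, `pca_2d`, and `toy_call` are made-up names.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_features(signal, n_fft=256):
    """Average magnitude spectrum of a call -- one feature vector per call."""
    n = len(signal) // n_fft * n_fft
    frames = signal[:n].reshape(-1, n_fft)
    return np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)

def pca_2d(features):
    """Project feature vectors onto their first two principal components."""
    X = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:2].T

# Toy "calls": noisy tones at species-typical frequencies (8 kHz sample rate).
t = np.linspace(0, 1, 8000, endpoint=False)
def toy_call(freq):
    return np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(t.size)

calls = [toy_call(f) for f in (440, 450, 880, 890)]  # two "species" clusters
coords = pca_2d(np.array([spectral_features(c) for c in calls]))
# Calls with similar spectra land near each other in the 2-D acoustic space;
# the two 440-ish calls cluster apart from the two 880-ish calls.
```

In this low-dimensional map, vocal similarity and distinctiveness (goal 2) become distances, and structure over different scales (goal 3) becomes how trajectories of consecutive vocalizations move through the space.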



We were lucky to receive 6 remote recording devices in an equipment grant awarded by Wildlife Acoustics - thanks! The items in this budget cover everything else needed to collect and store our data from the field site.

Data Collection:

We need to travel to the field site to deploy our recording devices, which write files to SD cards and use a lot of batteries! We'll have to tie and lock each device to a tree to prevent it from being stolen by people or cockatoos. The shotgun microphone will be used to collect in-person, species-labelled data for testing the algorithm. A second mic stub needs to be added to each remote device to allow us to record at two gain settings and capture the lyrebird's full dynamic range.

Data Storage:

Machine learning requires a lot of data to be any good and we need to keep all that data somewhere! We'll keep it on hard drives and the cloud.

Endorsed by

This project will provide exciting new insight into how to understand and analyze the remarkable vocal mimicking abilities of lyrebirds. Vanessa and Grace are uniquely suited to develop these new computational tools: Vanessa has expertise in computational analyses and cultural evolution and Grace has developed several bioacoustics-focused computational analyses for detecting "information" within bird calls. I am so excited to see the results!

Project Timeline

In Phase 1 (now - April 2024), we will design our data collection protocols and collect pilot recordings from the forest, in an iterative process with feedback from colleagues in machine learning. In Phase 2 (May - November 2024), we will collect our main data set during lyrebird breeding season and from other species during Australian spring time (when they sing the most). In Phase 3 (January - March 2025), we will analyze the data with machine learning techniques.

Nov 02, 2023

Project Launched

Dec 01, 2023

Release example recordings from first set of pilot data

Aug 01, 2024

Release example recordings from main data set

Mar 01, 2025

Release pretty plots of lyrebirds' shared acoustic space

Meet the Team

Vanessa Ferdinand
Research Fellow
University of Melbourne; Melbourne School of Psychological Sciences; Melbourne Centre for Data Science
Grace Smith-Vidaurre
NSF Postdoctoral Fellow; incoming Assistant Professor
Rockefeller University; University of Cincinnati; Michigan State University

Vanessa Ferdinand

My research focuses on modeling complex interactions between cognitive, social, and cultural behavior in humans and animals. I'm particularly interested in how cognitive biases drive cultural change over time.

Grace Smith-Vidaurre

I work on how vocal communication systems that depend on social learning are flexible or constrained over short evolutionary timescales. In one of my main lines of research, I test ideas about how animals use learned vocalizations to communicate information about their identities.

Additional Information

This research has been approved by the University of Melbourne's Animal Ethics Committee, under project # 2023-27945-44888-2.

Project Backers

  • 11 Backers
  • 100% Funded
  • $2,870 Total Donations
  • $256.36 Average Donation