Using AI to save endangered whales

Revision as of 22:40, 16 December 2021 by Riham.elhabyan



Computers can learn to recognize the sound of endangered whales. DFO has developed a predictive model to identify North Atlantic Right Whales from underwater acoustic data. The insights gained from the model can be used to build a warning system that helps prevent vessels from fatally striking this endangered species.




The Challenge

Annual North Atlantic Right Whale Serious Injury (SI) Cases of whales last seen alive, 2017-2021, U.S. and Canada [1].

The North Atlantic Right Whale (NARW) is one of the most endangered whale species, with only about 366 individuals remaining in the world. In 2017, 12 individuals died in the Gulf of St. Lawrence. The high mortality rate is mainly due to collisions with vessels and entanglement in fishing gear.

Protection measures, including vessel speed restrictions, fishery closures, and investment in new acoustic technologies, were subsequently put in place by the Department of Fisheries and Oceans (DFO) to prevent such events from recurring.

Passive Acoustic Monitoring (PAM) is an observation method in which hydrophones are deployed in the ocean to capture sounds from the surrounding environment. Marine mammal acoustic experts collect the recordings from the hydrophones periodically, two to four times a year. PAM has become a crucial tool for observing endangered whales, providing continuous, round-the-clock information over the season.

Currently, acoustic data analysis is performed manually by marine mammal acoustic experts. Manual recognition, however, is difficult, resource-intensive, and time-consuming, and very few scientists can perform it in real time. Automating this process could enable near real-time detection of NARW calls.
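DFO's actual model is not described here, but the idea of automated call detection can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the DFO pipeline: it synthesizes a 2 kHz recording containing an upcall-like frequency sweep, computes a simple Hann-windowed spectrogram with NumPy, and flags time frames whose energy in the approximate NARW upcall band (roughly 50 to 250 Hz) stands out against the background. The sampling rate, band limits, and threshold are all illustrative choices; production systems typically use trained classifiers on spectrograms rather than a fixed energy threshold.

```python
import numpy as np

def spectrogram(signal, fs, win=256, hop=128):
    """Magnitude spectrogram via a simple Hann-windowed STFT."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    spec = np.abs(np.fft.rfft(np.array(frames), axis=1))   # (frames, bins)
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    return spec.T, freqs                                   # (bins, frames)

def band_energy_detector(signal, fs, band=(50.0, 250.0), threshold=2.0):
    """Flag frames whose energy in `band` exceeds `threshold` x the median.

    `band` approximates the NARW upcall range; both it and `threshold`
    are illustrative values, not calibrated detector settings.
    """
    spec, freqs = spectrogram(signal, fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    band_energy = spec[mask].sum(axis=0)                   # per-frame energy
    return band_energy > threshold * np.median(band_energy)

fs = 2000                                  # Hz; low rate suits baleen calls
t = np.arange(0, 5.0, 1.0 / fs)
noise = 0.05 * np.random.default_rng(0).standard_normal(len(t))
# Inject a 1-second upcall-like sweep (100 -> 200 Hz) starting at t = 2 s
call = np.where((t >= 2.0) & (t < 3.0),
                np.sin(2 * np.pi * (100 + 50 * (t - 2.0)) * (t - 2.0)), 0.0)
hits = band_energy_detector(noise + call, fs)
```

With a 128-sample hop, each entry of `hits` covers about 64 ms; frames overlapping the injected call (around frames 32 to 46) are flagged, while pure-noise frames are not. Replacing the threshold rule with a classifier trained on expert-labelled spectrogram patches is the natural next step toward the kind of model described above.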

The Solution

References