How a group of data scientists are saving the whales with machine learning and image recognition

Facial recognition for whales may sound like a terrible elevator pitch, but to the National Oceanic and Atmospheric Administration (NOAA) and a team of Polish data scientists, this was a challenge worth taking on.


A group of Polish data scientists and machine learning experts have been working on an image recognition engine for the endangered North Atlantic right whale.

Speaking at the Strata+Hadoop conference this week, CTO Piotr Niedzwiedz told the story of how marine biologist Christin Khan - whose day job is flying over the ocean taking pictures of endangered whales for NOAA - was taking a break one afternoon and logged into Facebook. There she was asked to identify her friends in photos, and wondered: why couldn't she do something similar with whales?


So she set up a competition on the data science website Kaggle - called Right Whale Recognition - to help catalogue and track the small remaining population of the endangered whale. The team ended up winning the competition after developing an algorithm that recognises individual whales with an 87 percent accuracy rate.

Read next: 15 machine learning tools for data scientists and developers

The main challenge, according to Niedzwiedz, was simple: "Humans are completely incapable of remembering 500 whale faces. The key to recognising this breed of whale is the unique white callosity pattern surrounding the blow hole."

How did they do it?

Senior data scientist Maciej Klimek explained how he architected the underlying convolutional neural network (CNN) and algorithms for the system. First off, the data set was far smaller than in most big data projects: just 447 images. But that is roughly the number of individual whales believed to remain in the wild.

As the sample size was so small, the team had to bulk out the training data set by nearly ten times. They did this by augmenting the data: translating the images by up to four pixels, rotating them slightly, rescaling, flipping or perturbing the colouring. They then trained the models on two Nvidia GPUs, a Tesla K80 and a GRID K520, blending 15 different models in the end.
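The augmentation and blending steps described above can be sketched in plain Python on a toy grayscale image (a list of pixel rows). The four-pixel shift, the flip, the colour perturbation and the idea of averaging several models come from the article; the function names, the image representation and the noise range are illustrative assumptions, not the team's actual code.

```python
import random

def translate(img, dx, dy, fill=0):
    """Shift a 2D grayscale image by (dx, dy) pixels, padding with `fill`."""
    h, w = len(img), len(img[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                out[ny][nx] = img[y][x]
    return out

def flip_horizontal(img):
    """Mirror the image left-to-right."""
    return [list(reversed(row)) for row in img]

def perturb_colour(img, amount=10, rng=random):
    """Add small random noise to each pixel, clamped to [0, 255]."""
    return [[max(0, min(255, p + rng.randint(-amount, amount))) for p in row]
            for row in img]

def augment(img, rng=random):
    """Produce one augmented copy: shift by up to 4 px, maybe flip, add noise."""
    out = translate(img, rng.randint(-4, 4), rng.randint(-4, 4))
    if rng.random() < 0.5:
        out = flip_horizontal(out)
    return perturb_colour(out, rng=rng)

def blend(predictions):
    """Average per-class probabilities from several models (simple blending)."""
    n = len(predictions)
    return [sum(p[i] for p in predictions) / n for i in range(len(predictions[0]))]
```

Calling `augment` ten times per image is how a set of 447 photos could be bulked out tenfold, and `blend` is the simplest way 15 models' per-whale probabilities might be combined into one prediction.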

Read next: Everything you need to know about deep learning and neural networks

Klimek says the five hours they took to apply universal labelling to the data - using Sloth in Python - was “the most effective spent time of the whole project by far. Maybe not the most interesting, but it led to the biggest improvement in score".
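A labelling tool like Sloth typically writes its annotations out as a JSON file mapping each image to the shapes drawn on it. The snippet below shows how such a file might be parsed; the rectangle fields (`x`, `y`, `width`, `height`) follow Sloth's JSON container format, but the filenames, values and the `head` class name are hypothetical examples, not the team's data.

```python
import json

# Hypothetical example of the JSON a Sloth labelling session might produce:
# one record per image, each with a hand-drawn head rectangle.
RAW = """
[
  {"filename": "whale_001.jpg",
   "annotations": [{"class": "head", "type": "rect",
                    "x": 120, "y": 80, "width": 200, "height": 150}]},
  {"filename": "whale_002.jpg",
   "annotations": [{"class": "head", "type": "rect",
                    "x": 60, "y": 40, "width": 180, "height": 130}]}
]
"""

def load_head_boxes(raw_json):
    """Map each filename to its (x, y, width, height) head box."""
    boxes = {}
    for record in json.loads(raw_json):
        for ann in record["annotations"]:
            if ann.get("class") == "head":
                boxes[record["filename"]] = (ann["x"], ann["y"],
                                             ann["width"], ann["height"])
    return boxes

boxes = load_head_boxes(RAW)
```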

In order to normalise the data they hand-drew a box around the head of the whale in each image. This allowed them to have a library of "passport photos" for the whales. Once they had done this "the problem became far more similar to human facial recognition than we expected," says Niedzwiedz.
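Cropping each image down to its labelled head box, then resizing every crop to a fixed size, is what turns raw aerial shots into uniform "passport photos". A minimal sketch in plain Python, treating an image as a 2D grid of pixels; the nearest-neighbour resize and all helper names are illustrative assumptions rather than the team's pipeline.

```python
def crop(img, x, y, width, height):
    """Cut the labelled head box out of a 2D grayscale image."""
    return [row[x:x + width] for row in img[y:y + height]]

def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resize so every crop ends up the same dimensions."""
    h, w = len(img), len(img[0])
    return [[img[y * h // new_h][x * w // new_w] for x in range(new_w)]
            for y in range(new_h)]

def passport_photo(img, box, size=64):
    """Crop the head box and resize it to a square 'passport photo'."""
    x, y, bw, bh = box
    return resize_nearest(crop(img, x, y, bw, bh), size, size)
```

Feeding the network these normalised head crops, rather than full aerial frames, is what made the task resemble human facial recognition.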

The full technical breakdown of how the team developed the system can be found on their website.

"Recommended For You"

Smarter algorithms will power our future digital lives How Google plans to bring AI and machine learning to the enterprise