
The Nature Conservancy Fisheries Monitoring Competition, 1st Place Winner's Interview: Team 'Towards Robust-Optimal Learning of Learning'


This year, The Nature Conservancy Fisheries Monitoring competition challenged the Kaggle community to develop algorithms that automatically detect and classify the species of sea life that fishing boats catch.

Illegal and unreported fishing practices threaten marine ecosystems. These algorithms would help increase The Nature Conservancy’s capacity to analyze data from camera-based monitoring systems. In this winners' interview, the first-place team, ‘Towards Robust-Optimal Learning of Learning’ (Gediminas Pekšys, Ignas Namajūnas, Jonas Bialopetravičius), shares details of their approach, such as why they needed a validation set with images from different ships than the training set and how they handled night-vision images.

Because the photos from the competition’s dataset aren’t publicly releasable, the team recruited graphic designer Jurgita Avišansytė to contribute illustrations for this blog post.

The basics:

What was your background prior to entering this challenge?

P.: BA Mathematics (University of Cambridge), about 2 years of experience as a data scientist/consultant, about 1.5 years as a software engineer, and about 1.5 years of experience with object detection research and frameworks as a research engineer working on surveillance applications.

N.: Mathematics BS, Computer Science MS and 3 years of R&D work, including around 9 months of being the research lead for a surveillance project.

B.: Software Engineering BS, Computer Science MS, 6 years of professional experience in computer vision and ML, currently studying astrophysics where I also apply deep learning methods.

Do you have any prior experience or domain knowledge that helped you succeed in this competition?

P.: Yes. Research experience at my job helped, as did intuition gained from my last Kaggle competition (i.e., to invest the first week in building a reasonable validation method).

N.: Yes. What helped was a combination of studying at university (mostly self-studying), R&D work experience, my previous two Kaggle computer vision competitions, chilling on arXiv daily, etc.

B.: Yes. My MS thesis was on the topic of deep learning and I have some previous Kaggle experience. I’m also solving computer vision problems regularly at work.

How did you get started competing on Kaggle?

P.: I first heard about Kaggle during my first year as a data scientist, but started considering it seriously a few years later, after I transitioned into computer vision. It provides an opportunity to focus on slightly different problems/data sets and efficiently validate distinct approaches.

N.: I used to enjoy competing in algorithmic competitions such as ACM ICPC. I didn’t achieve anything too significant (though I held a Master rank for a short while on the popular site Codeforces and got several Certificates of Achievement at various on-site competitions), but traveling to international competitions as a Vilnius University team member was one of the best experiences of my student life. After I started working in machine learning and computer vision, I came to enjoy long-term challenges more, so Kaggle was a perfect fit.

B.: It just seemed like a natural step, since I enjoyed solving ML problems and Kaggle was THE platform to do that.

What made you decide to enter this competition?

P.: I wanted to experiment more with stacking and customising models for such purposes. I also wanted another reference point for comparing recent detection frameworks/architectures.

N.: Object detection is one of my strongest areas and this problem seemed challenging, as the imaging conditions seemed very “in the wild”.

B.: The main draw was how challenging this competition looked, especially due to the lack of good data.

Jurgita A.: The fact that the three guys above are incompetent at drawing and they needed diagrams and illustrations for this blog post.

Let's get technical:

Did any past research or previous competitions inform your approach?

Yes, Faster R-CNN proved to work very well for our previous competitions and we already had experience using and modifying it.

What supervised learning methods did you use?

We mostly used Faster R-CNN with VGG-16 as the feature extractor, though one of the models was R-FCN with ResNet-101.

What preprocessing and data augmentations were used?

Most of the augmentation pipeline for training the models was pretty standard: random rotations, horizontal flips, blurring, and scale changes were all used and each had an impact on the validation score. However, the two things that paid off the most were working with night-vision images and with image colors.
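For illustration, here is a minimal sketch of that kind of standard pipeline in Python with OpenCV; the parameter ranges and probabilities are our own illustrative assumptions, not the team's exact settings, and in a detection setting the ground-truth boxes would of course need the same geometric transforms:

import cv2
import numpy as np

def augment(image, rng=np.random):
    """Apply random rotation, horizontal flip, blur, and scale change."""
    h, w = image.shape[:2]

    # Random rotation about the image center (range is an assumption).
    angle = rng.uniform(-15, 15)
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    image = cv2.warpAffine(image, M, (w, h))

    # Random horizontal flip.
    if rng.rand() < 0.5:
        image = cv2.flip(image, 1)

    # Occasional Gaussian blur.
    if rng.rand() < 0.3:
        image = cv2.GaussianBlur(image, (5, 5), sigmaX=rng.uniform(0.5, 1.5))

    # Random scale change.
    scale = rng.uniform(0.8, 1.2)
    image = cv2.resize(image, None, fx=scale, fy=scale)

    return image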

We noticed early on that the night-vision images were really easy to identify: simply checking whether the mean of the green channel was brighter than the summed means of the red and blue channels, weighted by a coefficient of 0.75, worked in all of the cases we looked at. Looking at color intensity histograms of a typical normal image and a night-vision image, one can clearly spot the differences, as regular images usually have color distributions that lie pretty close to one another. This can be seen in the figures below, where the dotted lines represent the best-fit Gaussians approximating these distributions.
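As a rough sketch (assuming RGB images loaded as numpy arrays), the check amounts to something like this:

import numpy as np

def is_night_vision(image, coef=0.75):
    # Mean intensity of each channel; assumes RGB channel order.
    r, g, b = (image[..., c].mean() for c in range(3))
    # Night vision: green outweighs red + blue, weighted by `coef`.
    return g > coef * (r + b)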

What we wanted for augmentation was more night-vision images. So one of the final models, which also happened to be the best-performing single model in the end, took a random fraction of the training images and stretched their histograms to be closer to what night-vision images look like. This was done for each color channel separately, assuming the channels are Gaussian (even though they’re not), by simply renormalizing the means and standard deviations accordingly, which basically amounted to scaling back the red and blue channels, as can be seen from the figures. Afterwards we also applied random contrast stretching to each color channel separately, because the night-vision images themselves could be quite varied, and a fixed transformation that always produces the same mean and standard deviation didn’t capture that variety.
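Here is a minimal sketch of that renormalization, assuming per-channel Gaussian statistics; the target means and standard deviations below are illustrative placeholders for "night-vision-like" statistics, not the team's measured values:

import numpy as np

# Hypothetical (R, G, B) targets: green dominates, red and blue scaled back.
TARGET_MEAN = np.array([40.0, 130.0, 40.0])
TARGET_STD = np.array([15.0, 45.0, 15.0])

def to_night_vision(image):
    img = image.astype(np.float32)
    out = np.empty_like(img)
    for c in range(3):
        # Renormalize each channel's mean/std towards the night-vision target.
        mu, sigma = img[..., c].mean(), img[..., c].std() + 1e-6
        out[..., c] = (img[..., c] - mu) / sigma * TARGET_STD[c] + TARGET_MEAN[c]
    return np.clip(out, 0, 255).astype(np.uint8)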

Because this model worked quite well, we also added a different model that doesn’t single out night-vision images but stretches the contrast of all images. Since this is done on each channel separately, it can result in the fish or the surroundings changing colors. This also seemed to work really well, as the colors in real images weren’t very stable due to the varying lighting conditions in the data.
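A sketch of that per-channel contrast stretching might look as follows; the output ranges are assumptions, and stretching each channel independently is what allows the apparent colors to shift:

import numpy as np

def random_contrast_stretch(image, rng=np.random):
    img = image.astype(np.float32)
    out = np.empty_like(img)
    for c in range(3):
        lo, hi = img[..., c].min(), img[..., c].max()
        # Remap each channel to a randomly chosen intensity range.
        new_lo, new_hi = rng.uniform(0, 60), rng.uniform(195, 255)
        out[..., c] = (img[..., c] - lo) / (hi - lo + 1e-6) * (new_hi - new_lo) + new_lo
    return np.clip(out, 0, 255).astype(np.uint8)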

What was your most important insight into the data?

Firstly, it was essential to have a validation set containing images from different ships than those in the training set; otherwise the models could learn to classify fish based on ship features, which wouldn’t show up in validation scores and could lead to a dramatic accuracy drop on the stage 2 test set (a sketch of such a split appears after this list).

Secondly, the fish varied drastically in size throughout the dataset, so handling this explicitly was useful.

Thirdly, there was a large number of night-vision images with a different color distribution, so handling the night-vision images differently improved our scores.

Additionally, the extra data posted on the forums by other teams seemed to contain a lot of images where the fish looked too different from how a fish could possibly look while lying on a boat, so filtering those out was important.

Lastly, we had polygonal annotations for the original training images, which we believe helped us achieve more accurate bounding boxes on rotated images; if the bounding box of a rotated box had been taken as ground truth instead, it would have included a lot of background (see the second sketch below).
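To make the first point concrete, here is a sketch of a ship-disjoint split using scikit-learn; the ship_ids argument is a hypothetical per-image ship identifier (the interview doesn't say how the team grouped images by ship):

from sklearn.model_selection import GroupShuffleSplit

def ship_disjoint_split(image_paths, ship_ids, val_fraction=0.2, seed=0):
    # All images sharing a ship id land on the same side of the split,
    # so validation measures generalization to unseen ships.
    splitter = GroupShuffleSplit(n_splits=1, test_size=val_fraction,
                                 random_state=seed)
    train_idx, val_idx = next(splitter.split(image_paths, groups=ship_ids))
    return train_idx, val_idx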
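And for the last point, a sketch of why polygons help under rotation: rotating the polygon vertices and boxing the result gives a tight box, whereas rotating an axis-aligned box and re-boxing it swallows background. The helper below is our own illustration, not the team's code:

import numpy as np

def rotated_tight_bbox(polygon, angle_deg, center):
    """polygon: (N, 2) array of (x, y) vertices; returns (x1, y1, x2, y2)."""
    theta = np.deg2rad(angle_deg)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    # Rotate vertices about `center`, then take the axis-aligned extremes.
    pts = (np.asarray(polygon, dtype=np.float64) - center) @ R.T + center
    (x1, y1), (x2, y2) = pts.min(axis=0), pts.max(axis=0)
    return x1, y1, x2, y2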

Which tools did you use?

We used customized py-R-FCN code (which also includes Faster R-CNN), starting from this repository: https://github.com/Orpine/py-R-FCN.

How did you spend your time on this competition?

We spent some time annotating the data, finding useful additional data among the images posted on the forums, finding the right augmentations for training the models, and looking at the generated predictions for the validation images, trying to spot any false patterns the models might have learned.

What does your hardware setup look like?

2x NVIDIA GTX 1080, 1x NVIDIA TITAN X

What was the run time for both training and prediction of your winning solution?

A very rough estimate is around 50 hours on a GTX 1080 for training and 7–10 seconds per image for prediction. Our best single model, which is actually more accurate than the whole ensemble, can be trained in 4 hours and needs 0.5 seconds per image for prediction.

Do you have any advice for those just getting started in data science?

Read introductory material and gradually move towards reading papers. Solve machine learning problems that interest you, and try to build an intuition about what works when by inspecting your trained models: look at the errors they make and try to understand what went wrong. Computer vision problems are quite good for this, as they are inherently visual. Most importantly, try to enjoy the process; learning machine learning is a long-term endeavour, and there is no better way to maintain motivation than to enjoy what you’re doing. Kaggle is a perfect platform for learning to enjoy learning machine learning.

Teamwork:

How did your team form?

We are all colleagues and we all have substantial experience in object detection.

How did competing on a team help you succeed?

We already had experience working together and we learned to complement each other well.

Just for fun:

If you could run a Kaggle competition, what problem would you want to pose to other Kagglers?

We would pose the problem that our public education system is outdated, wrong, and needs to change. Since it would need to be cast as a prediction problem, we would ask Kagglers to predict when a student will most likely develop a repulsion towards learning, based on a time series of how much forced pseudo-learning they have already endured.

Because Kaggle is a great platform for self-learning, we’ll share this website https://www.self-directed.org/.

What is your dream job?

A job that is self-chosen, because every job that has to be done has already been automated by machine learning.

