This year, Carvana, a successful online used car startup, challenged the Kaggle community to develop an algorithm that automatically removes the photo studio background. This would allow Carvana to superimpose cars on a variety of backgrounds. In this winner's interview, the first-place team of accomplished image processing competitors, Team Best[over]fitting, shares their winning approach in detail.
Basics
As often happens in competitions, we never met in person, but we knew each other pretty well from fruitful conversations about Deep Learning in the Russian-speaking Open Data Science community, ods.ai.
Although we participated as a team, we worked on 3 independent solutions until merging 7 days before the end of the competition. Each of these solutions was in the top 10: Artsiom and Alexander were in 2nd place and Vladimir was in 5th. Our final solution was a simple average of three predictions. You can also see this in the code that we prepared for the organizers and released on GitHub; there are 3 separate folders:
- Albu: Alexander Buslaev
- Asanakoy: Artsiom Sanakoyeu
- Ternaus: Vladimir Iglovikov
Each of us spent about two weeks on this challenge, although to fully reproduce our solution on a single Titan X Pascal one would need about 90 days to train and 13 days to make predictions. Luckily, we had around 20 GPUs at our disposal. In terms of software, we used PyTorch as a Deep Learning Framework, OpenCV for image processing and imgaug for data augmentations.
What were your backgrounds prior to entering this challenge?
Vladimir Iglovikov
My name is Vladimir Iglovikov. I got a Master’s degree in theoretical High Energy Physics from St. Petersburg State University and a Ph.D. in theoretical condensed matter physics from UC Davis. After graduation, I first worked at a couple of startups, where my everyday work leaned heavily on traditional machine learning. A few months ago I joined Lyft as a Data Scientist with a focus on computer vision.
I had already competed in several image segmentation competitions, and the experience I acquired there was really helpful with this problem. Here are my past achievements:
- Kaggle: Ultrasound Nerve Segmentation: 10th out of 923
- Kaggle: Dstl Satellite Imagery Competition: 3rd out of 419 (blog post, tech report)
- Topcoder: Konica Minolta: Pathological Image Segmentation Challenge: 5th out of 70
- MICCAI 2017: Gastrointestinal Image ANAlysis (GIANA) => 1st place (press release)
- MICCAI 2017: Robotic Instrument Segmentation => 1st place (slides)
This challenge looked pretty similar to the above problems, and initially I didn't plan on participating. But, just as a sanity check, I decided to make a few submissions with a pipeline copy-pasted from the previous problems. Surprisingly, after a few tries I got into the top 10, and the guys suggested that we merge into a team. In addition, Alexander enticed me by promising to share his non-UNet approach, which consumed less memory, converged faster and was presumably more accurate.
In terms of hardware, I had 2 machines at home, one for prototyping with 2 x Titan X Pascal and one for heavy lifting with 4 x GTX 1080 Ti.
Alexander Buslaev
My name is Alexander Buslaev. I graduated from ITMO University, Saint Petersburg, Russia. I have 5 years of experience in classical computer vision and have worked at a number of companies in this field, especially on UAVs. About a year ago I started to use deep learning for various image processing tasks: detection, segmentation, labeling, regression.
I like computer vision competitions, so I also took part in:
- NOAA Fisheries Steller Sea Lion Population Count: 13th out of 385
- Planet: Understanding the Amazon from Space: 7th out of 938.
- Topcoder: Konica Minolta: Pathological Image Segmentation Challenge: 10th out of 70
Artsiom Sanakoyeu
My name is Artsiom Sanakoyeu. I got my Master’s degree in Applied Mathematics and Computer Science from Belarusian State University, Minsk, Belarus. After graduation, I started my Ph.D. in Computer Vision at Heidelberg University, Germany.
My main research interests lie at the intersection of Computer Vision and Deep Learning, in particular Unsupervised Learning and Metric Learning. I have publications in top-tier Computer Vision / Deep Learning conferences such as NIPS and CVPR.
- CliqueCNN: Deep Unsupervised Exemplar Learning
- Deep Unsupervised Similarity Learning using Partially Ordered Sets
For me, Kaggle is a place to polish my applied skills and to have some competitive fun. Beyond Carvana, I took part in a couple of other computer vision competitions:
- NOAA Fisheries Steller Sea Lion Population Count: 4th out of 385 (Gold Medal).
- Planet: Understanding the Amazon from Space: 17th out of 938 (Silver Medal).
Diving Into The Solution
Problem Overview
The objective of this competition was to create a model for binary segmentation of high-resolution car images.
- Each image has resolution 1918x1280.
- Each car is presented in 16 different fixed orientations.
- Train set: 5088 Images.
- Test set: 1200 images in Public, 3664 in Private, and 95200 added to prevent hand labeling.
Problems with the Data
In general, the quality of the competition data was very high, and we believe that this dataset can potentially be used as a great benchmark in the computer vision community.
The score difference between our result (0.997332) and the second place result (0.997331) was only 0.000001, which can be interpreted as an average improvement of 2.5 pixels per 2,500,000-pixel image. To be honest, we just got lucky here. When we prepared the solution for the organizers, we invested some extra time and improved our solution to 0.997343 on the private LB.
To understand the limitations of our models, we performed a visual inspection of the predictions. For the train set, we reviewed cases with the lowest validation scores.
Most of the observed mistakes were due to inconsistent labeling, where the most common issue was holes in the wheels: in some cars they were masked, and in some they were not.
We don't have a validation score for the test set, so we found problematic images by counting the number of pixels where the network's prediction confidence was low. To account for the different sizes of the cars in the images, we divided this number by the area of the background. Our ‘unconfidence’ metric was calculated as the number of pixels with scores in the [0.3, 0.8] interval, divided by the number of pixels with scores in the interval [0, 0.3) ∪ (0.8, 0.9]. Of course, other approaches based on information theory may be more robust, but this heuristic worked well enough.
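A minimal NumPy sketch of this heuristic (assuming `probs` is the per-pixel probability map predicted for a single test image):

```python
import numpy as np

def unconfidence(probs):
    """Heuristic described above: count pixels where the network is unsure
    (scores in [0.3, 0.8]) and normalize by the number of pixels with scores
    in [0, 0.3) or (0.8, 0.9].  `probs` is an HxW array of per-pixel
    foreground probabilities."""
    unreliable = np.logical_and(probs >= 0.3, probs <= 0.8).sum()
    reference = np.logical_or(
        probs < 0.3,
        np.logical_and(probs > 0.8, probs <= 0.9)).sum()
    return unreliable / max(reference, 1)   # guard against division by zero
```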
We then ranked the images by the ‘unconfidence’ score and visually inspected the top predictions. We found that most of the errors were due to incorrect human labeling in the “white van” category. The networks consistently gave low-confidence predictions on such images. We believe this was due to the small number of white vans in the training set and the low contrast between the van and the white background. The image below shows gray areas in the mask where the prediction confidence was low.
We weren't the only ones who encountered this issue. It was discussed at the forum and other participants implemented post-processing heuristics to address this and similar cases.
There were also a few training masks with large errors, like the one shown below. Heng CherKeng posted fixed versions of the masks at the forum, but their number was relatively small and we didn’t use them during training.
Vladimir’s Approach
My first attempt was to use UNet with the same architecture as Sergey Mushinskiy. I had used it before in the DSTL Satellite Imagery Feature Detection challenge last spring, but here I was unable to get above 0.997 (~50th place on the Public LB).
In the DSTL challenge, UNet with a pre-trained encoder worked exactly the same as if it had been initialized randomly. I also got good results without pre-trained initialization in other challenges, and because of that I was under the impression that for UNet, pre-trained initialization is unnecessary and provides no advantage.
Now I believe that initializing UNet-type architectures with pre-trained weights does improve convergence and the performance of binary segmentation on 8-bit RGB input images. When I tried UNet with an encoder based on VGG-11, I easily got 0.9972 (top 10 on the Public Leaderboard).
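A rough sketch of how a pre-trained VGG-11 encoder can be pulled out of torchvision for a UNet-style model (the layer indices follow torchvision's vgg11; the exact grouping and decoder in our released code may differ):

```python
import torch.nn as nn
from torchvision import models

# VGG-11 ('VGG-A') pre-trained on ImageNet; `features` is a Sequential of
# conv / ReLU / max-pool layers.
features = list(models.vgg11(pretrained=True).features.children())

# Group the convolutional stages so their outputs can be reused as UNet
# skip connections; pooling is applied between the stages in the full network.
conv1 = nn.Sequential(*features[0:2])    # -> 64 channels
conv2 = nn.Sequential(*features[3:5])    # -> 128 channels
conv3 = nn.Sequential(*features[6:10])   # -> 256 channels
conv4 = nn.Sequential(*features[11:15])  # -> 512 channels
conv5 = nn.Sequential(*features[16:20])  # -> 512 channels
pool = nn.MaxPool2d(kernel_size=2, stride=2)
```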
For image augmentation, I used horizontal flips, color augmentations and transforming a car (but not background) to grayscale.
Original images had resolution (1918, 1280) and were padded to (1920, 1280) so that each side would be divisible by 32 (a network requirement), then used as input.
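A small OpenCV sketch of this kind of padding (the actual code may pad symmetrically or use a different border mode):

```python
import cv2

def pad_to_multiple(img, factor=32):
    """Reflect-pad an image so that both height and width are divisible by
    `factor`.  For the 1918x1280 Carvana images this yields 1920x1280
    (1280 is already divisible by 32)."""
    h, w = img.shape[:2]
    pad_h = (factor - h % factor) % factor
    pad_w = (factor - w % factor) % factor
    return cv2.copyMakeBorder(img, 0, pad_h, 0, pad_w, cv2.BORDER_REFLECT_101)
```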
With this architecture and image size, I could fit only one image per GPU, so I did not use deeper encoders like VGG 16 / 19. Also my batch size was limited to only 4 images.
One possible solution would be to train on crops and predict on full images. However, I got the impression that segmentation works better when the object is smaller than the input image. In this dataset some cars occupied the whole width of the image, so I decided against cropping the images.
Another approach, used by other participants, was to downscale the input images, but this could lead to some loss in accuracy. Since the scores were so close to each other, I did not want to lose a single pixel to these transformations (recall the 0.000001 margin between first and second place on the Private Leaderboard).
To decrease the variance of the predictions, I performed bagging: I trained separate networks on five folds and averaged their predictions.
In my model I used the following loss function:
It's widely used in binary image segmentation because it simplifies thresholding, pushing predictions to the ends of the [0, 1] interval.
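The exact formula from the original post is not reproduced here; a sketch of a loss in this family, combining per-pixel binary cross entropy with the log of a soft Jaccard index, looks like this in PyTorch:

```python
import torch
import torch.nn.functional as F

def bce_log_jaccard_loss(logits, targets, eps=1e-7):
    """Per-pixel binary cross entropy minus the log of a soft Jaccard index.
    Subtracting log(Jaccard) pushes predictions toward 0 and 1, which is
    what makes the final thresholding easy."""
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    probs = torch.sigmoid(logits)
    intersection = (probs * targets).sum()
    union = probs.sum() + targets.sum() - intersection
    jaccard = (intersection + eps) / (union + eps)
    return bce - torch.log(jaccard)
```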
I used the Adam optimizer. For the first 30 epochs I decreased the learning rate by a factor of two whenever the validation loss did not improve for two epochs. Then, for another 20 epochs, I used a cyclic learning rate oscillating between 1e-4 and 1e-6 on the schedule 1e-6, 1e-5, 1e-4, 1e-5, 1e-6, spending two epochs at each step.
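A rough PyTorch sketch of this schedule (the network is replaced by a stand-in module, and the cyclic phase is read as two epochs per learning-rate step):

```python
import torch

model = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)   # stand-in for the real UNet
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Epochs 0-29: halve the learning rate when the validation loss has not
# improved for two epochs (plateau.step(val_loss) is called once per epoch).
plateau = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.5, patience=2)

def cyclic_lr(epoch_in_cycle):
    """Cyclic phase: 1e-6 -> 1e-5 -> 1e-4 -> 1e-5 -> 1e-6, two epochs per step."""
    steps = [1e-6, 1e-5, 1e-4, 1e-5, 1e-6]
    return steps[(epoch_in_cycle // 2) % len(steps)]

# Epochs 30-49: set the learning rate explicitly from the cyclic schedule.
for epoch in range(30, 50):
    for group in optimizer.param_groups:
        group['lr'] = cyclic_lr(epoch - 30)
    # ... run one training epoch here ...
```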
A few days before the end of the competition I tried pseudo-labeling, and it gave a consistent boost to the score, but I did not have enough time to fully leverage the potential of this technique in this challenge.
Predictions for each fold without post processing:
Alexander's Approach
Like everyone else, I started with the well-known UNet architecture and soon realized that on my hardware I needed to either resize the input images or wait forever for it to learn anything good on image crops. My next attempt was to generate a rough mask and create crops only along the border; however, learning was still too slow. Then I started to look for new architectures and found a machine learning training video showing how to use LinkNet for image segmentation. I found the source paper and tried it out.
LinkNet is a classical encoder-decoder segmentation architecture with the following properties:
- As an encoder, it uses different layers of lightweight networks such as Resnet 34 or Resnet 18.
- Each decoder block consists of 3 parts: a 1x1 convolution with n // 4 filters, a 3x3 transposed convolution with stride 2 and n // 4 filters, and finally another 1x1 convolution to match the number of filters to the input size (a sketch follows below).
- Encoder and decoder layers with matching feature map sizes are connected through a plus operation. I also tried concatenating them along the filter dimension and using a 1x1 convolution to decrease the number of filters for the next layers - it worked a bit better.
The main drawback of this architecture is that the first powerful features start at a 4x smaller image size, so it might not be as precise as we would expect.
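A PyTorch sketch of the decoder block described above (the BatchNorm/ReLU placement follows the LinkNet paper and may differ from our released code):

```python
import torch.nn as nn

class LinkNetDecoderBlock(nn.Module):
    """1x1 conv to in_channels // 4 filters, 3x3 transposed conv with stride 2
    (2x upsampling), then a 1x1 conv to the number of filters expected by the
    next block."""

    def __init__(self, in_channels, out_channels):
        super().__init__()
        mid = in_channels // 4
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, mid, kernel_size=1),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(mid, mid, kernel_size=3, stride=2,
                               padding=1, output_padding=1),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, out_channels, kernel_size=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)
```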
I picked ResNet 34 as the encoder. I also tried ResNet 18, which was not powerful enough, and ResNet 50, which had a lot of parameters and was harder to train. The encoder was pre-trained on the ImageNet dataset. One epoch took only 9 minutes to train, and a decent solution was produced after only 2-3 epochs! You should definitely give LinkNet a try - it's blazingly fast and memory efficient. I trained it on full 1920x1280 images with a batch of 1 image per GPU (7.5 GB).
I applied soft augmentations: horizontal flips, shifts up to 100 px, scalings up to 10%, rotations up to 5°, and HSV augmentations. Also, I used the Adam (and RMSProp) optimizer with a learning rate of 1e-4 for the first 12 epochs and 1e-5 for 6 more epochs. Loss function: 1 + BCE - Dice. Test-time augmentation: horizontal flips.
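Test-time augmentation with horizontal flips is simple enough to sketch in PyTorch (`model` is assumed to output one logit map per image, NCHW layout):

```python
import torch

def predict_with_hflip_tta(model, images):
    """Average the prediction on the original batch with the prediction on
    its horizontally flipped version (flipped back before averaging).
    `images` is an NCHW float tensor."""
    with torch.no_grad():
        pred = torch.sigmoid(model(images))
        flipped = torch.flip(images, dims=[3])       # flip along the width axis
        pred_flipped = torch.flip(torch.sigmoid(model(flipped)), dims=[3])
    return (pred + pred_flipped) / 2
```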
I also performed bagging to decrease the variance of predictions. Since my training time was so fast, I could train multiple networks and average their predictions. Finally, I had 6 different networks, with and without tricks, with 5 folds for each network, i.e. I averaged 30 models in total. The absolute improvement was not big, but every network made some contribution, and the score difference with second place on the private leaderboard was tiny.
Less common tricks:
- Replace plus sign in LinkNet skip connections with concat and conv1x1.
- Hard negative mining: repeat the worst batch out of 10 batches.
- Contrast-limited adaptive histogram equalization (CLAHE) pre-processing: used to add contrast to the black bottom (see the sketch after this list).
- Cyclic learning rate at the end. The exact learning rate schedule was 3 cycles of (2 epochs at 1e-4, 2 epochs at 1e-5, 1 epoch at 1e-6). Normally I would pick one checkpoint per cycle, but because of the high inference time I just picked the best checkpoint out of all the cycles.
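OpenCV provides CLAHE out of the box; a minimal sketch of applying it to the lightness channel (whether the original code worked in LAB or another color space is an assumption):

```python
import cv2

def apply_clahe(img_bgr, clip_limit=2.0, grid_size=8):
    """Contrast-limited adaptive histogram equalization on the L channel,
    boosting local contrast in dark regions such as the underside of the car."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit,
                            tileGridSize=(grid_size, grid_size))
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```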
Artsiom's Approach
I trained two networks that were part of our final submission. Unlike my teammates who trained their models on the full resolution images, I used resized 1024x1024 input images and upscaled the predicted masks back to the original resolution at the inference step.
First network: UNet from scratch
I tailored a custom UNet with 6 Up/Down convolutional blocks. Each Down block consisted of 2 convolutional layers followed by a 2x2 max-pooling layer. Each Up block had a bilinear upscaling layer followed by 3 convolutional layers.
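Illustrative PyTorch versions of such Down/Up blocks (the skip-connection handling and the absence of normalization layers are assumptions, not a copy of the actual code):

```python
import torch
import torch.nn as nn

def conv3x3_relu(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                         nn.ReLU(inplace=True))

class DownBlock(nn.Module):
    """Two 3x3 convolutions followed by 2x2 max-pooling."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(conv3x3_relu(in_ch, out_ch),
                                  conv3x3_relu(out_ch, out_ch))
        self.pool = nn.MaxPool2d(2, 2)

    def forward(self, x):
        skip = self.conv(x)            # kept as the skip connection
        return self.pool(skip), skip

class UpBlock(nn.Module):
    """Bilinear 2x upscaling followed by three 3x3 convolutions;
    `in_ch` counts the channels after concatenation with the skip tensor."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        self.conv = nn.Sequential(conv3x3_relu(in_ch, out_ch),
                                  conv3x3_relu(out_ch, out_ch),
                                  conv3x3_relu(out_ch, out_ch))

    def forward(self, x, skip):
        return self.conv(torch.cat([self.up(x), skip], dim=1))
```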
Network weights were initialized randomly.
I used loss = BCE + 1 - DICE as the loss function, where BCE is the per-pixel binary cross entropy loss and DICE is the Dice score.
When calculating the BCE loss, each pixel of the mask was weighted according to its distance from the boundary of the car. This trick was proposed by Heng CherKeng. Pixels on the boundary had a 3 times larger weight than pixels deep inside the area of the car.
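One way to build such a weight map is via a distance transform on the mask. This is a sketch rather than Heng CherKeng's exact recipe: the boundary band width is an assumption, and only the 3x factor comes from the text above.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_weights(mask, boundary_width=10, boundary_weight=3.0):
    """Per-pixel BCE weights: pixels within `boundary_width` pixels of the
    car boundary get `boundary_weight`, all other pixels get weight 1.
    `mask` is a binary HxW array."""
    # Distance of every pixel to the mask boundary: for car pixels this is
    # the distance to the nearest background pixel, and vice versa.
    dist_inside = distance_transform_edt(mask)
    dist_outside = distance_transform_edt(1 - mask)
    dist_to_boundary = np.where(mask > 0, dist_inside, dist_outside)

    weights = np.ones(mask.shape, dtype=np.float32)
    weights[dist_to_boundary <= boundary_width] = boundary_weight
    return weights
```

The resulting map can then be passed as the `weight` argument of the per-pixel binary cross entropy.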
The data was divided into 7 folds without stratification. The network was trained from scratch for 250 epochs using SGD with momentum, multiplying the learning rate by 0.5 every 100 epochs.
Second network: UNet-VGG-11
As a second network I took UNet with VGG-11 as an encoder, similar to the one used by Vladimir, but with a wider decoder.
VGG-11 (‘VGG-A’) is an 11-layer convolutional network introduced by Simonyan & Zisserman. The beauty of this setup is that the encoder (VGG-11) was pre-trained on the ImageNet dataset, which provides a really good initialization of the weights.
For cross-validation I used 7 folds, stratified by the total area of the masks for each car across all 16 orientations.
The network was trained for 60 epochs with the same weighted loss as the first network, using a cyclic learning rate. One learning loop is 20 epochs: 10 epochs with base_lr, 5 epochs with base_lr * 0.1, and 5 epochs with base_lr * 0.01.
The effective batch size was 4. When it didn’t fit into the GPU memory, I accumulated the gradients for several iterations.
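A sketch of gradient accumulation in PyTorch (`model`, `criterion`, `optimizer` and `loader` are assumed to be defined elsewhere):

```python
# Gradients from several small batches are summed before a single optimizer
# step, emulating a larger effective batch size.
ACCUMULATION_STEPS = 4   # e.g. 4 micro-batches of 1 image => effective batch of 4

optimizer.zero_grad()
for step, (images, masks) in enumerate(loader):
    loss = criterion(model(images), masks) / ACCUMULATION_STEPS  # scale so the sum matches
    loss.backward()                                              # gradients accumulate in .grad
    if (step + 1) % ACCUMULATION_STEPS == 0:
        optimizer.step()
        optimizer.zero_grad()
```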
I used two types of augmentations:
- Heavy - random translation, scaling, rotation, brightness change, contrast change, saturation change, conversion to grayscale.
- Light - random translation, scaling and rotation.
The first model was trained with heavy augmentations. The second one was trained for 15 epochs with heavy augmentations and for 45 epochs with light augmentations.
Results
In total I trained 14 models (2 architectures, 7 folds each). The table below shows the Dice scores on cross-validation and on the public LB.
Ensembling of the models from different folds (line ‘ensemble’ in the table) was performed by averaging 7 predictions from 7 folds on the test images.
As you can see, the ensembles of both networks have roughly the same performance - 0.9972. But because of the different architectures and weight initializations, a combination of these two models contributes significantly to the performance of our team's final ensemble.
Merging and Post Processing
We used a simple pixel-level average of models as our merging strategy. First, we averaged Alexander's 6*5 = 30 models, and then averaged all the other models with the result.
We also wanted to find outliers and hard cases. For this, we took the averaged prediction, found pixels in the probability range 0.3-0.8, and marked them as unreliable. Then we sorted all results by the area of unreliable pixels and additionally processed the worst cases. For these cases, we selected the best-performing models and adjusted the probability boundary. We also computed a convex hull on areas with low reliability. This approach gave good-looking masks for the cases where our networks failed.
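The convex hull step can be sketched with OpenCV (in the actual pipeline it was applied only to the low-reliability regions; the threshold here is illustrative):

```python
import cv2
import numpy as np

def convex_hull_mask(prob, threshold=0.5):
    """Replace the thresholded mask with the convex hull of its largest
    connected component.  `prob` is an HxW array of averaged probabilities."""
    binary = (prob > threshold).astype(np.uint8)
    # [-2] makes the call work with both OpenCV 3.x and 4.x return signatures.
    contours = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return binary
    hull = cv2.convexHull(max(contours, key=cv2.contourArea))
    out = np.zeros_like(binary)
    cv2.fillConvexPoly(out, hull, 1)
    return out
```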
Extra materials
- Video from Roman Solovyov (https://www.youtube.com/watch?v=hwCUY4mwX1I)
- Presentation at Yandex by Sergey Mushinskiy (4th place solution) (English subtitles)
- 6th place JbestDeepGooseFlops solution overview