What does it take to get almost to the top? Meet Pei-Lien Chou, the worthy runner-up in our recent TrackML Particle Tracking Challenge. We invited him to tell us how he placed so well in this challenge.
In this contest, Kagglers were challenged to build an algorithm that quickly reconstructs particle tracks from the 3D points left in silicon detectors. This was part one of a two-phase challenge. The accuracy phase, which ran from May to August 13, 2018, focused on the highest score, irrespective of evaluation time. The second phase is an official NIPS competition (Montreal, December 2018) focused on the balance between accuracy and algorithm speed.
For more on the second phase, see the contest post here. For tips from second-place winner Pei-Lien Chou, read on!
The basics
What was your background before entering this challenge?
I hold a Bachelor’s degree in Mathematics and a Master’s degree in Electronic Engineering. I’ve been an engineer in image-based deep learning since last year.
How did you get started competing on Kaggle?
I joined Kaggle about 1.5 years ago to practice deep learning, and it has helped a lot in my day job. I finished in the top 1% in my first competition and won the next one. Competing on Kaggle is really exciting.
What made you decide to enter this competition?
I did not pay attention to it at first, because the competition was not image-based, although I did experiment with some point-cloud methods during the competition. But when I realized that the organizer was CERN, the people who are making black holes, I joined for sure.
Let's get technical
What was your approach?
My approach started from a naive idea. I wanted to build a model that maps all of the tracks (the model output) to the detector hits (the model input) for each event, just as we use DL for other problems. The output can be represented by an N×N matrix if an event has N hits (N is usually around 100k), with Mij = 1 if hit-i and hit-j are in the same track, and 0 otherwise. But that model would be far too large, so I split the problem into its minimum unit: input a pair of hits and output their relationship (Fig.1). Unlike the real "connect the dots" game, which only connects adjoining dots, I connect all the dots that belong to the same track, for robustness. With that, I was ready to start working on the competition.
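To make the pair formulation concrete, here is a minimal illustration (not the author's actual code) of how same-track pair labels could be derived from a TrackML-style truth table; the `hit_id` and `particle_id` column names follow the public dataset, and the toy event is invented.

```python
import itertools
import pandas as pd

# Toy truth table: each hit is assigned to a particle (particle_id == 0 marks noise hits).
truth = pd.DataFrame({
    "hit_id":      [1, 2, 3, 4, 5, 6],
    "particle_id": [10, 10, 10, 20, 20, 0],
})

# Positive pairs: every unordered pair of hits that shares a (non-noise) particle_id.
# This is the Mij = 1 case; all other pairs are negatives (Mij = 0).
positive_pairs = [
    pair
    for _, group in truth[truth.particle_id != 0].groupby("particle_id")
    for pair in itertools.combinations(group.hit_id, 2)
]

print(positive_pairs)  # [(1, 2), (1, 3), (2, 3), (4, 5)]
```

In a real event with around 100k hits the pairs cannot all be enumerated, which is why negative pairs are sampled, as described later in the interview.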
What happened?
First, I used the hit locations (x, y, z) as my input and easily got an accuracy of 0.99 by training on 10 events. But I quickly discovered that this was not good enough to reconstruct tracks. The problem is that even with a false positive rate of 0.01, a given hit yields about 0.01 × 100k = 1,000 false positive pairs, while it has only about 10 true positive pairs (the average track length). And the evaluation only starts rewarding a reconstructed track once it overlaps the true track by more than 50%, measured on both the truth track and the reconstructed one.
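A quick back-of-the-envelope check of that imbalance, using the numbers quoted above (the 100k hits per event is an approximation):

```python
n_hits = 100_000         # approximate number of hits in one event
fpr = 0.01               # false positive rate of the pair classifier
true_pairs_per_hit = 10  # roughly the average track length

false_pairs_per_hit = fpr * n_hits               # 0.01 * 100,000 = 1,000
print(false_pairs_per_hit / true_pairs_per_hit)  # ~100 false pairs for every true pair
```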
What happened next?
I got a local score of 0.2 on my first try, which was about the same as the public kernels at that time. I guessed that maybe 0.6 would be enough to win, and hoped my approach could get there. God knows!
How did you get to better predictions?
I tried many methods, and I improved much more than I expected:
- Larger model size and more training data: a 5-hidden-layer MLP with 4k-2k-2k-2k-1k neurons, trained on 3 sets totaling 5,310 events, with about 2.4 billion positive pairs and many more negative pairs (a minimal sketch of such a pair classifier appears below).
- Better features: 27 features per pair: x, y, z, count(cells), and sum(cells.value) for each hit; two unit vectors derived from the cells to estimate each hit's direction, randomly inverted during training (Fig.2); then, assuming the two hits lie on a line or a helix through (0, 0, z0), the abs(cos()) between the two estimated vectors and the tangent of the curve; and finally z0 itself.
- Better negative sampling: sampling more negative pairs that are close to positive pairs, and applying hard negative mining.
In the end, I got an average of 80 false positive pairs per hit at a 0.97 true positive rate, and only 6 of those false positive pairs had a probability larger than the mean of the true positive pairs.
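For reference, here is a minimal sketch of a pair classifier with the layer sizes described above, written in Keras. The framework, activations, optimizer, and loss are assumptions; only the 27-feature input and the 4k-2k-2k-2k-1k hidden layers come from the interview.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_pair_classifier(n_features: int = 27) -> tf.keras.Model:
    """Pair classifier: 27 engineered pair features -> P(hit_i and hit_j share a track)."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(n_features,)),
        layers.Dense(4000, activation="relu"),
        layers.Dense(2000, activation="relu"),
        layers.Dense(2000, activation="relu"),
        layers.Dense(2000, activation="relu"),
        layers.Dense(1000, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # probability that the two hits share a track
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_pair_classifier()
model.summary()
```

Training such a model would consume the sampled positive and hard-negative pairs described in the list above.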
How did you reconstruct tracks?
At this point I had a not-so-precise N×N relationship matrix, but it was enough to build good tracks if I used all of the information in it.
Reconstruct: find N candidate tracks (a simplified sketch of this loop follows the list)
- Take one hit as the seed (say hit-i), find the highest-probability pair P(i, j) (above a threshold), and add hit-j to the track.
- Find the hit-k that maximizes P(i, k) + P(j, k); if both pair probabilities are above a threshold, add hit-k to the track.
- Once the track has two or three hits, test each new hit against the circle that the existing hits define in the x-y plane. (Without this step I could only reach a score of about 0.8.)
- Keep adding hits until no further hits qualify.
- Repeat from step 1 for all N hits. (Fig.3)
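A highly simplified sketch of this greedy loop, assuming the pair probabilities fit in a dense N×N matrix `P` (in a real 100k-hit event they do not, and the circle-fit test in the x-y plane is only hinted at in a comment):

```python
import numpy as np

def reconstruct_tracks(P: np.ndarray, threshold: float = 0.5, max_len: int = 20):
    """Grow one candidate track per seed hit from a pairwise probability matrix P."""
    n = P.shape[0]
    tracks = []
    for seed in range(n):                          # last step above: loop over all N hits
        track = [seed]
        j = int(np.argmax(P[seed]))                # step 1: best partner of the seed
        if P[seed, j] < threshold:
            tracks.append(track)
            continue
        track.append(j)
        while len(track) < max_len:                # steps 2-4: keep adding the best-matching hit
            scores = P[track].sum(axis=0)          # e.g. P(i, k) + P(j, k) when the track is (i, j)
            scores[track] = -np.inf                # never revisit hits already in the track
            k = int(np.argmax(scores))
            if np.any(P[track, k] < threshold):    # every pair must clear the threshold
                break
            # (The real solution also checks that hit k fits the circle in the x-y plane.)
            track.append(k)
        tracks.append(track)
    return tracks

# Toy event with 4 hits: hits 0-2 belong to one track, hit 3 is noise.
P = np.array([[0.0, 0.9, 0.8, 0.1],
              [0.9, 0.0, 0.7, 0.1],
              [0.8, 0.7, 0.0, 0.2],
              [0.1, 0.1, 0.2, 0.0]])
print(reconstruct_tracks(P))   # [[0, 1, 2], [1, 0, 2], [2, 0, 1], [3]]
```

Note that each of the first three hits grows into the same three-hit track, which is exactly the redundancy the merge-and-extend step exploits.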
Merge and extend
- Calculate the similarity between candidate tracks and use it as a track's quality: if the candidate tracks seeded from each hit of a track are all the same, that track gets a higher merging priority. (Fig.6; one possible scoring is sketched after this list.)
- Pick the high-priority tracks first, then extend them by loosening the constraints used in the reconstruction step.
- Loop.
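The interview does not spell out the exact similarity measure, so the following is only one possible reading, as a sketch: score each candidate track by how well the candidate tracks seeded from its own hits agree with it (here via Jaccard similarity).

```python
def merge_priority(tracks):
    """Score each candidate track by the agreement of the tracks seeded from its hits.

    tracks[i] is the candidate track grown from seed hit tracks[i][0].
    A track whose hits all grow into the same candidate gets a priority of 1.0.
    """
    track_sets = {t[0]: set(t) for t in tracks}   # seed hit -> hits of its candidate track
    priorities = []
    for t in tracks:
        this = set(t)
        sims = []
        for hit in t:
            other = track_sets.get(hit, {hit})    # candidate track seeded from this member hit
            sims.append(len(this & other) / len(this | other))   # Jaccard similarity
        priorities.append(sum(sims) / len(sims))
    return priorities

tracks = [[0, 1, 2], [1, 0, 2], [2, 0, 1], [3]]   # e.g. output of reconstruct_tracks above
print(merge_priority(tracks))                     # [1.0, 1.0, 1.0, 1.0] -> full agreement
```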
Other work
In the end, I added a z-axis constraint and an ensemble of two models, which gave a 0.003 improvement.
I also tried applying PointNet to find tracks among the predicted candidates and to refine tracks. Both performed well, but not better than the approach above.
Fig.3 An example of reconstruction of an event with 6 hits.
Fig.6 An example of determining the merging priority.
Fig.4 The seeds (large circles) and their corresponding candidates (in matching colors) in the x-y plane. It is clear that the seeds lie on a track.
Fig.5 The diameter of each hit is proportional to the sum of the predicted probabilities when seeding from the nine ground-truth hits (in red).
Here is a kernel for reference
I call this process an endless loop, and it ended up far from my original naive idea. Nevertheless, I was very happy when I finally passed 0.9.
What was the run time for both the training and prediction of your winning solution?
You know, I had to train on 5k events and apply hard negative mining. And for every test event, I had to predict 100k × 100k pairs, reconstruct 100k candidate tracks (actually more than 800k in the winning solution), and then merge and extend them into 10k tracks. So the run time is an astronomical number; reproducing all of this work might take several months on a single computer.
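For a rough sense of scale, a naive count of the pair predictions per event, using the numbers quoted above:

```python
hits_per_event = 100_000
pairs_per_event = hits_per_event ** 2   # all-pairs prediction for a single test event
print(f"{pairs_per_event:,} pair predictions per event")  # 10,000,000,000
```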
Words of wisdom
Is DL suitable for this topic?
In my opinion, it depends on whether the target can be well described. If it can, then a rule-based method should be better. In other words, a clustering approach can reach 0.8 in this competition, so applying DL here is asking for trouble. But it is also fun.
Do you have any advice for those just getting started in data science?
Join Kaggle now (if you haven't already) and just get started.
Bio
Pei-Lien Chou is an engineering team lead in image-based deep learning with 12 years of experience in the video surveillance industry. He holds a Bachelor's degree in Mathematics from National Taiwan University and a Master's degree in Electronic Engineering, specializing in speech signal processing, from National Tsing Hua University.