Facebook V: Predicting Check Ins, Winner's Interview: 3rd Place, Ryuji Sakata

The Facebook recruitment challenge, Predicting Check Ins, ran from May to July 2016, attracting over 1,000 competitors who made more than 15,000 submissions. Kagglers competed to predict a ranked list of the most likely check-in places given a set of coordinates. With only four variables to work from, the real challenge was making sense of the enormous number of possible categories in this artificial 10km by 10km world. The third place winner, Ryuji Sakata, AKA Jack (Japan), describes in this interview how he tackled the problem using just a laptop with 8GB of RAM and two hours of run time.

The basics

What was your background prior to entering this challenge?

I'm working as a data scientist at an electronics company in Japan. My major at university (Aeronautics and Astronautics) is actually far from my current job, and I had no knowledge of data science at all. I got to know this field after starting work, and I have kept learning ever since.

How did you get started competing on Kaggle?

I joined Kaggle about two and a half years ago in order to learn data mining in practice. The first competition I entered was the Allstate Purchase Prediction Challenge, and that is when I realized how fun data mining is. Since then, I have spent much of my spare time on Kaggle. Competing with other Kagglers is always very exciting, and it has been a good learning experience!

What made you decide to enter this competition?

This competition seemed to be something special compared with other challenges. The target variable we had to predict has an enormous number of categories, and only four variables were available to predict it! I was attracted to the idea that this competition was testing Kagglers' ingenuity.

Let's get technical

What preprocessing and supervised learning methods did you use?

My basic idea is very similar to that of Markus Kliegl, as described in his Winner's Interview: a Naive Bayes approach.

p(place_id | x, y, accuracy, time) ∝ p(place_id) × p(x | place_id) × p(y | place_id) × p(accuracy | place_id) × p(time of day | place_id) × p(day of week | place_id)

What I had to do was find the place that maximizes the above probability. The brief steps of my approach are as follows (a small code sketch of steps 1, 4, and 5 is given after the list):

  1. To narrow down candidate places for each record, I counted the check-ins of each place using a 100 x 200 grid, and places with at least 2 check-ins are treated as candidates for each cell.
  2. Estimate p(place_id) from the past trend of check-in frequency by xgboost regression (refer to Figure A).
  3. Estimate each distribution using histograms (refer to Figure B).
  4. Calculate the product of the probabilities of all candidates for each record.
  5. Select the top 3 places with the highest probability and create the submission.
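
To make the pipeline concrete, here is a minimal sketch of steps 1, 4, and 5, not the actual competition code: it assumes a data frame `train` with columns x, y, accuracy, time, and place_id, coordinates in [0, 10] km, time in minutes, and helper names (`cell_id`, `prob_lookup`, `prior`) that are purely illustrative.

```r
# Step 1: map each check-in to a cell of a 100 x 200 grid and keep, per cell,
# the places with at least 2 check-ins as that cell's candidates.
cell_id <- function(x, y) {
  ix <- pmin(floor(x / 10 * 100), 99)    # 100 bins along x (assumed orientation)
  iy <- pmin(floor(y / 10 * 200), 199)   # 200 bins along y
  ix * 200 + iy
}

train$cell <- cell_id(train$x, train$y)
counts <- aggregate(list(n = rep(1, nrow(train))),
                    by = list(cell = train$cell, place_id = train$place_id),
                    FUN = sum)
candidates <- counts[counts$n >= 2, c("cell", "place_id")]

# Steps 4 and 5: for one test record, score every candidate place of its cell
# with the naive Bayes product and keep the top 3. `prob_lookup(feature, value,
# place)` stands for the histogram densities of step 3 and `prior(place)` for
# the xgboost-based p(place_id) of step 2 (both hypothetical helpers here).
score_record <- function(rec, prob_lookup, prior) {
  cand <- candidates$place_id[candidates$cell == cell_id(rec$x, rec$y)]
  scores <- sapply(cand, function(p) {
    prior(p) *
      prob_lookup("x", rec$x, p) *
      prob_lookup("y", rec$y, p) *
      prob_lookup("accuracy", rec$accuracy, p) *
      prob_lookup("time_of_day", rec$time %% 1440, p) *
      prob_lookup("day_of_week", (rec$time %/% 1440) %% 7, p)
  })
  cand[order(scores, decreasing = TRUE)][1:3]
}
```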

Regarding the second step, the time variation of check-in frequency seemed to be quite different between places, but I believed that some trend patterns exist. So I decided to predict the number of future check-ins of each place from the history of all places. The concept is as shown in the figure below.

[Figure A: concept of predicting future check-in counts of each place from the check-in history of all places]
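
The sketch below illustrates one way such a regression could be set up in R. It assumes a matrix `history` whose row i holds the weekly check-in counts of place i over the training period; the feature layout and the xgboost parameters are assumptions for illustration, not the settings actually used.

```r
library(xgboost)

n_weeks <- ncol(history)
X <- history[, 1:(n_weeks - 1)]   # past weekly counts as features
y <- history[, n_weeks]           # most recent observed week as the target

model <- xgboost(data = as.matrix(X), label = y,
                 objective = "reg:linear", nrounds = 100,
                 max_depth = 4, eta = 0.1, verbose = 0)

# Shift the window forward by one week to predict the unseen test-period
# counts, then normalize them into p(place_id).
pred <- predict(model, as.matrix(history[, 2:n_weeks]))
pred <- pmax(pred, 0)
p_place <- pred / sum(pred)
```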

Regarding the third step, I also illustrated the concept of estimation in the figure below.

[Figure B: concept of estimating each distribution with a histogram]

There are some additional notes regarding the above figure (a small sketch combining these ingredients follows the list):

  • After counting check-ins for each bin, the counted values are smoothed using neighboring bins.
  • For "x", "y", and "accuracy", the concept of time decay was introduced by multiplying each check-in by a weight that decreases with elapsed time when counting.
  • For "y", the distribution seemed to follow a normal distribution for many places, so I also estimated the center and standard deviation for each place and calculated the probability from that.
  • For "time of day", "day of week", and "accuracy", a small constant (pseudocount) is added after counting check-ins in order to avoid the probability becoming zero.

A simple version of my solution is uploaded in my kernel here.

How did you spend your time on this competition?

The approach was almost fixed in the early stage of the competition, and I went through trial and error many, many times to maximize the local validation score. I spent much of my time optimizing the hyperparameters used to estimate the distributions, such as:

  • The number of bins for each variable
  • How to smooth the distribution (number of neighbors and weight)
  • How to decay the counting weight according to elapsed time
  • What pseudocount should be added

At the end of the competition, however, I couldn't improve the score any further by changing the above parameters, so I shifted to improving the precision of p(place_id).
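
For reference, the local validation score here is the competition metric, MAP@3. A minimal helper for computing it might look like the following; the argument names and layout are illustrative.

```r
# `preds` is an n x 3 matrix of ranked place_id predictions,
# `truth` the vector of true place_ids.
map_at_3 <- function(preds, truth) {
  ap <- sapply(seq_along(truth), function(i) {
    hit <- which(preds[i, ] == truth[i])[1]   # rank of the true place, if any
    if (is.na(hit)) 0 else 1 / hit            # average precision with one relevant item
  })
  mean(ap)
}
```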

Which tools did you use?

I always use R for Kaggle competitions, just because I am familiar with it. However, I would like to master Python too in the future.

What was the run time for both training and prediction of your winning solution?

My solution takes only about 2 hours on a laptop with 8GB of RAM, and maybe this is the most prominent feature of my approach. Some other competitors seemed to apply xgboost to many small cells, which took a very long time; that would have been quite unbearable for me! I decided to focus on achieving both accuracy and speed. Thanks to that, I could go through trial and error many times.

Words of wisdom

What have you taken away from this competition?

Through this great competition, I gained a little more confidence in this field, but at the same time, I was surprised by other Kagglers' ideas, which I would never have come up with. From this experience, I realized there are still many things I have to learn, so I would like to keep learning.

Do you have any advice for those just getting started in data science?

I am still a beginner in data science, but if there is one thing to say, it is: "What one likes, one will do well." For me, that is Kaggle. I always enjoy competing and communicating with other Kagglers!

Bio

Ryuji Sakata works at the Panasonic group as a data scientist. He has been involved in data science for about 3 years. He holds a master's degree in Aeronautics and Astronautics from Kyoto University.

