The Homesite Quote Conversion competition challenged Kagglers to predict the customers most likely to purchase a quote for home insurance based on an anonymized database of information on customer and sales activity. 1925 players on 1764 teams competed for a spot at the top and team Frenchies found themselves in the money with their special blend of 600 base models. Nicolas, Florian, and Pierre describe how the already highly separable classes challenged them to work collaboratively to eke out improvements in performance through feature engineering, effective cross validation, and ensembling.
Frenchies
The team started off with Pierre and Florian, who are longtime friends. Nicolas asked to join later in the competition, and it was one of the best decisions of this challenge! All of us were finalists in the “Cdiscount.com” competition hosted on datascience.net, the “French Kaggle”. It was a real pleasure for us to work together as French guys and to demonstrate our skills in an international contest.
Nicolas Gaude
Working for Bouygues Telecom, a French telecom operator with 15M subscribers, I head its data science team with a focus on production efficiency and scalability. With a 10-year background in embedded software development, I moved into the big data domain 3 years ago and fell in love with machine learning. Kaggle is for me a unique opportunity to sharpen my skills and to compete with other data scientists around the world. And honestly, Kaggle is the only place where a 0.0001% improvement matters so much that you will build an ensemble of hundreds of models to get to the top, and that’s a lot of fun.
Florian Laroumagne
Currently working as a BI Analyst at EDF (the major French and worldwide electricity provider), I graduated from the ENSIIE, a top French maths & IT engineering school. I had some statistics and machine learning courses, but I had no opportunity to apply them in my professional life. To improve my skills, I followed some MOOCs (on “france-universite-numerique” and on “Coursera”) about statistics with R, big data, and machine learning. After acquiring the theory, I wanted to put it into practice. This is how I ended up on Kaggle.
Pierre Nowak
I graduated from ENSIIE & Université d’Evry Val d’Essonne with a double degree in Financial Mathematics. My interest in machine learning came with my participation in a text mining challenge hosted by datascience.net. I have been working for 7 months at EDF R&D, first on text mining problems, and I recently moved to forecasting daily electricity load curves. Despite the fact that many people say Kaggle is brute force only, I find it to be the place to learn brand new algorithms and techniques. I especially had the opportunity to learn Deep Learning with Keras and next-level blending thanks to Nicolas and some public posts from Gilberto and the Mad Professors.
Background
None of us had a prior background in Homesite’s business since none of us works in that field. However, this didn’t hurt us: we think the fact that the data was anonymized put most of the competitors on approximately the same level.
About the technologies used, there were two schools inside our team. Nicolas was proficient in Python while Florian was more R-focused. Pierre was quite versatile and was the glue between the two worlds.
The solution
Feature engineering
We have to admit that feature engineering wasn’t very easy for us. Sure, we tried some differences between features, which could then be selected (or not) via a feature selection process, but in the end we had only the basic dataset with a few engineered features (a rough sketch of how these might be built follows the list). The ones we kept were:
- Row-wise counts of 0s, 1s, and N/As
- Top PCA components
- 2D t-SNE embedding
- Cluster IDs generated with k-means
- Some differences among features (especially the “golden” features found in a public script)
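To make this concrete, here is a minimal sketch in Python (pandas / scikit-learn) of how such features could be computed. The function name, component counts, and cluster counts are illustrative assumptions, not the settings we actually used:

```python
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

def add_engineered_features(df, numeric_cols, n_pca=10, n_clusters=20):
    """Illustrative versions of the engineered features listed above (df is a pandas DataFrame)."""
    out = df.copy()

    # Row-wise counts of zeros, ones and missing values
    out["count_0"] = (df[numeric_cols] == 0).sum(axis=1)
    out["count_1"] = (df[numeric_cols] == 1).sum(axis=1)
    out["count_na"] = df[numeric_cols].isnull().sum(axis=1)

    filled = df[numeric_cols].fillna(-1)

    # Top PCA components
    components = PCA(n_components=n_pca).fit_transform(filled)
    for i in range(n_pca):
        out["pca_{}".format(i)] = components[:, i]

    # Cluster IDs from k-means
    out["kmeans_cluster"] = KMeans(n_clusters=n_clusters, random_state=0).fit_predict(filled)

    # 2D t-SNE embedding (slow on large datasets; usually computed once and cached)
    embedding = TSNE(n_components=2, random_state=0).fit_transform(filled)
    out["tsne_1"], out["tsne_2"] = embedding[:, 0], embedding[:, 1]

    return out
```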
This challenge was really fun because the AUC was already really high at the beginning of the competition (around 97%). As we can see, the two classes are in fact quite easily separable:
Dataset modeling
Like other teams, we encoded the categorical features. Most of the time this was done with a very common “label encoder”: each category was replaced with an integer ID. Despite the simplicity of this method, it works quite well for tree-based classifiers. However, it is not recommended for linear ones, which is why we also generated “one hot encoded” features. Finally, we also tried target encoding, replacing each category with the ratio of the target observed for that category. It didn’t improve our score a lot but was worth having in our blend.
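As a hedged illustration (not our exact code), the three encodings could look like this in pandas, given hypothetical train/test DataFrames, a list of categorical columns, and the target column name:

```python
import pandas as pd

def encode_categoricals(train, test, cat_cols, target_col):
    """Illustrative label, one-hot and target encodings of categorical columns."""
    full = pd.concat([train[cat_cols], test[cat_cols]], keys=["train", "test"])

    # 1) Label encoding: each category becomes an integer ID (fine for trees)
    label_encoded = full.apply(lambda s: s.astype("category").cat.codes)

    # 2) One-hot encoding, better suited to linear models
    one_hot_encoded = pd.get_dummies(full, columns=cat_cols)

    # 3) Target encoding: ratio of the target per category, computed on train only
    #    (ideally done out-of-fold to limit leakage)
    target_encoded = full.copy()
    for col in cat_cols:
        target_encoded[col] = full[col].map(train.groupby(col)[target_col].mean())

    return label_encoded, one_hot_encoded, target_encoded
```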
Now that we had different versions of the dataset, we also split it. We used the full version (all features, all rows) for the majority of our classifiers, but we also trained weaker models on subsets of columns. For example, we trained one model on the “personal” columns only, another on the “geographical” columns only, and so on.
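A small sketch of what such a subset model could look like; the column prefixes, target name, and hyperparameters here are assumptions used for illustration:

```python
import xgboost as xgb

def train_subset_models(train, target_col="QuoteConversion_Flag"):
    """Illustrative models trained on thematic subsets of columns."""
    groups = {
        "personal": [c for c in train.columns if c.startswith("PersonalField")],
        "geographic": [c for c in train.columns if c.startswith("GeographicField")],
    }
    models = {}
    for name, cols in groups.items():
        model = xgb.XGBClassifier(n_estimators=300, max_depth=5)  # placeholder hyperparameters
        model.fit(train[cols], train[target_col])
        models[name] = model
    return models
```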
Training & Ensembling
With all the different versions of the dataset in hand, we trained well-known and well-performing machine learning models on them, such as:
- Logistic Regression
- Regularized Greedy Forest
- Neural Networks
- Extra Trees
- XGBoost
- H2O Random Forest (just 1 or 2 models in our first stage: not really important)
Our base level consisted of around 600 models. 100 were built by “hand” with different features and hyperparameters across all of the technologies above. Then, to add some diversity, we built a robot that created the 500 remaining models. This robot automatically trained XGBoost, Logistic Regression, and Neural Network models, each on a randomly chosen subset of features.
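The “robot” could be sketched roughly as follows; the model families are the ones named above, but the subset sizes and hyperparameter ranges are placeholders, not our real configuration:

```python
import random
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
import xgboost as xgb

def train_robot_models(X, y, n_models=500, seed=0):
    """Illustrative 'robot': trains many diverse models on random feature subsets."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        # Draw a random subset of columns (between 30% and 80% of the features)
        n_cols = rng.randint(int(0.3 * X.shape[1]), int(0.8 * X.shape[1]))
        cols = rng.sample(list(X.columns), n_cols)

        # Pick a model family at random with rough, randomly drawn hyperparameters
        family = rng.choice(["xgb", "logreg", "nn"])
        if family == "xgb":
            model = xgb.XGBClassifier(n_estimators=rng.randint(100, 500),
                                      max_depth=rng.randint(3, 8))
        elif family == "logreg":
            model = LogisticRegression(C=10 ** rng.uniform(-2, 2), max_iter=1000)
        else:
            model = MLPClassifier(hidden_layer_sizes=(rng.randint(32, 128),))

        model.fit(X[cols], y)
        models.append((cols, model))
    return models
```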
All our models were built with a 5-fold stratified CV. It gave us a local way to check our improvements and to avoid overfitting the leaderboard. Furthermore, the out-of-fold predictions produced by the CV were what allowed us to use an ensemble method.
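Concretely, a 5-fold stratified CV lets each model produce out-of-fold predictions on the training set, which then become the input features of the next stage. A minimal sketch, assuming pandas inputs and scikit-learn-style estimators:

```python
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import StratifiedKFold

def out_of_fold_predictions(model, X, y, X_test, n_splits=5, seed=0):
    """Out-of-fold train predictions plus fold-averaged test predictions."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    oof = np.zeros(len(X))
    test_pred = np.zeros(len(X_test))
    for train_idx, valid_idx in skf.split(X, y):
        m = clone(model)
        m.fit(X.iloc[train_idx], y.iloc[train_idx])
        oof[valid_idx] = m.predict_proba(X.iloc[valid_idx])[:, 1]
        test_pred += m.predict_proba(X_test)[:, 1] / n_splits
    return oof, test_pred
```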
An example of the diversity between two models, despite the fact that they are highly correlated:
To blend our 600 models, we tried different approaches. After some failures, we retained 3 well-performing blenders: a classical Logistic Regression, an XGBoost, and a bag of Neural Networks. These three blends naturally outperformed our best single model and captured the information in different ways. Then we transformed our predictions into ranks and simply averaged the ranks of the 3 blends to obtain our final submission.
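Since AUC only depends on the ordering of predictions, rank averaging is a simple and robust way to combine blenders whose outputs live on different scales. A minimal sketch (the blender names in the usage comment are hypothetical):

```python
import numpy as np
from scipy.stats import rankdata

def rank_average(predictions):
    """Average the ranks of several prediction vectors instead of their raw values."""
    ranks = np.column_stack([rankdata(p) for p in predictions])
    return ranks.mean(axis=1)

# Hypothetical usage with the three second-stage blenders:
# final_submission_score = rank_average([logreg_blend, xgb_blend, nn_bag_blend])
```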
Here is a sketch that sums up this multi-level stacking:
Words of wisdom
Here is a short list of what we learnt and/or what worked for us:
- Read forums, there are lots of useful insights
- Use the best script as a benchmark
- Don’t be afraid to generate lots of models and keep all the data created this way. You can still select among them later in order to blend them… or let the blender take the decision for you
- Validate the behavior of your CV. If you have a huge jump in local score that isn’t reflected on the leaderboard, there is something wrong
- Grid search works fine for finding hyperparameters (see the sketch after this list)
- Blending is a powerful tool. Please read the following post if you haven't already: http://mlwave.com/kaggle-ensembling-guide/
- Be aware of the standard deviation when blending. Increasing the metric is good, but not that valuable if the SD increases
- Neural nets are capricious; bag them if necessary
- Merging is totally fine and helps each teammate to learn from the others
- If you are a team, use collaborative tools: Skype, Slack, svn, git...
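For the grid-search bullet above, here is a minimal sketch with scikit-learn's GridSearchCV around an XGBoost classifier; the parameter grid and data names are illustrative, not the values we actually searched:

```python
import xgboost as xgb
from sklearn.model_selection import GridSearchCV, StratifiedKFold

# Placeholder grid for illustration
param_grid = {
    "max_depth": [4, 6, 8],
    "learning_rate": [0.01, 0.05],
    "n_estimators": [300, 600],
}

search = GridSearchCV(
    estimator=xgb.XGBClassifier(),
    param_grid=param_grid,
    scoring="roc_auc",
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
)
# search.fit(X_train, y_train)            # X_train / y_train: hypothetical training data
# print(search.best_params_, search.best_score_)
```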
Recommendations to those just starting out
As written just above, read the forums. There are nice tips, starter codes, and even links to great readings. Don’t hesitate to download a dataset and test lots of things on it. Even if most of the methods you test fail or give poor results, you will acquire some knowledge about what works and what doesn’t.
Merge! Seriously, learning from others is what makes you stronger. We all have different insights, backgrounds, or techniques which can be beneficial to your teammates. Last but not least, do not hesitate to discuss your ideas. This way you may find some golden thoughts that push you to the top!