The Home Depot Product Search Relevance competition, which ran on Kaggle from January to April 2016, challenged Kagglers to use real customer search queries to predict the relevance of product results. Over 2,000 teams made up of 2,553 players grappled with misspelled search terms and relied on natural language processing techniques to creatively engineer new features. With their simple yet effective features, Team Turing Test found that a carefully crafted minimal model is powerful enough to achieve a high-ranking solution. In this interview, Team Turing Test walks us through the full approach that landed them in third place.
The basics
What was your background prior to entering this challenge?
Chenglong Chen: I received my Ph.D. in Communication Engineering from Sun Yat-sen University, Guangzhou, China, in 2015. As a Ph.D. student, I mainly focused on passive digital image forensics and applied various machine learning methods, e.g., SVM and deep learning, to detect whether a digital image has been edited/doctored. I had participated in a few Kaggle competitions before this one.
Igor Buinyi: I was a Ph.D. student at the Institute of Physics, Academy of Sciences of Ukraine. Later I received my MA degree in Economics from the Kyiv School of Economics, then applied my skills in the financial sector and the computer games industry. I have solid skills in statistics, data analysis, and data processing, but I started seriously working on machine learning only recently, after graduating from the Udacity Data Analyst Nanodegree. Now I analyze customer behavior at Elyland.
Kostiantyn Omelianchuk: I have an MS degree in System Analysis and Management from Kyiv Polytechnic Institute and 3 years of data analysis and data processing experience in the financial sector and game development industry. Now I analyze customer behavior at Elyland.
Do you have any prior experience or domain knowledge that helped you succeed in this competition?
Chenglong: I have limited knowledge of NLP tasks. In addition to the CrowdFlower Search Results Relevance competition on Kaggle (where I ended up in 1st place), reading forum posts, previous Kaggle winning solutions, and related papers, plus lots of Google searches, gave me plenty of inspiration.
Igor: In a student project, I used NLP and machine learning to analyze the Enron email dataset.
How did you get started competing on Kaggle?
Kostia: Almost a year ago I read an article about a solution to a Kaggle competition. I was very impressed by it, so I registered on Kaggle and invited Igor to do the same. We then participated in Springleaf Marketing Response, but only finished in the top 15%.
Chenglong: It dates back to 2012. At that time, I was taking Prof. Hsuan-Tien Lin's Machine Learning Foundations course on Coursera. He encouraged us to compete on Kaggle to apply what we had learned to real-world problems. Since then, I have occasionally participated in competitions I find interesting.
What made you decide to enter this competition?
Igor: I had just graduated from the Udacity Nanodegree program and was eager to apply my new skills in a competition, no matter which one. This competition had just started at that time, so I joined and invited Kostia to the team.
Chenglong: I had prior success in the CrowdFlower Search Results Relevance competition on Kaggle, which is quite similar to this one. I also had some spare time and wanted to strengthen my skills.
Let's get technical
What preprocessing and supervised learning methods did you use?
The documentation and code for our approach are available here. Below is a high level overview of the method.
The text preprocessing step included text cleaning (removing special characters, unifying punctuation, spell correction and thesaurus replacement, removing stopwords, stemming) and extracting different meaningful parts of the text (such as concatenating digits with measure units, extracting brands and materials, finding part-of-speech tags for each word, and using patterns within the text to identify product names and other important words).
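As an illustration, here is a minimal sketch of that kind of cleaning pipeline. The regex patterns and unit mappings are simplified stand-ins, not the exact rules from the team's released code, and it assumes the NLTK stopwords corpus is available:

```python
import re
from nltk.corpus import stopwords          # requires nltk.download('stopwords')
from nltk.stem.snowball import SnowballStemmer

STEMMER = SnowballStemmer("english")
STOPWORDS = set(stopwords.words("english"))

def clean_text(text):
    text = text.lower()
    # strip special characters and unify punctuation
    text = re.sub(r"[^a-z0-9./\s-]", " ", text)
    # concatenate digits with measure units, e.g. "2 in." -> "2in."
    text = re.sub(r"(\d+)\s*(inches|inch|in\.?)\b", r"\1in.", text)
    text = re.sub(r"(\d+)\s*(pounds|pound|lbs?\.?)\b", r"\1lb.", text)
    # collapse whitespace
    text = re.sub(r"\s+", " ", text).strip()
    # remove stopwords and stem the remaining tokens
    tokens = [STEMMER.stem(w) for w in text.split() if w not in STOPWORDS]
    return " ".join(tokens)

print(clean_text("BEHR Premium 1-Gal. Paint, covers 400 sq. ft. per gallon"))
```

In the actual solution, spell correction and thesaurus replacement were additional steps on top of cleaning like this.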
Our features can be grouped as follows:
- Basic features like word count, character count, percentage of digits, etc.
- Intersection and distance features (various intersections between the search term and other text information; Jaccard and Dice coefficients calculated using different algorithms).
- Brand/material match features. Brands and materials were among the key determinants of search result relevance.
- First and last ngram features, designed to put more weight on the first and last ngrams of the text. The general idea is to incorporate position weighting into these features.
- LSA features. Most of our models do not deal efficiently with sparse TFIDF vector-space features, so we used dimension-reduced versions obtained via SVD (see the sketch after this list).
- TFIDF features in different combinations. They account for word frequency within the whole text corpus or within a particular query-product pair.
- Word2vec features from pretrained models or models trained on the HomeDepot data.
- WordNet similarity features. These allowed us to assess the closeness of words as defined by NLTK WordNet. We calculated synset similarity for pairs of important words (ideally, we wanted the important words to be product names and their main characteristics) or even whole strings.
- Query expansion. The idea was to group similar queries together and then estimate how common a particular product description was within the group. The relevance distribution was significantly skewed towards 3, so in each small subset the majority of relevances would be closer to 3 than to 1. Thus, a higher intersection of a particular product description with the list of the group's most common words would indicate higher relevance.
- Dummies: brands, materials, important words.
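To make the LSA features more concrete, here is a minimal, self-contained sketch. The toy strings and the number of components are made up for illustration; the real pipeline used many more components and text fields:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# toy data standing in for (search_term, product_title) pairs
queries = ["angle bracket", "deck screws", "led light bulb"]
titles = ["simpson strong-tie angle", "deckmate screw 3in.", "philips 60w led bulb"]

# fit TF-IDF on the whole corpus, then reduce dimensionality with SVD (LSA)
corpus = queries + titles
tfidf = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
X = tfidf.fit_transform(corpus)

svd = TruncatedSVD(n_components=2, random_state=2016)  # ~100+ components on real data
X_lsa = svd.fit_transform(X)

q_vecs, t_vecs = X_lsa[:len(queries)], X_lsa[len(queries):]
# one LSA feature per pair: cosine similarity between query and title in LSA space
lsa_cosine = np.array([cosine_similarity([q], [t])[0, 0] for q, t in zip(q_vecs, t_vecs)])
print(lsa_cosine)
```

The resulting dense similarities (and the LSA coordinates themselves) can be fed directly to tree-based models that would otherwise struggle with the raw sparse TFIDF matrix.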
How did you settle on a strong cross-validation strategy?
Chenglong: After seeing quite a few LB shakeups in Kaggle competitions, the first thing I do after entering a competition is perform some data analysis and set up an appropriate cross-validation strategy. This is quite important, as the CV results act as our guide for optimization throughout the rest of the process (and we don't want to be misled). It also helps to accelerate the change-validate-check cycle. After some analysis, we found that there are search terms that appear only in the training set, only in the testing set, and in both; the same applies to product uid. Taking this into consideration, we designed our own splitter for the data. Using this splitter, the gap between local CV and public LB was consistently around 0.001~0.002, which is within the 2-std range of CV for both single models and ensembled models. Fig. 2 below compares different splits of product uid and search term via Venn diagrams.
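As an illustration of the idea, here is a minimal sketch of a group-aware splitter in that spirit. The column name `search_term`, the fractions, and the single-split form are assumptions for illustration; the splitter in the team's released code is more involved (it also accounts for product uid):

```python
import numpy as np

def relevance_split(df, valid_only_frac=0.3, shared_row_frac=0.15, seed=2016):
    """Split rows so that the validation fold contains both search terms unseen
    in training and search terms shared with training, mimicking the actual
    train/test overlap structure. Fractions are illustrative, not tuned."""
    rng = np.random.RandomState(seed)

    terms = df["search_term"].unique()
    rng.shuffle(terms)
    n_valid_only = int(len(terms) * valid_only_frac)
    valid_only_terms = set(terms[:n_valid_only])

    # all rows whose search term is "validation-only" go to validation
    valid_mask = df["search_term"].isin(valid_only_terms).values

    # additionally route a fraction of rows with shared search terms to validation
    shared_rows = np.where(~valid_mask)[0]
    n_shared = int(len(shared_rows) * shared_row_frac)
    valid_mask[rng.choice(shared_rows, size=n_shared, replace=False)] = True

    train_idx = np.where(~valid_mask)[0]
    valid_idx = np.where(valid_mask)[0]
    return train_idx, valid_idx
```

Repeating such a split with different seeds gives CV folds whose query/product overlap resembles the actual train/test split, which is what kept the local CV vs. public LB gap small and stable.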
Fig. 2: Comparison of different splits on search term (top row) and product uid (bottom row). From left to right: the actual train/test split, a naive 0.69 : 0.31 random shuffle split, and the proposed split.
What was your most important insight into the data?
Chenglong: Including this one, I have participated in two relevance prediction competitions. From my point of view, the most important features are those measuring the distance between the search query and the corresponding results. Such distance can be measured via various distance measures (Jaccard, Dice, cosine similarity, KL, etc.), at different levels (char ngram, word ngram, sentence, document), and with different weighting strategies (position weighting, IDF weighting, BM25 weighting, entropy weighting).
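For instance, the same Jaccard/Dice idea can be computed at the word level or at the character n-gram level. This is a simplified sketch, not the team's exact feature code:

```python
def char_ngrams(text, n=3):
    """Character n-grams of a string with spaces removed."""
    s = text.replace(" ", "")
    return {s[i:i + n] for i in range(max(len(s) - n + 1, 1))}

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / float(len(a | b)) if (a | b) else 0.0

def dice(a, b):
    a, b = set(a), set(b)
    return 2.0 * len(a & b) / (len(a) + len(b)) if (a or b) else 0.0

query, title = "galvanized deck screw", "deckmate galvanized screws 3in."
print(jaccard(query.split(), title.split()))            # word level
print(jaccard(char_ngrams(query), char_ngrams(title)))  # char trigram level
print(dice(char_ngrams(query), char_ngrams(title)))
```

Varying the n-gram size, the text fields compared, and the weighting scheme yields a whole family of such distance features.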
Kostia: We found the data too noisy to be captured by a single unified text processing approach. Therefore, we needed to apply as many preprocessing/feature extraction algorithms as we could in order to get better performance.
Igor: We also found it extremely important to account for the number of characters in a word when generating various features. For example, the two strings ‘first second’ and ‘first third’ have a Jaccard coefficient of 1/(2+2-1)=1/3 in terms of words but 5/(11+10-5)=5/16 in terms of characters. Incorporating character-count information into the intersection features, distance features and TFIDF features in this way was very beneficial.
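One plausible way to reproduce the numbers in that example is to measure the intersection and the string sizes in characters of the matching words rather than in word counts. A sketch under that assumption:

```python
def jaccard_words(a, b):
    """Plain word-level Jaccard coefficient."""
    a, b = set(a.split()), set(b.split())
    return len(a & b) / float(len(a | b))

def jaccard_chars(a, b):
    """Jaccard coefficient weighted by the character lengths of the words."""
    a_words, b_words = set(a.split()), set(b.split())
    inter = sum(len(w) for w in a_words & b_words)  # characters of shared words
    len_a = sum(len(w) for w in a_words)            # characters excluding spaces
    len_b = sum(len(w) for w in b_words)
    return inter / float(len_a + len_b - inter)

print(jaccard_words("first second", "first third"))  # 1/(2+2-1)  = 1/3
print(jaccard_chars("first second", "first third"))  # 5/(11+10-5) = 5/16
```

The character-weighted version discounts matches on short, uninformative words and rewards matches on longer ones.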
Were you surprised by any of your findings?
Kostia: I discovered that routine work gives 90% of the result; only a minor part comes from luck and knowledge of ‘magic’ data science, at least in this competition. One could also easily compensate for a lack of particular knowledge with some invested time and effort. Perseverance and the desire to learn are the key factors for good performance.
Chenglong: Some data in the testing set are poisoned data that are used in neither the public LB nor the private LB.
Igor: During the competition we had to add more and more features to move forward; in total, we ended up with about 500 features in our single models. After the competition ended, we tried to produce a simple model as required by the Kaggle solution template. We were surprised that a 10-feature model yields a private RMSE of 0.44949 without any ensembling, enough for 31st place on the leaderboard. It shows that a solution to such a problem can be simple and effective at the same time. (To be clear, we did not use the released relevance information from the test set to produce our simple model.)
Which tools did you use?
Chenglong: We mostly used Python with packages such as numpy, pandas, regex, matplotlib, gensim, hyperopt, keras, NLTK, sklearn, and xgboost. I also used R with the Rtsne package for computing the t-SNE features. Igor and Kostia used Excel to perform some descriptive analysis and generate some charts.
How did you spend your time on this competition?
Igor: Kostia and I spent almost equal amounts of time on feature engineering and machine learning. For some tasks we specialized: I focused on text preprocessing and on using NLTK WordNet, while Kostia got the most out of word2vec. At some point we realized that we had a chance to win, so we had to properly document our work and adhere to the requirements for winning solutions. So, in the final four weeks of the competition we spent much of our time rewriting and cleaning our code and recalculating features according to a unified text processing algorithm (which did not improve the results; we even lost some performance due to the reduced variance of our features).
Chenglong: At the very beginning of the competition, I focused on figuring out an appropriate CV strategy. After that, I started reproducing the solution I used in the CrowdFlower competition. Meanwhile, I spent quite some time refactoring the whole framework so that data processors, feature generators, and model learners could be added easily. With a scalable codebase in place, I spent about 70% of my time figuring out and generating various effective features.
Kostia: During the final week the whole team spent a few days discussing patterns in the dataset (Fig. 3) and trying to figure out how to deal with them. Since our model predictions for the test set (one coming from Chenglong with some of our features, and another from Igor and me) were quite different, we expected that one of our approaches would be more robust to a leaderboard shakeup. In fact, we did not observe a major difference between the models’ performance on the public and private sets.
What was the run time for both training and prediction of your winning solution?
Igor: The text preprocessing and feature generation part takes a few days. Although a single xgboost model can be trained in about 5 minutes, training and predicting all models for the winning solution takes about one full day of computation.
Words of wisdom
What have you taken away from this competition?
- Spelling correction, synonym replacement, and text cleaning are very useful for search query relevance prediction, as was also revealed in CrowdFlower.
- In this competition, it was quite hard to improve the score via ensembling and stacking. To get the most out of them, one should really focus on introducing diversity: try various text processing methods, various feature subsets, and various learners. Team merging also contributes a lot of diversity!
- That said, know how to separate the effect of an improved algorithm from the effect of increased diversity. Careful experiments are necessary. If some clear and effective solution does not work as well as your old models, do not discard it until you have compared both approaches under the same conditions.
- Keep a clean and scalable codebase. Keep track/logs of the changes you make, especially how you create those massive crazy ensemble submissions.
- Set up an appropriate and reliable CV framework. It lets you try various local optimizations of features and models and get feedback without having to submit. This is important for accelerating the change-validate-check cycle.
Teamwork
How did your team form?
Igor: Kostia and I had worked together since the start of the competition. Before the merger deadline, competitors started to merge very intensely, and we realized that we had to merge in order to remain at the top. If we remember correctly, at the moment of our final merger Chenglong was the only solo competitor in the top 10, and there were a few other teams of two in the top 10. So, our merger was natural.
How did your team work together?
Chenglong: For most of our discussion, we used Skype. For data and file sharing, we used Google Drive and Gmail.
How did competing on a team help you succeed?
Chenglong: About one or two weeks before the team merging deadline, I was the only one in the top 10 competing solo. If I hadn't merged with Igor and Kostia, I might not have been able to enter the top 10 on the private LB. Competing on a team helps introduce the variety needed for better results. Also, we learned a lot from each other.
Igor: All the top teams in this competition merged during the later stages, which signifies the importance of merging for achieving top performance. On our team we were able to quickly test a few ideas thanks to information sharing and cooperation.
Just for fun
If you could run a Kaggle competition, what problem would you want to pose to other Kagglers?
Kostia: Developing an AI or recommendation tool for poker.
What is your dream job?
Chenglong: Chef.