
A Brief Summary of the Kaggle Text Normalization Challenge


This post is written by Richard Sproat & Kyle Gorman from Google's Speech & Language Algorithms Team, who hosted the recent Text Normalization Challenges. Bios below.

Now that the Kaggle Text Normalization Challenges for English and Russian are over, we would once again like to thank the hundreds of teams who participated and submitted results, and congratulate the three teams that won in each challenge.

The purpose of this note is to summarize what we feel we learned from this competition and to offer a few take-away thoughts. We also reveal how our own baseline system (a descendant of the system reported in Sproat & Jaitly 2016) performed on the two tasks.

First, some general observations. If there is one difference that characterizes the English and Russian competitions, it is that the top systems for English involved quite a bit of manual grammar engineering. This took the form of special sets of rules to handle particular semiotic classes such as measures or dates, though supervised classifiers were, for instance, used to identify the appropriate semiotic class for individual tokens. There was quite a bit less of this for Russian, where the top solutions were much more driven by machine learning, some exclusively so. We interpret this to mean that, given enough time, it is not too hard to develop a hand-built solution for English, but that Russian is sufficiently more complicated linguistically that building a comparable system by hand would be a great deal more work. The first author was one of the developers of the original Kestrel system for Russian, which was used to generate the data for this competition, and he can certainly attest that Russian is a lot harder to get right than English.
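
To give a concrete flavor of what this kind of grammar engineering involves, here is a toy sketch (ours, purely illustrative, and not any competitor's actual code) of a hand-written rule that verbalizes simple MEASURE tokens such as 3 kg:

```python
import re
from typing import Optional

# Toy hand-written verbalizer for simple MEASURE tokens such as "3 kg".
# Illustrative only: real rule-based normalizers handle decimals, ranges,
# unit declension (crucial for Russian), and many more units.

UNITS = {"kg": "kilograms", "km": "kilometers", "cm": "centimeters", "mg": "milligrams"}
ONES = ["zero", "one", "two", "three", "four", "five",
        "six", "seven", "eight", "nine"]
TEENS = ["ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
         "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty",
        "sixty", "seventy", "eighty", "ninety"]

def verbalize_number(n: int) -> str:
    """Spell out an integer in the range 0-99."""
    if n < 10:
        return ONES[n]
    if n < 20:
        return TEENS[n - 10]
    tens, ones = divmod(n, 10)
    return TENS[tens] if ones == 0 else f"{TENS[tens]} {ONES[ones]}"

def verbalize_measure(token: str) -> Optional[str]:
    """Return a spoken form for e.g. '3 kg', or None if the rule does not apply."""
    m = re.fullmatch(r"(\d{1,2})\s*(kg|km|cm|mg)", token)
    if not m:
        return None
    number, unit = m.groups()
    return f"{verbalize_number(int(number))} {UNITS[unit]}"

print(verbalize_measure("3 kg"))   # -> three kilograms
print(verbalize_measure("25 km"))  # -> twenty five kilometers
```

Even this tiny rule hints at why the approach is more costly for a morphologically rich language like Russian, where the spoken form of the unit must agree with the number it follows.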

Second, we’re sure everyone is wondering: how well does our own system perform? Since participants used different amounts of data in addition to the official Kaggle training data—most used some or all of the data on the GitHub repository, which is a superset of the Kaggle training data—it is hard to give a completely “fair” comparison, so we decided to restrict ourselves to a model that was trained only on the official Kaggle data.

In the tables and charts below, the top-performing Kaggle systems are labeled en_1, en_2, en_3 and ru_1, ru_2, ru_3 for the first-, second-, and third-place systems in each competition. Google is, of course, our system. Google+fst (English only) is our system with a machine-learned finite-state filter that constrains the output of the neural model and prevents it from producing “silly errors” for some semiotic classes; see, again, the Sproat & Jaitly 2016 paper for a description of this approach.
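
To give a sense of how such a filter behaves, here is our own greatly simplified stand-in, which approximates the idea with a hand-written pattern rather than the machine-learned finite-state grammars the actual system uses: only candidate outputs of the neural model that fall within a restricted language of plausible verbalizations for the class are allowed through.

```python
import re
from typing import List, Optional

# Greatly simplified stand-in for the finite-state filter: for a given
# semiotic class, only candidate verbalizations that match a constrained
# pattern are allowed through. The real system uses machine-learned
# finite-state grammars, not hand-written regular expressions.

NUMBER_WORD = (r"(zero|one|two|three|four|five|six|seven|eight|nine|ten|"
               r"eleven|twelve|thirteen|fourteen|fifteen|sixteen|seventeen|"
               r"eighteen|nineteen|twenty|thirty|forty|fifty|sixty|seventy|"
               r"eighty|ninety|hundred|thousand|point)")
MEASURE_GRAMMAR = re.compile(
    NUMBER_WORD + r"( " + NUMBER_WORD + r")*"
    r" (kilograms?|kilometers?|grams?|meters?|centimeters?)")

def filter_candidates(semiotic_class: str,
                      candidates: List[str]) -> Optional[str]:
    """Return the best candidate accepted by the class grammar, if any."""
    if semiotic_class != "MEASURE":          # only MEASURE is constrained here
        return candidates[0] if candidates else None
    for cand in candidates:                  # candidates assumed ranked best-first
        if MEASURE_GRAMMAR.fullmatch(cand):
            return cand
    return None                              # nothing survives the filter

# The neural model might rank a "silly" reading first; the filter vetoes it.
print(filter_candidates("MEASURE",
                        ["three kilometers per hour", "three kilograms"]))
# -> three kilograms
```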

As we can see, the top performing English systems did quite a bit better overall than our machine-learned system. Our RNN performed particularly poorly compared to the other systems on MEASURE expressions (things like 3 kg), though the FST filter cut our error rate on that class in half.

For Russian, on the other hand, we would have come in second place had we been allowed to compete. From our point of view, the most interesting result in the Russian competition was the second-place system, ru_2. While its overall scores were not quite as good as those of ru_1 or our system, its performance on several of the “interesting” classes was quite a bit better: ru_2 achieved the lowest error rate on MEASURE, DECIMAL, and MONEY, for example. This system used Facebook AI Research’s fairseq toolkit, a convolutional (CNN) sequence-to-sequence model that is becoming increasingly popular in neural machine translation. Is such a system better able to capture some of the class-specific details of the more interesting cases? Since ru_2 also used eight files from the GitHub data, it is not clear whether its advantage is due to a difference in the neural model (CNN versus RNN with attention), to the additional data, or to some combination of the two. Some experiments we have done suggest that adding more data gets us into the ballpark of ru_2’s scores on the interesting classes, so it may be a data issue after all, but at the time of writing we do not have a definite answer.

Author Bios:

Richard Sproat is a Research Scientist in the speech & language algorithms team at Google in New York. Prior to joining Google he worked at AT&T Bell Laboratories, the University of Illinois and the Oregon Health & Science University in Portland.

Kyle Gorman works on the speech & language algorithms team at Google in New York. Before joining Google in 2015, he worked as a postdoctoral research assistant, and assistant professor, at the Center for Spoken Language Understanding at the Oregon Health & Science University in Portland.

