A few years ago, I wrote a post called “Star Trek VI: will computers ever emulate the charm of human language learners?” In it, I was essentially conjuring the vision of the Star Trek “Universal Translator” and wondering what language learning would look like when our future gadgets are smoothly interpreting for us.
I only bring it up this week because I saw the news about the Google Neural Machine Translation system (GNMT), and it seemed like another reminder of just how quickly things are moving. Gradually, those ‘future gadgets’ are getting closer and closer to the here and now.
Google Translate launched ten years ago using phrase-based machine translation, and it just keeps getting better. Seeing them move to GNMT, and start with a challenging pair of languages (Mandarin to English), shows that they’re really not fooling around.
From the announcement:

“In addition to releasing this research paper today, we are announcing the launch of GNMT in production on a notoriously difficult language pair: Chinese to English. The Google Translate mobile and web apps are now using GNMT for 100% of machine translations from Chinese to English—about 18 million translations per day.”
I wonder what will happen when machine translation starts to surpass human translation in most situations. Sound far-fetched? The image below (from the Google Research Blog) shows the results when “human raters compare the quality of translations for a given source sentence. Scores range from 0 to 6, with 0 meaning ‘completely nonsense translation’, and 6 meaning ‘perfect translation’.” Look at the gains GNMT made over the old phrase-based technology, and how close it comes to human translation!
I bet some cool language learning tools will eventually come out of this technology.