Répétez en anglais, s'il vous plaît
Most commercial language translation software is pretty bad – but there may be a better way.
While the quality of computer-rendered translations has improved greatly over the past 20 years, some results are still just as grammatically goofy as the instructions on a chopstick wrapper. Take, for example, a website for a Japanese apple farm that was converted to English using Google’s automatic translation service:
“The Someya apple garden it will pass very! It is planted in 1954, furthermore even now, it exceeds tree’s age 50 year old prosperously, large - coming the tree abnormal play alligator apple is fructified. The tasty apple where temperature difference of day and night tightened to be extreme Gunma prefecture Numata city which four seasons are clear large the nature, hard is created.”*
Yes, the big picture gets across, but much is lost by Google’s Japanese-to-English translation algorithm. Google has offered its translation feature for a number of years, as has the Canadian-based Internet company Babel Fish. More recently, though, commercial software developers have begun exploring translation beyond static webpages and electronic documents, applying the technology to real-time Internet instant-messaging conversations. Earlier this month, AvMedia released an instant-messaging translator designed to make chatting easier for English speakers with friends who speak German, Spanish, French, Italian, or Portuguese, and vice versa (French can also be translated to German, and German to French).
But all of this software still lacks sufficient accuracy to be useful in demanding situations, such as business negotiations or military planning. This is probably because most commercial software follows a traditional approach to machine translation, says Kevin Knight, a computer scientist at the University of Southern California’s Information Sciences Institute (ISI) and co-founder of the California-based company Language Weaver.
Traditionally, machine translation software has depended on algorithms that sort through thousands of grammar rules for the two languages to be translated, Knight says. The problem, he explains, is that so many rules need to be written manually, as do the exceptions to these rules, and inaccuracy creeps in when complex sets of rules contradict each other. “If you write the 5000th rule, sometimes you break things,” Knight says.
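The "5000th rule" problem Knight describes can be illustrated with a deliberately tiny sketch. The rules and words below are invented for illustration and do not come from any real system; the point is only that in an ordered rule list, a later rule can silently change the behavior of an earlier one.

```python
# Toy rule-based translator: each rule maps a Spanish word to an
# English one, and rules are tried in order. Adding a new rule can
# change the output for inputs the old rules already handled.

RULES = [
    ("banco", "bank"),  # an early rule: "banco" -> "bank"
]

def translate(words, rules):
    out = []
    for w in words:
        for src, tgt in rules:
            if w == src:
                out.append(tgt)
                break
        else:
            out.append(w)  # unknown word: pass it through untranslated
    return out

print(translate(["banco"], RULES))   # ['bank']

# Someone later adds a rule for "banco" meaning a park bench...
RULES.insert(0, ("banco", "bench"))
print(translate(["banco"], RULES))   # ['bench'] -- the old behavior broke
```

With thousands of hand-written rules and exceptions, detecting this kind of interaction by inspection becomes impractical, which is the inaccuracy Knight points to.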
With Language Weaver and his research at USC, Knight, along with a handful of other researchers around the world, approaches the problem differently. Instead of following rigid grammatical rules, Language Weaver matches words and phrases across languages based on the probability that they are correct in a given context.
This statistical approach draws from a large number of examples from already translated documents, says Michael Collins, a computer engineer at MIT who uses the same method for a software application he’s building to perform German-to-English translations. IBM pioneered this approach in the 1990s, he says, in part, by taking advantage of a huge database of Canadian parliamentary proceedings published in both French and English versions.
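The core idea can be sketched as a phrase table: each source phrase maps to candidate translations with probabilities estimated from translated documents, and the translator picks the most probable candidate. This is a minimal sketch with an invented French-to-English table and illustrative probabilities, not Language Weaver's or IBM's actual implementation.

```python
# Toy phrase-based statistical translation: for each source phrase,
# choose the target phrase with the highest estimated probability.
# The table and its numbers are illustrative, not learned from data.

PHRASE_TABLE = {
    "la maison blanche": [("the white house", 0.9), ("the house white", 0.1)],
    "maison": [("house", 0.7), ("home", 0.3)],
    "blanche": [("white", 0.95), ("blank", 0.05)],
    "la": [("the", 0.9), ("it", 0.1)],
}

def translate(source, table, max_phrase_len=3):
    """Greedy left-to-right decoding, preferring the longest known phrase."""
    words = source.split()
    out, i = [], 0
    while i < len(words):
        for n in range(min(max_phrase_len, len(words) - i), 0, -1):
            phrase = " ".join(words[i:i + n])
            if phrase in table:
                # take the candidate with the highest probability
                best = max(table[phrase], key=lambda cand: cand[1])[0]
                out.append(best)
                i += n
                break
        else:
            out.append(words[i])  # unknown word: copy it through
            i += 1
    return " ".join(out)

print(translate("la maison blanche", PHRASE_TABLE))  # the white house
```

Matching the whole phrase "la maison blanche" at once is what lets the system get English word order right ("the white house") where word-by-word lookup would not.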
The statistical variety of machine translation not only produces better results than the traditional method, says Knight, but is also designed to keep improving on its own: the more translated documents the software encounters, the more likely it is to match phrases correctly. “A few years ago, for our Chinese and Arabic languages, all we could get was the basic topic of what an article was about,” Knight says. “Now, the resolution is at the sentence level.”
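Why more documents help can be seen in the simplest possible estimator: count how often each source word co-occurs with each target word across aligned sentence pairs, then normalize the counts into probabilities. Real systems (such as the IBM models pioneered on the Canadian parliamentary corpus) refine these estimates iteratively over vastly larger corpora; the three invented sentence pairs below show only the counting idea.

```python
# Toy estimation of word-translation probabilities from a parallel
# corpus: count source/target word co-occurrences in aligned sentence
# pairs, then normalize per source word. More pairs -> better estimates.

from collections import Counter, defaultdict

ALIGNED = [  # (French, English) sentence pairs -- invented examples
    ("le chat", "the cat"),
    ("le chien", "the dog"),
    ("un chat", "a cat"),
]

def cooccurrence_probs(pairs):
    counts = defaultdict(Counter)
    for src, tgt in pairs:
        for s in src.split():
            for t in tgt.split():
                counts[s][t] += 1
    # normalize raw counts into probabilities per source word
    return {s: {t: n / sum(c.values()) for t, n in c.items()}
            for s, c in counts.items()}

probs = cooccurrence_probs(ALIGNED)
print(max(probs["chat"], key=probs["chat"].get))  # cat
```

With only the first pair, "chat" is equally associated with "the" and "cat"; the third pair breaks the tie, which is the self-improvement Knight describes in miniature.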
*Correction, January 18, 2006, 10:00 a.m. EST: In the original version of this story, we cited the following translation of the Someya Apple Farm website: “The apple orchard with big trees over 50 years old. The natural environment around Numata, with the huge temperture difference between day and night, creates uniquly delitious apple.” In fact, this was an excerpt from Google’s translation of the apple farm’s own English version of its website, not from Google’s translation of the original Japanese page. Therefore it was not a valid example of the poor quality of some machine translation algorithms. In the story we have now substituted Google’s translation of the original Japanese site. Thanks to our readers for pointing out the error. - Eds.
The U.S. Defense Advanced Research Projects Agency (DARPA) is one of the major funders of statistical machine translation. Last August, DARPA sponsored machine translation tests for Chinese and Arabic documents; a research group from Google scored the highest, nudging out USC’s Information Sciences Institute and IBM’s machine translation arm. Google, which also uses the statistical approach, might have had an edge, Knight notes, because they could use a huge number of computers for the word-crunching, and could draw from the entire Internet for their database of pretranslated documents.
In 2005, DARPA also announced the Global Autonomous Language Exploitation (GALE) program, intended to speed up the computer processing of huge numbers of translated documents acquired by its parent organization, the Philadelphia-based Linguistic Data Consortium.** GALE is currently in its first year; it will transcribe speech from broadcast news sources and talk shows in Arabic, Chinese, and English, and catalogue text from newswire feeds, Web news discussion groups, and blogs in those languages. For now, the project focuses mainly on data collection from these genres, with researchers in the computer and information science department at the University of Pennsylvania doing much of the work.
But even with a large collection of translated material, there will still be language issues to sort out. The next step in machine translation research, beyond matching words and phrases, Knight says, is to smooth out the grammatical inconsistencies that arise when words and phrases are strung together. This smoothing can be accomplished by indexing sentences whose structures were diagrammed at the University of Pennsylvania in the 1990s (the data came from 50,000 sentences in the Wall Street Journal). Similar to the way a database full of words and phrases allows translation software to choose the most statistically probable combination of words, these specific examples of grammar from the diagrammed sentences help the software assign the likelihood of word order, says MIT’s Collins.
This is an advance over the traditional method, in which grammar rules were hard-coded into an algorithm, he says. Rather than obeying fixed, encoded grammar conventions, the diagrammed-sentence database lets the software assign “probabilities and weight on those rules,” says Collins. “[The software] is more likely to learn the context,” he says.
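The word-order idea Collins describes can be reduced to a toy scorer: count which adjacent word pairs actually occur in a sample of English, then prefer the candidate ordering whose pairs were seen most often. The three-sentence corpus below is invented, and real systems use probabilities estimated from full parse trees over far more text, but the preference it expresses is the same.

```python
# Toy word-order scoring: count adjacent word pairs (bigrams) in a
# small English sample, then score candidate orderings by how often
# their adjacent pairs were observed. The corpus here is invented.

from collections import Counter

CORPUS = [
    "the white house",
    "the old white fence",
    "a white house on the hill",
]

def bigram_counts(sentences):
    counts = Counter()
    for s in sentences:
        words = s.split()
        counts.update(zip(words, words[1:]))
    return counts

def score(candidate, counts):
    words = candidate.split()
    return sum(counts[(a, b)] for a, b in zip(words, words[1:]))

counts = bigram_counts(CORPUS)
print(score("the white house", counts))  # 3
print(score("the house white", counts))  # 0
```

Because “white house” appears in the sample and “house white” never does, the English-like ordering wins, which is how statistical evidence stands in for a hand-written adjective-order rule.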
In some ways, however, the statistical approach still fares no better than the common instant-messaging translator. Proper names, for instance, still trip up even the most well-read machine translator; they often simply get translated along with the rest of the text. Knight admits that, according to his own system, the Spanish version of his surname is still “Kevin Caballero.”
** Correction, January 20, 2006: The original version of this story, published January 18, stated that the Linguistic Data Consortium was launched in 2005. In fact, the consortium was launched in 1992, and its Global Autonomous Language Exploitation project was launched in 2005. – Eds.