As a professional translator, my heart drops every time I hear that a company has opted for machine translation when localizing media such as manga or video games.
Some see machine translation (known as MT in the business) as the solution for an apparent shortage of human translators, and on the surface it seems perfect for speeding up the localization process: The source language is fed into a computer program and then the target language comes out the other end moments later.
However, despite some fantastic technological advances, a machine is still a machine and will often spit out robotic-sounding sentences, detracting from the consumer experience.
MT systems are trained on corpora — collections of written texts — that teach algorithms to select the “best” translation for a given term. In a nutshell, translations are generated based on patterns found in previously translated texts. So, if the MT software you are using was trained on a corpus of technical translations, it will be better at translating texts with similar technical terms. The software is only as good as the algorithm and corpora behind it.
For example, the commonly used Google Translate is relatively good at translating European languages into English. This is because it has been fed with years and years of open-source documents from the European Parliament and United Nations, material that was translated by professional human translators into many different languages (at least six — Arabic, Chinese, English, French, Russian and Spanish — in the case of the U.N.).
Translation from Japanese and other “minor” languages is more limited, however, due to a lack of open-source human translations to build a corpus from. Additionally, language is complicated: No database is large enough to account for every exception to a grammatical rule, which makes a “perfect” machine translation impossible.
Japanese-to-English translation is particularly difficult because Japanese is a high-context language, meaning information is often not explicitly stated. Subjects, number of objects and gender are a few of the things that get left out when producing the language. A sentence as simple as プリン食べられた (purin taberareta) could translate as “someone ate my pudding,” “he ate all my pudding” or “my pudding was eaten.”
Idiomatic phrases, innuendos, slang and regional dialects introduce even more exceptions to certain grammar rules, and it’s still incredibly difficult to train a computer to handle them appropriately. Take, for instance, the phrase 仕方がない (shikata ga nai). In English, it basically means “it couldn’t be helped” but, depending on the context, it could be translated as “tough luck,” “it was bound to happen” or “that’s too bad.”
What MT lacks is gut instinct, empathy and the human experience. “With anything but the simplest of sentences, machines still struggle to produce a correct translation between Japanese and English,” says translator Joshua VanValkenburg, who works with Japanese, French and English. “Even when they do, the English rarely reads naturally.”
Most companies tend to agree with VanValkenburg, which is why some attempt to remedy the issue with machine translation post-editing (MTPE): A machine translates the text and a human translator checks and tweaks the final product. Some larger companies claim that this method improves the productivity of human translators, increases translation speed and brings down costs. Translators, on the other hand, feel differently.
“The issue is that the amount of rewriting required rarely saves time or effort,” says VanValkenburg, “so I avoid taking MTPE jobs whenever possible. If the source language text is not provided, I’m often left guessing what the original was and this inevitably leads to more errors. If the source language text is provided, I’d rather translate from it directly and ignore the machine translation.”
Video game translator Andrew Echeverria isn’t a fan of the post-editing process, either.
“The ‘writing’ was stilted, unnatural and difficult to parse,” he says. “There were a few wonky translations, but most of the workload and frustration came from having to rewrite a majority of what was given to me. It read like an alien trying to communicate, and poorly at that.”
Echeverria also says he avoids taking MTPE jobs. “It just creates more work for the editor,” he says, “who in essence has to just become the translator anyway — while getting half the pay or less than they normally would.”
Machine translation is not about making the lives of translators easier or filling a labor shortage, but about the profit margins of language service providers (LSPs), the firms that enable communication between a brand and its potential audience in other countries.
“Some LSPs claim there is a shortage of skilled translators,” says media translator and localization specialist Katrina Leonoudakis. “This is only partially true: There is a shortage of skilled translators who are willing to accept extremely low pay.”
Leonoudakis remains optimistic, though. “It’s up to us to educate media creators, LSPs and distributors about the value of investing in good localization,” she says. “When done well, localization is a profit enabler. When done poorly, it is an insult to the creators of the original work and the target audience of the shoddy translation. Show creators and viewers that you value them and spend those extra few dollars on good localization.”