Japan Pulse

Twitter Japan confronts hate speech with mixed results

by Alisa Yamasaki

Contributing Writer

Following countless claims that Twitter has become a breeding ground for hate speech, the social networking service has taken measures against abusive content on its platform over the past two years. After several events in September, however, it’s clear that Twitter Japan is still struggling to meet the needs of Japanese users who face harassment on the service.

In addition to the Hateful Conduct Policy the service introduced in December 2015, which prohibits discriminatory attacks and violent threats, Twitter made it easier for users to report abusive content in November 2016. While these efforts have shown some success in Europe, the United States and other English-speaking countries, Twitter Japan has had to review its approach to regulating hate speech following an incident involving popular model and actress Kiko Mizuhara.

Mizuhara, who is of American and Zainichi Korean descent, became a target of hate speech on Twitter in early September after she appeared in a series of promotional Twitter posts by Suntory’s Premium Malts account. The advertising campaign features domestic TV personalities enjoying beer, but this seemingly harmless content did not protect Mizuhara from abuse online. Claiming that she was pretending to be Japanese or that she was anti-nationalistic, users flooded the replies with comments that fall within the definition of hate speech, at least according to Twitter’s own policy. The incident was so overwhelming that Mizuhara made a statement on her personal Twitter account calling for an end to discrimination based on race and gender.

“Neo-nationalists flocking to PR tweets that use Kiko Mizuhara, repeating abusive and discriminatory comments that you can’t even bear to read. This is the reality of the internet in the 21st century,” a user called @galasoku wrote.

Another user, @visio1206654, pointed out that ignoring hate speech only hurts the company, saying: “Adding to the obvious point that hate speech is bad, incidents like this lead to fewer brands wanting to use Twitter.”

While Mizuhara was being subjected to harassment on Twitter, the volunteer anti-hate speech organization Tokyo No Hate protested in front of Twitter Japan’s headquarters on Sept. 8. By displaying and distributing printouts of hateful tweets that have targeted minority groups, the group highlighted the harassment many people face on Twitter every day.

Twitter has drafted guidelines on its policy against hate speech, but enforcing those rules appears to be the biggest challenge. On Aug. 20, a Twitter user tweeted about killing a pesky mosquito, which prompted the social networking service to permanently suspend the account.

Directing a tweet at Twitter Japan under a new handle, @DaydreamMatcha, the user wrote: “My previous account was suspended permanently after I said I killed a mosquito. Is this a violation? Is Twitter (the 17th-century shogun known for his laws protecting animals) Tsunayoshi Tokugawa?” Many users suspected that Twitter’s algorithm had picked up the threatening keywords out of context.

Some believe algorithm-based support systems that aren’t monitored by native Japanese speakers are bound to fail.

Yu Koseki, BuzzFeed Japan’s director of business development, offered some insight into why so many Japanese accounts have recently been frozen.

“The company believes users will return when the business isn’t affected economically. Twitter places a low priority on non-English language users, so they don’t think there’s a need to hire Japanese employees. In typical tech company fashion, user support is handled algorithmically,” he wrote under his Twitter handle @youkoseki.

Following the incident with Mizuhara, Twitter Japan responded in a tweet posted on Sept. 6, saying that it “has been expanding its domestic response team in order to resolve problems in a timely manner.”

If Twitter aims to serve as “a platform that allows people to express themselves safely,” as the service wrote in its tweet, its response team for harassment and abuse should comprise more than just artificial intelligence.

While a localized algorithm for user support could improve how future incidents are handled, human intervention appears to be the most effective solution at present.