AFP-JIJI

World Athletics President Sebastian Coe described as “disturbing” the results of a study conducted during the Tokyo Olympics to identify and address targeted, abusive messages sent to athletes via social media.

The survey, intended to gauge the level of online abuse in athletics, drew its findings from a sample of 161 Twitter handles of current and former athletes involved in the Games, derived from a list of 200 athletes selected by World Athletics.

The accounts were tracked during the study period, which started on July 15, one week before the Olympic opening ceremony, and concluded on Aug. 9, the day after the Olympic closing ceremony.

The survey found that 23 of the athletes received targeted abuse, 16 of them women. In total, 115 of the 132 identified abusive posts were directed at female athletes, meaning women received 87% of all identified abuse.

Two athletes — both Black and female — received 63% of the identified abuse.

Unfounded doping accusations accounted for 25% of the abusive messages, while 10% consisted of transphobic (9%) and homophobic (1%) posts.

When it came to racist abuse, 89% was directed at U.S. athletes, despite Americans representing only 23% of the study set.

The two most common categories of abuse were sexist (29%) and racist (26%), together accounting for 55% of all identified abuse.

“This research is disturbing in so many ways,” Coe said in a statement.

“What strikes me the most is that the abuse is targeted at individuals who are celebrating and sharing their performances and talent as a way to inspire and motivate people.

“To face the kinds of abuse they have is unfathomable and we all need to do more to stop this.

“Shining a light on the issue is just the first step.”

In the study timeframe, 240,707 tweets, including 23,521 images, GIFs and videos, were captured for analysis.

The analysis included text searches for slurs and other phrases that could indicate abuse, as well as scans for offensive images and emojis.

It also used AI-powered Natural Language Processing to detect threats by understanding the relationship between words, allowing it to determine the difference between “I’ll kill you” and “you killed it,” for example.
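The report does not say which tools performed that analysis. As a rough sketch only, the example below shows how an off-the-shelf, context-aware classifier can separate those two phrases; the Hugging Face transformers library and the publicly available unitary/toxic-bert model are assumptions chosen for illustration, not the study's actual pipeline.

    # Illustrative sketch only: a pretrained toxicity classifier that weighs each
    # word in the context of the whole sentence rather than matching keywords.
    # The transformers library and the unitary/toxic-bert model are assumptions
    # for this example, not the tools used in the World Athletics study.
    from transformers import pipeline

    classifier = pipeline("text-classification", model="unitary/toxic-bert")

    examples = [
        "I'll kill you",   # a threat
        "you killed it",   # praise for a performance
    ]

    for text in examples:
        top = classifier(text)[0]  # highest-scoring label and its confidence
        print(f"{text!r}: {top['label']} (score={top['score']:.2f})")

The expectation with a classifier of this kind is that the threat scores far higher than the compliment, which is the distinction the study's methodology describes.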
