The news about the Wuhan coronavirus is bad and is getting worse. In terms of its potential for devastation, the current virus is in close competition with its 2003 cousin, the SARS coronavirus. Infections have already surpassed the total number of SARS cases, and while the new disease doesn’t seem as deadly as SARS, fatalities are edging upward. The human toll also comes with an economic one, and China’s economy is far more essential to the global economy now than it was in 2003, giving new meaning to the old adage: when China sneezes, the world catches a cold.

And this time the world is far more susceptible to that cold, in part because of the rise of social media. During the SARS scare, the world wasn’t hooked on social media. Today, we can expect digital viruses — the re-tweetable tweets, the likeable posts, the shareable memes — to rideshare with the coronavirus. Viral misinformation could worsen the global public health emergency.

No doubt, social media can help as a powerful tool for public health messaging, educating and debunking myths. Unfortunately, the myth-makers tend to beat the educators and debunkers; according to a recent MIT study, false news is 70 percent more likely to be retweeted than true stories, with truth traveling six times slower than falsehood.

But during this outbreak, the social media industry has performed better than it’s gotten credit for. It is de rigueur to give the social media giants a failing grade; I would give the industry a B thus far. But it can do more.

To be sure, the scale of the challenge is daunting. Rumors have gone into hyper-drive across platforms, stoking waves of Sinophobia and racism with false claims that the Chinese routinely eat bats and that this habit caused the outbreak.

The short-video sharing app, TikTok, has been particularly active, with numerous posts spreading misinformation. One misleading video was viewed 2.4 million times before it was removed and yet video duets — reactions to the original — still lingered on, showing how difficult it is to kill digital falsehoods. Other posts baselessly claimed that the virus was created by the government for population control. The conspiracy group QAnon falsely claimed in a video that the virus’ creation was backed by Bill Gates. Needless to say, such falsehoods travel far.

Alarmist statistics have also been spreading — a tweet with over 140,000 “likes” predicted 65 million deaths, a debunked claim — along with false remedies, prophylactics and cures. Some posts recommend drinking bleach. Others peddle falsehoods about the benefits of cannabis, homeopathy and air purifiers. Virality is assured when the misinformation jumps platforms. A thread retweeted thousands of times by a YouTube conspiracist suggests the coronavirus was developed for use as a vaccine. It has now found new life on Facebook.

The problem of containment gets worse when power users, such as politicians, give viral misinformation a boost. In the U.S., President Donald Trump helped amplify tweets from supporters of QAnon, the conspiracy group active in spreading coronavirus rumors. Republican Party official Solomon Yue tweeted to more than 100,000 followers that the virus was stolen from Canada for use as a bio-weapon. Jim Banks, a Republican congressman from Indiana, tweeted out a link, shared over 1,000 times, claiming that the virus was part of a covert Chinese biological weapons program.

The lamest responses have come from Twitter and Google. Twitter prompts users searching for coronavirus to first visit authoritative sources, such as the Centers for Disease Control and Prevention. A corresponding search on Google-owned YouTube, reportedly, links to a New York Times article. Google says it is promoting experts and reliable sources at the top of search results and in “watch next” panels on YouTube. Neither seems ready to simply take down patently false content. Fortunately, other platforms have been more proactive. TikTok has removed some coronavirus misinformation and WeChat claims to have done the same. (They may find this easier because they have a history of censorship.)

The biggest and most pleasant surprise is Facebook. Its past strategy favored labeling content as misleading rather than removing it. This time, it is limiting the distribution of posts rated false by third-party fact-checkers and using the News Feed to steer users to authoritative sites. It is giving free advertising credits to organizations running coronavirus education campaigns and has added a resource page for spotting falsehoods. However, the biggest change at Facebook is its announcement that it would remove content from Facebook and block or restrict hashtags on Instagram that spread coronavirus misinformation.

This is a radical departure from Facebook’s past record, including its controversial insistence on permitting false political ads. What changed? I would quote Facebook’s head of Health: “We are doing this as an extension of our existing policies to remove content that could cause physical harm.”

This elevated “physical harm” standard is one that all other social media platforms, including Twitter and Google/YouTube, ought to adopt. Of course, this requires establishing reliable fact-checking partnerships. There are 195 fact-checking organizations; so, that isn’t impossible. YouTube has launched a limited fact-checking initiative but urgently needs to expand it. Twitter needs to launch one.

Social media’s response to this virus could not only slow the speed of viral falsehood, but also slow the rate at which the public is losing trust in the industry.

Bhaskar Chakravorti is dean of global business at the Fletcher School at Tufts University
