BLOOMBERG – Some people are fuming at Facebook for allowing unfiltered political ads, while others are fuming at Twitter for banning them. There’s lots of confusion and speculation, but what we know is that these social media companies have fundamentally changed how people exchange information. What we need to figure out is whether they also change how people spread disinformation — and if so, how to fix it. It’s a question researchers are actively investigating.
After “fake news” became the catchphrase of the 2016 election, experts in psychology, political science, computer science and networks stepped up research on disinformation, learning in more detail how it travels through social media and why some things stick in people’s heads.
There’s a good reason not to ban political ads on social media: People in democratic societies should be able to see and hear from candidates directly, not just through interview and debate formats. Social media ads are relatively cheap, so candidates with less funding can still make themselves heard. The fear is that politicians might lie, mislead and manipulate on social media in ways that were impossible in the days of television and newsprint.
Some see a particular threat in the way Facebook allows advertisers to precisely target ads based on personal data. “Facebook profits partly by amplifying lies and selling dangerous targeting tools that allow political operatives to engage in a new level of information warfare,” writes former Facebook insider Yael Eisenstat in The Washington Post.
How dangerous is this information warfare? Experiments show that people can be misled easily and that wrong ideas tend to stick. USC psychologist Norbert Schwarz says people tend to believe messages for many reasons that have nothing to do with credibility. People are more likely to believe messages when they’re presented simply, in an easy-to-read font or spoken without an accent, and repeated often. People are also more easily influenced by messages they think their friends also believe.
Stephan Lewandowsky, a psychologist at the University of Bristol, says the extremely fine-grained targeting abilities of social media might interfere with a free marketplace of ideas. Rather than making claims in ads that anyone is free to see, politicians might tailor messages to individual social media users. The propaganda might never even be seen by fact-checkers or opponents who might challenge it. “My main concern is that we’re replacing public debate with manipulation,” he says.
There is still hope for democracy, however. There’s little evidence that targeted ads have the power to change minds or votes, says Harvard law professor Yochai Benkler, co-author of the book “Network Propaganda.” Belief in the power of targeted advertising is more faith-based than evidence-based, he says. Advertisers assume the targeting causes people to buy things — though this is far from proven.
In 2018, outrage erupted when it came out that the company Cambridge Analytica claimed it could use consumers’ seemingly superficial tastes to delve deep into their psyches, glean personality information that even their friends didn’t know and, in theory, use it to manipulate their voting behavior.
But in researching a 2018 column on the phenomenon, I learned that the evidence is thin to nonexistent that Cambridge Analytica was able to glean meaningful information or manipulate voting behavior.
Benkler says if he had access to enough Facebook data, he and other researchers could find out who saw which ads, and infer from other information if and how people voted. But it probably isn’t in Facebook’s interest to give out that kind of information. It might reveal that Facebook ads are suppressing voting, or that the ads don’t matter. Either way, it could look bad for the company.
Benkler points to a recent paper in the journal Marketing Science that shows it’s not clear whether an ad causes people to buy a particular product, or whether the people who are targeted are already more likely to buy. Other research papers report on the limited power of fake news on Facebook and Twitter. For example, one study that looked at Twitter activity during the 2016 election concluded that 80 percent of fake news was shared by just 0.1 percent of users, making it a fringe activity.
People tend to focus on new threats, Benkler says, when there are known masters of manipulation out there. The ads, fake news and other so-called content on social media have been getting a lot of attention, but their impact still pales in comparison to that of old-fashioned platforms like cable news and radio. In research reported in his book, he and his co-authors use certain words or phrases to trace stories — like the child sex ring rumor or the conspiracy theories surrounding the Seth Rich murder — from their origins on small-scale blogs and fringe publications to Fox News and conservative talk radio, where they blew up.
It’s true that there’s still a lot we don’t know about social media. But instead of giving Facebook more power — by encouraging it to police ads for misleading content — we should make rules to force the company to reveal its targeting practices.
If someone sees a Trump ad because she went to church and stopped at the liquor store on the way home, she has the right to know it, says Benkler. And the more information Facebook and others provide, the better scientists can understand how much social media is shaping the free marketplace of ideas, and whether we should be focused on other, more substantial threats to democracy.
Science writer Faye Flam is a Bloomberg Opinion columnist.