Lies spread faster than the truth
There is worldwide concern over false news and the possibility that it can influence political, economic, and social well-being. To understand how false news spreads, Vosoughi et al. used a data set of rumor cascades on Twitter from 2006 to 2017. About 126,000 rumors were spread by ∼3 million people. False news reached more people than the truth; the top 1% of false news cascades diffused to between 1000 and 100,000 people, whereas the truth rarely diffused to more than 1000 people. Falsehood also diffused faster than the truth. The degree of novelty and the emotional reactions of recipients may be responsible for the differences observed.

Abstract
We investigated the differential diffusion of all of the verified true and false news stories distributed on Twitter from 2006 to 2017. The data comprise ~126,000 stories tweeted by ~3 million people more than 4.5 million times. We classified news as true or false using information from six independent fact-checking organizations that exhibited 95 to 98% agreement on the classifications. Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information. We found that false news was more novel than true news, which suggests that people were more likely to share novel information. Whereas false stories inspired fear, disgust, and surprise in replies, true stories inspired anticipation, sadness, joy, and trust. Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.

From the NYTimes piece:
Surprisingly, Twitter users who spread false stories had, on average, significantly fewer followers, followed significantly fewer people, were significantly less active on Twitter, were verified as genuine by Twitter significantly less often and had been on Twitter for significantly less time than were Twitter users who spread true stories. Falsehood diffused farther and faster despite these seeming shortcomings.
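The structural measures behind "farther," "deeper," and "more broadly" come from treating each rumor cascade as a retweet tree: size is the number of users reached, depth is the longest retweet chain, and breadth is the largest number of users at any single depth. A minimal sketch of those measures, assuming the cascade is given as (parent, child) retweet pairs (the function name and input format here are illustrative, not the paper's code):

```python
from collections import defaultdict, deque

def cascade_metrics(edges, root):
    """Compute size, depth, and max breadth of a retweet cascade.

    edges: list of (parent, child) pairs, meaning `child` retweeted `parent`.
    root: the author of the original tweet.
    """
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)

    size = 0
    max_depth = 0
    level_counts = defaultdict(int)  # users reached at each depth

    # Breadth-first traversal from the original tweet.
    queue = deque([(root, 0)])
    while queue:
        node, depth = queue.popleft()
        size += 1
        level_counts[depth] += 1
        max_depth = max(max_depth, depth)
        for child in children[node]:
            queue.append((child, depth + 1))

    return {"size": size,
            "depth": max_depth,
            "breadth": max(level_counts.values())}

# Example: a retweets by b and c, then d retweets b.
metrics = cascade_metrics([("a", "b"), ("a", "c"), ("b", "d")], "a")
print(metrics)  # size 4, depth 2, breadth 2
```

On these definitions, a viral chain letter that each person forwards to one friend is deep but narrow, while a celebrity tweet retweeted once by a million followers is broad but shallow; the paper's finding is that falsehood scored higher on all of these dimensions at once.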
And despite concerns about the role of web robots in spreading false stories, we found that human behavior contributed more to the differential spread of truth and falsity than bots did. Using established bot-detection algorithms, we found that bots accelerated the spread of true stories at approximately the same rate as they accelerated the spread of false stories, implying that false stories spread more than true ones as a result of human activity.
Why would that be? One explanation is novelty. Perhaps the novelty of false stories attracts human attention and encourages sharing, conferring status on sharers who seem more "in the know."
Our analysis seemed to bear out this hypothesis. Using accepted computerized methods for inferring emotional content from word use, we found that false stories inspired replies on Twitter expressing greater surprise than did true stories. The truth, on the other hand, inspired more joy and trust. Such emotions may shed light on what inspires people to share false stories.
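A common computerized method of the kind described is lexicon-based: each word in a reply is looked up in an emotion lexicon (such as the NRC EmoLex resource), and the hit counts per emotion are normalized by reply length. The tiny lexicon below is a hypothetical stand-in for illustration only; it is not the word list the study used:

```python
import re
from collections import Counter

# Hypothetical mini-lexicon; real analyses use a full resource
# mapping thousands of words to emotion categories.
EMOTION_LEXICON = {
    "shocking":     {"surprise"},
    "unbelievable": {"surprise"},
    "horrible":     {"fear", "disgust"},
    "wonderful":    {"joy", "trust"},
    "reliable":     {"trust"},
}

def emotion_profile(text):
    """Fraction of a text's words that hit each emotion category."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for w in words:
        for emotion in EMOTION_LEXICON.get(w, ()):
            counts[emotion] += 1
    total = len(words) or 1
    return {emotion: n / total for emotion, n in counts.items()}

print(emotion_profile("This is shocking and unbelievable"))
```

Aggregated over the replies to many cascades, profiles like this are what let the authors say that replies to false stories skewed toward surprise and disgust while replies to true stories skewed toward joy and trust.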
The social media advertising market creates incentives for the spread of false stories because their wider diffusion makes them profitable. If platforms were to demote accounts or posts that disseminated false stories, using algorithms to weed out falsehoods, the financial incentives would presumably be reduced. The tricky question, of course, would be: Who gets to decide what is true and false?
Some notion of truth is central to the proper functioning of nearly every realm of human endeavor. If we allow the world to be consumed by falsity, we are inviting catastrophe.