Here’s the Truth: We Believe Misinformation Because We Want To

By Pravish Agnihotri

On September 14, BuzzFeed News published a leaked memo by Sophie Zhang, a former data scientist at Facebook, revealing the company’s deep and muddy entanglement in manipulating public opinion for political ends. “I have personally made decisions that affected national presidents without oversight, and taken action to enforce against so many prominent politicians globally that I’ve lost count”, Zhang wrote.

The memo follows a piece in The Wall Street Journal blaming Facebook for inaction in removing inflammatory posts by leaders of the ruling BJP, posts that fanned the flames of a deadly riot targeting Muslims in Delhi. As the upcoming Bihar election campaign moves online, social media platforms and their ability to moderate hate speech and misinformation will come under further scrutiny. Past events do not bode well.

In March, videos of Muslims licking currency, fruits, and utensils circulated online, blaming the Muslim community in India for the coronavirus outbreak. Health misinformation also abounds on social media, where unfounded treatments such as cow urine and mustard oil are claimed as possible cures for the coronavirus. Alongside the rise in misinformation, we are seeing the rise of a parallel, albeit much smaller, group of fact-checking organisations dedicated to debunking fake news. Misinformation, however, remains rampant.

Why does misinformation spread, even in the face of hard evidence? Interactions between our socio-historical context, our psychology, and business models of social media companies might hold the answer. 

The Context

The dissemination of information was once the monopoly of states and a few elite media organisations. Information flowed down a top-down hierarchy with the state at the apex, and the media naturally reflected elite interests. Information was scarce and its sources limited; it was, therefore, trusted. This changed with the arrival of television and was revolutionised completely by the arrival of the internet. Waves of information explosion changed not only how information was distributed but also how much of it was trusted. In his book The Revolt of the Public, Martin Gurri argues that “once the monopoly on information is lost, so is our trust”. The shift from mere consumers of scarce media to hybrid creator-consumers of exponentially abundant information meant that every piece of information in the public domain became an object of scrutiny. In a world where everything could be false, anything could be the truth. It is in this context that we begin to understand misinformation.

Historian Carolyn Biltoft calls this new context the dematerialisation of life. In it, beliefs are no longer formed on the basis of individual experience but are constantly challenged by heavily circulated new information. Believing new information also calls for ever larger leaps of faith, especially when it concerns science, technology, or the suffering of a distant community. Spiritual beliefs, and beliefs in the superiority of a race, a gender, or a form of family, all once strong sources of belonging, are now in question.

The Individual

Individuals increasingly find themselves unable to explain the world around them, unsure of their identity, and unable to see themselves and their social group in a positive light. It is precisely this condition that makes them vulnerable to misinformation. Various studies have found that people are more likely to believe conspiracies when faced with epistemic, existential, and social dilemmas. Misinformation allows them to preserve existing beliefs, feel in control of their environment, and defend their social groups.

One might expect that, once presented with evidence, a reasonable individual would stop believing misinformation. Psychologists Daniel Kahneman and Jonathan Haidt argue that the role of reason in the formation of beliefs may be overstated to begin with. Individuals rely on their intuition, not their reason, to make ethical decisions; reason is employed afterwards, to justify decisions already made through intuitive moral shorthands.

How are these intuitions formed? Through social interaction. Individuals do not, and cannot, evaluate every possible interpretation of and argument about a topic; they depend on the wisdom of those around them, and those who share beliefs trust each other more. The formation of beliefs, then, is not an individual activity but a social one, built on trust.

The ability of our social networks to influence our beliefs has remained constant. The advent of social media, however, lets us carefully curate those networks around our existing beliefs. This creates a cycle of reinforcement in which those beliefs, informed or misinformed, get solidified.

Even in homogeneous societies, one is bound to encounter people who disagree with one’s beliefs. Although such disagreements might be expected to check misinformation, studies have found that they can have the opposite effect. Erik Olsson finds that social networks of agreeing individuals intensify their beliefs over time and, in the process, lose trust in those who disagree with them. Another study finds that correcting misinformation can backfire, leading people to believe it even more strongly than before. Our instinct to learn from those we trust, and to mistrust those we disagree with, drives a wedge between groups. Engagement becomes an unlikely solution to misinformation.
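To see how this hardening can happen mechanically, consider a toy bounded-confidence simulation in Python, offered as an illustration in the spirit of Olsson’s finding rather than as his actual model; every agent, parameter, and number below is invented. Agents update only toward peers whose beliefs already lie close to their own:

# Toy bounded-confidence simulation (Hegselmann-Krause style); purely
# illustrative, not Olsson's actual model.
import numpy as np

rng = np.random.default_rng(1)
beliefs = rng.uniform(0, 1, 50)   # 50 agents, belief strength in [0, 1]
TOLERANCE = 0.2                   # agents ignore anyone farther than this

for step in range(30):
    new = np.empty_like(beliefs)
    for i, b in enumerate(beliefs):
        # Each agent's "trusted circle": only peers who already agree.
        peers = beliefs[np.abs(beliefs - b) < TOLERANCE]
        new[i] = peers.mean()     # move toward agreeing peers only
    beliefs = new

# The population splinters into internally unanimous, mutually distant camps.
print(np.unique(beliefs.round(2)))

Run it and the fifty initially scattered opinions collapse into a handful of clusters, each unanimous within itself and deaf to the others: disagreement has not corrected anyone, it has simply been defined out of the trusted circle.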

Our socio-historical context predisposes us to misinformation; its social nature strengthens our belief in it and makes us immune to correction. Social media, then, acts as a trigger to the already loaded gun of misinformation.

The Platform

The misinformation epidemic cannot be attributed to human biases alone. Social media companies and their monetisation models are part of the problem. Despite the coronavirus slashing ad revenues, and despite an ad boycott by over 200 companies over its handling of hate speech, Facebook clocked $18.7 billion in revenue in the second quarter of 2020; Twitter raked in $686 million. Advertising constitutes the largest share of these astronomical earnings.

The business model of every social media company aims to maximise two things: the amount of time users spend on the platform, and their engagement with other individuals, pages, and posts. All the while, these companies collect a host of information about their users, including demographics, preferences, and even political beliefs, to build extremely accurate personality profiles.

A recent study found that computers outperform humans at making personality judgements from an individual’s digital footprint. According to the study, the computer models require data on just 10, 70, 150, and 300 of an individual’s likes to outperform their work colleagues, friends, family members, and spouses respectively. The models are sometimes better than the individuals themselves at predicting patterns of substance abuse, health, and political attitudes. This data is then used to customise content and advertisements for every individual, creating echo chambers. In another study, Claire Wardle finds that humans regularly rely on repetition and familiarity to gauge the trustworthiness of new information. If an individual’s beliefs are misinformed to begin with, these algorithms can strengthen them through sheer repetition. The models can also predict what an individual finds most persuasive and then ‘microtarget’ them with content, legitimising misinformation in the consumer’s eyes.
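The statistical machinery behind such predictions is surprisingly ordinary. Below is a minimal Python sketch of the general technique, a regularised linear model fitted to a user-by-page matrix of likes; the data is synthetic and the whole setup is invented for illustration, not the cited study’s actual pipeline:

# Illustrative only: synthetic "likes" data and a regularised linear model,
# sketching the general technique, not the actual study's pipeline.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 1000, 300

# Binary user-by-page matrix: likes[i, j] = 1 if user i liked page j.
likes = rng.integers(0, 2, size=(n_users, n_pages))

# Pretend ground truth: a personality trait loosely encoded in the likes.
trait = likes @ rng.normal(0, 1, n_pages) + rng.normal(0, 5, n_users)

X_tr, X_te, y_tr, y_te = train_test_split(likes, trait, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)

# Correlation between predicted and actual trait for unseen users.
print(np.corrcoef(model.predict(X_te), y_te)[0, 1])

Nothing in the recipe is exotic; what makes the real versions powerful is the scale and intimacy of the data they are trained on.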

As Facebook’s revenue shows, public opinion can be an extremely valuable commodity. It determines what you buy, what precautions you take (or don’t) in a global pandemic, even who you vote for. By arming those with vested interests in public opinion with accurate and effective tools of persuasion, the business models of social media companies end up facilitating the spread of misinformation. 

The truth is often nuanced; it resists simplification and, if it contradicts your beliefs, can be off-putting. None of this makes it likely to go viral. Misinformation, on the other hand, tends to be reductive, sensational, and, perhaps most dangerously, easy to understand. It also leans on emotion to win the reader’s belief. All of this makes misinformation more likely to spread across the internet. A study conducted at MIT corroborates this claim: falsehoods on Twitter were found to reach users six times faster than truths.

The ultimate goal of social media algorithms is to maximise engagement. As engagement with a piece of misinformation increases, the algorithms expand its reach on the strength of its likely popularity. Microtargeting then ensures that such posts are shown to the individuals most likely to agree with them, and to share them in turn. When controversial content drives higher engagement, misinformation becomes profitable, and economic reasoning alone can lead social media companies to condone, and in worse cases actively promote, its dissemination.
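A minimal sketch makes the incentive concrete. The weights and field names below are invented, not any platform’s real formula; the point is only that when the ranking objective is engagement, veracity never enters the computation:

# Hypothetical engagement-maximising feed ranker; the weights and fields
# are invented for illustration, not any platform's actual formula.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Shares and comments count more than likes here: they place the post
    # in front of new users, which is what lets a falsehood travel fast.
    return post.likes + 3 * post.comments + 5 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Note what is absent from the objective: any measure of truth.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Nuanced, sourced explainer", likes=120, shares=4, comments=10),
    Post("Sensational falsehood", likes=90, shares=60, comments=45),
])
print([p.text for p in feed])  # the falsehood ranks first

A ranker like this is not malicious by design; it simply optimises the one quantity the business model rewards, and misinformation happens to score well on it.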

Our unique context, our instincts and biases, and the business models of social media platforms interact endlessly, creating layer upon layer of reinforcing mechanisms that spread misinformation and make us believe in it. Artificial intelligence is now being called upon to weed misinformation out of social media platforms. For any solution to be effective, however, it will need to address the interactions among all three.

Pravish is a student of Political Science, International Relations, Economics and Media Studies at Ashoka University.

We publish all articles under a Creative Commons Attribution-NoDerivatives license. This means any news organisation, blog, website, newspaper or newsletter can republish our pieces for free, provided they attribute the original source (OpenAxis).
