Categories
Issue 3

Divorced from Reality: Why are we attracted to the Disinformation Ecosystem?

Is the Covid-19 virus an act of bioterrorism? Was Sushant Singh Rajput murdered by the entrenched “insider” Bollywood mafia? Is there a paedophilic deep state about to take over the world (QAnon)? Was the moon landing faked? Do vaccines cause autism? Is global warming a hoax? Was there a second shooter on that grassy knoll on 22nd November, 1963? Was 9/11 engineered by the US government? 

No matter how many times scientific evidence refutes these claims – old and new conspiracy theories and fake news alike – legions of people continue to believe in them. Before we examine the primary reasons for continued belief in fake news, conspiracy theories, disinformation, misinformation and the like, let us first pin down the widely accepted definitions of these various terms in the ‘information pollution ecosystem’. According to ‘The Weapons of Mass Distraction’, a report by the United States State Department,

“misinformation is generally understood as the inadvertent sharing of false information that is not intended to cause harm, just as disinformation or fake news is widely defined as the purposeful dissemination of false information. Conspiracy theories are narratives about events or situations that allege there are secret plans to carry out sinister deeds.”

What makes this false information ecosystem so pervasive and appealing in an age of instant access to legitimate news sources? 

Despite claims that all of these forms of “information pollution” have multiplied manifold due to the technology now available, it is important to remember that all of these “pollutants” have thrived throughout known human history. So, for all the technological changes that have inarguably turbocharged the breadth and depth of dissemination, possibly the single most important element allowing fake news and conspiracies to thrive has remained unchanged: the inherent human biases and behaviours that are exploited to feed the engine of this false information ecosystem.

There is a vast amount of empirical evidence from psychology, sociology and communication studies to show that human beings are not always rational in their beliefs and behaviours, including in the kinds and sources of information they choose to consume and believe. Research shows that many of us buy into alternative explanations because the world is a big, scary, chaotic place: we crave a sense of belonging and identity, and we prefer immediate, comforting answers. There are several explanations for why so many of us are attracted to fake news and conspiracy theories.

Neuroscience has shown that our limbic system kickstarts a search for patterns and explanations – for threat recognition, evaluation and solutions – when we are confronted with difficult, uncontrollable situations like a disease outbreak or an earthquake. This is called illusory pattern perception – our propensity to detect patterns where none exist – and it is pretty much hardwired into our brains. Researchers posit that this tendency evolved as a defence mechanism for our hunter-gatherer ancestors to detect and avoid danger. This tendency to see patterns or conspiracies when dealing with an unfathomable phenomenon spawns any number of theories, as we are witnessing with the Covid-19 crisis – from 5G towers to bioterrorism to Bill Gates spreading the infection to market a world-conquering vaccine.

Second, there is confirmation bias, which refers to our tendency to seek out information congruent with our existing beliefs. Conceding that we are mistaken about something is hard for most of us, and especially so for beliefs and ideas fundamental to our worldview. We therefore cling even harder to ideas, evidence and information that confirm our worldview and ignore anything contradictory. Cue the now familiar “echo chambers” and “filter bubbles” of our algorithm-led newsfeeds, which keep us comfortably ensconced among like-minded people, facts and opinions on social media – the subject of so much policy debate. And the more the same material is repeated, the more we believe it to be true – the illusory truth effect. A related concept is social proof, which lends credence to oft-repeated information (true or false) coming from those in our circle or from important others: if my social group believes it, it must be true.

Third, humans are cognitively lazy. Our brains work in a dual-processing mode: most of the time we are on autopilot (System 1), taking in information at face value and making intuitive decisions that are good enough (satisficing) without critically appraising the information for veracity. We conserve our cognitive energy for “important tasks” that require a more rational, well-thought-out, informed and reflective approach (System 2). Research shows that over 90% of the time, across our entire lives, our information processing and decision-making happens in System 1 mode – and this may result in choosing to believe the fake news report rather than digging deeper to verify it.

Fourth, let’s look at proportionality bias, our tendency to believe that large events have large causes. The idea that just one guy with a gun (Lee Harvey Oswald) could murder one of the most powerful people in the world (President John F Kennedy) is unsatisfying, and we intuitively search for bigger forces at work. That is why multiple conspiracy theories of government, mafia and foreign involvement seem more reasonable despite the evidence to the contrary.

Fifth, we have the exact opposite of cognitive laziness: motivated reasoning, the tendency to apply higher scrutiny to ideas that are inconsistent with our beliefs. We use motivated reasoning to further our quest for social identity and belonging. Research also shows that naïve realism plays an important role in how we consume and evaluate information: it is the belief that only our own perception of social reality is accurate and based on facts, and that those who disagree are simply ignorant or unreasonable.

Interestingly, an important predictor of belief in conspiracy theories is past belief in another one. So once you believe a sinister cabal engineered one event, it becomes much more likely that you’ll look for shadowy cabals at every opportunity. And that is another problem: sometimes conspiracy theories turn out to be right. Watergate did happen, the CIA did conspire to topple governments, scientists did visit unimaginable horrors on human subjects during “medical experiments”.  Proven conspiracies unveiled after much investigation open the door to conjecture about other events with alternate plausible explanations. 

Another reason for believing disinformation is that we use our own sense of morality as a proxy for that of other people. So people who think they themselves might create a deadly disease (for whatever reason) are likely to believe that scientists created AIDS or Covid-19 in a lab. Political extremism also leads people to question the establishment narrative. Being less educated or having less money is also associated with a tendency to believe fake news, although this could be partly because belonging to lower socio-economic categories is also associated with greater feelings of disenfranchisement, less control over one’s life and greater uncertainty, which in turn make conspiracy theories more appealing. And last but not least, there is a certain sexiness to fake news – it is “novel” and mostly negative, two features that attract human attention far more than cold, hard, verified facts.

The internet facilitates the spread of disinformation faster than ever before, and this ecosystem of false information can have a powerful effect on our behaviour. Studies have shown that fake news and conspiracy theories can lead to lower participation in politics, lower vaccination rates, disregard of scientific or medical advice and reduction in environment-friendly behaviours, and can even incite murders and killing sprees. So it is imperative to understand why people continue to believe disinformation despite factual, verifiable evidence to the contrary. Locating what it is in our own minds that makes any of us vulnerable to believing disinformation is the place to begin.

Purnima Mehrotra is the Associate Director – Research and Capacity Building at the Centre for Social and Behavioural Change, Ashoka University. She has experience across industries – education, research, advertising and non-profit.

We publish all articles under a Creative Commons Attribution-Noderivatives license. This means any news organisation, blog, website, newspaper or newsletter can republish our pieces for free, provided they attribute the original source (OpenAxis). 


When should I stop watching the news?

By Siddhartha Dubey

The simple answer to that question is now. Like, right now, today. 

There will be two immediate advantages. One, you will save money, and two, you will be better informed.

TV news is rubbish – from the fake-news- and opinion-infested Republic to the boring and increasingly shallow NDTV. You will be better off reading broadsheets and consuming your news online. I don’t need to tell you what’s online, or about the great multimedia content created every day by teams at the Wall Street Journal, Vice and so many others.

There is so much online – to the point that there is TOO much. Hundreds of thousands of dollars are being spent on digital newsrooms around the world. New hires must be able to report, edit, shoot, produce and, naturally, write.

Photo Credits: Mike Licht

My basic issue with television news (in India) is that it has (largely) become a platform for lies, half-truths, reactionary and dangerous opinions and a place where the government and its militant supporters are able to get their views across without being questioned.  

The quest to curry favor with the rulers of the nation and Dalal Street means ‘whatever you tell us, we will air.’ This translates into advertising rupees, government favors and protection. 

The race for television ratings or TRPs is a discussion for another day. 

So, what we have is a system geared to do anything but inform you, offer analysis or even sensible commentary.

So NO, Times Now did not have its hands on a “secret tape” given to the channel by “security agencies” of two prominent political activists criticising the Popular Front of India.

The recently aired recording was from a publicly available Facebook Live. 

And NO, the banknotes printed after 500- and 1,000-rupee notes were made illegal in early November 2016 did not have microchips embedded in them to ‘track’ their whereabouts at any given time.

Yet television news teams and program hosts spent days vilifying the social activists and comparing them to terrorists out to destroy India. Or in the case of demonetization, championing the government’s “masterstroke” against corruption and undeclared cash.

There is a monstrous amount of fake news swirling around the airwaves and invading your homes. And a large part of it comes from bona fide TV channels that employ suave, well-spoken anchors and reporters.

Given the commissioning editor of this piece gave me few instructions on how she wanted this article written, I am taking the liberty of writing it in first person. 

I don’t own a TV because I hate the news. I get angry really easily. Calm to ballistic happens in seconds, and more often than not the trigger is clips posted on social media of Arnab Goswami from Republic TV, or Navika Kumar and her male clone Rahul Shivshankar of Times Now.

My friend Karen Rebello at the fact-checking website Boom News says “fake news follows the news cycle.”

Rebello says the COVID pandemic has given rise to an unprecedented amount of lies and half-truths. 

“We see so many media houses just falling for fake news. Some of it is basic digital literacy.”

Rebello says very few news desks, editors and anchors who play a strong role in deciding what goes on-air question the source of a video, quote or image.

And then there are lies and bias, such as Times Now’s “secret tapes” or the supposed black-magic skills of actress Rhea Chakraborty. The story around the unfortunate suicide of Sushant Singh Rajput is a veritable festival of uncorroborated information released by (largely male) news editors and personalities committed to destroying the character of Ms. Chakraborty.

I am not on Twitter. 

I used to be. 

But I took myself off it because I became so angry that I became stupid.

So, I don’t know what hashtags are trending right now. 

Guessing there are some which link drugs and Bollywood, Muslims and COVID and Muslims with the recent deadly communal riots in Delhi. Oh yes, I am sure there is a happy birthday prime minister hashtag popping up like an orange in a bucket of liquid. 

Hashtags are sticky, ubiquitous and designed for a reason. Often they act like an online lynch mob, a call to arms around a particular cause or issue. And often they are not, such as the simple #PUBGBAN.

What a hashtag does is put a spotlight on a particular issue and that issue alone. 

So, when a hashtag linking Ms. Chakraborty with illegal drugs is moving rapidly around the Internet and TV news channels, people quickly forget that quarterly economic growth in India is negative 24 percent, or that new data shows over six and a half million white-collar jobs have been lost in recent months.

Get it? Check my new lambo out, but ignore the fact that I mortgaged everything I own to buy it. 

Thanks for reading this and for your sake, don’t watch the news!

Ends.

Featured Image Credit: SKetch (Instagram: @sketchbysk)

Siddhartha Dubey is a former television journalist who has worked in newsrooms across the world. He is currently a Professor of Journalism at Ashoka University.



Here’s the Truth: We Believe Misinformation Because We Want To

By Pravish Agnihotri

On September 14, Buzzfeed News published a leaked memo by Sophie Zhang, a former data scientist at Facebook, revealing Facebook’s deep and muddy entanglement in manipulating public opinion for political ends. “I have personally made decisions that affected national presidents without oversight, and taken action to enforce against so many prominent politicians globally that I’ve lost count,” Zhang said.

This memo follows a piece by the WSJ blaming Facebook for inaction in removing inflammatory posts by leaders of the ruling BJP, posts that fanned the flames of a deadly riot targeting Muslims in Delhi. As the upcoming Bihar election campaign moves online, social media platforms and their ability to moderate hate speech and misinformation will come under further scrutiny. A look at past events does not bode well.

In March, videos of Muslims licking currency, fruits and utensils were circulated online, blaming the Muslim community in India for the coronavirus outbreak. Health misinformation also abounds on social media, where a variety of unfounded treatments, from cow urine to mustard oil, are claimed as possible cures for the coronavirus. Along with the rise in misinformation, we are also seeing the rise of a parallel, albeit much smaller, group of fake-news-debunking organisations. Misinformation, however, remains rampant.

Why does misinformation spread, even in the face of hard evidence? Interactions between our socio-historical context, our psychology, and business models of social media companies might hold the answer. 

The Context

The dissemination of information was once the monopoly of states and a few elite media organisations. Information flowed down a top-down hierarchy with the state at the apex, and the media naturally reflected elite interests. Information was scarce and its sources limited; thus it was trustworthy. This changed with the arrival of television and was completely revolutionised by the internet. Waves of information explosion changed not only how information was distributed but also how much of it was trusted. In his book The Revolt of the Public, Martin Gurri argues that “once the monopoly on information is lost, so is our trust”. The shift from mere consumers of scarce media to hybrid creator-consumers of exponentially abundant information meant that every piece of information in the public domain became an object of scrutiny. In a world where everything could be false, anything could be the truth. It is in this context that we begin to understand misinformation.

Historian Carolyn Biltoft terms this new context the dematerialisation of life. Under this context, beliefs are no longer formed on the basis of individual experience, but are constantly challenged by heavily circulated new information. Additionally, believing new information calls for larger leaps of faith, especially when related to science, technology, or the suffering of a distant community. Spiritual beliefs, beliefs in the superiority of a race, gender, or a form of family, all of which were strong sources of belongingness are now under question. 

The Individual

Individuals increasingly find themselves unable to explain the world around them, unsure of their identity, and unable to look at themselves and their social group in a positive light. It is precisely this condition which makes these individuals vulnerable to misinformation. Various studies have found that people are more likely to believe in conspiracies when faced with epistemic, existential, and social dilemmas. Misinformation allows them to preserve existing beliefs, remain in control of their environment, and defend their social groups. 

One might expect that once presented with evidence, a reasonable individual would cease to believe in misinformation. Psychologists Kahneman and Haidt argue that the role of reason in the formation of beliefs might be overstated to begin with. Individuals rely on their intuition, and not their reason, to make ethical decisions. Reason is later employed to explain the decision already taken through intuitive moral shorthands. 

How are these intuitions formed? Through social interaction with other individuals. Individuals do not and cannot evaluate all possible interpretations and arguments about any topic. They depend on the wisdom of those around them. Individuals who share beliefs trust each other more. Formation of beliefs, hence, is not an individual activity, but a social one based on trust. 

The ability of one’s social networks to influence their beliefs has remained constant. The advent of social media, however, now provides us with the ability to carefully curate our social networks based on our beliefs. This creates a cycle of reinforcement where existing beliefs, informed or misinformed, get solidified. 

Even in homogeneous societies, one is bound to encounter those who disagree with one’s beliefs. Although such disagreements might be expected to check misinformation, studies have found that they can actually have the opposite effect. Olsson finds that social networks that agree with each other increase the intensity of their beliefs over time, and in the process lose trust in those who disagree with them. One study also finds that correcting misinformation can backfire, leading people to believe it even more strongly than before. Our instinct to learn from those we trust, and to mistrust those we disagree with, drives a wedge between groups. Engagement becomes an unlikely solution to misinformation.

Our socio-historical context predisposes us to misinformation, its social nature strengthens our belief in it, and makes us immune to correction. Social media then, acts as a trigger, to the already loaded gun of misinformation. 

The Platform

The misinformation epidemic cannot be attributed to human biases alone. Social media companies, and their monetisation models are part of the problem. Despite coronavirus slashing ad revenues, and an ad-boycott by over 200 companies over its handling of hate speech, Facebook clocked in $18.7 billion in revenue in the second quarter of 2020. Twitter managed to rake in $686 million. Advertising revenues constitute the largest part of these astronomical earnings. 

The business model for all social media companies aims to maximise two things: the amount of time users spend on their platform, and their engagement with other individuals, pages and posts. All this while, these companies collect a host of information about their users which can include demographics, preferences, even political beliefs to create extremely accurate personality profiles.

A recent study found that computers outperform humans at making personality judgements from an individual’s digital footprint. According to the study, the computer models require data on 10, 70, 150 and 300 of an individual’s likes to outperform their work colleagues, friends, family members and spouses respectively. These models are sometimes better than the individuals themselves at predicting patterns of substance abuse, health and political attitudes. This data is then used to customise content and advertisements for every individual, creating echo chambers. In other work, Claire Wardle finds that humans regularly rely on repetition and familiarity to gauge the trustworthiness of new information. If an individual’s beliefs are misinformed to begin with, these algorithms can further strengthen them through sheer repetition. The models can also predict what an individual finds most persuasive, and then ‘microtarget’ them with content, legitimising misinformation in the consumer’s eyes.

As Facebook’s revenue shows, public opinion can be an extremely valuable commodity. It determines what you buy, what precautions you take (or don’t) in a global pandemic, even who you vote for. By arming those with vested interests in public opinion with accurate and effective tools of persuasion, the business models of social media companies end up facilitating the spread of misinformation. 

The truth is often nuanced, resists simplification and, if it disagrees with your beliefs, is off-putting. This doesn’t necessarily make the truth worthy of going viral. Misinformation, on the other hand, tends to be reductive, sensational and, perhaps most dangerously, easier to understand. It also relies on emotion to make the reader believe it. This makes misinformation far more likely to spread across the internet. A study conducted by MIT corroborates this claim: falsehoods on Twitter were found to reach users six times faster than truths.

The ultimate goal for social media algorithms is to maximize engagement. As engagement with a post with misinformation increases, algorithms can expand its reach due to its likely popularity. Further, microtargeting ensures that such posts are shared with individuals who are more likely to agree with the information, and share it themselves. When controversial content leads to higher engagement, misinformation becomes profitable. Economic reasoning alone can lead social media companies to condone, and in worse cases, actively promote its dissemination. 

Our unique context, our instincts and biases, and the business models of social media platforms interact endlessly to create layers upon layers of reinforcing mechanisms that spread misinformation and make us believe in it. Artificial Intelligence is now being called on to fight and weed out misinformation from social media platforms. However, for any solution to be effective, it would need to address the interactions between the three. 

Pravish is a student of Political Science, International Relations, Economics and Media Studies at Ashoka University.
