
When should I stop watching the news?

By Siddhartha Dubey

The simple answer to that question is now. Like, right now, today. 

There will be two immediate advantages: one, you will save money, and two, you will be better informed. 

TV news is rubbish, right from the fake-news- and opinion-infested Republic to the boring and increasingly shallow NDTV. You will be better off reading broadsheets and consuming your news online. I don’t need to tell you what’s online, or about the great multimedia content created every day by teams at the Wall Street Journal, Vice and so many others.

There is so much online that, at this point, there is TOO much. Hundreds of thousands of dollars are being spent on digital newsrooms around the world. New hires must be able to report, edit, shoot, produce and, naturally, write.

Photo Credits: Mike Licht

My basic issue with television news (in India) is that it has (largely) become a platform for lies, half-truths, reactionary and dangerous opinions and a place where the government and its militant supporters are able to get their views across without being questioned.  

The quest to curry favor with the rulers of the nation and Dalal Street means ‘whatever you tell us, we will air.’ This translates into advertising rupees, government favors and protection. 

The race for television ratings or TRPs is a discussion for another day. 

So, what we have is a system geared to do anything but inform you or offer analysis, or even sensible commentary. 

So NO, Times Now did not have its hands on a “secret tape” given to the channel by “security agencies” of two prominent political activists criticising the Popular Front of India.

The recently aired recording was from a publicly available Facebook Live. 

And NO, the banknotes which were printed after 500- and 1,000-Rupee notes were made illegal in early November 2016, did not have microchips embedded in them so as to ‘track’ their whereabouts at any given time. 

Yet television news teams and program hosts spent days vilifying the social activists and comparing them to terrorists out to destroy India. Or in the case of demonetization, championing the government’s “masterstroke” against corruption and undeclared cash.

There is a monstrous amount of fake news swirling around the airwaves and invading your homes. And a large part of it comes from bona fide TV channels which employ suave, well-spoken anchors and reporters. 

Given that the commissioning editor of this piece gave me few instructions on how she wanted this article written, I am taking the liberty of writing it in the first person. 

I don’t own a TV because I hate the news. I get angry really easily. Calm to ballistic happens in seconds, and the trigger, more often than not, is clips posted on social media of Arnab Goswami from Republic TV, or Navika Kumar and her male clone Rahul Shivshankar of Times Now. 

My friend Karen Rebello at the fact-checking website Boom News says “fake news follows the news cycle.”

Rebello says the COVID pandemic has given rise to an unprecedented amount of lies and half-truths. 

“We see so many media houses just falling for fake news. Some of it is basic digital literacy.” 

Rebello says very few of the news desks, editors and anchors who play a strong role in deciding what goes on air ever question the source of a video, quote or image.

And then there are lies and bias, such as Times Now’s “secret tapes” or the supposed black-magic skills of actress Rhea Chakraborty. The story around the unfortunate suicide of Sushant Singh Rajput is a veritable festival of uncorroborated information released by (largely male) news editors and personalities committed to destroying Ms. Chakraborty’s character. 

I am not on Twitter. 

I used to be. 

But I took myself off it because I would get so angry that I became stupid. 

So, I don’t know what hashtags are trending right now. 

I’m guessing there are some which link drugs and Bollywood, Muslims and COVID, and Muslims with the recent deadly communal riots in Delhi. Oh yes, I am sure there is a happy-birthday-prime-minister hashtag popping up like an orange in a bucket of liquid. 

Hashtags are sticky, ubiquitous and designed with a purpose. Often, they act like an online lynch mob: a call to arms around a particular cause or issue. And often they are not, such as the simple #PUBGBAN.

What a hashtag does is put a spotlight on a particular issue and that issue alone. 

So, when a hashtag linking Ms. Chakraborty with illegal drugs is moving rapidly around the Internet and TV news channels, people quickly forget that quarterly economic growth in India is negative 24 percent, or that new data shows over six and a half million white-collar jobs have been lost in recent months. 

Get it? Check out my new Lambo, but ignore the fact that I mortgaged everything I own to buy it. 

Thanks for reading this and for your sake, don’t watch the news!


Featured Image Credit: SKetch (Instagram: @sketchbysk)

Siddhartha Dubey is a former television journalist who has worked in newsrooms across the world. He is currently a Professor of Journalism at Ashoka University.

We publish all articles under a Creative Commons Attribution-NoDerivatives license. This means any news organisation, blog, website, newspaper or newsletter can republish our pieces for free, provided they attribute the original source (OpenAxis). 


Here’s the Truth: We Believe Misinformation Because We Want To

By Pravish Agnihotri

On September 14, Buzzfeed News published a leaked memo from Sophie Zhang, a former data scientist at Facebook, revealing Facebook’s deep and muddy entanglement in manipulating public opinion for political ends. “I have personally made decisions that affected national presidents without oversight, and taken action to enforce against so many prominent politicians globally that I’ve lost count,” Zhang said. 

The memo follows a piece in the WSJ that blamed Facebook for inaction in removing inflammatory posts by leaders of the ruling BJP, posts that fanned the flames of a deadly riot targeting Muslims in Delhi. As the upcoming Bihar election campaign moves online, social media platforms and their ability to moderate hate speech and misinformation will come under further scrutiny. A look at past events does not bode well. 

In March, videos of Muslims licking currency, fruits, and utensils were circulated online, blaming the Muslim community in India for the coronavirus outbreak. Health misinformation also abounds on social media, where a variety of unfounded treatments, like cow urine and mustard oil, are claimed to be possible cures for the coronavirus. Alongside the rise in misinformation, we are also seeing the rise of a parallel, albeit much smaller, group of fake-news-debunking organisations. Misinformation, however, remains rampant. 

Why does misinformation spread, even in the face of hard evidence? Interactions between our socio-historical context, our psychology, and business models of social media companies might hold the answer. 

The Context

The dissemination of information was once a monopoly of states and a few elite media organisations. Information flowed down a top-down hierarchy with the state at the apex. Naturally, the media reflected elite interests. Information was scarce and its sources limited, and so it was trusted. This changed with the arrival of TV and was completely revolutionised by the arrival of the internet. Waves of information explosion changed not only how information was distributed but also how much of it was trusted. In his book The Revolt of the Public, Gurri argues that “once the monopoly on information is lost, so is our trust”. The shift from mere consumers of scarce media to hybrid creator-consumers of exponentially abundant information meant that every piece of information in the public domain became an object of scrutiny. In a world where everything could be false, anything could be the truth. It is in this context that we begin to understand misinformation. 

Historian Carolyn Biltoft terms this new context the dematerialisation of life. Under this context, beliefs are no longer formed on the basis of individual experience, but are constantly challenged by heavily circulated new information. Additionally, believing new information calls for larger leaps of faith, especially when related to science, technology, or the suffering of a distant community. Spiritual beliefs, beliefs in the superiority of a race, gender, or a form of family, all of which were strong sources of belongingness are now under question. 

The Individual

Individuals increasingly find themselves unable to explain the world around them, unsure of their identity, and unable to look at themselves and their social group in a positive light. It is precisely this condition which makes these individuals vulnerable to misinformation. Various studies have found that people are more likely to believe in conspiracies when faced with epistemic, existential, and social dilemmas. Misinformation allows them to preserve existing beliefs, remain in control of their environment, and defend their social groups. 

One might expect that once presented with evidence, a reasonable individual would cease to believe in misinformation. Psychologists Kahneman and Haidt argue that the role of reason in the formation of beliefs might be overstated to begin with. Individuals rely on their intuition, and not their reason, to make ethical decisions. Reason is later employed to explain the decision already taken through intuitive moral shorthands. 

How are these intuitions formed? Through social interaction with other individuals. Individuals do not and cannot evaluate all possible interpretations and arguments about any topic. They depend on the wisdom of those around them. Individuals who share beliefs trust each other more. Formation of beliefs, hence, is not an individual activity, but a social one based on trust. 

The ability of our social networks to influence our beliefs has remained constant. The advent of social media, however, now provides us with the ability to carefully curate our social networks based on our beliefs. This creates a cycle of reinforcement in which existing beliefs, informed or misinformed, get solidified. 

Even in homogeneous societies, one is bound to encounter people who disagree with one’s beliefs. Although such disagreements might be expected to check misinformation, studies have found that they can actually have the opposite effect. Olsson finds that social networks that agree with each other increase the intensity of their beliefs over time, and in the process lose trust in those who disagree with them. One study also finds that correcting misinformation can backfire, leading people to believe it even more strongly than before. Our instinct to learn from those we trust, and to mistrust those we disagree with, drives a wedge between groups. Engagement becomes an unlikely solution to misinformation. 

Our socio-historical context predisposes us to misinformation; its social nature strengthens our belief in it and makes us immune to correction. Social media, then, acts as a trigger to the already loaded gun of misinformation. 

The Platform

The misinformation epidemic cannot be attributed to human biases alone. Social media companies and their monetisation models are part of the problem. Despite the coronavirus slashing ad revenues, and an ad boycott by over 200 companies over its handling of hate speech, Facebook clocked $18.7 billion in revenue in the second quarter of 2020. Twitter managed to rake in $686 million. Advertising revenues constitute the largest part of these astronomical earnings. 

The business model for all social media companies aims to maximise two things: the amount of time users spend on their platform, and their engagement with other individuals, pages and posts. All this while, these companies collect a host of information about their users which can include demographics, preferences, even political beliefs to create extremely accurate personality profiles.

A recent study found that computers outperform humans when it comes to making personality judgements from an individual’s digital footprint. According to the study, the computer models require data on 10, 70, 150 and 300 of an individual’s likes to outperform their work colleagues, friends, family members, and spouses respectively. These models are sometimes better than the individuals themselves at predicting patterns of substance abuse, health, and political attitudes. This data is then used to customise content and advertisements for every individual, creating echo chambers. In another study, Claire Wardle finds that humans regularly rely on repetition and familiarity to gauge the trustworthiness of new information. If an individual’s beliefs are misinformed to begin with, these algorithms can further strengthen them through sheer repetition. These models can also predict what an individual finds most persuasive, and then ‘microtarget’ them with content, legitimising misinformation in the consumer’s eyes. 

As Facebook’s revenue shows, public opinion can be an extremely valuable commodity. It determines what you buy, what precautions you take (or don’t) in a global pandemic, even who you vote for. By arming those with vested interests in public opinion with accurate and effective tools of persuasion, the business models of social media companies end up facilitating the spread of misinformation. 

The truth is often nuanced, resistant to simplification and, if it disagrees with your beliefs, off-putting. This doesn’t make the truth likely to go viral. Misinformation, on the other hand, tends to be reductive, sensational and, perhaps most dangerously, easier to understand. It also relies on emotion to make the reader believe it. This makes misinformation more likely to spread across the internet. A study conducted by MIT corroborates this claim: falsehoods on Twitter were found to reach users six times faster than truths. 

The ultimate goal for social media algorithms is to maximize engagement. As engagement with a post with misinformation increases, algorithms can expand its reach due to its likely popularity. Further, microtargeting ensures that such posts are shared with individuals who are more likely to agree with the information, and share it themselves. When controversial content leads to higher engagement, misinformation becomes profitable. Economic reasoning alone can lead social media companies to condone, and in worse cases, actively promote its dissemination. 
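The amplification loop described above can be made concrete with a deliberately simplified sketch. No real platform’s ranking algorithm is anywhere near this simple, and the post names, engagement counts, and weightings below are all hypothetical; the point is only that when a feed orders posts purely by engagement, a sensational rumour with fewer likes can still outrank a sober report, because comments and shares count for more.

```python
# Toy sketch of engagement-weighted feed ranking (illustrative only;
# all names, numbers, and weights here are hypothetical).

def rank_feed(posts):
    """Order posts by a naive engagement score, highest first.

    Shares and comments are weighted more heavily than likes,
    mimicking the intuition that 'active' engagement signals
    matter most to an engagement-maximising ranker.
    """
    def score(post):
        return post["likes"] + 3 * post["comments"] + 5 * post["shares"]
    return sorted(posts, key=score, reverse=True)

posts = [
    # A measured, factual story: many likes, little discussion.
    {"id": "sober-report", "likes": 120, "comments": 10, "shares": 5},
    # A sensational rumour: fewer likes, but heavy sharing and arguing.
    {"id": "sensational-rumour", "likes": 80, "comments": 60, "shares": 40},
]

for post in rank_feed(posts):
    print(post["id"])
# The rumour scores 80 + 3*60 + 5*40 = 460, the report only
# 120 + 3*10 + 5*5 = 175, so the rumour is ranked first.
```

Nothing in this sketch checks whether a post is true; truth simply is not a variable the objective function sees, which is the essay’s point about why economic reasoning alone can promote misinformation.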

Our unique context, our instincts and biases, and the business models of social media platforms interact endlessly to create layers upon layers of reinforcing mechanisms that spread misinformation and make us believe in it. Artificial Intelligence is now being called on to fight and weed out misinformation from social media platforms. However, for any solution to be effective, it would need to address the interactions between the three. 

Pravish is a student of Political Science, International Relations, Economics and Media Studies at Ashoka University.

We publish all articles under a Creative Commons Attribution-NoDerivatives license. This means any news organisation, blog, website, newspaper or newsletter can republish our pieces for free, provided they attribute the original source (OpenAxis).