
“Mark as Read” to “Mark has Read”: Privacy Policies in India

There are three imponderables when it comes to privacy: how to define privacy in today’s data-is-the-new-oil world, how to balance the desires of the individual against the powers-that-be (government or local law enforcement), and how to actually implement and enforce these ideas once we’ve come up with them. In short, it wouldn’t be too wrong to say that we don’t really know what we’re doing when it comes to privacy!

Further, there’s usually a dichotomy proposed between privacy and security: you can have privacy, but that means criminals/terrorists would be able to operate without the government being able to track them. So, if you want to have security from all these evil people, you must consent to let the government snoop on your data as well.

This is actually a familiar pattern: to protect the population from the wiles of food producers, the government sets standards that producers must obey. It may send inspectors to check on the processes being followed, and punish producers who do not conform. With data, however, every single one of us is a producer.

Fortunately (or otherwise), this relentless production of data by individuals is mediated by companies like Facebook, which collate and process it, profiting from the detailed profiles they build of us in the process. So it might be possible to regulate things simply by applying regulations to these corporations instead of at an individual level. But it also means that there are now two entities (albeit with somewhat different incentives) that may want to read what we write: the government and the corporation. One thing is very clear: individual-level policies are insufficient. Most people do not (and cannot be expected to) have a deep understanding of privacy issues, just as we don’t all have a deep understanding of food safety norms. Some kind of aggregated negotiation, then, appears to be the only solution.

Given that the government (an entity interested in seeing our data) is the one representing the population in this negotiation, civil society must be extremely vigilant about what the details are. Many people (loosely) propose some structure of the following nature: private messages between individuals must remain secret, both from the government and the corporation. However, if the government comes to the corporation with a warrant, the latter must hand over the data. This last bit, of course, is impossible in an “end-to-end encrypted” system, where only the sender and receiver can read information.

WhatsApp’s recent change is an interesting nuance in this 40,000-ft view. Your private messages in WhatsApp are still end-to-end encrypted and unreadable to anyone but the parties directly involved: nothing has changed on that front. What many may not have noticed is that WhatsApp actually makes two different apps: one for private use, and one for businesses. WhatsApp’s new policy allows it to look only at communications with these business accounts.

Note that WhatsApp could already look at the metadata: they would know, for example, that you had been chatting with a number of mattress companies (but might not know what kind of mattresses you were looking for). Facebook could then advertise mattresses on your feed. With this new policy, WhatsApp can share data about your interactions with business accounts, so that Facebook can find and suggest the exact kind of mattress you were looking for. As far as changes in privacy go, it’s actually rather minor. Your private messages are just as private as before.

As discussed above, even an end-to-end encrypted system can reveal a lot about one’s preferences and behaviour; this is actually the main difference between WhatsApp and Signal. They use the exact same set of encryption protocols; WhatsApp provides more services (e.g., it is rolling out payments in India), but retains metadata. Signal retains essentially no metadata: it knows the time you last connected and some other basic information, nothing more, and its backups are encrypted. In either case, your actual chats are end-to-end encrypted and cannot be seen by anyone else, with one notable exception: unencrypted backups (WhatsApp offers no option to encrypt them) can be read by Google or Apple (and thus by a government with a warrant).

Any state regulation on these encryption and privacy policies would be incredibly difficult, and that’s without getting into the international nature of the problem (what happens with software written in Germany that facilitates a chat between a Japanese citizen and an Australian citizen, with the latter physically residing in India?). I think the short answer is “non-starter”. 

Perhaps the nearest we can get is a set of minimum standards, some rules about consent, and privacy scores. Such consent rules are also hard to frame, e.g., “a corporation cannot access any data belonging to a user without direct, time-limited consent, with sufficient granularity (not all-or-nothing options)”, but we have a lot of good lawyers who, I am sure, can do a much better job of this than I. In the short term, however, the best idea is almost certainly privacy scores, calculated by an independent government agency and providing something like a star rating to companies operating in India: this could be one way to give citizens the information they need to choose what is right for them.
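
To make the idea concrete, here is a minimal sketch of how such a rating might be computed. The criteria, weights, and numbers are entirely hypothetical, invented purely for illustration; they are not drawn from any existing regulation or agency.

```python
# Hypothetical scoring rubric: the criteria and weights below are invented for illustration.
CRITERIA_WEIGHTS = {
    "granular_consent":    0.30,  # per-purpose consent, not all-or-nothing
    "data_minimisation":   0.25,  # collects only what the service actually needs
    "breach_track_record": 0.25,  # audited history of breaches and their disclosure
    "encryption_at_rest":  0.20,  # stored user data is encrypted
}

def privacy_stars(audit_scores):
    """Map per-criterion audit scores in [0, 1] to a 1-to-5 star rating."""
    weighted = sum(w * audit_scores.get(c, 0.0) for c, w in CRITERIA_WEIGHTS.items())
    return round(1 + 4 * weighted, 1)  # 0.0 -> 1 star, 1.0 -> 5 stars

# Example: strong consent controls and encryption, but a poor breach record.
print(privacy_stars({"granular_consent": 0.9, "data_minimisation": 0.6,
                     "breach_track_record": 0.3, "encryption_at_rest": 1.0}))  # 3.8
```

The point of the sketch is not the particular numbers but the structure: an independent auditor scores each criterion, and the weighted result collapses into a single rating that a non-expert can compare across services.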

Debayan Gupta is currently an Assistant Professor of Computer Science at Ashoka University, where he teaches a course on security and privacy as well as an introductory programming class. He is also a visiting professor and research affiliate at MIT and MIT-Sloan. 

We publish all articles under a Creative Commons Attribution-NoDerivatives license. This means any news organisation, blog, website, newspaper or newsletter can republish our pieces for free, provided they attribute the original source (OpenAxis).


To End or Not to End Privacy

Imagine, if you will, a murder. Some letters are found, all written in a strange language. In Conan Doyle’s “The Adventure of the Dancing Men,” it took Sherlock Holmes to decipher such a script and find the murderer.

Inventing a secret language is rather difficult, except that we now have standardized ways to do it: encryption algorithms. Essentially, we have language-inventing software, which can create different languages based on a secret password. If you know the password, you can translate the language back into plain English. Today’s techniques produce incredibly secure ciphers that would leave even Holmes clueless. 
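
To make this concrete, here is a minimal sketch in Python using the widely used cryptography library (the password and message below are invented for illustration). A key is derived from a password; anyone holding the password can translate the ciphertext back, and nobody else can.

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

# Derive a 32-byte key from a password: the password is the "secret" that defines the language.
password = b"violet-hunter"  # illustrative password only
salt = os.urandom(16)        # random salt; store it alongside the ciphertext
kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
key = base64.urlsafe_b64encode(kdf.derive(password))

cipher = Fernet(key)
token = cipher.encrypt(b"AM HERE ABE SLANEY")  # the dancing-men message, now unreadable gibberish
print(cipher.decrypt(token))                   # only someone with the password can read it again
```

Holmes cracked the dancing men because it was a simple substitution cipher, vulnerable to frequency analysis; a modern cipher like the one above leaves no such statistical fingerprints to exploit.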

This has led to governments trying to subvert or weaken cryptography. Inevitably, every time an atrocity occurs, we hear this argument again and again. Donald Trump has stated that the US should be able to “penetrate the Internet and find out exactly where ISIS is and everything about ISIS.” It was perhaps David Cameron who best articulated this sentiment: “In our country, do we want to allow a means of communication between people, which even in extremis, with a signed warrant from the home secretary personally, that we cannot read? … are we going to allow a means of communication where it simply isn’t possible to do that? My answer is no, we are not.” The justification, of course, is that these powers are needed by “intelligence agencies and security agencies and policing in order to keep our people safe.”

The “deal”, then, is this: You can communicate securely, as long as you make the encryption easy enough for The Government to decipher. This “easy enough” requirement is currently being enforced by various means, including the infiltration and bribery of companies that produce commercial cryptographic software. Many activists and technologists have written about the ethical problems with having a government that is capable of snooping on all of our communications. I argue that legalising this is not only unethical, but operationally impossible.

I am sure you can already spot the problem — if something is easy enough for one person to decipher, then it is easy enough for many others. You cannot have one and not the other, since our government employees are not magically cleverer than their US, Chinese, or Russian counterparts, or the many cyber-criminals that prowl the internet. Broken security renders us vulnerable to anyone with the expertise, not just some government agencies. Mathematical laws care little for the laws of any country.

A commonly proposed solution is for the government to have some kind of “backdoor,” such as a master key. This is difficult to do, both technically and operationally. Given that we have substantial problems implementing and deploying our current (comparatively simple) systems, shifting to such a complicated new technology would inevitably lead to more security holes.

Even if one government has a master key for a certain set of encryption systems, we still have problems. What if the master key gets stolen? We are artificially introducing a critical weakness — such a key would certainly be a prime target for any adversary, and having the key stolen is not a negligible possibility. Over the past few years, hackers have been able to steal everything from the blueprints of the F-35 fighter jet, to financial data from credit rating agencies, to healthcare data from hospitals. Trusting governments with master keys when they haven’t been able to safeguard their own military technology seems like a terrible idea.

Further, if a criminal knows that the government has a master key to software #420, she’s not going to use it. She’ll find a system with no master key (these, of course, already exist). So, the people suffering from weak encryption are mostly going to be law-abiding citizens, who will now be more vulnerable to hackers.

The global nature of the internet adds yet another layer to this. Other governments are not going to sit around and use compromised (from their point of view) communication systems – they’ll build their own software, probably with their own master keys, and stop trusting software made by residents of other countries, essentially creating import controls on software. How would multinational companies secure their data? Would they be required to provide keys to every government in the world, or perhaps to a branch of the UN? The creation of a global body to govern these master keys presents a herculean challenge. Further, nothing prevents individual governments from adding their own backdoors to subvert that body as well.

Practically every expert in the field believes that subverting cryptosystems (and the bulk surveillance that inevitably accompanies it) is foolish, immoral, and dangerous.

This is why companies like Apple, Facebook, Google, and Microsoft are supporting stronger encryption. Some people who don’t really understand how encryption works have come up with many good reasons for exceptional-access backdoors and opined that regulators and legislators must find a way to provide some privacy while allowing law enforcement access. This won’t work. Yes, there are many good reasons for having backdoors (roll-down windows on airplanes would have many advantages too), but the numerous fatal problems they create should have ended this discussion long ago. Governments should stop trying to build backdoors and instead support strong, end-to-end security and privacy.

Debayan Gupta is currently an Assistant Professor of Computer Science at Ashoka University, where he teaches a course on security and privacy as well as an introductory programming class. He is also a visiting professor and research affiliate at MIT and MIT-Sloan.

We publish all articles under a Creative Commons Attribution-NoDerivatives license. This means any news organisation, blog, website, newspaper or newsletter can republish our pieces for free, provided they attribute the original source (OpenAxis).


National Digital Health Mission & Privacy: Should we be worried?

On the 15th of August, Prime Minister Modi announced the new National Digital Health Mission and its pilot launch in six union territories. According to the policy draft, this new mission will radically change healthcare – lowering costs, increasing transparency, and bringing healthcare services even to the most remote corners of the country. However, ever since its announcement, many have been voicing privacy concerns.

While some believe that this will be a revolutionary change to India’s healthcare system, making all things healthcare easier than ever before, others argue that such a program is unnecessary and distracts us from more pressing issues. According to the Indian Medical Association, the plan may divert attention from real problems such as the lack of adequate medical infrastructure. The association has also said that strengthening public health infrastructure and addressing social determinants of health should be the priority right now, and that the new plan could divert funds from these issues, further jeopardizing public healthcare. The biggest concern, however, is confidentiality and privacy.

The confidentiality and privacy of patients’ health information are among the most fundamental principles of healthcare. This is not only to protect patients from stigma and discrimination but also to establish trust between healthcare providers and patients. Another important reason is the idea of self-ownership and bodily integrity: as rightful owners of our bodies, only we have the right to own our biological data and decide with whom it is shared. The policy document of the Digital Health Mission repeatedly emphasizes its stringent security and consent-based data-sharing guidelines, but many still believe that the mission could pose a grave threat to the privacy and security of citizens.

One reason for this is the numerous privacy breaches of the past. With data as sensitive as this, how can we be confident the same will not happen again? Aadhaar, for example, has been the subject of data leaks on far too many occasions, yet there has still been no accountability from the government. Furthermore, if our Health IDs are linked to our Aadhaar, the risk of our personal health data being accessed and misused by an unauthorized person or entity increases further. Similarly, with the Aarogya Setu app, there was no transparency about whether collected data was deleted after patients recovered, or about how the app functions on the phone, since its code was not open source.

Another area of concern is the possibility of our health data being sold to private companies. According to the National Health Data Management Policy, anonymized, de-identified, and aggregated data may be made available to organizations (page 21). In the past year alone, there has been an explosion in the number of cases of big tech and pharma companies collecting personal health data. The NHS was found to be providing data to Amazon, Google gained access to the health records of some 50 million Americans through a partnership with a large US health system, and the pharmaceutical company GlaxoSmithKline paid the DNA-testing company 23andMe $300 million for access to customer data. Big companies are chasing after our data, and with the implementation of the National Digital Health Mission, India might become a new market for such data.

Big tech companies like Google, Amazon, and Apple are collecting health data to build their own AI health technologies. While these technologies could prove highly useful and beneficial in the future, there are concerns about whether these companies are using the data ethically. For example, last year, when Google obtained personal health records from medical institutions in the USA, it emerged that even though those records were stripped of identifying details such as names and contact information, Google could combine them with the information it already held, including personal information collected from smartphones, which could easily establish the identity of the person behind each medical record. This is not just true of big tech companies: research has shown that supposedly anonymous information can often be re-identified, even when the data sold is anonymized and aggregated. How do we know that big tech companies will use this data only for research, as they claim, and not for profit? Indeed, an anonymous whistleblower has claimed that Google uses such data to mine patient information, run analytics, and sell the results to third parties so that healthcare ads can be targeted based on a patient’s medical history.
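
To see why “anonymized” does not always mean anonymous, here is a minimal, purely illustrative sketch of the linkage attacks that re-identification research describes; every record, field name, and value below is invented. A de-identified health record is matched against auxiliary data an attacker already holds, using quasi-identifiers such as pincode, birth year, and sex.

```python
# Toy linkage attack on an "anonymized" dataset (all records are invented).

# De-identified health records: direct identifiers removed, quasi-identifiers kept.
health_records = [
    {"pincode": "110001", "birth_year": 1984, "sex": "F", "diagnosis": "Type 2 diabetes"},
    {"pincode": "560034", "birth_year": 1991, "sex": "M", "diagnosis": "Hypertension"},
]

# Auxiliary data the attacker already has (e.g., a leaked customer list).
customer_list = [
    {"name": "A. Sharma", "pincode": "110001", "birth_year": 1984, "sex": "F"},
    {"name": "R. Iyer",   "pincode": "560034", "birth_year": 1991, "sex": "M"},
]

# Join on the quasi-identifiers: if the combination is unique, the
# "anonymous" health record is re-identified.
for record in health_records:
    matches = [c for c in customer_list
               if (c["pincode"], c["birth_year"], c["sex"]) ==
                  (record["pincode"], record["birth_year"], record["sex"])]
    if len(matches) == 1:
        print(f'{matches[0]["name"]} -> {record["diagnosis"]}')
```

With only a handful of quasi-identifiers, the combination is unique for most people, which is exactly what the re-identification research mentioned above exploits.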

The truth is that there is no guarantee of how this data will be used. Although personalized healthcare ads might be a relatively minor issue, we cannot even anticipate how this data could be used, and what harm it could cause, in the worst-case scenarios. The Cambridge Analytica scandal is one example of how our data was used against us to sway elections. With sensitive health data involved, and big tech companies beginning to work on AI-based healthcare, the implications of possible misuse could be even graver.

Besides big tech companies, pharmaceutical and insurance companies are also chasing our medical data. Pharmaceutical companies have been found to use hospitals’ digital record-keeping systems to gather information and use it to market drugs. There are also findings that insurance companies are beginning to gather data on race, marital status, how much TV you watch, and even the size of clothing you wear. With our entire medical history available, insurance companies would have more power than ever before: patients could be denied health coverage, or charged higher premiums, based on their medical history or even their genetic data.

While it is true that digitizing the healthcare ecosystem in India could be beneficial for streamlining healthcare, the privacy concerns are difficult to ignore. Along with bringing transparency and accessibility, it also opens new doors for data misuse and possible stigma and discrimination. Without a data protection law in place and actual technological infrastructure to protect our data, the implementation of this project is dangerous. 

Aradhya is a psychology major at Ashoka University. In her free time you’ll find her reading books, drinking chai and cycling at odd hours.

We publish all articles under a Creative Commons Attribution-NoDerivatives license. This means any news organisation, blog, website, newspaper or newsletter can republish our pieces for free, provided they attribute the original source (OpenAxis).