Imagine, if you will, a murder. Some letters are found, all written in a strange language. In Conan Doyle’s “The Adventure of the Dancing Men,” it took Sherlock Holmes to decipher such a script and find the murderer.
Inventing a secret language is rather difficult, but we now have standardized ways to do it: encryption algorithms. Essentially, we have language-inventing software that can create a different language for every secret password. If you know the password, you can translate the language back into plain English. Today’s techniques produce ciphers so secure that even Holmes would be left clueless.
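The "one language per password" idea can be sketched in a few lines of Python. This is a toy illustration only, using the standard library's PBKDF2 to stretch a password into a throwaway XOR keystream; the names `derive_key` and `toy_encrypt` are mine, and the construction is emphatically not a secure cipher.

```python
import hashlib

def derive_key(password: str, salt: bytes, length: int) -> bytes:
    """Stretch a password into `length` key bytes with PBKDF2 (Python stdlib)."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000, dklen=length)

def toy_encrypt(message: bytes, password: str, salt: bytes) -> bytes:
    """XOR the message with a password-derived keystream.
    Because XOR is its own inverse, decryption is the same
    operation with the same password. Illustration only."""
    key = derive_key(password, salt, len(message))
    return bytes(m ^ k for m, k in zip(message, key))

salt = b"fixed-demo-salt"
ct = toy_encrypt(b"the dancing men", "hunter2", salt)
assert toy_encrypt(ct, "hunter2", salt) == b"the dancing men"  # right password
```

Run it with the wrong password and you get gibberish back, which is the whole point: the "language" is defined by the password.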
This has led governments to try to subvert or weaken cryptography. Every time an atrocity occurs, we inevitably hear the same argument. Donald Trump has stated that the US should be able to “penetrate the Internet and find out exactly where ISIS is and everything about ISIS.” It was perhaps David Cameron who best articulated this sentiment: “In our country, do we want to allow a means of communication between people, which even in extremis, with a signed warrant from the home secretary personally, that we cannot read? … are we going to allow a means of communication where it simply isn’t possible to do that? My answer is no, we are not.” The justification, of course, is that these powers are needed by “intelligence agencies and security agencies and policing in order to keep our people safe.”
The “deal”, then, is this: You can communicate securely, as long as you make the encryption easy enough for The Government to decipher. This “easy enough” requirement is currently being enforced by various means, including the infiltration and bribery of companies that produce commercial cryptographic software. Many activists and technologists have written about the ethical problems with having a government that is capable of snooping on all of our communications. I argue that legalising this is not only unethical but also operationally impossible.
I am sure you can already spot the problem — if something is easy enough for one person to decipher, then it is easy enough for many others. You cannot have one without the other: our government employees are not magically cleverer than their US, Chinese, or Russian counterparts, or the many cyber-criminals who prowl the internet. Broken security leaves us vulnerable to anyone with the expertise, not just to a particular government agency. Mathematical laws care little for the laws of any country.
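To make "easy enough for one means easy for everyone" concrete, here is a hypothetical sketch in stdlib Python of a deliberately weakened cipher with only 2**16 possible keys. Nothing about the attack requires a warrant or an intelligence agency; any laptop can simply try every key.

```python
import hashlib

def weak_encrypt(message: bytes, key: int) -> bytes:
    """A deliberately 'easy enough' cipher: only 2**16 possible keys.
    (Keystream is a SHA-256 digest, so messages are capped at 32 bytes.)"""
    stream = hashlib.sha256(key.to_bytes(2, "big")).digest()
    return bytes(m ^ s for m, s in zip(message, stream))

def brute_force(ciphertext: bytes, crib: bytes):
    """Try every key until a decryption starts with a known fragment (crib)."""
    for key in range(2**16):
        guess = weak_encrypt(ciphertext, key)  # XOR is its own inverse
        if guess.startswith(crib):
            return key, guess

intercepted = weak_encrypt(b"attack at noon", 31337)
key, plaintext = brute_force(intercepted, b"attack")
assert plaintext.startswith(b"attack")
```

The loop finishes in well under a second. A cipher weak enough for one government to search exhaustively is, by the same arithmetic, weak enough for everyone else.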
A commonly proposed solution is for the government to have some kind of “backdoor,” such as a master key. This is difficult to do, both technically and operationally. Given that we have substantial problems implementing and deploying our current (comparatively simple) systems, shifting to such a complicated new technology would inevitably lead to more security holes.
Even if one government has a master key for a certain set of encryption systems, we still have problems. What if the master key gets stolen? We would be artificially introducing a critical weakness — such a key would be a prime target for any adversary, and its theft is not a negligible possibility. Over the past few years, hackers have stolen everything from the blueprints of the F-35 fighter jet to financial data from credit rating agencies to healthcare data from hospitals. Trusting governments with master keys when they haven’t been able to safeguard their own military technology seems like a terrible idea.
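The structural weakness is easy to sketch. In the toy stdlib-Python model below (a throwaway XOR stands in for a real cipher; `escrow_encrypt` and `MASTER_KEY` are hypothetical names, not any deployed scheme), every message's session key is wrapped twice: once for the recipient and once for the escrow authority. Whoever steals the single master key can then read every message without ever touching a user's key.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher' for illustration; not secure."""
    return bytes(d ^ k for d, k in zip(data, key))

MASTER_KEY = secrets.token_bytes(32)  # the escrowed backdoor key

def escrow_encrypt(message: bytes, user_key: bytes) -> dict:
    """Encrypt under a fresh session key, then wrap that session key
    twice: once for the user, once for the escrow authority.
    (Messages and user keys are capped at 32 bytes in this toy.)"""
    session_key = secrets.token_bytes(len(message))
    return {
        "ciphertext": xor(message, session_key),
        "key_for_user": xor(session_key, user_key),
        "key_for_escrow": xor(session_key, MASTER_KEY[:len(message)]),
    }

box = escrow_encrypt(b"meet at dawn", user_key=secrets.token_bytes(32))
# A thief holding only MASTER_KEY recovers the session key, then the message:
stolen_session = xor(box["key_for_escrow"], MASTER_KEY[:12])
assert xor(box["ciphertext"], stolen_session) == b"meet at dawn"
```

Note what the thief never needed: the user's key, a warrant, or access to the user's device. One stolen value unlocks all traffic ever protected by the scheme.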
Further, if a criminal knows that the government has a master key to software #420, she’s not going to use it. She’ll find a system with no master key (these, of course, already exist). So, the people suffering from weak encryption are mostly going to be law-abiding citizens, who will now be more vulnerable to hackers.
The global nature of the internet adds yet another layer to this. Other governments are not going to sit around and use communication systems that are compromised from their point of view – they’ll build their own software, probably with their own master keys, and stop trusting software made in other countries, essentially creating import controls on software. How would multinational companies secure their data? Would they be required to provide keys to every government in the world, or perhaps to a branch of the UN? Creating a global body to govern these master keys would be a herculean challenge. And nothing prevents governments from adding their own backdoors to subvert that body as well.
Practically every expert in the field believes that subverting cryptosystems (and the bulk surveillance that inevitably accompanies it) is foolish, immoral, and dangerous.
This is why companies like Apple, Facebook, Google, and Microsoft are supporting stronger encryption. Some commentators who don’t really understand how encryption works have listed seemingly good reasons for exceptional-access backdoors and opined that regulators and legislators must find a way to provide some privacy while allowing law-enforcement access. This won’t work. Yes, one can list advantages for backdoors (roll-down windows on airplanes might have many advantages too), but the fatal problems backdoors create should have ended this discussion long ago. Governments should stop trying to build backdoors and instead support strong, end-to-end security and privacy.
Debayan Gupta is currently an Assistant Professor of Computer Science at Ashoka University, where he teaches a course on security and privacy as well as an introductory programming class. He is also a visiting professor and research affiliate at MIT and MIT-Sloan.
We publish all articles under a Creative Commons Attribution-Noderivatives license. This means any news organisation, blog, website, newspaper or newsletter can republish our pieces for free, provided they attribute the original source (OpenAxis).