Issue 19

The Pegasus Controversy: Locking the Stable Door

Born of the gorgon Medusa, Pegasus was a winged horse so powerful and valiant that the god Zeus turned him into a constellation, sharing the sky with Leo, Draco, Gemini, Orion, and the like. The flying white horse is a compelling emblem: the Israeli cybersecurity firm NSO Group clearly found it so, naming one of their deadliest systems after it. Their Pegasus was a chimeric attack software, capable of infiltrating the latest and most expensive smartphones. Critically, unlike many others, it did not require a target to make a mistake: you didn’t have to click a dodgy link or download a file to get infected. These were “zero click” attacks, which leveraged vulnerabilities in common software, like Apple’s iMessage.

Pegasus clients could get access to phone data in many ways: if a targeted “spearphishing” email with a link worked, fine. If it didn’t, then they’d use zero-click attacks or other means, including physically getting access to a device and infecting it. The latter was sometimes necessary when a target had reduced their exposure by keeping separate, dedicated devices for sensitive communications. Once installed, Pegasus could intercept phone calls, chats, and emails, access photos and videos, grab location data, and even activate the microphone or camera remotely. Finally, it could erase itself, practically without a trace, once access was no longer required.

While the tool has been around for over a decade, it came to public attention in mid-2021, due to a data leak (the irony!). This leak comprised around 50,000 phone numbers that were allegedly targeted by Pegasus. What alarmed the group of journalists analysing the leak was the fact that the numbers included many journalists and activists. In other words, a military-grade cyberattack tool, intended to target terrorists and the like, was being used against innocent citizens.

There are three questions we must tackle: (1) How bad is this? (2) Clearly, some bad things have happened, so who is to blame? (3) What can we do to fix things in the long term, so that such incidents do not occur in the future?

The answer to the first question isn’t as obvious as it first appears, especially against the backdrop of planetary-scale mass surveillance by the US government and many others. The level of utter betrayal involved in things like the Belgacom scandal (where the British government infiltrated a government-controlled Belgian telecom giant) or the Gemalto hack (where the US and the UK together broke into a Dutch company’s systems to circumvent the new security systems it was installing on SIM cards) might make this particular case seem banal. It is critically different, however: NSO is a private company producing military-grade products, and should be treated like a missile producer. Worse, unlike a missile, code can be replicated with ease. If Lockheed Martin sells one Hellfire missile to the wrong client, it is still practically impossible for that client to make more. Not so with software (though, of course, this kind of attack software needs constant updating, in a cat-and-mouse game with companies patching their defences). Clearly, there needs to be strong, international regulation of the sale of such systems, with sufficient sanctions built in to prevent misuse.

When it comes to blame, there is a lot to go around. It is important to note that the sale of NSO’s cyberattack software is regulated by the Israeli defence minister, who grants individual export licences, presumably making sure that only vetted, “good” nations get access to it. The leaked data and subsequent forensic analysis, however, indicate that the majority of these vetted nations swiftly reneged on their promises (to use this power to target criminals) and started targeting journalists and activists. This is not to say that the blame lies only with these nations: it beggars belief that NSO and the Israeli defence ministry, both supremely competent institutions, were unaware that their vetted clients were doing bad things. It would appear that they decided to look the other way. In India’s case, we have neither a strong data protection bill nor real public pressure around data security and privacy, a problem compounded by outdated laws and weak oversight in this area. Misuse is practically inevitable, especially given that it would be almost impossible to prove in court.

What can be done? Here, I strongly agree with many other experts: laws, technical defences, and good cyber hygiene are all necessary but not sufficient. At the end of the day, the main thing that will stop this from happening in the future is strong and steady public awareness, and anger at such incidents: a government must know that this is an issue that can lose it an election. We do not have anything of the sort in India today: outrage at a privacy breach is a coffee table conversation, and, frankly, not even a heated one. If Shark Tank produces more emotion than Pegasus, don’t expect privacy breaches to be taken seriously. Until that time, the Indian government, among others, will pay only lip service to protecting privacy and security. After all, the government represents its citizens – and we, clearly, don’t seem to care.

Debayan Gupta is currently an Assistant Professor of Computer Science at Ashoka University. He is also a visiting professor and research affiliate at MIT and MIT-Sloan. Debayan’s primary areas of interest include secure computation, cryptography, and privacy.

Picture Credits: Kaspersky Daily

We publish all articles under a Creative Commons Attribution-NoDerivatives license. This means any news organisation, blog, website, newspaper or newsletter can republish our pieces for free, provided they attribute the original source (OpenAxis).