The battle over encryption is often caricatured as a battle between spies desperate to scoop up as many secrets as they can and the liberty-loving tech companies standing in their way.
Of course the truth is rather more complicated. The spies are also in charge of keeping secrets, so they understand how important encryption is to modern societies. And many tech companies are happy to decrypt and sift through the data we entrust to them — and sell on what they find to advertisers.
Amid all this posturing we often lose sight of the fundamental questions: what level of privacy is appropriate to protect the individual, and what level of intrusion into that privacy is needed to keep society safe?
The right to privacy is usually considered a qualified right — that is, it can be overridden: it is generally agreed you shouldn’t have an absolute right, say, to plot a murder in secret.
What has happened over the last few years, with the increased use of encryption in general (and of end-to-end encryption in particular), is that total privacy has become a reality. Now, with good enough encryption, you can plot a murder in secret and the police won’t be able to listen in (and, of course, dissidents or campaigners living under a brutal regime can use the same technology to protect themselves from their oppressors).
In a speech at MIT earlier this week, Robert Hannigan, director of the UK’s surveillance agency GCHQ, made an attempt to navigate these complex waters, offering one of the most nuanced contributions to the discussion so far.
“At its root, the ethical problem presented by encryption is the problem presented by any powerful, good invention, including the internet itself, namely that it can be misused,” he said — and insisted that he neither wanted to ban encryption nor mandate backdoors into it.
Hannigan used the example of Tor, a means of communication invaluable to those who need high degrees of anonymity, like human rights advocates and journalists, but also an invention “that is these days dominated in volume at least by criminality”. His argument: “The technology of the internet and the web is morally neutral, but those of us who use it aren’t.”
He said the right response is not to assume that encryption is bad, but to look for a “sensible, pragmatic and proportionate response to a shared problem: the abuse of encrypted services by a minority of people who want to do harm to others”.
He added: “I am not in favour of banning encryption just to avoid doubt. Nor am I asking for mandatory backdoors. I am puzzled by the caricatures in the current debate, where almost every attempt to tackle the misuse of encryption by criminals and terrorists is seen as a ‘backdoor’.”
Tech companies argue that by changing how they use encryption (or introducing those backdoors) they would inevitably allow hackers to gain access to data that should remain private, and that if they agree to allow the US or UK to have access to customer data then it’s much harder to say no to Russia or China.
But perhaps tech companies are not qualified to make that decision on their own — and neither are the spies or their political masters, said Hannigan.
“Democracy, for all its flaws, remains our best defence against the abuse of power and infringement of liberty and privacy, whether by governments, or industry, or the individual. It is, after all, those democratic values from which the internet was created and flourished, not vice versa. The internet is enhancing democracy I’d argue in exciting new ways, but it is not a replacement for the democratic process, or a parallel universe.”
It’s a far more conciliatory approach than the one Hannigan used when first appointed GCHQ director, when he said social media networks were being used as “the command-and-control networks of choice for terrorists and criminals”. That approach just infuriated tech companies.
There’s also the ongoing problem of trust to address. The last time the public encryption debate took place in the late 1990s, it seemed that the intelligence agencies had accepted that public access to encryption was a good thing — and then spent the next two decades secretly trying to break it.
But perhaps the other part of the reason for the new softly-softly approach is that spies and police know they won’t get access to everything.
Even if new laws in the UK give them more access, or the courts force Apple to unlock iPhones, there will still be services beyond the reach of law enforcement that criminals (and those who simply want to protect their privacy) will use.
But if Hannigan and others can persuade the biggest tech companies to modify their approach to privacy, that would at least make the haystacks smaller when searching for a needle. As he argues, the answer is not that encryption should be weakened, “but neither is it true that nothing can be done without weakening encryption”. His argument is that compromise is possible.
It may not be, technically at least. Companies won’t want backdoors into their systems: they know that intelligence agencies have a long history of overstepping the mark and gathering as much information as they can. As ever more of our lives are recorded digitally, where do we draw the line? Perhaps we have already given too much away.
It may be that as a society we are willing to shoulder the risk that criminals can plot in secret so that we may all enjoy greater privacy. Equally we may decide that the needs of security outweigh those of privacy. But Hannigan is right — there’s too much at stake to leave the decision to the spies and the techies.