Ogres are like onions…we both have layers.
As I so often am these days, I'm currently in an airport terminal waiting for departure, this time out of Dublin. As someone who is often in the air and does some work in computer security, I often think about airplane security. There are many parallels between the two fields, as well as many shared fallacies people hold about them. A big one is failing to see a security system as layered. This fallacy was on beautiful display after the failed Christmas Day attack back in 2009, when in the aftermath Janet Napolitano correctly stated that the system had, overall, worked. She made the mistake that almost every politician eventually makes – telling the truth – and had to perform the usual public penance, but that didn't make her any less right.
Why are layers so important? Simple: it is utterly impossible for a single measure to be 100% effective against all threats. Suppose you somehow invented perfect passenger screening, eliminating all risk from passengers. That would do nothing against the risk posed by the pilots themselves, such as the one that downed Germanwings Flight 9525, nor against any of the other risks to aircraft.
And of course, any measure that is taken is by necessity imperfect. People have accidentally and intentionally smuggled guns, knives, and other weapons past airport security in the past, and they will surely continue to do so in the future. Does that make screening worthless? Not in the slightest. Imperfect security still constitutes a barrier. Having your attack discovered by security is far worse than never attempting it at all, and the added hurdle of having to make your device sneaky necessarily makes it less reliable. This is best illustrated by the aforementioned Christmas Day attack. The bomber had to cram makeshift explosives into his underwear to get past security, and the end result was that he merely wasted time in the lavatory and burned himself instead of destroying the plane. The parallel in computer security is measures like scrypt on user password tables. They don't prevent an attacker from getting the hashes out of a database, but they make recovering the passwords much more expensive and force the attacker to do things like try lists of commonly used passwords.
Security layers can also complement each other, with each layer helping to fill the gaps left by the others. To continue the earlier example, suppose you got a gun onto a plane. What would you do with it? You could shoot a few people, but that isn't much more dramatic than doing it on the ground. You could depressurize the cabin, which would provide the drama that terrorism requires, but depressurization alone is not a life-threatening problem: passengers put their masks on, the plane descends to a breathable altitude and lands at the nearest airport, and life goes on. Hijacking the plane is the attacker's best case, but most airliners now have a locked, reinforced door between you and the cockpit, and after September 11th it is no longer possible to carry out a credible hijacking – the passengers will soon be in the cockpit as well. The same principle applies in computer security. Gaining access to a system may let you cause damage, but good systems also have internal controls. Just because you can read database A doesn't mean you can write to it, or that you can read from database B.
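The internal-controls idea can be sketched in a few lines. Everything here – the role names, the grant table – is hypothetical, purely to show the shape of a deny-by-default, least-privilege check:

```python
# Hypothetical grant table: a role may act on a database only if the
# (role, database, action) triple was explicitly granted.
GRANTS = {
    ("reporting", "db_a", "read"),
    ("billing",   "db_a", "read"),
    ("billing",   "db_a", "write"),
}

def allowed(role: str, database: str, action: str) -> bool:
    """Deny by default: anything not explicitly granted is refused."""
    return (role, database, action) in GRANTS
```

Compromising the `reporting` role then lets an attacker read `db_a`, but writes to it, and any access to `db_b`, still fail: the breach is contained by the next layer.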
And there are social layers. These are not commonly recognized as security, but they are still powerful. The same factors that would enable someone to successfully attack a plane also make them extremely unlikely to want to. I do not speak of moral factors here – all of human history proves that a great cause will “justify” any sort of atrocity – but of pure ability and motivation. Anyone who studies terrorism can hardly fail to notice how ineffective it almost invariably is. bin Laden’s goals for the September 11th attacks were pretty clear: the withdrawal of US foreign policy from the Middle East, with the secondary goal of making us change our “decadent” ways at home. To state the obvious, this did not happen.
This needs to be looked at in a nuanced way. A strong enough terrorist force can certainly accomplish its goals, but one that is too weak (and most are) only produces severe blowback.¹ Robert-François Damiens’s attack on Louis XV did not result in the increased intra-Catholic tolerance he had hoped for, but rather in a disgusting, hours-long spectacle of his torture and execution in the middle of Paris. There are countless other examples, but I need to bring this back to computer security.
A doubly useful way to improve computer security is via bug bounties, where companies pay people who find bugs and report them. Not only does this directly improve security by finding and fixing the bugs that inevitably crop up in software, it also provides a profitable outlet for people with the necessary skills: they can find bugs and sell them to the companies instead of exploiting them for profit. Unfortunately, this second benefit is sometimes controversial; some object to rewarding the very people who might otherwise be attackers. To the objectors I would simply reply that Manichaeism has never worked, ever, and is unlikely to start now.
So airplane and computer security are both layered, but are they both effective? The answers are clearly yes and no, respectively, but that can hardly be blamed on the layering. If computer systems changed as little as air infrastructure does, they would undoubtedly be far more secure. And if air infrastructure were run by Silicon Valley, I would keep my feet firmly planted on the ground.
- And the building-up of strength that might make you successful also puts you at greater risk of discovery by the authorities; yet another security factor. ↩