
It's the people, stupid.

September 23rd, 2016

The following is an approximate transcript of my talk It's the people, stupid from Velocity NY 2016. The slides are available in PDF format here or on slideshare. When/if O'Reilly posts the video of the talk, I will link it here as well.




slide01.png

And now for something completely different...

slide02.png
slide04.png

Aaaaah, yes, the internet... Websites. Apps. Servers. Mobile devices. Networks. DevOps. ChatOps.

slide05.png

NoOps. Serverless... just kidding, these last two things don't actually exist. They're like those magical golden encryption keys.

slide06.png
slide07.png

Your coffee machine, toilet, toaster, fridge, self-driving cars and the internet of things. (Note: the Internet of Things is quite a bit older than you might think. RFC2324 was published in 1998!)

Clown computing.

What do all these have in common?

slide08.png

Software. Code. Vulnerabilities.

slide09.png

People. People are the ones writing the software. People are the ones using the software. Actual Humans. What a concept!

Even in the Internet of Things, where components communicate with one another unattended, you inevitably have a human end-user. You are always operating on layer 9.

slide10.png
slide11.png

Everything we build serves the user. If there is no user, then the thing we built is use(r)less.

We tend to forget this. We engage in heated discussions about our pet editors (vi), coding styles (tabs), programming languages (they all suck), and -- in my particular corner of the internet -- the theoretical possibility of a faceless nation-state attacker breaking cryptographic algorithms while often forgetting that there are actual people involved. Actual humans. On all sides: developers and users, attackers and defenders, it's people.

slide12.png
slide13.png

Remember this? Back in 1992, when the first President Clinton (fingers-crossed) ran for office, her (I mean: his) campaign ran on these three simple statements.

And you know what? That pretty much translates one-to-one to information security and web operations.

slide14.png
slide15.png
slide16.png
slide17.png
slide18.png
slide19.png

We need change vs. more of the same.

People working with computers tend to become cynical with time. I'm sure all of you are familiar with the five stages of grief - I mean, the six stages of debugging. This is the first level on the downward spiral of cynicism. It goes from developers ("How did this ever work?") down to #infosec ("Not like that." and "Nobody knows what they're doing.").

And then there are New Yorkers, who "believe that people living anywhere else have to be, in some sense, kidding." (By the way, folks from out of town: why do you keep taking pictures of our squirrels?)

Finally, at the bottom of the spiral, we find New York Infosec SysAdmins -- those people are the worst. We assume the worst. In many ways, we have to. We're professionally paranoid, but we are also continuously reminded to assume incompetence. Our culture prides itself on feeling smug and superior and on ridiculing others for their incompetence.

slide21.png

But Hanlon's razor -- "Never attribute to malice that which is adequately explained by stupidity" -- is not a safe tool: we cut ourselves on each use. In reality, we observe a variety of human errnos, a variety of reasons for people's actions. Most of them have to do with conflicting -- or at least different -- priorities.

Here are some of the things where we eye-rollingly assume incompetence:

slide22.png
slide23.png
slide24.png
slide25.png
slide26.png

chmod 777 (go ahead, search your own codebase and weep), code injections of all sorts, passwords in your shell history files, secrets uploaded to GitHub, and passwords written on whiteboards. Goddammit, people are so stupid, right?
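If you do want to take the "search your own codebase" challenge, a minimal sketch of what that search might look like (assuming a POSIX-ish shell and a GNU or BSD userland; adjust paths to taste):

    # Find world-writable chmod invocations in code and deployment scripts.
    grep -rn "chmod 777" .
    grep -rn "chmod -R 777" .

    # And find files that are already world-writable on disk.
    find . -type f -perm -0777 -print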

slide27.png

But the assumption of incompetence is harmful, because it suggests that people would behave differently if only they knew. If only they understood our reasoning, they would happily abide by all our requirements! I don't think that's true.

Instead of asking yourself "Why would they do that?", have you considered actually asking them why they might do that? chmod 777 has virtually no safe, valid use case, but boy does it get the job done when you need to ensure access by your cooperating processes.
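To be fair to the people reaching for it: the underlying need -- letting a few cooperating processes share files -- can usually be met without opening things up to the world. A hedged sketch, using a hypothetical shared group and directory (group name, users, and path are made up for illustration):

    # Create a group for the cooperating processes and add their users.
    groupadd appshare
    usermod -aG appshare web
    usermod -aG appshare worker

    # Hand the directory to that group; the setgid bit makes new files
    # inherit the group, so the processes keep being able to share.
    chgrp -R appshare /srv/app/shared
    chmod 2770 /srv/app/shared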

We, who pride ourselves on understanding threat models, ridicule wifi passwords shared on whiteboards in closed rooms...

And man, oh man, do we have remedies for all these problems!

slide29.png

We come up with static code scanners that open tickets when they encounter a variable named "password", inevitably leading to developers naming their variables "assword".

We come up with password requirements that make remembering a password impossible, so that end-user systems implement a "security question", keeping our net security just about the same, if not worse: the answers to security questions can trivially be found on Facebook or via Google. But security questions are a usability mechanism, and their very existence should teach us that there was a problem in the first place...

(Infosec nerds chuckle and snidely note that you should not answer them correctly, instead "just" filling them with another 50-character complex password. Cool story -- actual people do not understand this, nor does it solve their initial problem: not being able to remember the complex password.)

slide30.png

I know passwords are an easy target and ripping on them is like shooting fish in a barrel, but I do believe that they're an excellent example of how we confuse our users: we require these complex, site-specific passwords, but tell people not to write down passwords...

slide31.png

...except when we suddenly do a 180 and tell everybody they should write them down.

slide32.png

But not like that!

"Not like that" is a popular antipattern I see a lot. Now I'm a big fan of password managers, but it's yet another application that people have to learn to use and trust after we spent decades telling them not to write down passwords. How do we expect people to understand our reasoning?

If we continue to give conflicting advice that's hard to follow, people will develop the habit of not only ignoring us, but of avoiding us.

slide33.png

This is Mr. E.R. Bradshaw of Napier Court, Black Lion Road London SE5. He can not be seen.

We can't help people who have internalized the importance of not being seen by their security team.

And changing other people's habits is a wicked problem.

slide35.png

Here's another example. This one is perhaps a bit more targeted towards the technical people who you hope understand at least some of the risks involved.

In many environments, people rely on long-lived SSH connections and screen sessions between, e.g., their laptop or development environment and production systems.

slide36.png

Security-minded people tend not to like that so much. So they put in place restrictions that, for example, terminate idle sessions, or only allow certain connections.

You won't believe what happens next...

slide37.png

That's right, people find a workaround. They use cron(8) to set up periodic pings to keep the network connection alive, or set up reverse SSH tunnels hopping across network boundaries, store private keys on production systems, etc. etc.
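To illustrate how low the bar is, here is the sort of crontab(5) entry such a workaround tends to consist of (hostnames invented; shown as an illustration of the desire path, not a recommendation):

    # Ping the bastion every minute so the "idle" session never looks idle.
    * * * * * ping -c 1 bastion.example.com >/dev/null 2>&1

    # Re-establish a reverse SSH tunnel back to the dev box if it's not running.
    */5 * * * * pgrep -f "ssh -f -N -R 2222" >/dev/null || ssh -f -N -R 2222:localhost:22 dev.example.com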

Which of course makes things actually worse for you. There's nothing more detrimental to your goals than putting security restrictions in the way of an engineer who just wants to get her job done.

And the actions of the people working around your silly little restrictions are entirely understandable: they are not "stupid" -- they have different priorities. If you get in their way, and they do not understand or agree with your reasoning, they will work around you.

This picture here is a good example of what is known as a "desire path" or "desire line", and is a common pattern observed in planning of public spaces.

slide38.png

Desire paths form where the design of the space does not allow people to follow their preferred routes. Desire paths tend to form as the shortest path between two points, or where the formal design does not accommodate a common use case.

The picture above is from the Netherlands, where many people ride bikes. If you're riding your bike, you're certainly not going to like riding up to the stairs, getting off your bike, and carrying it down the stairs. Instead, you risk a little tumble in return for a quick ride down the hill on the side.

Desire paths thus also illustrate how people adjust their own behaviour based on their own perception of risk versus the (perceived or real) gain (e.g. saving a few seconds, feeling like they are in control ("nobody tells me where to walk"), etc.).

slide39.png

The really interesting thing here is the concept of "risk compensation". In many circumstances, people will behave in such a way as to keep the level of risk they (believe they) expose themselves to the same.

"Skiers wearing helmets go faster on average than non-helmeted skiers", says Wikipedia [citation], and wearing all my protective gear in the skatepark I am willing to attempt riskier tricks than without. The result? A broken wrist, thumb, and a dislocated shoulder. Hey, you win some, you lose some; do as I say, not as I do.

Paradoxically, then, certain measures deployed to increase safety may actually reduce it. (As a side note, there are likewise measures that seem to decrease safety but actually increase it (see 'shared spaces'), because the awareness of higher risk may reduce risky behaviour and lead to an overall reduction of harm.)

slide40.png

The way people accept risk is important to understand: people will perceive risk to be lower if they voluntarily engage in the risky behaviour. They will perceive risk to be higher, if they are involuntarily exposed to it.

That is, autonomy factors into an individual's risk perception. A second factor is familiarity or, perhaps, conceivability:

"The easier it is for you to think of an example of something happening, the more frequently you think that thing happens."

Anybody here a parent of a young child that just started to walk to and from school by herself? This is terrifying, right? So many terrible things could happen! What most people are afraid of here is abduction of children, kidnapping by strangers. But that is actually an incredibly rare crime.

But because we can easily imagine it, we think it happens more frequently than it does. When it does happen, it's all over the news -- and news, by definition, covers events that are rare; it wouldn't be news otherwise. This reinforces our perception of the risk being high.

slide41.png

So perceived risk is weird. People are notoriously terrible at estimating actual risk. And our inability to correctly assess risk leads us to divert resources to defend against unlikely threats.

People are more afraid of air crashes than traffic accidents, although the odds of you being hit by a car while crossing the street are significantly higher than you dying in a plane crash. Here in New York, you are more likely to get run over while you're crossing the street than you are being a victim of a pipe bomb hidden in a trashcan, but in the last few days, it really hasn't felt that way, has it?

You are 35,079 times more likely to die from heart disease, 33,842 times more likely to die from cancer, 4,311 times more likely to die from diabetes (triggered by poor eating habits), 1,904 times more likely to get hit by a car, 26 times more likely to die falling out of your bed, and 5 times more likely to get hit by lightning than to die in a terror attack (numbers from here and here, amongst others), but at the airport we all dutifully take off our shoes and belts, ready to buy some more fast food and diet soda once we're through "security".

(Footnote: Some of these numbers are a bit polemic, since "terrorism spending" is not actually a number. Instead, many reports combine military spending on wars relating to or triggered by terrorism-related activity -- i.e. "the war on terror" -- within this budget.)

slide42.png

Information security is a lot like that. That's right: we behave much like the TSA. We react to previously successful attacks by deploying, with great fanfare, elaborate and expensive defense mechanisms that we claim will defeat the attackers, but people clicking on links in emails still gets us pwned.

All too often do we focus on the outliers, the spectacular events. We build up our ego with the self-image of the glamorous knight defending against the nation-state attacker burning 0-days left and right (APT!) while little Bobby Tables exploits five-year-old vulnerabilities in our libraries and code bases.

But I've got bad news:

slide43.png
slide45.png

Security is not a value. We heard something similar this morning in Katherine's keynote: 'Technology is a means to an end, not the end goal itself'. This permeates the stack:

In #infosec, we pretend that 'security' is an end-goal, a value. We try to "make software secure" -- a fatal expression, suggesting we can duct-tape "security" onto the system after the fact, which exhibits a flaw in our thinking. "Security" is not a value, it's an outcome. "Security" is not an end in itself -- even though all too often we act as if it was.

(Footnote: See also Eleanor Saitta's talk at Velocity in Santa Clara earlier this year.)

Security is not a value. It's a property built into a system that actual humans -- people -- will use to interact with the world.

The defenses and solutions we deploy in our effort to achieve "security" are marketed as 'sophisticated', 'military grade encryption', 'APT-proof', and, most importantly, 'cyber'. Now while the cyber is undeniably YUGE -- are these defense mechanisms effective?

slide46.png

Defense mechanisms can only be effective if they're people driven, because both attack and defense are people driven. Both sides have actual humans with their own specific sets of goals and motivations running their own cost-benefit assessment.

slide47.png

Attackers are humans, they are people, too. This is not to downplay any attacks you're seeing, or to justify their actions, but it's important to recall that we are facing actual people. Attackers act out of human motivations, such as seeking fame, money, a feeling of patriotic duty, or whatever else you can think of, all with their own risk of loss (e.g. of anonymity, resources, recognition, time and effort, ...).

It's critical that we understand this, as it allows us to focus our attention on those defense mechanisms that actually make a difference, that force the attacker to re-evaluate their cost-benefit model.

Attackers will act rationally and in line with their motives. Attackers will not wear ski masks while hax0ring around the scary darknet. Attackers will continue to employ the cheapest, most effective attack until it ceases to be that. Nobody is going to burn a $1M 0-day if they can compromise your infrastructure with a few simple PHP or SQL code injections.

Yet, the cyber security products we're buying are pitched with the pretense that their latest and greatest APT-WAF-SIEM-Inabox will help you defend your infrastructure against the biggest nation-state attackers.

This is what these products are, by and large:

slide50.png

Fences. Barriers. Individual spot checks.

If we only react, we do not shift attackers’ methods, just the individual attack points.

slide51.png
slide52.png

Some of the products pitched take a much harder approach and are really dedicated to building these walls. But it's useful to remember that even if they're YUGE and very cyber and beautiful, and no matter who pays for them, building walls to keep people locked in or out rarely works, because you get in the way of people's desire paths.

slide53.png

Let us go back to desire paths and public spaces...

People have all sorts of silly ways of walking. They don't cross intersections at neat right angles. They jaywalk or cross in the middle, which isn't very safe for them to do, normally.

But if you talk to actual architects or city planners (I know, I know, nothing's higher than architect) or civil engineers, you will find that they don't talk about the security of their intersections, their buildings, their spaces. They will talk about safety.

By understanding the desire paths and building infrastructure that accounts for the users' instincts, they can increase safety.

slide54.gif

This is a concept that we in the computer/internet industry have yet to adopt. (See also: Alex Stamos's presentation from AppSec California 2015)

slide55.png
slide56.png

You should not be able to use a tool in an insecure manner. Note: I'm not talking about "foolproof" software, because there it is again, this smug superiority towards our users, developers, "those people". What I'm looking for instead is poka-yoke design:

slide57.png

When was the last time you plugged in a USB connector the right way on the first try? It's frustrating, right? There's only one way it fits in, but it's not obvious which way around. That's a poka-yoke fail.

When was the last time you started your automatic car while in gear? Or took the car keys out while the motor was still running? In most cars, you can't. You have to be in 'park' and you have to have the brake engaged to start the car. To take the keys out, your motor has to be off, and to turn off the motor, you have to be in 'park' or 'neutral'.

This is how we should build software: safe defaults that make 'incorrect' use impossible. What's even better about poka-yoke design: it shapes behaviour and builds habits. You probably don't even think about putting your foot on the brake to start the car, but you do it.

Changing undesirable habits is hard, but building new habits is not. Habits are your kick-ass lever to move your world. Habits are what build desire paths, and the habits you want to encourage will steer the people to walk the paths you designed for them.

slide59.png

The most convenient and intuitive way to use your system must be the safest way to use the system. This requires your application to have safe defaults. Defaults are amazing: they can effect great change with little effort.

Things to consider in your environment might include setting a umask of 077, disabling the shell history file, providing a well-tuned ssh config file, ...
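A sketch of the first two (assuming a Linux-style environment where you control the global shell profile; the file name is illustrative):

    # /etc/profile.d/safe-defaults.sh -- sourced by login shells
    umask 077          # new files are private to their owner by default
    unset HISTFILE     # shell history (and any pasted secrets) never hits disk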

slide61.png

Make sure that the less safe settings require effort; if you have less secure settings, users will find and use them. The trick is to discourage that behavior: insecure practices should require jumping through hoops, secure practices should require no effort.

slide63.png

Failure must not lead to a user changing their settings.

Yesterday, we saw a video of USDS where an end user encountered an invalid certificate warning. The click-through instructions here were reasonably safe, because the exception is temporary, and a new browser session would warn again.

Compare this to how many developers use self-signed certificates for development instances of their services: because those don't validate, they will change their tools to ignore all certificate warnings (curl -k), which now likely leaks into production and your end-product will happily accept any MitM certs...
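A safer desire path is to make validation against development instances just as easy as ignoring it, for instance by handing out certificates from an internal development CA and pointing the tools at that CA instead of at nothing (paths and hostnames here are made up):

    # The habit that leaks into production: ignore all certificate errors.
    curl -k https://dev.example.com/api/health

    # The habit you want instead: validate against the internal dev CA.
    curl --cacert /etc/ssl/certs/dev-ca.pem https://dev.example.com/api/health

    # Or trust the dev CA system-wide once (Debian/Ubuntu-style):
    sudo cp dev-ca.pem /usr/local/share/ca-certificates/dev-ca.crt
    sudo update-ca-certificates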

slide64.png

Rolling out critical security updates must not require a lot of effort. Rotating secrets must not be difficult, or else it doesn't happen. Renewing TLS certificates must be automated, trivial, a non-event (shout-out to Let's Encrypt).
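With an ACME client like certbot, for example, the whole thing can come down to a single scheduled command (a sketch; it assumes certbot is already set up for the host, and that any web server reload happens via a configured deploy hook):

    # crontab: attempt renewal twice a day; certbot only replaces
    # certificates that are actually close to expiry.
    17 3,15 * * * certbot renew --quiet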

slide65.png

Safe defaults, safe failure modes, automated, regular, unattended updates, a focus on usability, and encouraging desired behaviour. Poka-yoke software follows the users' desire path.

slide66.png

One critical aspect of desire paths is that they reflect the - duh - desire of the users. If you want to build safe systems, you need to align with the users' desires, with their objectives. You need them to show you the path.

This necessitates that you are walking the path with them: you cannot issue edicts from high atop the infosec ivory tower and expect the rest of the organization to fall into line. If this sounds familiar to you from the olden days of pre-devops, then this is no coincidence:

The same way that Dev and Ops got together is how we need to drive information security and DevOps closer to one another: by the principle of skin in the game.

Making any change in software systems has a number of costs, some explicit, some hidden. These changes hopefully also have some benefit (such as an eliminated vulnerability). If the team(s) reaping the benefits are distinct from those shouldering the cost, then you're going to have a really difficult time promoting the change.

For example, software updates and security patches. In your organization, which team(s) are carrying the cost? That is, which team(s) are doing the legwork of finding or building patches, merging software updates, of backporting functionality, and of deploying the changes?

Does that team see an immediate and measurable benefit to this non-trivial cost? For most developers, the immediate benefit of a lowered risk of compromise by a given attack vector is close to zero. No surprise they are not motivated to make these sorts of changes all the time!

slide67.png

But money is just one of the many incentives that drive people. Remember, it's the people, stupid! People like to be useful. People like to take responsibility. People like to feel like they're in control, that they have authority and autonomy.

We value what we build. This is known as the IKEA effect: if we take part in the design of the safety aspects of our systems, then we will value them just as much as the availability or usability aspects. We will take responsibility for how our system behaves in these regards if we have poured our sweat and code and frustration and satisfaction into it.

So to get your teams to take responsibility, you need to strengthen the IKEA effect, by giving them the autonomy to build the system themselves:

You need to provide them with the right tools; make sure no team works alone; encourage and facilitate collaboration; provide guidance and offer help, but let people build and maintain the systems themselves.

slide68.png

Change vs. more of the same.

The people, stupid.

And don't forget about malware.

(Malware is also all about people: most effective when it targets non-technical users by exploiting human behavior (e.g. phishing and social engineering).)

slide69.png

"Fundamentally, the problem isn’t about security. It’s people."

We need to understand human behaviour, objectives, and motivations. We need to observe and understand the desire paths our engineers and our users create -- listen to them. Understand that attackers are also humans and will likewise act according to their own rationale.

In the end, information security and computer safety is about people, and they're not stupid.

Thanks. And now for something completely different...

