Thinking Fast and Slow in Application Security
Imagine if we applied behavioral economics principles to application security methodologies and practices: what would we be able to unlock? System1 and System2, All Systems Go.
If you’ve read Daniel Kahneman’s book “Thinking, Fast and Slow”, you’ve likely been just as fascinated as I was by his insights into the two systems of thinking. He is, after all, a Nobel prize-winning psychologist who revolutionized the field of economics. Together with Amos Tversky, he contributed immensely to our understanding of the nuances and oddities of human decision making.
Inspired by this, I set out to see how we could apply the same human behavior principles to the domain of application security.
System1 and System2 in AppSec
System1 and System2 are two modes of thinking that we humans apply to various tasks during decision making. System1 is known for its speed and intuition while System2 is more deliberate and analytical.
How do we apply these foundational fast and slow thinking systems to the application security domain to derive better results?
System1: Fast Thinking in AppSec
To put System1 into the application security context, I’ll suggest some areas where this applies:
- Running software composition analysis (SCA) tools: They are fast, deterministic and can be easily automated by security teams and developers alike. Developers pay little overhead to run these tools. Whether they get automated pull requests to upgrade vulnerable dependencies, or spin off a snyk test command on the CLI to check for vulnerabilities in their project, the scanning is fast and the decision making is fast too: upgrade the dependency or not.
- Generating SBOMs: Like SCA scanning, this is a minimal-effort win for developers and security teams. Generating an SBOM is an automated process too and has been largely commoditized.
System2: Slow Thinking in AppSec
Where System1 maps to automated tools and almost boolean decision making, System2 requires more work. Let me put some practical appsec exercises into context:
- Threat modeling: mostly a manual process that gets stakeholders in a room to discuss security threats and risks to a system. Can it be automated? Sure, but its effectiveness comes from the human interaction that uncovers hidden layers, assumptions and biases. You often threat model early in the design process of a feature or a project, so at that point you also have little data to work with.
- Secure code review: One of the most foundational and yet elusive concepts in secure coding is context. Context is key to understanding whether a risk we identified is applicable or not. Does the data flow from the user impact the security of the system? It depends: on the context of how the system uses the data, and on which system uses the data. A secure code review can be automated to an extent, but more often than not it is a valuable exercise only when deep, expert understanding of the system involved is present.
- Analyzing SAST security findings: Somewhat similar to secure code review, when a developer reviews a list of security findings from a static analysis tool (SAST), they are burdened with figuring out call paths, data flows, and the context of each finding. Much of that information is not easily available either. For example, a developer reviewing the code may not have enough understanding of what type of data is being processed. Perhaps they have a bias to assume the data is always text, a string, a literal, but in reality an attacker may end up sending a payload that gets coerced into an array by a middleware or a library. Analyzing SAST findings requires an understanding of source-to-sink flows and some security expertise before developers can apply mitigations and security controls effectively. The sketch right after this list illustrates the kind of source-to-sink context involved.
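To make the source-to-sink point concrete, here is a minimal, hypothetical Express route annotated the way a reviewer might reason about a SAST finding. Everything in it (the /download endpoint, the baseDir path, the file query parameter) is illustrative rather than taken from a real report:
const path = require('path');
const fs = require('fs');
const express = require('express');

const app = express();
const baseDir = '/var/app/reports';

app.get('/download', (req, res) => {
  // Source: attacker-controlled input. A SAST tool would flag req.query.file here.
  const file = req.query.file;

  // Sink: the reported finding would read something like "path traversal via fs.readFile".
  // Whether it is exploitable depends on context the tool rarely shows: does upstream
  // middleware constrain the value? could it arrive as an array rather than a string?
  const target = path.join(baseDir, String(file));

  // A reviewer who understands the full source-to-sink flow would also check containment:
  if (!target.startsWith(baseDir + path.sep)) {
    return res.status(400).send('Invalid file name');
  }

  fs.readFile(target, (err, data) => {
    if (err) return res.status(404).send('Not found');
    res.send(data);
  });
});

app.listen(3000);
None of that nuance shows up in the one-line finding the tool prints, which is exactly the System2 work the developer is left holding.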
Fixing AppSec with Systematic Thinking
Kahneman’s research demonstrated that System1 thinking is in charge 95% of the time, whereas rational System2 thinking is in charge only 5% of the time. That makes sense, because System2 thinking takes effort and is relatively slow; it requires more energy and time to process information and apply logical thinking.
Looking at this through the prism of developers: they are inherently less likely to focus on cross-cutting concerns like security because of foundational business pressures (the focus on features, value, delivery). This is where System1 thinking comes into play, especially for security tasks that developers would want automated, so they don’t need to divert their train of thought into System2 thinking to address security issues raised by the security team.
What impact would we unlock if we could move traditional System2 tasks into System1 for developers?
Anchoring Bias in AppSec
Anchoring bias is a cognitive bias that describes the common human tendency to rely too heavily on the first piece of information offered when making decisions. So imagine you walk into a store and you see a $1,000 Synology NAS (yes, I’ve been in the market for one :D) and then you see a $450 NAS. Your initial anchor is the $1,000 NAS, so you’re more likely to think the $450 NAS is a good deal.
So we’ve established that during decision making we anchor on the initial piece of information as the reference point against which we make subsequent judgments.
Let me show you a real-world example of this in the everyday life of a developer. Imagine a developer is installing a new package in their project. Their workflow is as follows:
npm install lodash
added 1 package, and audited 57 packages in 15s
found 17 vulnerabilities (1 low, 16 high)
run `npm audit fix` to fix them, or `npm audit` for details
How did the 17 vulnerabilities they were just told about impact them? They shipped to production. Nothing happened. They looked into some of them and found that while some are rated as high as CVSS 9.8, they are irrelevant because they were detected in development dependencies. Their anchor is set to “I’ve seen this before, it’s not a big deal”. Their first impression has been skewed to the point where they are less likely to seriously consider vulnerabilities as a threat to their system.
This is, in fact, a common problem in developer security and has been given the name “vulnerability fatigue”: developers are flooded with reports of security vulnerabilities, most of which can justifiably be deemed irrelevant.
Loss Aversion in AppSec
To put it simply, you’re more likely to feel awful about losing $100 than you would feel good about randomly finding $100 in the back pocket of your jacket. This is loss aversion in action.
Or let me give it a developer spin for an analogy: imagine a developer has just spent 10 hours fixing a ridiculously hard bug in their code. They’ve been debugging, reading logs, and working their brains out to fix this thing relentlessly. Suddenly, and right on point for a developer analogy, they hit the wrong git command and poof, all of their staging area changes are gone. They’ve lost 10 hours of deep work and a proper solution to the bug. In that context, developers are more likely to feel awful about losing 10 hours of work than they would feel good about finding an npm dependency that saved them 10 hours of work.
How do we apply loss aversion to application security? To an extent, this depends on the sort of spin you want to give it. Do you want to use loss aversion to deter developers from making security mistakes? Or do you want to use it to encourage developers to fix security issues? Some ideas come to mind:
- Rely on loss aversion to lift the invisibility cloak off security issues by showing developers the impact of a security vulnerability. You can do this with a hands-on vulnerability and exploitation demonstration that creates the “aha” moment for developers.
- Another option that is practiced more regularly is to set a policy where the CI pipeline breaks when a security vulnerability of a certain severity is detected (a minimal sketch of such a gate follows right after this list). This is a form of loss aversion, since developers are more likely to fix the issue to begin with by applying secure code review and secure coding practices, or even better, running static security analysis in their IDE to catch the issue before it reaches the CI pipeline and frustrates them.
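To make the pipeline-breaking policy concrete, here is a minimal sketch of such a severity gate as a small Node script. It assumes a recent npm whose npm audit --json output exposes severity counts under metadata.vulnerabilities and that accepts the --omit=dev flag; the script name and thresholds are illustrative, and a dedicated SCA tool like snyk test could back the same gate instead:
// security-gate.js: fail the build when high or critical vulnerabilities are found
const { execSync } = require('child_process');

let output;
try {
  // npm audit exits non-zero when it finds vulnerabilities, so capture stdout either way
  output = execSync('npm audit --json --omit=dev', { encoding: 'utf8' });
} catch (err) {
  output = err.stdout;
}

const report = JSON.parse(output);
const { high = 0, critical = 0 } = report.metadata.vulnerabilities;

if (high + critical > 0) {
  console.error(`Security gate: ${high} high and ${critical} critical vulnerabilities, failing the build.`);
  process.exit(1);
}

console.log('Security gate: no high or critical vulnerabilities found.');
Run as a CI step, a gate like this turns the System2 judgment call of “is this finding worth stopping for?” into a System1 pass/fail signal, and the prospect of a broken build nudges developers to catch issues earlier.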
Have more dramatic ideas on how to apply loss aversion to application security? I had some red-team thoughts but maybe these are a bit too abusive for developers and better kept in the drawer ;-)
Availability Bias in AppSec
If our brain can recall something easily, we’re more likely to overestimate its significance. That’s the availability bias in a nutshell.
How do we instill the availability bias to raise awareness for application security? Here are some suggestions:
- Regularly share security incidents, data breaches and security news with the team.
- Put developers in the “blue team” shoes by running security exercises like capture the flag (CTF) events, except that instead of attacking, they’re defending.
Developers will be more likely to remember security incidents that they were actively involved in, or defended against during one of those fun exercises, and as a consequence they will have higher awareness of security issues day in and day out.
Confirmation Bias in AppSec
What would confirmation bias look like for developers?
- Developers are possibly more likely to confirm their own beliefs about how the organization handles security. For example, if a developer believes that security of the application (or the business) is the responsibility of the security team, they are more likely to confirm this belief by not taking security into account at all in their daily work. That’s someone else’s job, right?
- What if developers were swayed by application security’s own catch-22? For example, if a developer believes that the application is secure because they’ve never seen a security incident, never had a data breach, and were never confronted with a security issue, they are more likely to confirm this belief by, once again, not practicing security in their day-to-day development work.
Planning Fallacy in AppSec
Ever heard a developer say “this one is going to be a quick fix”? Famous last words, right? Only to discover that there’s just so much more involved: a peer is out of office today, some system credentials you needed are unavailable to you, and the task ends up taking three days to fix.
The planning fallacy applies to application security in tasks such as handling invalid user input. Let’s put that into a practical example: a developer implements a GET HTTP endpoint that receives user input in the form of query parameters. Here’s a code example to illustrate:
app.get('/search', (req, res) => {
  const query = req.query.q;
  // do something with query
});
Developers fall into the trap of the planning fallacy in two ways:
- They either overestimate the simplicity of this task and don’t even consider handling invalid user input, validating or sanitizing the input as needed.
- Or maybe they do take input validation into account but they underestimate the complexity of the task and end up with a half-baked solution. For example, consider the following code snippet:
app.get('/search', (req, res) => {
  const query = req.query.q;
  // half-baked: only the string case is handled, anything else passes through untouched
  const sanitizedQuery = typeof query === 'string' ? query.trim() : query;
  // do something with sanitizedQuery
});
But what if the attacker sends an array as the query parameter, such as ?q[]=1&q[]=2? Express’s default query parser will happily turn that into an array. The developer didn’t account for this because they never even considered the possibility of an array being passed in as the query parameter. This is a popular attack vector known as HTTP parameter pollution.
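One way to account for it is sketched below as an adjustment to the same /search route. The exact validation strategy is up to you, and a schema validation library would work just as well as this hand-rolled check:
app.get('/search', (req, res) => {
  const query = req.query.q;

  // Express's query parser can hand back strings, arrays, or objects,
  // so reject anything that is not a plain string before using it.
  if (typeof query !== 'string') {
    return res.status(400).send('q must be a single string value');
  }

  const sanitizedQuery = query.trim();
  // do something with sanitizedQuery
});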
Where do we go from here?
I would say that being aware of these cognitive biases, and of how developers carry their own, is a good start for application security teams and security champions in the organization, because it provides a deeper understanding of why developers may not be as security conscious as they should be. And that’s not something to hold against them. It’s not that developers don’t care about security; there are deeper forces at play that shape their decision making.
Not sure when we’ll discover System3 thinking, but for the time being, good luck with System1 and System2 in application security! ;-)