When it comes to the people designing the bridges I drive across, I want them to use blueprints. I want them to run their design through programs to calculate the exact weight the bridge can hold and when it would collapse. They need to have looked at the bridge from every single angle to figure out exactly what could go wrong if there is an earthquake or a hurricane. They should create a threat model for the bridge.
What I don't want is for the builders to skip straight to building the bridge just because they've done it before. A new bridge means new variables, like a different environment.
This principle also applies to web applications, which is why the new #4 on the OWASP Top 10 2021 list is Insecure Design.
Along with some other important changes (like the addition of SSRF), the inclusion of Insecure Design highlights a significant type of threat: issues resulting from design and architectural flaws. That's what makes this list item unique – and harder to mitigate. The vulnerability is deeply ingrained in the fundamental design of the product or application, so the mitigation strategy requires that the proper secure development processes and requirements be in place.
We can't just jump into building web applications. We need to have a design phase, and it has to include a solid reference architecture and threat model. A design phase without these things is not going to be enough to mitigate the insecure design risk. It would be like drawing up the blueprints for the bridge but never calculating the weight it can hold or whether the ground is suitable for that specific bridge type.
If you're questioning the importance of Insecure Design compared to the others on the list, it's likely because the impact is not as obvious as it is with things like cryptographic or authentication failures. Here are three insecure design choices that lead to application vulnerabilities and data leakage.
Often overlooked, CWE-209: Generation of Error Message Containing Sensitive Information seems relatively benign. But it's an easy vector for a brute force or phishing attack.
When a user attempts to log in to an application with wrong credentials, the error message returned is generally something like "Incorrect email or password." But, which is it? The email or the password?
While it can be annoying that the error doesn't indicate which credential was wrong, this is a secure design concept. Serving errors such as, "This password does not match the email given," or "We do not have an account with that email" is gold to an attacker. It indicates which email addresses are connected to valid accounts. This information is then used to attack the application (brute force) or the user (phishing).
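Here's a minimal sketch of that secure pattern in Python. The in-memory user store and SHA-256 hashing are stand-ins for illustration only; a real application would use a database and a dedicated password hashing scheme like bcrypt or Argon2.

```python
import hashlib
import hmac

# Toy in-memory user store for illustration; a real application would use
# a database and a dedicated password hashing scheme (bcrypt, Argon2).
USERS = {"alice@example.com": hashlib.sha256(b"correct horse").hexdigest()}

def login(email: str, password: str) -> str:
    stored_hash = USERS.get(email)
    supplied_hash = hashlib.sha256(password.encode()).hexdigest()

    # Insecure design would branch here, returning "unknown email" vs.
    # "wrong password" and confirming which accounts exist.
    # Secure design collapses both failures into one generic message.
    if stored_hash is None or not hmac.compare_digest(stored_hash, supplied_hash):
        return "Incorrect email or password."
    return "Welcome back!"
```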
Other common insecure design vulnerabilities include bad password policies. While secure password policies are ever-changing and always debated, limiting the maximum password length leaves an application open to a brute force attack.
No one can seem to agree for very long on exactly what the strongest and best passwords are. Do they need to include special characters or numbers? Exactly how long should they be? The truth is that it's all relative.
The most secure applications don't set a maximum password length at all. But some situations do require a limit. If that's the case, set it as high as you can – the longer the password, the better. A 60-character password is much harder to brute force than a 12-character one.
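As a sketch, a length policy might look something like this; the specific thresholds are assumptions for illustration, not a recommendation:

```python
MIN_LENGTH = 12    # a reasonable floor; illustrative, not a recommendation
MAX_LENGTH = 128   # a generous ceiling, only if your stack truly needs one

def validate_password(password: str) -> str | None:
    """Return an error message, or None if the password is acceptable."""
    if len(password) < MIN_LENGTH:
        return f"Password must be at least {MIN_LENGTH} characters."
    # Avoid tight maximums (say, 16) that shrink the search space an
    # attacker has to cover; a limit should be high enough not to matter.
    if len(password) > MAX_LENGTH:
        return f"Password must be at most {MAX_LENGTH} characters."
    return None
```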
Remember those pesky security questions no one could remember the answers to? They've become a really insecure design decision. If an application still uses them, run away – and never build them into your own. (I would argue they were never secure.)
Recovery questions don't validate anything except that the person answering knows the answer. In the age of social media, uncovering that information – a childhood pet, a parent's middle name – is relatively easy, especially for a malicious actor. In fact, breaking into someone else's account through their recovery questions is such a well-known weakness that it has become a popular movie and TV trope. Yet there are still applications that use them to recover account information – tsk tsk.
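A little back-of-the-envelope arithmetic shows why. Assuming (generously) about a thousand plausible answers for a question like a childhood pet's name, the answer space is vanishingly small next to even a modest random password:

```python
import math

pet_name_guesses = 1_000   # assumed rough count of popular pet names
password_space = 62 ** 12  # 12 characters drawn from letters and digits

print(f"recovery answer:  ~{math.log2(pet_name_guesses):.0f} bits")  # ~10 bits
print(f"12-char password: ~{math.log2(password_space):.0f} bits")    # ~71 bits
```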
Building a solid reference architecture and threat model requires your team to know what to avoid and what to look out for, so be sure to include many different perspectives in the design process.
Before beginning the design phase, you need a set of security and business requirements. It won't be an extensive, all-encompassing list of every secure design consideration yet. But it should include bigger-ticket items like confidentiality, integrity, and availability, plus any applicable regulations or laws. Going back to the bridge example, requirements might include things such as weather, traffic patterns, bridge length, budget and maximum weight – perhaps there is a lot of traffic in the area, so the bridge must hold a large number of cars at one time.
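One way to keep those early requirements actionable is to write them down as concrete, checkable statements. A hypothetical starting list might look like this:

```python
# Hypothetical starting requirements, written as concrete, checkable items.
REQUIREMENTS = [
    "Confidentiality: personal data is encrypted at rest and in transit",
    "Integrity: every state-changing request is authenticated and logged",
    "Availability: the service tolerates the loss of one availability zone",
    "Compliance: data handling satisfies the regulations that apply to us",
]
```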
The design phase heavily influences how secure the finished product is – it's the backbone of secure code. These requirements guide application design and planning, and by capturing security concerns in them, you start the design process with security in mind.
Next, a solid planning and design phase includes creating user and attacker stories while building and continuously improving a threat model. User stories are what a user would want or expect to happen when using the application. These help you determine the steps a user may take when entering the application, giving you different areas to consider for your threat model.
Attacker stories are next and, in my opinion, more interesting. Like user stories, they map out the steps someone may take through the application – but from the perspective of exploiting it. Here, look at your application through the lens of an attacker and ask yourself where an attacker may focus their attention and what creative ways they might find to infiltrate it.
User and attacker stories are an interesting dichotomy to consider. On the one hand, you have to answer the question: "If I were a user, what would I expect my experience to be?" Then flip that thinking to: "Okay, but if I were an attacker, how would I attack this application?"
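To make that flip concrete, here's a hypothetical story pair for a password reset feature, written as simple Python data purely for illustration:

```python
# Hypothetical story pair for a password reset feature (illustrative only).
user_story = {
    "as_a": "registered user",
    "i_want": "to reset my password from the login page",
    "so_that": "I can regain access when I forget my credentials",
}

attacker_story = {
    "as_an": "attacker",
    "i_want": "to abuse the reset flow to learn which emails have accounts",
    "so_that": "I can build a target list for phishing and brute forcing",
}
```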
You have to balance the user stories and the attacker stories – keep the users happy without also making the attackers happy.
Both of these narratives will feed the next step – your threat model.
The threat model is perhaps the most important aspect of mitigating insecure design. It's done throughout the build, at logical sections of the application – in fact, you may have multiple threat models for one application. Maybe one threat model focuses on the user interface and how users send and receive information from the application. That model might focus on input and output validation and proper authentication and authorization, and it would attempt to balance the user experience told in the user stories against the narrative of the attacker stories.
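What that threat model asks for might translate into code like the following sketch; the handler, its parameters, and the email pattern are all hypothetical:

```python
import html
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # simplistic, for illustration

def handle_profile_update(session_user, display_name: str, email: str) -> str:
    # Authentication: reject unauthenticated requests outright.
    if session_user is None:
        raise PermissionError("login required")

    # Input validation: accept only data shaped like what we expect.
    if not EMAIL_RE.match(email):
        raise ValueError("invalid email address")

    # Output encoding: never echo raw user input back into HTML.
    return f"<p>Profile updated for {html.escape(display_name)}</p>"
```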
Another threat model might focus on more internal aspects of the application: how the different areas communicate, whether there is separation of privilege (who can access and edit different parts of the application), and so on. You may even have a whole threat model focused just on the software supply chain. These are just a few examples of what you may create a threat model for. The truth is there is not an aspect of your application that you cannot and should not threat model.
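For the separation-of-privilege piece, a deny-by-default role check is one common shape. The roles and actions below are made up for illustration:

```python
# Made-up roles and actions illustrating a deny-by-default privilege check.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin":  {"read", "write", "configure"},
}

def authorize(role: str, action: str) -> None:
    """Deny by default: only explicitly granted actions are allowed."""
    if action not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"role '{role}' may not '{action}'")

authorize("editor", "write")       # allowed
# authorize("viewer", "configure") # would raise PermissionError
```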
For our bridge example, you may have a threat model focusing on traffic patterns, how many lanes the bridge should have, and the maximum weight the bridge can hold. Then you could have another threat model that focuses on the weather in the area to determine the best material for the bridge, the most stable spot for it to sit, and how high it should be above the water.
Threat modeling is a collaborative effort that should include people with different roles and perspectives. It's best to have a diverse group with varying skills review the model – not just the development team, not just the security team, and especially not just one person. Everyone focuses on different aspects of development and secure code. If only security people are involved in threat modeling, they may miss something a developer would notice, or suggest changes that make the application nearly impossible to use. I suggest taking a look at the Threat Modeling Manifesto for a good guide to developing your threat model methodology. It is not a checklist of things to do; rather, it lays out values and principles you can use to shape how you do threat modeling.
For the planning of a bridge, the engineers will not be the only ones involved in threat modeling. DOT officials, geologists, meteorologists and ecologists will also provide unique viewpoints.
I do not expect Insecure Design to make its way off the OWASP Top 10 list anytime soon. Threat modeling and a solid reference architecture are only the first step toward mitigating this weakness. Teams also have to put what the threat model prescribes into practice and be willing to revisit it every six months or so. Things change so quickly in this industry that what is considered a secure design today may be insecure in a few months or years. Remember what we just saw with passwords and security questions: what was once considered secure or insecure is now the opposite. We must be flexible, willing to change and learn new things, to ever truly have a securely designed application.