Application security lists help you focus on specific weaknesses or vulnerabilities within your system. But do you understand how they rank their entries? If not, can you really trust them? Some ranking methodologies bias one aspect of security over another, and some may not handle partially unknown vulnerabilities.
At Security Journey, we recommend the CWE Top 25 and the OWASP Top Ten. We often use the vulnerabilities they report in our training modules. Let's explore why we trust them and how they can be a resource for your program.
CVSS, CWSS, and Scoring Bias
There are two central scoring systems for computer security that are often used as the basis for ranking: the Common Vulnerability Scoring System (CVSS) and the Common Weakness Scoring System (CWSS). The two are similar but differ in a few important ways. CVSS is a reactive approach: the vulnerabilities already exist before they are ranked. CWSS is a proactive approach: you are scoring weaknesses in software, ideally before it is released into production.
CVSS calculates the severity of the vulnerabilities within a system so that teams can prioritize fixes, ranking vulnerabilities from most to least severe. It produces a score from 0 to 10 based on three metric groups:
- Base – the characteristics of a vulnerability
- Temporal – metric that changes over time due to external forces
- Environmental – metric that measures the impact of the vulnerability on your organization
It is a relatively simple system that non-security experts can use to rank the vulnerabilities in their network. When information about a vulnerability is incomplete, CVSS cannot quantify it accurately and assumes the worst, biasing the result toward a harsher score. It is therefore best to apply CVSS to vulnerabilities that are already verified, to avoid inaccurate rankings. CVSS also tends to emphasize a vulnerability's impact on the system itself over its impact on data or functionality.
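To make that concrete, here is a minimal sketch of how a CVSS v3.1 base score comes together for the common scope-unchanged case. The metric weights come from the published CVSS v3.1 specification; the helper function, its name, and the example vector are ours, and the temporal, environmental, and scope-changed parts of the calculation are omitted.

```python
import math

# CVSS v3.1 weights for a few common metric values (from the spec);
# this simplified sketch covers only the scope-unchanged case.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required (scope unchanged)
UI = {"N": 0.85, "R": 0.62}                        # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # Confidentiality / Integrity / Availability

def round_up(value: float) -> float:
    """CVSS 'roundup': round up to one decimal place."""
    return math.ceil(value * 10) / 10

def base_score(av, ac, pr, ui, c, i, a):
    """Simplified CVSS v3.1 base score, scope unchanged."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return round_up(min(impact + exploitability, 10))

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> 9.8 (Critical)
print(base_score("N", "L", "N", "N", "H", "H", "H"))
```

Notice that the calculation needs a concrete value for every metric; there is no slot for "we don't know yet," which is exactly the limitation described above.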
CWSS is used to determine which software weaknesses developers should prioritize fixing first when they are overwhelmed with issues. It is a more complex scoring system, producing a score from 0 to 100 based on three metric groups:
- Base – the risk of weakness, confidence in finding accuracy, and effectiveness of controls
- Attack Surface – the availability of the weakness for the attacker
- Environmental – the parts of the weakness that are specific to an environment
CWSS can score a weakness before the investigation of a vulnerability concludes, or when the information is not comprehensive enough for CVSS scoring. CWSS accepts unknown values as inputs, allowing developers to use the score to decide whether a weakness is a priority even when little is known about it. CWSS tends to focus on containing the weakness within the system. Which methodology a researcher uses depends on their preference and what they hope to get out of the research.
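To illustrate the structural difference, here is a rough sketch of CWSS's overall shape: a base finding subscore on a 0-100 scale multiplied by attack surface and environmental subscores on 0-1 scales, with unknown factors still accepted as input. The factor names and weights below are hypothetical placeholders rather than the specification's values; the point is the multiplicative structure and the tolerance for unknowns.

```python
# A simplified illustration of CWSS's shape: a 0-100 base finding subscore
# scaled by two 0-1 multipliers. Factor names and weights here are
# hypothetical placeholders, not values from the CWSS specification.
UNKNOWN = "unknown"
UNKNOWN_WEIGHT = 0.5   # hypothetical neutral weight for undetermined factors

def weight(value, table):
    """Look up a factor weight, falling back to a neutral weight for unknowns."""
    return UNKNOWN_WEIGHT if value == UNKNOWN else table[value]

def cwss_like_score(impact, likelihood, attack_surface, environment):
    """Base finding (0-100) * attack surface (0-1) * environmental (0-1)."""
    impact_w = weight(impact, {"high": 1.0, "medium": 0.6, "low": 0.3})
    likelihood_w = weight(likelihood, {"high": 1.0, "medium": 0.6, "low": 0.3})
    base_finding = 100 * impact_w * likelihood_w                 # 0-100
    surface = weight(attack_surface, {"internet": 1.0, "internal": 0.5})
    env = weight(environment, {"production": 1.0, "test": 0.4})
    return base_finding * surface * env

# Even with two factors still unknown, the weakness gets a usable score:
print(cwss_like_score("high", UNKNOWN, "internet", UNKNOWN))  # 25.0
```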
An important thing to remember is that CVSS and CWSS scores are not compatible. Even if you normalized them to the same 0-10 or 0-100 range, a score of 80 in one system would not mean the same thing as an 80 in the other.
At Security Journey, our view is that CWSS and CVSS are not useful for an application/software security program. CVSS is reactive, and when it comes to application security, reactive is never the goal. (The most secure code is written with a security mindset from the first phase of development.)
A foundational philosophy of our methodology is to be proactive and identify issues early – shifting left. CWSS has not been revised since 2014, according to its website, so we question how useful it is for evaluating a modern, live program. The complexity of CWSS also makes us leery of recommending it as a strategy developers should build into their daily flow.
Why OWASP and CWE Get It Right
The CWE Top 25 and OWASP Top Ten are two lists that catalog the most dangerous software weaknesses and the most critical web application security risks. We trust these lists because of how they rank their entries and the data they measure. As proper research should, each organization is transparent, publishing its methodology and data alongside the report. When you understand how they reached their conclusions, you can understand the reasoning behind them.
The authors of the CWE Top 25 used data from the National Institute of Standards and Technology (NIST), specifically its National Vulnerability Database (NVD), which records Common Vulnerabilities and Exposures (CVE) data. Each CVE is mapped to a specific CWE. The researchers then used a calculation derived from CVSS to order the list. First, they calculated how frequently each CWE appears within the NVD. They then calculated the severity of each weakness by normalizing the average CVSS score of its CVEs against the minimum and maximum CVSS scores in the data set, and combined frequency and severity into the final CWE score.
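As a rough illustration of that calculation, the sketch below ranks a handful of CWEs by multiplying a normalized frequency by a normalized average CVSS severity, which is the general shape of the published methodology. The sample data and helper names are ours, not MITRE's.

```python
# Toy data: CWE id -> CVSS scores of the CVEs mapped to it (made-up numbers).
cves_by_cwe = {
    "CWE-79":  [6.1, 5.4, 7.2, 6.5],
    "CWE-89":  [9.8, 8.8, 9.1],
    "CWE-287": [7.5, 8.1],
}

def normalize(value, lo, hi):
    """Scale a value to 0-1 within the observed min/max."""
    return (value - lo) / (hi - lo) if hi != lo else 0.0

counts = {cwe: len(scores) for cwe, scores in cves_by_cwe.items()}
averages = {cwe: sum(scores) / len(scores) for cwe, scores in cves_by_cwe.items()}

min_count, max_count = min(counts.values()), max(counts.values())
all_scores = [s for scores in cves_by_cwe.values() for s in scores]
min_cvss, max_cvss = min(all_scores), max(all_scores)

ranking = {}
for cwe in cves_by_cwe:
    frequency = normalize(counts[cwe], min_count, max_count)   # how often it appears
    severity = normalize(averages[cwe], min_cvss, max_cvss)    # how bad it is on average
    ranking[cwe] = frequency * severity * 100

for cwe, score in sorted(ranking.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{cwe}: {score:.1f}")
```

Because the score multiplies frequency by severity, a weakness that is rare or consistently low-impact falls toward the bottom of the list even if it appears in the data.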
The OWASP Top Ten did not use either of these scoring systems; instead, it focused on risks rather than weaknesses or vulnerabilities, which means some entries ultimately cover multiple CWEs. Its metric combines a survey with an analysis of vulnerability frequency within the submitted data sets. The survey collected respondents' suggestions for inclusion in the Top Ten, using categories from a previous data collection and feedback on the last Top Ten lists. The data analysis counted every instance of a vulnerability and the number of applications in which scanning tools or penetration tests found it.
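One simplified way to picture the data-analysis half of that process: for each weakness category, count how many of the tested applications contained at least one instance (an incidence rate), rather than counting every individual finding. The data set and names below are hypothetical.

```python
# Hypothetical scan results: application -> set of CWE categories found in it.
findings = {
    "app-a": {"CWE-79", "CWE-89"},
    "app-b": {"CWE-79"},
    "app-c": set(),               # tested, nothing found
    "app-d": {"CWE-89", "CWE-352"},
}

def incidence_rates(findings):
    """Share of tested applications with at least one instance of each category."""
    total_apps = len(findings)
    counts = {}
    for cwes in findings.values():
        for cwe in cwes:
            counts[cwe] = counts.get(cwe, 0) + 1
    return {cwe: count / total_apps for cwe, count in counts.items()}

for cwe, rate in sorted(incidence_rates(findings).items(), key=lambda kv: kv[1], reverse=True):
    print(f"{cwe}: found in {rate:.0%} of tested applications")
```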
Bottom Line? Trust Your Application Security Programs & Tools
When looking at any type of research, your first move should be to examine the methodology and how the data was collected. There is no single correct or incorrect way to gather data or calculate results, but the methods used tell you a great deal about the value of the research. For example, knowing that the CWE Top 25 uses CVSS rather than CWSS tells you that the included CWEs are backed by verified, complete vulnerability data, because CVSS does not work well with incomplete information. It also tells you that the Top 25 is geared more toward the impact of the CWEs on the system itself than toward functionality or data security. The OWASP Top Ten analyzes risks rather than weaknesses or vulnerabilities, which means it is focused more on prevention than on the impact or containment of an issue.
At Security Journey, we recommend using all applicable lists as a guide, but basing your priorities on the vulnerability and threat data that comes directly from your own application security program and scanning tools. Examine the lists you use and decide whether their measurements line up with your security priorities. Lists are helpful tools, but they cannot replace your own data and experience, so do not rely on them too heavily.