Our dataset was generated from the output of manually guided penetration tests, not from fully automated vulnerability scanners. All vulnerabilities were identified and confirmed manually, so the dataset is a credible, high-quality resource. It consists of over 14,000 findings identified from around 1,300 tests.
The following graph shows the occurrences of findings. The amber bars indicate the total number of findings in each category. The red bars show the subset of findings that Context rated High or Critical. Note that the numbers of Medium, Low and Informational findings for Weak Authentication, TLS and Session Management considerably exceed the scale of the graph and have been cropped so that the rest of the graph remains readable.
High and Critical impact vulnerabilities are those that would normally result in an attacker gaining unauthorised access, or compromising user data or application functionality in a way that could lead to financial or legal impact. Medium and below findings are those where the problem either doesn’t directly lead to a compromise (although it likely makes one easier for an attacker to achieve), or impacts a much smaller subset of users or data.
High and Critical Vulnerabilities
Out of the 14,000 findings in this data set, over 1,700 (13%) were rated High or Critical. This means that, on average, the systems Context tests have more than one High or Critical finding. In reality the distribution is not this even: some applications tested have several High or Critical impact findings, while others feature only low impact findings.
Looking at the graph, it can be seen that there are three leading causes of High and Critical vulnerabilities: Cross Site Scripting, Weak Authentication problems and TLS configuration between them account for over 60% of High and Critical vulnerabilities! If an organisation were to focus on educating developers and their supply chain to prevent cross site scripting and authentication problems, and on creating robust deployment processes for TLS on their systems, a large proportion of problems could be addressed earlier in a system’s creation.
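To illustrate the kind of developer education that prevents cross site scripting, here is a minimal sketch of output encoding using only Python’s standard library; the function name and HTML wrapper are our own invention, not a prescribed fix:

```python
import html

def render_comment(user_input: str) -> str:
    """Interpolate untrusted input into an HTML response safely.

    html.escape() encodes the HTML metacharacters (<, >, &, quotes)
    so attacker-supplied markup is displayed as text, not executed.
    """
    return "<p>" + html.escape(user_input, quote=True) + "</p>"
```

Output encoding must match the context (HTML body, attribute, JavaScript, URL); this sketch covers only the HTML body case, which is the most common one.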
Another interesting statistic is the ratio of Critical/High findings to total findings for each category. This shows the areas where training, standards or technical interventions will have the greatest impact. Although over 400 Session Management problems were identified, only 2% of them lead to a direct route to compromise. Compare this with Cross Site Scripting, which was found only around 300 times, but half of those findings are Critical or High and could lead directly to the compromise of something valuable. Similarly, 50% of findings in the Injection and Insufficient Access Control categories are rated as having a High or Critical impact on their application. This shows that interventions addressing these areas at source will have a greater impact on the risk profile of an application or organisation: for each finding caught and addressed, there is a much higher likelihood that it’s an important one.
More broadly, the largest number of findings are in the Weak Authentication category, with every third finding relating to it. We believe this may be because of the breadth of potential problems, and because following best practice isn’t just a technical challenge. Where some of the other problem areas, such as Cross Site Scripting, have provably correct solutions, Weak Authentication does not. It covers many areas that are hard to get right, are programmed on a per-system basis, and need interaction with architects and designers to be fully addressed. To solve this problem, a system has to correctly design and implement everything from password strength choices, password storage mechanisms and reset processes through to how cookies are created and handled. These are all very sensitive areas with their own background of research and best practice, and developers and architects are not yet aware enough of the nuances or the impact a wrong choice can have.
It’s also worth looking at issues related to the Communication Channel as there are close to 1000 issues with more than 100 High/Critical ones. This shows that choosing to use, then correctly configuring, TLS and its additional security headers is still not well understood or applied. We highlight this separately to the other findings above as, although reported against and impacting applications, it often needs to be addressed at the infrastructure layer and may not be under the control of developers.
As such, we wanted to dive into the details of these problems to see where people are struggling the most. Of the 107 High and Critical vulnerabilities identified, a huge 93% exist because an encrypted channel was partially or wholly missing. This could be someone using telnet, FTP or HTTP to transfer sensitive data or conduct administrative actions, or prompting people to log in to a web application over an unencrypted page. There were an additional 124 Medium rated findings where some less critical part of a system was not delivered over SSL, perhaps some images or a single page in an application. These findings highlight that, as an industry, we still have work to do on the ‘encrypt everything’ front.
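‘Encrypt everything’ can often be enforced at the application edge. As one possible approach, here is a small WSGI middleware sketch that redirects plain-HTTP requests to HTTPS and adds a Strict-Transport-Security header to secure responses; the names and header values are our own illustrative choices:

```python
def enforce_https(app):
    """Hypothetical WSGI middleware: force HTTPS and advertise HSTS."""
    def wrapper(environ, start_response):
        if environ.get("wsgi.url_scheme") != "https":
            # Redirect any plaintext request to its HTTPS equivalent.
            host = environ.get("HTTP_HOST", "")
            path = environ.get("PATH_INFO", "/")
            start_response("301 Moved Permanently",
                           [("Location", f"https://{host}{path}")])
            return [b""]

        def hsts_start_response(status, headers, exc_info=None):
            # Tell browsers to use HTTPS for all future requests.
            headers = headers + [("Strict-Transport-Security",
                                  "max-age=31536000; includeSubDomains")]
            return start_response(status, headers, exc_info)

        return app(environ, hsts_start_response)
    return wrapper
```

In practice this belongs at the load balancer or web server where possible, but an in-application fallback like this catches deployments where the infrastructure layer was missed.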
Most of the remaining High and Critical Communication Channel findings relate to remote code execution vulnerabilities in out-of-date encryption components, all of which would represent an easy path to Root or System access for an attacker. These come back to the age-old problem of patching systems and keeping them current. There is enough other content on the Internet about this, so we won’t say anything more about it today.
Next down the list are 2 High findings and 600 Medium findings, all of which are instances of poor configuration. These vary between systems vulnerable to POODLE or BEAST, and systems configured with insecure protocols and ciphers. This really highlights how hard infrastructure teams find it to cut through all the noise, opinions and potential configuration advice to arrive at an actually secure configuration. Getting to the right configuration is further complicated by drivers such as needing to support legacy client systems, or the system being tested itself being legacy and perhaps unable to support the latest version of TLS. In these situations, analysis of usage traffic is usually the best way forward, as it enables a fact-based discussion on securing the system. Depending on the results of this analysis, the options usually revolve around either shutting out legacy clients (with a suitable warning period and pointers to more modern options), or segregating them onto their own system so that users with modern clients are not also at risk. If the risk is centred on a legacy server, then a Web Application Firewall can offer some respite while an upgrade is developed.
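As a concrete example of what an ‘actually secure configuration’ can look like, here is a minimal sketch using Python’s standard `ssl` module; the function name is our own, and the settings shown are a starting point rather than a definitive policy:

```python
import ssl

def hardened_server_context() -> ssl.SSLContext:
    """Sketch of a server-side TLS context that shuts out legacy protocols."""
    # create_default_context() already disables SSLv2/SSLv3 and TLS
    # compression, closing off POODLE- and CRIME-style vectors.
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    # Refuse TLS 1.0/1.1 clients entirely; BEAST targets TLS 1.0 CBC mode.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

Before use, load the server certificate with `ctx.load_cert_chain(...)`. Where legacy clients must still be served, segregate them onto their own endpoint as described above rather than weakening a context like this for everyone.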
This post has been an exploration of data that Context has gathered over a large number of recent tests. There are three main conclusions that we have drawn from doing this:
- Almost every application tested by Context has a Critical or High vulnerability.
- Weak authentication, Cross Site Scripting and TLS configuration are still the biggest problem areas, with a high proportion of Critical and High rated problems to be addressed.
- Investment in these areas in the form of training, developing and applying standards or buying technology will offer the most return on investment.
How does this compare to your organisation’s position? Have you cracked these three key areas and are chasing down the other problems? If you have not done this analysis of vulnerability types for your system, you should.
The results could be different from the cross-industry cut of tests we have conducted, so don’t just take our word for it: analyse the results of testing your own systems and target your remediation!