This document reflects my personal opinion on the state of application security. It calls out what I see as the weaknesses of our approach, as a community, to addressing the issue of web [in]security. Web [in]security is a healthy and growing industry, and rather than verifying and fixing the issues we already know about, we constantly find and are exposed to new threats without ever addressing the current ones en masse…
A long, long time ago we used to “test security out” when it came to web applications. This meant performing a time-limited penetration test on a web application in the hope you could find all the existing vulnerabilities using the skills, resources and tools at your disposal… Oh, actually, we still do this.
There are weaknesses to this approach, and they are reflected in the current state of internet, application and cyber security. To be honest, the issue of web [in]security is getting worse. Despite growth in the security industry (solutions, vendors, consultants), and the fact that application security is more mainstream than it ever was, the way in which we address this problem has not changed very much.
"Insanity is doing the same thing over and over and expecting different
results." - Albert Einstein
results." - Albert Einstein
Below are some of the issues as I see them…
Limitations
Time & Tools of the trade:
Time to perform the test is limited, and the tools available are limited (who has all the tools available?):
We have got to remember that “a fool with a tool is still a fool”, but tools are important when conducting technical security assessments… Tools are flawed and can be out of date with respect to current issues; the tester very rarely “tunes” their tools to the target application.
Tools do not test application logic, business logic or authorisation very well (if at all), as doing so requires context which needs to be understood; this is far beyond the reach of any tool to date.
In the era of Rich Internet Applications (RIA), the approach traditionally used to perform web application testing is flawed. More and more functionality is moving back to the user’s browser (AJAX, HTML5, JS frameworks). Traditional tools focus on HTTP requests and manipulation, bypassing the client-side code completely and leaving whole portions of the web application untested. Very few tools perform JavaScript/binary/Flash parsing, and this layer of the stack is getting ignored. I’m sure this will change, but it needs to do more than “keep up” in order to improve matters.
The robustness, coverage and, ultimately, accuracy of the testing rely on the skill of the consultant who performs the assessment, the tests conducted and the tools used; this leads on to consistency problems…
Consistency:
We can’t guarantee that any two testers will find the same issues on the same application (the human element), nor that they will assign the same risk to the discovered issues. I’ve managed teams of over 100 testers globally on large engagements, and the variance in quality is huge. Scalability does not lend itself well to quality. Coupling the “human element” with variance in skill and experience leads to massive issues regarding a consistent approach to testing applications.
Comedy of Errors
If these weaknesses (the inconsistent approach and weak tools described above) are amplified by high volumes of testing, the issue gets much worse (a comedy of errors). We need to remember that application security is a sub-domain of an engineering discipline called computer science! Looking at the current approach and its associated weaknesses, I don’t believe what is currently done can really be called science; it is more like “best endeavours”.
…So our approach is a little flawed, to be polite.
Industry Growth != More Secure (well, actually less secure)
Another point demonstrating the above flaw is the penetration testing “industry”. It has grown, estimated to have more than doubled globally in the last 10 years, but the problems with internet and web security have only gotten worse.
…Throwing money at the issue is not making much of an impact.
Invisible (and expensive) Deliverable
Our deliverable is invisible. A secure application is not noticed; it works and is taken for granted. It’s only when security does not work that security gets noticed. It is difficult to sell security when there does not seem to be any tangible output. It’s sort of like an insurance policy: you only notice it when you pay for it and when you actually need it. (You might also be forced to have it via compliance.)
*Good*, skilled penetration testers are expensive because they are limited in supply and it is hard to “learn” penetration testing; it really comes with experience. There is also a distinct lack of security folks who can write code.
…So the deliverers of the “invisible deliverable” can be hard to justify and can be expensive.
It’s not working, so let’s approach this in a different manner:
So, like any sane person, we need to change strategy and do something that works… or we could just keep doing what we are currently doing.
So if we were to replace the “test security out” activity with something else, what would that be? “I know, let’s build security in”: secure application development, training, code review, static analysis, SDLC security, etc. Sounds like a great idea… wow, a great idea (nothing new there...).
Who would have thought that reviewing code (where the vulnerabilities exist) is a good idea and a logical place to prevent security issues!
So the “build security in” industry has now grown rapidly. We have non-profit organisations like OWASP and commercial enterprises alike, all trying to solve the same problems. When I started reviewing source code for security issues in 2002, security code review was akin to waving a dead chicken over the keyboard; it was not so mainstream.
Source code review and static analysis are not particularly new, but their adoption rate is still significantly lower than that of penetration testing. Penetration testing is becoming (or already is) a commodity; everyone is doing it, though not all to the same standard. Anyone can be a penetration tester, particularly if you hide behind a good brand!
Source code review is different. It requires a better understanding of the technology, the language and the associated frameworks, coupled with penetration testing knowledge and an understanding of risk. But who said this was easy?
A new Approach to solving an old problem
We understand that, despite a growing industry, the problems in the wild are only getting worse. Time-limited approaches driven by financial and market pressure, coupled with a lack of awareness and the vastly varying skills and tool sets of the security consultant community, certainly do not help.
So let’s propose a novel solution (not really very new, but you would not think it):
· SDLC Security:
A repeatable, structured approach which reflects the organisation’s method of development, the frameworks used and the technologies utilised. Structured and repeatable means less error-prone, and it relies less on the skill of the individual (to some extent). It would cover the following:
- Secure Design: Designed to help ensure the software architecture is appropriate and that the appropriate controls are in the correct places within both the client and server sides of the application.
- Developer Training: Get the development community involved in secure application development, raising the bar of the code delivered and removing the low-hanging fruit.
- Common Module/Framework design and implementation: Using core common components for various security functions such as canonicalization, input validation, encoding and error handling (a sketch of such a module follows this list).
- Code review: Manual and static analysis tools: manual review of the code using a risk-based approach, focusing on the application perimeter and tracing the dataflow inwards.
- Integrated Functional/Security/Anti-Functional testing: Negative use cases; testing aimed at exercising the exception paths in the code.
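To make the common-module idea concrete, here is a minimal sketch of what a shared validation/encoding component and an anti-functional (negative) check might look like in Python. It is illustrative only: the helper names (canonicalize, validate_input, html_encode) and the whitelist rules are hypothetical, not taken from any particular framework.

```python
import html
import re
import unicodedata

# Hypothetical shared security module: every entry point in the application
# calls these helpers rather than rolling its own validation/encoding.

# Whitelist patterns per field type (illustrative rules only; an unknown
# field type raises KeyError, which is deliberate for a sketch).
PATTERNS = {
    "username": re.compile(r"^[a-zA-Z0-9_]{3,32}$"),
    "order_id": re.compile(r"^[0-9]{1,10}$"),
}

def canonicalize(value: str) -> str:
    """Reduce input to a canonical form before validation, so look-alike
    Unicode encodings cannot slip past the whitelist."""
    return unicodedata.normalize("NFKC", value)

def validate_input(field: str, value: str) -> str:
    """Whitelist validation: raise on anything that does not match the
    pattern registered for this field type."""
    canonical = canonicalize(value)
    if not PATTERNS[field].match(canonical):
        raise ValueError(f"invalid {field}")
    return canonical

def html_encode(value: str) -> str:
    """A single, centralised output-encoding choke point for HTML contexts."""
    return html.escape(value, quote=True)

if __name__ == "__main__":
    # Positive use case: passes the whitelist.
    print(validate_input("username", "alice_01"))
    # Anti-functional (negative) use case: exercise the exception path.
    try:
        validate_input("username", "<script>alert(1)</script>")
    except ValueError as exc:
        print("rejected as expected:", exc)
```

The point of centralising these functions is that a fix or a rule improvement lands in one place rather than in every page of the application.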
Fixing issues discovered by virtue of a penetration test is also known as “bolting on security”: point fixes of discovered issues which may not have been thought through and may break other parts of the application.
In the case of addressing a security issue whose root cause is a design flaw, this is even worse, as it is generally more expensive to fix and retrofit.
In general, fixing issues as a result of a penetration test is more expensive and error prone. We should try to prevent and detect security issues as part of the development and test phases of the life cycle. With finite resources and finance, it is better to prevent issues from occurring than to detect them after they occur.
· High-Volume, Consistent, Cost-Effective, Semi-Automated Penetration Testing:
Using a tuned vulnerability scanner, we can understand its coverage, its areas of weakness, and which vulnerabilities are and are not covered. It is an automated, consistent approach with a proven (over time) and maintained testing engine and rule-set; after all, we need to identify and fix vulnerabilities. The scanner is tuned over time in order to improve efficiency and accuracy (a minimal sketch of the rule-set idea follows this list).
- Manual verification of discovered issues; all issues require verification for exploitability and risk rating.
- Manual business logic and authorisation testing (to some extent); business logic testing will require manual effort, but this can also be integrated into system testing.
- Consistent risk analysis: Assessing the risk of an issue with sufficient business context of the application.
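To illustrate the “maintained testing engine and rule-set” point, here is a minimal, hypothetical sketch of a rule-driven check harness in Python. The rules, payloads and response signatures are made up for illustration; a real tuned scanner is vastly more involved, but the principle of a version-controlled rule-set run consistently on every pass is the same.

```python
import re
import urllib.error
import urllib.parse
import urllib.request

# Hypothetical rule-set: each rule pairs a probe payload with a response
# signature. In a real scanner this list is maintained and tuned over time.
RULES = [
    {
        "name": "reflected-xss-probe",
        "payload": "<xss-probe-123>",
        "signature": re.compile(r"<xss-probe-123>"),  # payload reflected unencoded
    },
    {
        "name": "sql-error-probe",
        "payload": "'",
        "signature": re.compile(r"SQL syntax|ODBC|ORA-\d{5}"),  # common DB error strings
    },
]

def scan(base_url: str, param: str) -> list[str]:
    """Run every rule against one parameter and return the names of the rules
    whose signature matched; every match still needs manual verification."""
    findings = []
    for rule in RULES:
        url = f"{base_url}?{param}={urllib.parse.quote(rule['payload'])}"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                body = resp.read().decode("utf-8", errors="replace")
        except urllib.error.HTTPError as err:
            body = err.read().decode("utf-8", errors="replace")  # error pages matter too
        except OSError:
            continue  # unreachable target: skip this rule
        if rule["signature"].search(body):
            findings.append(rule["name"])
    return findings

# Example against a hypothetical target:
# print(scan("https://app.example.com/search", "q"))
```

The value is not in these toy rules but in the consistency: the same engine and rule-set runs every time, and the findings then flow into the manual verification and risk analysis steps above.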
I suppose you may notice that manual penetration testing is not in the list above. That is because it has not proven to work. The manual effort is used in the SDLC and in the verification of the runtime scanning.
A Point in time
So once a web application undergoes testing, that is in effect a point-in-time test. Once the application undergoes maintenance, functional change, or even cosmetic change (in the case of some RIA applications), vulnerabilities may be reintroduced.
"App Radar":
What is required is a frequent, low-bandwidth, continuous scan if possible. Such a solution would provide an “App Radar” effect by detecting changes in the application as they happen. These changes can be used to compare deltas between various points in time, so as to monitor the organic growth and change of the application over time (a minimal sketch of the idea follows).
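As a rough illustration of the delta idea, the sketch below fingerprints a set of pages on each pass and reports what has changed since the last pass. Everything here (the page list, the local state file, the plain content hash) is a hypothetical simplification; a real continuous-scanning service would normalise dynamic content and crawl rather than use a fixed list.

```python
import hashlib
import json
import urllib.request
from pathlib import Path

STATE_FILE = Path("app_radar_state.json")  # hypothetical local state store

# Hypothetical set of pages to fingerprint on each low-bandwidth pass.
PAGES = [
    "https://app.example.com/login",
    "https://app.example.com/search",
]

def fingerprint(url: str) -> str:
    """Fetch a page and reduce it to a content hash; a real tool would strip
    dynamic content (timestamps, CSRF tokens) before hashing."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def run_scan() -> None:
    previous = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    current = {url: fingerprint(url) for url in PAGES}
    for url, digest in current.items():
        if url not in previous:
            print(f"NEW      {url}")
        elif previous[url] != digest:
            print(f"CHANGED  {url}")  # a delta: flag for targeted re-testing
    STATE_FILE.write_text(json.dumps(current, indent=2))

if __name__ == "__main__":
    run_scan()
```

Each flagged delta becomes a trigger for targeted re-testing rather than a full re-assessment, which keeps both the bandwidth and the cost low.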
Root Cause Analysis:
Changes in application behaviour from a security perspective can be traced internally to the source code change control process, thereby assisting with root cause discovery and definition.
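As a hedged example of what that tracing might look like: if a delta was detected between two scan timestamps, the commits made in that window are the first suspects. The snippet below assumes the application source lives in a git repository and that scan timestamps are recorded; it simply lists the candidate commits.

```python
import subprocess

def commits_between(repo_path: str, start: str, end: str) -> str:
    """List the commits made between two scan timestamps; these are the
    candidate root causes for a behaviour change detected between the
    corresponding scans."""
    return subprocess.run(
        ["git", "-C", repo_path, "log", "--oneline",
         f"--since={start}", f"--until={end}"],
        capture_output=True, text=True, check=True,
    ).stdout

# Example, with hypothetical repository path and scan timestamps:
# print(commits_between("/srv/app", "2012-05-01 00:00", "2012-05-02 00:00"))
```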
Near-Immediate testing:
Another benefit of continuous scanning is that, as new rules and tests are developed, they can be deployed and used to test the application as part of the continuous exercise.
All of this feeds into Enterprise Security Intelligence (ESI).
More about this next time…