[Owasp-webcert] Assurance Levels

Chris Wysopal cwysopal at Veracode.com
Mon Jun 11 10:54:42 EDT 2007


 
I should have supplied more explanation behind the graphic that I sent
to Mark, because it is a bit more complex than "automated static is
better than a questionnaire, automated dynamic is better than
automated static," and so on.  Each analysis technique can only even
attempt to detect a subset of all vulnerability classes.  Try finding
authorization bypass with an automated tool, for instance.  More
generally, each analysis technique has a false negative rate for each
vulnerability class.  Measuring the false negative rates by
vulnerability class for each analysis technique is a significant
effort, but a valuable one.  At Veracode we are "analysis technique
neutral": we want to combine multiple techniques to give the best
possible analysis for the amount of money appropriate for the
assurance requirements.  Security must make economic sense, after all.
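
To make the arithmetic concrete, here is a minimal sketch of how false
negative rates combine when you layer techniques.  All the numbers are
invented for illustration, and it assumes the techniques miss
vulnerabilities independently, which is optimistic; correlated blind
spots would make the real combined rate worse:

    # Hypothetical FN rates for one vulnerability class (say, SQL
    # injection); all numbers are invented for illustration.
    fn_rates = {
        "automated_static": 0.30,
        "automated_dynamic": 0.25,
        "manual_pen_test": 0.05,
    }

    def combined_fn(techniques, rates=fn_rates):
        # A finding is missed only if every technique misses it
        # (independence assumption).
        missed = 1.0
        for t in techniques:
            missed *= rates[t]
        return missed

    print(combined_fn(["automated_static", "automated_dynamic"]))
    # -> 0.075
    print(combined_fn(["automated_static", "automated_dynamic",
                       "manual_pen_test"]))
    # -> roughly 0.004, near zero at a fraction of an all-manual cost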
 
So back to the complexity.  The idea behind measuring the capabilities
of different analysis techniques is to assure that the false negative
rate for all the vulnerability classes you care about is low enough
for the assurance level the application requires.  For example, let's
take the OWASP Top Ten as the vulnerability classes we care about for
a certain application.  If the application is high assurance, such as
an online banking application, we have to make sure the false negative
rate for all these vulnerability classes is close to zero.  A fully
manual effort is not the most cost effective, since manual testing is
the most expensive.  If we combine automated static and automated
dynamic, and then add manual testing for the classes that the first
two techniques can't detect or have unacceptable false negative rates
for, then we can get to an acceptable FN rate for the complete OWASP
Top Ten.
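
Here is a hedged sketch of that selection step: per vulnerability
class, pick the cheapest technique whose FN rate meets the target for
the assurance level, falling back to manual testing when no automated
tool can see the class at all.  The class names, rates, costs, and
thresholds are illustrative assumptions, not Veracode data, and it is
simplified to one technique per class rather than combinations:

    # FN rates per technique and class; None means the technique
    # cannot detect that class at all.  All numbers are invented.
    FN = {
        "automated_static":  {"sqli": 0.20, "xss": 0.25, "authz": None},
        "automated_dynamic": {"sqli": 0.15, "xss": 0.20, "authz": None},
        "manual_pen_test":   {"sqli": 0.05, "xss": 0.05, "authz": 0.10},
    }
    COST = {"automated_static": 1, "automated_dynamic": 2,
            "manual_pen_test": 10}

    def plan(classes, fn_target):
        # For each class, choose the cheapest technique meeting the
        # target; report classes nothing can cover.
        chosen, uncovered = {}, []
        for c in classes:
            ok = [t for t in FN
                  if FN[t][c] is not None and FN[t][c] <= fn_target]
            if ok:
                chosen[c] = min(ok, key=lambda t: COST[t])
            else:
                uncovered.append(c)
        return chosen, uncovered

    # High assurance (online banking): everything ends up manual.
    print(plan(["sqli", "xss", "authz"], fn_target=0.10))
    # Medium assurance: automated tools cover sqli and xss; only
    # authorization bypass still needs a human.
    print(plan(["sqli", "xss", "authz"], fn_target=0.25))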
 
There are certainly things that automated static analysis can find
better than manual pen testing.  Integer overflows in C/C++
applications are a good example.  So the choice of testing technique
does depend on the vulnerability class you are looking for.  It seems
to me that manual pen testing is the best for many of the OWASP Top
Ten, but automated dynamic and even automated static do have a place
in driving down costs for high assurance applications.  The automated
techniques may also be good enough for medium assurance applications,
such as back office applications that don't deal with high value
information.
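
For anyone who hasn't run into it, the classic case is the
allocation-size overflow.  A toy version, simulated in Python with
explicit 32-bit wrapping (in real C/C++ the multiplication would wrap
silently): a static analyzer can flag the unchecked multiplication,
while a black-box pen test rarely stumbles on the one input that
triggers it.

    # Toy C-style allocation-size overflow; Python ints don't wrap,
    # so wrap explicitly to mimic a 32-bit size type.
    U32_MASK = (1 << 32) - 1

    def alloc_size(count, elem_size):
        return (count * elem_size) & U32_MASK  # wraps like uint32_t

    count = 0x40000001          # attacker-controlled element count
    print(hex(alloc_size(count, 4)))
    # -> 0x4: the buffer is far smaller than the data copied into it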
 
-Chris

________________________________

From: owasp-webcert-bounces at lists.owasp.org
[mailto:owasp-webcert-bounces at lists.owasp.org] On Behalf Of Mark Curphey
Sent: Monday, June 11, 2007 4:27 AM
To: owasp-webcert at lists.owasp.org
Subject: [Owasp-webcert] Assurance Levels


I propose to make assurance levels an integral part of the OWASP Web
Certification Criteria and want your feedback on the concept.

In many ways it's one of those things that's so damn obvious when you
see it described with clarity. Enlightenment came for me when Chris
Wysopal <http://www.veracode.com/management-team.php#Wysopal>  sent me
the fantastic graphic attached describing Veracode's
<http://www.veracode.com/>  view of assurance levels.  Of course it is
nothing new; the basic concept of assurance (confidence) is as follows:

	Different testing techniques provide different levels of
assurance (confidence) on claims about the security of a web site. 

An automated static analysis tool will provide a lower level of
assurance than an automated dynamic analysis tool, which will in turn
provide a lower level of assurance than a comprehensive manual code
review.  It also follows that an automated web application penetration
test will provide a lower level of assurance than a manual penetration
test. Both types of penetration testing will provide lower levels of
assurance than code reviews. It also makes sense that if a company has
demonstrated that security is an integral part of the DNA of its SDLC
(define, design, develop, deploy and maintain), then there is a higher
level of assurance that any test results will be consistent in the
future. 
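
That ordering can be written down directly; a minimal sketch, where
the level names and numbers are my own assumptions (and I am treating
an automated web app pen test as automated dynamic analysis), not part
of any proposed criteria:

    from enum import IntEnum

    # Higher value = higher assurance, per the ordering above.
    class Assurance(IntEnum):
        AUTOMATED_STATIC = 1
        AUTOMATED_DYNAMIC = 2   # includes automated web app pen tests
        MANUAL_PEN_TEST = 3
        MANUAL_CODE_REVIEW = 4

    assert Assurance.AUTOMATED_STATIC < Assurance.AUTOMATED_DYNAMIC
    assert Assurance.MANUAL_PEN_TEST < Assurance.MANUAL_CODE_REVIEW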

So why wouldn't everyone just go for the approach that provides the
highest level of assurance? It's very simple: cost. The appropriate
level of assurance should be based on risk. 

Of course, all of these things have a butterfly effect; no two tools
are the same and no two testers are the same. Imagine a control panel
with multiple dials, but where people want a single output display
(not necessarily a single reading).  I expect lots of people to argue
that a specific tool or firm is as good as the next level up on the
assurance scale, but we'll deal with that as well.

This also enables us to define what a web app firewall is good for, what
it isn't and place it into an assurance level bucket. More on that in a
while. 

By incorporating assurance levels into the criteria, industry sectors,
business partners or regulators can require a level of security with an
assurance level based on risk. This would be a significant step forward
from where we are today with broken schemes like PCI DSS
<http://securitybuddha.com/2007/03/23/the-problems-with-the-pci-data-security-standard-part-1/>.

So what do y'all think?
