[Owasp-webcert] Assurance Levels

Chris Wysopal cwysopal at Veracode.com
Tue Jun 19 16:46:50 EDT 2007


Reformatted. 

OWASP Description            Automated   Automated   Manual
                             Static      Dynamic     Pen Test

Unvalidated input            Medium      High        Medium
Broken access control        Low         Medium      High
Broken auth/session mgmt     Low         Medium      High
XSS                          Medium      High        High
Buffer overflows             High        Medium      Low
Injection flaws              Medium      High        High
Improper error handling      Medium      Medium      Medium
Insecure storage             High        Low         Low
DoS attack                   Medium      Medium      Low
Insecure Config Management   Low         High        High


-----Original Message-----
From: owasp-webcert-bounces at lists.owasp.org
[mailto:owasp-webcert-bounces at lists.owasp.org] On Behalf Of Chris
Wysopal
Sent: Tuesday, June 19, 2007 4:42 PM
To: Bellis, Ed; Mark Curphey; owasp-webcert at lists.owasp.org
Subject: Re: [Owasp-webcert] Assurance Levels

Well I am almost a week late on this.  Sometimes work gets in the way :)
 
Here is a strawman matrix mapping quality of analysis for automated
static, automated dynamic, and manual techniques across the OWASP Top Ten.
 
The categories for "quality of analysis" are High Quality (low false
negative rate; unlikely to miss problems), Medium Quality (moderate
false negative rate; some issues may be missed), and Low Quality (high
false negative rate; the technology is not useful for identifying
these flaw types).

OWASP Description            Automated   Automated   Manual
                             Static      Dynamic     Pen Test

Unvalidated input            Medium      High        Medium
Broken access control        Low         Medium      High
Broken auth/session mgmt     Low         Medium      High
XSS                          Medium      High        High
Buffer overflows             High        Medium      Low
Injection flaws              Medium      High        High
Improper error handling      Medium      Medium      Medium
Insecure storage             High        Low         Low
Denial of service attack     Medium      Medium      Low
Insecure Config Management   Low         High        High

This is just anecdotal, from having looked at the output of the tools
and from having performed pen testing myself.  What is needed is a
reference set of vulnerabilities, similar to what SAMATE[1] is doing
for static source code testing, and to run all the techniques against a
set large enough to be meaningful.

We could add columns for manual source code review and design reviews.

-Chris

1. http://samate.nist.gov/SRD/


________________________________

From: owasp-webcert-bounces at lists.owasp.org
[mailto:owasp-webcert-bounces at lists.owasp.org] On Behalf Of Bellis, Ed
Sent: Tuesday, June 12, 2007 9:44 AM
To: Chris Wysopal; Mark Curphey; owasp-webcert at lists.owasp.org
Subject: Re: [Owasp-webcert] Assurance Levels

I think this makes a lot of sense and should likely be the way we look
at incorporating assurance levels into the webcert standard. As
discussed previously, this addresses the "bubble" thinking of the
current PCI standard. By taking into account the vulnerability class and
the threat that is being mitigated, you'll be able to pick the "right
control for the job" to achieve the assurance level required for the
application and the organization.

This is great work, and of course, the devil will always be in the
details. Is there an existing matrix that maps vulnerability classes to
analysis techniques (OWASP top 10 at a minimum)? I presume this would be
something this group will need to come up with. Chris, do you have a
head start in this area?

-Ed

________________________________

From: owasp-webcert-bounces at lists.owasp.org
[mailto:owasp-webcert-bounces at lists.owasp.org] On Behalf Of Chris
Wysopal
Sent: Monday, June 11, 2007 9:55 AM
To: Mark Curphey; owasp-webcert at lists.owasp.org
Subject: Re: [Owasp-webcert] Assurance Levels

I should have supplied more explanation behind the graphic that I sent
to Mark because it is a bit more complex than just automated static is
better than a questionnaire and automated dynamic is better than
automated static and so on.  Each analysis technique has a subset of all
vulnerability classes that it can even attempt to detect.  Try finding
authorization bypass with an automated tool for instance. More
generally, each analysis technique has a false negative rate for each
vulnerability class.  Measuring the false negative rates by
vulnerability class for each analysis technique is a significant effort
but a valuable one.  At Veracode we are "analysis technique neutral".
We want to combine multiple techniques to give the best possible
analysis for the amount of money appropriate for the assurance
requirements. Security must make economic sense after all.  

So back to the complexity.  The idea behind measuring the capabilities
of different analysis techniques is to assure that the false negative
rate for all the vulnerability classes you care about is low enough for
the assurance level the application requires.  So, for example, let's
take the OWASP top ten as the vulnerability classes we care about for a
certain application.  If the application is high assurance, such as an
online banking application, we have to make sure the false negative rate
for all these vulnerability classes is close to zero.  A fully manual
effort is not the most cost effective, since manual testing is the most
expensive.  If we combine automated static and automated dynamic, and
then add manual testing for the classes that the first two techniques
can't detect or have unacceptable false negative rates for, then we can
get to an acceptable FN rate for the complete OWASP top ten.
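
One way to see the arithmetic behind this: if the techniques miss flaws
roughly independently (itself a simplifying assumption), the combined
false negative rate is the product of the individual rates.  A sketch,
reusing the assumed rates from the matrix encoding earlier in the thread:

    def combined_fn(owasp_class, techniques):
        """Combined FN rate assuming techniques miss flaws independently:
        a flaw escapes only if every technique misses it."""
        rate = 1.0
        for technique in techniques:
            rate *= fn_rate(owasp_class, technique)
        return rate

    # Classes where both automated techniques together still exceed an
    # (assumed) 5% FN target, so manual pen testing gets added for them:
    TARGET = 0.05
    needs_manual = [c for c in MATRIX
                    if combined_fn(c, ["automated static",
                                       "automated dynamic"]) > TARGET]
    # e.g. broken access control: 0.80 * 0.40 = 0.32, well above the
    # target, while XSS comes out at 0.40 * 0.10 = 0.04 and passes.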

There are certainly things that automated static analysis can find
better than manual pen testing.  Integer overflows in C/C++ applications
are a good example. So the choice of testing technique does depend on the
vulnerability class you are looking for.  It seems to me that manual pen
testing is the best for many of the OWASP top ten but automated dynamic
and even automated static do have a place in driving down costs for high
assurance applications.  The automated techniques may also be good
enough for medium assurance applications, such as back office
applications that don't deal with high value information.

-Chris

________________________________

From: owasp-webcert-bounces at lists.owasp.org
[mailto:owasp-webcert-bounces at lists.owasp.org] On Behalf Of Mark Curphey
Sent: Monday, June 11, 2007 4:27 AM
To: owasp-webcert at lists.owasp.org
Subject: [Owasp-webcert] Assurance Levels

I propose to make assurance levels an integral part of the OWASP Web
Certification Criteria and want your feedback on the concept.

In many ways it's one of those things that's so damn obvious when you
see it described with clarity. Enlightenment came for me when Chris
Wysopal <http://www.veracode.com/management-team.php#Wysopal> sent me
the fantastic graphic attached describing Veracode's
<http://www.veracode.com/> view of assurance levels.  Of course it is
nothing new; the basic concept of assurance (confidence) is as follows:

	Different testing techniques provide different levels of
assurance (confidence) on claims about the security of a web site. 

An automated static analysis tool will provide a lower level of
assurance than an automated dynamic analysis tool, which will in turn
provide a lower level of assurance than a comprehensive manual code
review.  It also follows that an automated web application penetration
test will provide a lower level of assurance than a manual penetration
test. Both types of penetration testing will provide lower levels of
assurance than code reviews. It also makes sense that if a company has
demonstrated that security is an integral part of the DNA of their
SDLC (define, design, develop, deploy and maintain), then there is a
higher level of assurance that any test results will be consistent in
the future.

So why wouldn't everyone just go for the approach that provides the
highest level of assurance? It's very simple: cost. The appropriate
level of assurance should be based on risk.

Of course, all of these things have a butterfly effect; no two tools are
the same and no two testers are the same. Imagine a control panel with
multiple dials, but where people want a single output display (not
necessarily a single reading).  I expect lots of people to argue that a
specific tool or firm is as good as the next level up on the assurance
scale, but we'll deal with that as well.

This also enables us to define what a web app firewall is good for and
what it isn't, and to place it into an assurance level bucket. More on
that in a while.

By incorporating assurance levels into the criteria, industry sectors,
business partners or regulators can require a level of security with an
assurance level based on risk. This would be a significant step forward
from where we are today with broken schemes like PCI DSS
<http://securitybuddha.com/2007/03/23/the-problems-with-the-pci-data-security-standard-part-1/>.

So what do y'all think?

_______________________________________________
Owasp-webcert mailing list
Owasp-webcert at lists.owasp.org
https://lists.owasp.org/mailman/listinfo/owasp-webcert

