[Owasp-topten] Stats used to support Top 10 entries

Dave Wichers dave.wichers at owasp.org
Wed Feb 27 15:29:23 UTC 2013


The methodology for how the Top 10 risks are ranked is described towards the end of the Top 10, on the page titled “Note About Risks”. Maybe a better title for that page would be “The OWASP Top 10 Risk Ranking Methodology”, or something like that. The Top 10 leverages the OWASP Risk Rating Methodology and uses four factors: three likelihood factors and one consequence factor.


The three likelihood factors are: Exploitability, Prevalence, and Detectability. 


To date, the only metrics the Top 10 has looked at are vulnerability stats from various sources, and we use those metrics to drive the Prevalence calculation, as sketched below. The other two likelihood factors, and the consequence factor, are currently rated based on our professional opinion, but if there are other metrics we can leverage, I think that would be a good idea.
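
To make that concrete, the Prevalence band could be derived from vendor statistics along these lines (a minimal sketch; the percentage thresholds here are illustrative assumptions, not the actual cut-offs used for the Top 10):

    # Minimal sketch: mapping vendor vulnerability stats to a 1-3
    # Prevalence band. The thresholds are assumptions for illustration,
    # not the Top 10 team's actual cut-offs.
    def prevalence_band(pct_of_tested_apps_affected):
        if pct_of_tested_apps_affected >= 25.0:
            return 3  # WIDESPREAD
        if pct_of_tested_apps_affected >= 5.0:
            return 2  # COMMON
        return 1      # UNCOMMON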


I agree that the incident databases mentioned below should be looked into, and if appropriate, we may be able to use them to drive the values of the Exploitability factor, since they are based on real-world exploits rather than anyone’s opinion. Both Exploitability and Detectability are factors related to the potential attacker, i.e., what is the likelihood that an attacker can find the flaw, and what is the likelihood that an attacker would then try to exploit it.
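
For concreteness, here is a minimal sketch of the ranking arithmetic, under the assumption that each factor is scored on a 1-3 band (higher is worse), the likelihood factors are averaged, and the result is multiplied by the consequence score, in the spirit of the OWASP Risk Rating Methodology:

    # Minimal sketch of the risk-ranking arithmetic (assumed scoring:
    # each factor 1 = low to 3 = high; likelihood is the mean of the
    # three likelihood factors; risk = likelihood * impact).
    def risk_score(exploitability, prevalence, detectability, impact):
        likelihood = (exploitability + prevalence + detectability) / 3.0
        return likelihood * impact

    # Example: average to exploit (2), widespread (3), easy to detect (3),
    # severe impact (3): (2 + 3 + 3) / 3 * 3 = 8.0 on a 1-9 scale.
    print(risk_score(2, 3, 3, 3))  # prints 8.0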


-Dave


From: owasp-topten-bounces at lists.owasp.org [mailto:owasp-topten-bounces at lists.owasp.org] On Behalf Of Pawel Krawczyk
Sent: Wednesday, February 27, 2013 2:43 AM
To: Michael Coates; Dinis Cruz
Cc: OWASP TopTen
Subject: Re: [Owasp-topten] Stats used to support Top 10 entries


As I wrote last week (see the thread "RC1 comments"), the current Top 10 sources come only from pentesting/scanner vendor data, which represents just one view: which vulnerabilities we, the vendors, were able to notice and recognize as vulnerabilities, along with our subjective scoring.


One source Dinis proposed is WHID, which I definitely support, as it represents a much more realistic view: which vulnerabilities were actually used by hackers to compromise sites. There are more reports written with this approach:

*	Verizon Data Breach Report (2012)
*	TrustWave report (2013)
*	Zone-H 2010 statistics

There are significant differences in vulnerability severity between the two kinds of datasets; as discussed last week, the current sources completely ignore RFI/LFI vulnerabilities (which are currently the largest cause of compromises) and Mass Assignment, to name a few.


-- 

Paweł Krawczyk, CISSP
http://ipsec.pl http://echelon.pl
+48 602 776959

On 26/2/2013 at 8:21 PM, "Michael Coates" <michael.coates at owasp.org> wrote:

I'd like to revisit this thread. I'd argue that doing it right is better than doing it fast. 


Is there a way to publish aggregate data? A black-box approach is not consistent with the way we do things, and even if done in the best possible way, it is tough to defend, since people just have to take someone's word for it.


--
Michael Coates | OWASP | @_mwc
michael-coates.blogspot.com


On Wed, Jan 30, 2013 at 2:23 AM, Dinis Cruz <dinis.cruz at owasp.org> wrote:

I don't think it is too late to introduce them. We can do it after you release the first draft.


I've written why at Stats used to support OWASP Top 10 entries (next version must publish them) <http://blog.diniscruz.com/2013/01/stats-used-to-support-owasp-top-10.html> 


Also relevant to this issue is: Why NDAs have no place at OWASP <http://blog.diniscruz.com/2013/01/why-ndas-have-no-place-at-owasp.html>  


Dinis Cruz


On 28 January 2013 21:01, Dave Wichers <dave.wichers at owasp.org> wrote:

Given that I intend to publish the release candidate in one week, I simply don’t think we have the time to introduce this at this point. I really wanted the draft out a month ago, but didn’t get it done earlier.


-Dave


From: Dinis Cruz [mailto:dinis.cruz at owasp.org] 
Sent: Monday, January 28, 2013 7:19 AM
To: Dave Wichers
Cc: OWASP TopTen
Subject: Re: [Owasp-topten] Stats used to support Top 10 entries


Well, that is not really usable, right? :) (There are only 4 links on https://www.owasp.org/index.php/Top_10_2010, and there is not much consumable data in there.)


I understand how in the past it made sense to have such an arrangement, but for the next version (OWASP Top 10 2013) can we have it so that all the data used is published? And peer-reviewed?


Thanks


Dinis Cruz


On 28 January 2013 00:25, Dave Wichers <dave.wichers at owasp.org> wrote:

The data is NOT published by OWASP because it was provided to OWASP with the understanding that we wouldn't republish it. That said, many of the data providers have already published their data, WhiteHat and Veracode for example (and MITRE in the past), so people can get the data directly from those providers. But not all data providers have made their data public.


And we clearly list who the data providers are in the Top 10 itself.


-Dave


From: owasp-topten-bounces at lists.owasp.org [mailto:owasp-topten-bounces at lists.owasp.org] On Behalf Of Dinis Cruz
Sent: Sunday, January 27, 2013 4:32 AM
To: OWASP TopTen
Subject: [Owasp-topten] Stats used to support Top 10 entries


I got a question about the stats, data, and sample sizes used to back up the choice of the Top 10 entries.


Since I couldn't find that info on the https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project page, I am asking it here :)


So, where can I get it from? (I know it exists, since I remember the threads)


Thanks


Dinis Cruz
