[Owasp-leaders] Potential Update to the OWASP Risk Rating Methodology

Josh Sokol josh.sokol at owasp.org
Tue Mar 12 16:37:08 UTC 2013


Hey Leaders,

I am doing some research for a talk that I am giving at the BSides Austin
event next week.  The talk is called "Convincing Your Management, Your
Peers, and Yourself That Risk Management Doesn't Suck" and it got me
looking at different risk rating methodologies.  That led me to the OWASP
Risk Rating Methodology here:

https://www.owasp.org/index.php/OWASP_Risk_Rating_Methodology

While reading through it, something caught my attention that I feel is a
bit of a flaw in the methodology, and I wanted to see what others think
of it.

The methodology does a pretty decent job of giving the user criteria for
estimating the impact of the risk (assuming it's AppSec based) from both a
technical and a business standpoint.  Where I take issue is that the final
value used in the risk formula ends up being just an average of these
factors:

> Next, we need to figure out the overall impact. The process is similar
> here. In many cases the answer will be obvious, but you can make an
> estimate based on the factors, or you can average the scores for each of
> the factors.

Take SQL Injection as an example for technical impact.  Suppose the
application uses a read-only database user, the flaw allows full disclosure
of the data in the database, and full audit trails are turned on.  According
to the methodology, this would be all data disclosed (9 - Confidentiality),
no corrupt data (0 - Integrity), no loss of availability (0 - Availability),
and fully traceable (0 - Accountability).  Averaging these values per the
methodology gives a technical impact of 2.25 (LOW).  I would argue, however,
that the technical impact of this particular vulnerability in this specific
situation is still high.  Any dissenting thoughts on this?  Agreement?
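
To make the arithmetic concrete, here is a quick Python sketch of the
averaging step (the factor names and scores are just my reading of the
example above; the 0 to <3 = LOW cutoff comes from the methodology's
levels table):

    # Technical impact scores for the SQL Injection example above,
    # on the methodology's 0-9 scale.
    impact = {
        "loss_of_confidentiality": 9,  # all data disclosed
        "loss_of_integrity": 0,        # no corrupt data
        "loss_of_availability": 0,     # no loss of availability
        "loss_of_accountability": 0,   # fully traceable
    }

    def level(score):
        # 0 to <3 is LOW, 3 to <6 is MEDIUM, 6 to 9 is HIGH.
        if score < 3:
            return "LOW"
        if score < 6:
            return "MEDIUM"
        return "HIGH"

    average = sum(impact.values()) / len(impact)
    print(average, level(average))  # prints: 2.25 LOW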

In any case, if what I said above is true, then this methodology is
effectively broken in certain situations.  I think it would be easy to
correct, though.  What if, instead of the average, we suggested that the
user take the highest factor score and use that as the impact?  The general
risk management best practices I've seen take the worst-case scenario, so
this would be in line with that, and it prevents the value from being
diluted by other, much lower scores.  Anyone out there have opinions on
modifying the methodology to use this approach instead?
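
For comparison, here is the same example scored with the worst-case
approach (again just a sketch of the suggestion, not an official formula):

    # Same scores as above; take the highest factor instead of the mean.
    impact = {
        "loss_of_confidentiality": 9,
        "loss_of_integrity": 0,
        "loss_of_availability": 0,
        "loss_of_accountability": 0,
    }

    worst_case = max(impact.values())
    print(worst_case)  # prints: 9, which the 6-to-9 band rates HIGH

A single 9 anywhere in the factors now guarantees a HIGH impact, which
matches the worst-case framing and can't be diluted by the zeros.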

~josh