[Owasp-leaders] Potential Update to the OWASP Risk Rating Methodology

Jason Johnson jason.johnson at owasp.org
Tue Mar 12 16:52:31 UTC 2013


Did you just leave one of my meetings...? Lol

I look at it this way: say you're a burglar, there are many like you, and some
are better than others. They all rob and sometimes steal. So you break into my
house / DB. How did you get in? You picked my lock or bashed the door in. You
did not burn my house down; you took some jewels and a beer from the fridge.
My safe was too heavy for you, so you left it. The risk is there, but I know
what you look like because I have cameras. I did beef up my door, so that is
only a slight risk. I do not think the methodology is broken just because
there are many burglars about. My valuable items, aka personal credit card
info or the like, are a different risk. Had my database been taken / burned
down, or my safe broken into, then it would be high. But once you have copied
my data and taken the time to get the good stuff, I've opened up ten more
risks. Of course, we could call everything high and thus obfuscate the true
risk, which is my hollow-core door.

My 1 1/2 cents

Jason
On Mar 12, 2013 11:37 AM, "Josh Sokol" <josh.sokol at owasp.org> wrote:

> Hey Leaders,
>
> I am doing some research for a talk that I am giving at the BSides Austin
> event next week.  The talk is called "Convincing Your Management, Your
> Peers, and Yourself That Risk Management Doesn't Suck" and it got me
> examining different risk rating methodologies.  That led me to examine
> the OWASP Risk Rating Methodology here:
>
> https://www.owasp.org/index.php/OWASP_Risk_Rating_Methodology
>
> While reading through this, there was something that caught my attention
> that I feel is a bit of a flaw in the methodology, but I wanted to see what
> others thought of it.
>
> The methodology does a pretty decent job of giving the user some criteria
> for estimating the impact of the risk (assuming it's AppSec based) both
> from a technical and business standpoint.  Where I take issue is how the
> final value used in the risk formula ends up being just an average of these
> criteria:
>
> Next, we need to figure out the overall impact. The process is similar
>> here. In many cases the answer will be obvious, but you can make an
>> estimate based on the factors, or you can average the scores for each of
>> the factors.
>>
>
> Take SQL injection as an example of technical impact.  You have a
> read-only user and the flaw allows full disclosure of data in the database
> with full audit trails turned on.  According to the methodology, this would
> be all data disclosed (9 - Confidentiality), no corrupt data (0 -
> Integrity), no loss of availability (0 - Availability), and fully traceable
> (0 - Accountability).  Based on the methodology, you then average these
> values so we get a technical impact of 2.25 (LOW).  I would argue, however,
> that the technical impact of this particular vulnerability for this
> specific situation would still be high.  Any dissenting thoughts on this?
> Agreement?
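> 
> For illustration, here's a quick Python sketch of what that averaging does
> with these numbers (the 0-9 band cutoffs are my reading of the
> methodology's severity table):
> 
>     # Technical impact factor scores for the SQL injection example:
>     # Confidentiality 9, Integrity 0, Availability 0, Accountability 0
>     scores = [9, 0, 0, 0]
> 
>     def impact_level(value):
>         # Severity bands: 0 to <3 LOW, 3 to <6 MEDIUM, 6 to 9 HIGH
>         if value < 3:
>             return "LOW"
>         return "MEDIUM" if value < 6 else "HIGH"
> 
>     average = sum(scores) / len(scores)
>     print(average, impact_level(average))  # 2.25 LOW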
>
> In any case, if what I said above is true, then this methodology is
> effectively broken in certain situations.  I think that it would be really
> easy to correct though.  What if, instead of the average, we suggested that
> the user take the highest criteria level and use that as the impact?
> General risk management best practices that I've seen tend to take the
> worst-case scenario, so this would be in line with that, and it would
> prevent the value from being diluted by other significantly lower values.
> Anyone out there have opinions on modifying the methodology to
> use this approach instead?
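> 
> A standalone sketch of the same numbers under that worst-case approach:
> 
>     # Same factor scores as above; take the max instead of the average
>     scores = [9, 0, 0, 0]
>     worst_case = max(scores)
>     print(worst_case)  # 9, which lands in the HIGH band (6 to 9)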
>
> ~josh
>
> _______________________________________________
> OWASP-Leaders mailing list
> OWASP-Leaders at lists.owasp.org
> https://lists.owasp.org/mailman/listinfo/owasp-leaders
>
>