[Owasp-leaders] Potential Update to the OWASP Risk Rating Methodology

Tim tim.morgan at owasp.org
Tue Mar 12 18:06:30 UTC 2013


I think if we go down the road I've gone, in terms of getting the math
more correct from a conceptual/policy standpoint, then we need to have
a well-defined "standard" formula.  Otherwise, you are right,
conveying this to the public will be tricky.  Something like CVSSv2,
only one that doesn't suck.  (Sorry, I meant to say: something that
works for estimating risk, which CVSS isn't designed for.)

I think it could be successful, though, if we hone the formula under a
general consensus and then provide things like a little web page that
lets you calculate your scores (all implemented in JavaScript, for
instance).  I think if we provide an easy starting point, it won't be
too hard to hand it off to users to tweak internally as needed.
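
For a concrete feel, here's a rough sketch of the kind of logic such a
page could run (TypeScript here rather than plain JavaScript; the factor
names, 0-9 scale, and LOW/MEDIUM/HIGH bands are assumptions borrowed from
the current write-up, not a proposal):

    // Each factor is scored 0-9 (assumption matching the current write-up).
    type Factors = { [name: string]: number };

    function average(values: number[]): number {
      return values.reduce((a, b) => a + b, 0) / values.length;
    }

    // Map a 0-9 score onto LOW/MEDIUM/HIGH bands (assumed cutoffs at 3 and 6).
    function level(score: number): string {
      if (score < 3) return "LOW";
      if (score < 6) return "MEDIUM";
      return "HIGH";
    }

    function rate(likelihood: Factors, impact: Factors) {
      const l = average(Object.values(likelihood));
      const i = average(Object.values(impact));
      return { likelihood: level(l), impact: level(i) };
    }

    // e.g. rate({ skillLevel: 6, motive: 4 },
    //           { lossOfConfidentiality: 7, lossOfIntegrity: 2 })
    //      => { likelihood: "MEDIUM", impact: "MEDIUM" }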

I think the biggest challenge is always going to be deciding which
categories to include in the formula and what they mean.  Perhaps a
framework that provides defaults and a way to easily adjust these
makes sense...?
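
As a rough illustration of "defaults plus easy adjustment" (the category
names and weights below are placeholders, not a proposal):

    // Default impact categories with weights an organization could override.
    interface Category { name: string; weight: number; }

    const defaultImpactCategories: Category[] = [
      { name: "confidentiality", weight: 1.0 },
      { name: "integrity",       weight: 1.0 },
      { name: "availability",    weight: 1.0 },
      { name: "accountability",  weight: 0.5 },
    ];

    // Users specify only the weights they want to change.
    function withOverrides(defaults: Category[],
                           overrides: { [name: string]: number }): Category[] {
      return defaults.map(c => ({ ...c, weight: overrides[c.name] ?? c.weight }));
    }

    // e.g. withOverrides(defaultImpactCategories, { accountability: 1.0 })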

tim


On Tue, Mar 12, 2013 at 12:26:21PM -0500, Josh Sokol wrote:
> This is really interesting Tim.  I hadn't really contemplated using Sigmoid
> or other functions for this beyond the standard mean, median, min, max,
> mode, etc.  That might work out well here, but is it something that we can
> convey easily to a user?  I need to play around with that a bit to see how
> it affects my risk formulas.  I guess my question for you is: do you
> agree that the current methodology is broken and needs to be fixed, or
> do you feel that the average is the best we have and what we should be
> offering to consumers of this methodology?
> 
> ~josh
> 
> On Tue, Mar 12, 2013 at 12:41 PM, Tim <tim.morgan at owasp.org> wrote:
> 
> >
> > Hi Josh,
> >
> >
> > > In any case, if what I said above is true, then this methodology is
> > > effectively broken in certain situations.  I think that it would be
> > > really easy to correct, though.  What if, instead of the average, we
> > > suggested that the user take the highest criteria level and use that
> > > as the impact?  General risk management best practices that I've seen
> > > seem to reflect the approach of taking the worst-case scenario, so
> > > this would be in line with that, and it prevents the value from being
> > > diluted by other significantly lower values.  Anyone out there have
> > > opinions on modifying the methodology to use this approach instead?
> >
> >
> > Over the last few years I've put a lot of thought into how one should
> > combine various risk elements and mitigating factors into a final
> > score or rating.  You are right that a simple average doesn't clearly
> > represent the worst of the elements.  However, if you just take the
> > highest score, you lose information.  What information?  Synergy.  If
> > you have a vulnerability that has multiple impacts, then you lose the
> > ability to represent all of the lesser, but present, risk elements.
> > For instance, XXE generally allows for many types of attack; none of
> > these attacks on its own is as bad as something like SQLi, but taken
> > together, we can't ignore the overall variety of risks.  How should
> > multiple impacts be combined in a way that still represents
> > contributions from lesser elements without watering down the most
> > important one?
> >
> > In the past I've ended up using something like this:
> >
> > Impact = Sigmoid(impact_element1 + impact_element2 + ... + impact_elementN)
> >
> > The Sigmoid function (or variants of it) always returns values between
> > fixed lower and upper bounds.  If you play with the numbers for a bit,
> > you realize that any one element that is high will push the output
> > score high (which is what you want).  However, multiple moderately
> > high elements can also push the score higher.  Finally, it combines
> > things asymptotically, so you don't lose information; each additional
> > element just matters a bit less as you stack up the bad things.
> >
> > Combining something like Impact with probability of an attack
> > happening (to create a true risk rating) is yet another challenge...
> >
> > tim
> >
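
For anyone who wants to play with the sigmoid idea from my earlier
message above, here is a minimal sketch (the 0-10 output scale and the
steepness/midpoint constants are just knobs to tune, not part of the
formula itself):

    // Impact = Sigmoid(sum of impact elements), rescaled so the output
    // stays in 0..10.  k (steepness) and x0 (midpoint) are tuning knobs.
    function sigmoidImpact(elements: number[], k = 0.5, x0 = 8): number {
      const sum = elements.reduce((a, b) => a + b, 0);
      return 10 / (1 + Math.exp(-k * (sum - x0)));
    }

    // One high element pushes the score up:     sigmoidImpact([9])       ~ 6.2
    // Several moderate ones also push it up:    sigmoidImpact([4, 4, 4]) ~ 8.8
    // More bad things keep adding, but the
    // effect tapers off asymptotically at 10:   sigmoidImpact([9, 9, 9]) ~ 10.0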

