[Owasp-leaders] Potential Update to the OWASP Risk Rating Methodology
josh.sokol at owasp.org
Tue Mar 12 17:26:21 UTC 2013
This is really interesting Tim. I hadn't really contemplated using Sigmoid
or other functions for this beyond the standard mean, median, min, max,
mode, etc. That might work out well here, but is it something that we can
convey easily to a user? I need to play around with that a bit to see how
it affects my risk formulas. I guess my question for you is: do you agree
that the current methodology is broken and needs to be fixed, or do you feel
that the average is the best we have and what we should be offering
consumers of this methodology?
On Tue, Mar 12, 2013 at 12:41 PM, Tim <tim.morgan at owasp.org> wrote:
> Hi Josh,
> > In any case, if what I said above is true, then this methodology is
> > effectively broken in certain situations. I think that it would be
> > easy to correct though. What if, instead of the average, we suggested
> > the user take the highest criteria level and use that as the impact?
> > The general risk management best practices I've seen reflect a
> > worst-case-scenario approach, so this would be in line with that, and
> > it prevents the value from being diluted by other significantly lower
> > values. Does anyone out there have opinions on modifying the
> > methodology to use this approach instead?
> Over the last few years I've put a lot of thought into how one should
> combine various risk and mitigating-factor elements into a final
> score or rating. You are right that a simple average doesn't
> represent the worst of the elements quite as clearly. However, if you
> just take the highest score, you lose information. What information?
> Synergy. If you have a vulnerability that has multiple impacts, then
> you lose the ability to represent all of the lesser, but present, risk
> elements. For instance, XXE generally allows for many types of
> attack; none of those attacks on its own is as bad as, say, SQLi,
> but taken together we can't ignore the overall variety of risks.
> How should multiple impacts be combined in a way that still
> represents contributions from lesser elements without watering down
> the most important one?
> In the past I've ended up using something like this:
> Impact = Sigmoid(impact_element1 + impact_element2 + ... + impact_elementN)
> The Sigmoid function (or variants of it) always returns values between
> fixed thresholds. If you play with the numbers for a bit, you realize
> that any one element that is high will push the output score high (which
> is what you want). However, multiple moderately high elements can also
> push the score higher. Finally, it combines things asymptotically, so
> you don't lose information; it just becomes less important as you
> stack up the bad things.
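Tim's sigmoid combination might be sketched as follows (Python for illustration; the 0-9 output scale and the `midpoint`/`steepness` constants are assumed tuning values, not part of the formula as stated):

```python
import math

def sigmoid(x):
    """Standard logistic function: output is always between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

def combined_impact(elements, scale=9.0, midpoint=6.0, steepness=0.8):
    """Combine impact elements into one bounded score.

    The raw sum is squashed through a shifted sigmoid, so a single
    high element pushes the result high, several moderate elements
    together also push it high, and further elements contribute
    asymptotically less instead of diluting the score. The scale,
    midpoint, and steepness values are illustrative assumptions.
    """
    return scale * sigmoid(steepness * (sum(elements) - midpoint))
```

With these illustrative constants, a single severe element already scores near the top of the scale, and adding lesser elements alongside it raises the score rather than averaging it down.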
> Combining something like Impact with probability of an attack
> happening (to create a true risk rating) is yet another challenge...