[Owasp-testing] Defining Risk

Jeff Williams jeff.williams at aspectsecurity.com
Tue Dec 19 16:06:56 EST 2006


Hi,

 

I'm all for a very simple system, but a one-size-fits-all model is
unrealistic. One simple example: corporate LAN applications are
frequently the most critical applications, but the model below minimizes
their importance.  I believe we should develop an OWASP framework with a
way for organizations to tailor it, so that it produces the right risk
ratings for them.

 

I suggest we start with the standard risk model....

 

       Risk = Likelihood * Impact

 

There are a number of factors that go into likelihood and other factors
that go into impact.  Organizations can customize a standard risk
ranking framework by weighting these factors according to their
business.

 

Likelihood factors generally break down into information about the
threat agent, and information about the vulnerability:

 

-        Threat agent

         o  Skill level required
         o  Threat agent motivation (attractiveness of target)
         o  Threat agent access level
         o  Threat agent size
         o  ...

-        Vulnerability

         o  Ease of discovery
         o  Awareness
         o  Tools available
         o  Mitigating controls in place
         o  Accountability (not logged, logged w/o review, logged and
            reviewed)
         o  ...

 

Impact is generally calculated based on annualized loss expectancy
(ALE). Businesses should create a standard for what dollar amounts are
significant to their business and establish some levels for Impact.
Understanding the assets and functions involved, and the importance of
confidentiality, integrity, and availability to the business is critical
to getting good estimates of the real business impact. Reputation damage
is frequently the driver here.
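As a concrete illustration of the ALE idea, here is a minimal sketch using the standard formula ALE = SLE x ARO (single loss expectancy times annualized rate of occurrence); the formula is standard risk-management practice, but the dollar cutoffs and level names below are made-up examples, not anything agreed on this list:

```python
# Sketch of ALE-based impact levels. ALE = SLE * ARO is the standard
# definition; the cutoff values here are purely illustrative and would
# be set by each business.

def ale(single_loss_expectancy, annual_rate_of_occurrence):
    """Annualized loss expectancy in dollars."""
    return single_loss_expectancy * annual_rate_of_occurrence

def impact_level(ale_dollars, low_cutoff=10_000, high_cutoff=100_000):
    """Map an ALE figure onto L/M/H using business-specific cutoffs."""
    if ale_dollars < low_cutoff:
        return "L"
    if ale_dollars < high_cutoff:
        return "M"
    return "H"

# Example: a $50,000 loss expected roughly once every five years.
print(impact_level(ale(50_000, 0.2)))  # -> M
```

The point is only that the business picks the cutoffs once, and every finding is then scored consistently.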

 

Since none of these factors is easy to measure, it's best to define an
enumerated scale for each factor (like the accountability levels above)
that makes sense for the particular business.

 

The real tailoring comes from weighting these factors according to your
business.  Having a risk ranking framework that's customizable for a
business is critical for adoption.  Otherwise, you'll spend lots of time
arguing about the risk ratings that are produced.

 

Based on the factors and weights, you calculate whether the likelihood
is L, M, or H and whether the impact is L, M, or H.  Then you can
calculate the overall risk with a 9-box.

 

                                          OVERALL SEVERITY

                        LOW IMPACT      MEDIUM IMPACT     HIGH IMPACT

  HIGH LIKELIHOOD       Medium          High              Critical

  MEDIUM LIKELIHOOD     Low             Medium            High

  LOW LIKELIHOOD        Note            Low               Medium
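The weighting-plus-9-box scheme described above can be sketched in a few lines. The severity matrix matches the 9-box in this mail; everything else (the factor names, the weights, the 0-9 scoring, the L/M/H bucket thresholds) is an illustrative assumption, since the mail deliberately leaves those to each organization:

```python
# Sketch of the weighted-factor scheme: each likelihood factor is scored
# 0-9 and weighted per business; the weighted average is bucketed into
# L/M/H, and overall severity is read from the 9-box. Factor names,
# weights, and bucket thresholds are illustrative assumptions.

SEVERITY = {  # (likelihood, impact) -> overall severity, per the 9-box
    ("H", "L"): "Medium", ("H", "M"): "High",   ("H", "H"): "Critical",
    ("M", "L"): "Low",    ("M", "M"): "Medium", ("M", "H"): "High",
    ("L", "L"): "Note",   ("L", "M"): "Low",    ("L", "H"): "Medium",
}

def bucket(score):
    """Bucket a 0-9 score into L/M/H (thresholds are a guess)."""
    return "L" if score < 3 else "M" if score < 6 else "H"

def weighted_score(scores, weights):
    """Weighted average of factor scores, each on a 0-9 scale."""
    return sum(scores[f] * w for f, w in weights.items()) / sum(weights.values())

# Hypothetical tailoring: this business weights 'ease of discovery' heavily.
weights = {"skill required": 1, "motivation": 1, "ease of discovery": 3}
scores  = {"skill required": 7, "motivation": 5, "ease of discovery": 8}

likelihood = bucket(weighted_score(scores, weights))  # 7.2 -> "H"
overall = SEVERITY[(likelihood, bucket(7))]           # impact score 7 -> "H"
print(overall)  # -> Critical
```

Changing only the weights dict is what "tailoring the framework to your business" amounts to here.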

 

 

If the list is okay with this approach, I will write it up in time for
the release.  I can imagine a simple tool that asks you for information
such as your different user groups, etc., then lets you weight the
factors, and then produces a custom risk rating tool for your
organization's application security findings.  Built into a report
generator, this would be very cool.

 

--Jeff

 

________________________________

From: owasp-testing-bounces at lists.owasp.org
[mailto:owasp-testing-bounces at lists.owasp.org] On Behalf Of Eoin
Sent: Tuesday, December 19, 2006 3:40 AM
To: Daniel Cuthbert
Cc: owasp-testing at lists.owasp.org
Subject: Re: [Owasp-testing] Defining Risk

 

This is nicer, easier to apply, sharper and in my view better suited.



 

On 18/12/06, Daniel Cuthbert <daniel.cuthbert at owasp.org> wrote: 

anyone who knows me well knows I'm blonde and therefore need it done
as simply as possible :0)

Where I work (Corsaire), we use a well-proven way of defining risk.
This isn't something we plucked out of the blue, but something that
came from over 8 years' experience of doing this voodoo, and based on
client feedback, it works!

Risk = impact x ease of exploitation x exposure x nominal value.

We have gone over the above little equation in previous mails, so I
won't bother doing so again. But the main difference here is applying
it and defining a rating. I have always liked the following method:

1: Impact
Critical (attacker owns the box, end of story)
High (attacker gains access through some vulnerability and can then
do further damage) 
Medium (attacker can't really gain direct access, but can cause other
damage)
Low (info leakage etc)

2: Ease of exploitation
Easy (any muppet with a browser can do it)
Moderate (attacker needs to have some skills and knowledge) 
Difficult (no skript kiddy or hacking exposed reader could do it)

3: Exposure
Very high (any issue which is externally facing and available to the
world and does not need any authentication)
High (the issue is exposed over the web or via 3rd parties but only 
to authenticated/registered users)
Medium (the host is widely available to known users; think corporate
LAN applications)
Low (host is on a secured LAN with very limited access)

4: Nominal Value
High (Any business critical system!) 
Medium (important applications)
Low (workstations)
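To make the four ratings above concrete, here is a minimal sketch that maps each qualitative level to a small integer and multiplies them as the formula says. The numeric values and the resulting score range are my illustrative assumptions; the thread does not give Corsaire's actual scoring:

```python
# Sketch of Risk = impact x ease of exploitation x exposure x nominal
# value, with each qualitative level mapped to a small integer. The
# numbers are illustrative assumptions, not Corsaire's actual scoring.

IMPACT   = {"Critical": 4, "High": 3, "Medium": 2, "Low": 1}
EASE     = {"Easy": 3, "Moderate": 2, "Difficult": 1}
EXPOSURE = {"Very high": 4, "High": 3, "Medium": 2, "Low": 1}
VALUE    = {"High": 3, "Medium": 2, "Low": 1}

def risk(impact, ease, exposure, nominal_value):
    """Multiply the four ratings; range is 1 (all Low) to 144 (all max)."""
    return (IMPACT[impact] * EASE[ease]
            * EXPOSURE[exposure] * VALUE[nominal_value])

# Example: weak session ID generation on a public, unauthenticated,
# business-critical site.
print(risk("High", "Moderate", "Very high", "High"))  # -> 72
```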

Any manager reading the above and applying it to a vulnerability
such as weak session ID generation would understand it. The biggest
problem we are currently facing in our industry is one of standards.
There are so many ways to define this and that; we need to be strong
and say "THIS IS THE OWASP WAY"




On 18 Dec 2006, at 22:06, Marco M.Morana wrote:

> Dan & Eoin
>
> The critique expressed here of quantitative risk models reflects my
> opinion too. The model was introduced as an example (not the method,
> just an example), leaving room for a qualitative risk analysis to be
> compared with it and eventually chosen as the reference.
>
> My day-to-day experience with qualitative risk analysis consists of
> assigning to each vulnerability a risk factor based upon impact and
> ease of exploitation. The limitation I see in this is that it relies
> mostly on the experience and knowledge of the pen tester assigning
> the risk factors. Experienced pen testers can do a risk evaluation
> quickly, based upon similar scenarios, the type of vulnerability, and
> knowledge of similar applications.
>
> So probably the question is how this risk evaluation guideline can
> help less experienced pen testers. I think a risk questionnaire could
> probably help evaluate risks for common vulnerabilities in typical
> attack scenarios.
>
> Marco
>
>
> On Mon Dec 18  8:49 , Eoin  sent:
>
>> Gents,
>> If we can get something "simple" but effective quickly I would go 
>> for that.
>> I would prefer if we stayed away from academic risk analysis and
>> statistics.
>> Some of the intro documentation can still be used also.
>> -ek
>>
>>
>> On 18/12/06, Daniel Cuthbert <daniel.cuthbert at owasp.org> wrote:
>>
>> Agreed, this is the common approach used by most of the clients we
>> work with, especially the banking sector.
>>
>> Matteo, I know you don't want to change it and want to get a draft
>> out, but we need to be aware that many will follow our guide as the
>> gospel on app testing, and I'd rather we delay some bits so that
>> newcomers to our industry have a good solid footing, and not the one
>> I had when this industry started (a.k.a. make up what you want,
>> there isn't anyone to disagree).
>>
>> What do you think? Should we quickly agree on something less complex
>> and get it written up? (I can do this, as I'm currently on holiday
>> and have fewer commitments than usual.)
>>
>>
>>
>>
>>
>> On 18 Dec 2006, at 19:01, Eoin wrote:
>>
>>
>> Yep agreed.
>> One thing I've always hated about assigning risk is using formulas
>> which at times do not take context into account: is the
>> vulnerability internal-facing only? Is it exposed to unauthenticated
>> users, or only to authenticated ones?
>>
>> There must be a rule of thumb for assigning how much of a risk a
>> particular vulnerability is, while avoiding complex academic
>> formulas. To me, risk is as simple as defining how damaging a
>> vulnerability may be if exploited and how easy/accessible the
>> exploit is to commit, also taking into account whether the
>> vulnerability is externally facing or internal on a "secure" LAN
>> segment.
>> -ek
>>
>>
>> On 17/12/06, Daniel Cuthbert <daniel.cuthbert at owasp.org> wrote:
>> I've spent today looking at what has been written so far, and I feel
>> we are venturing into some dangerous territory with what we are
>> suggesting.
>> We need an easy-to-use, easy-to-understand method of defining risk,
>> and the one we have at the moment will cause more confusion than
>> good.
>>
>>
>> https://www.owasp.org/index.php/How_to_value_the_real_risk_AoC
>>
>> The section on Quantitative Risk Calculation seems to be heavily 
>> based upon some complex mathematical formula, but does anyone
>> honestly know how to do this?
>>
>>
>> I've shown this to a number of pentesters and colleagues, and they
>> all agree that they would not use the above approach, as it's overly
>> complicated.
>>
>> Thoughts?
>>
>>
>> _______________________________________________ 
>>
>> Owasp-testing mailing list
>> Owasp-testing at lists.owasp.org
>>
>> http://lists.owasp.org/mailman/listinfo/owasp-testing
>>
>>
>> --
>> Eoin Keary OWASP - Ireland
>>
>> http://www.owasp.org/local/ireland.html 
>> http://www.owasp.org/index.php/OWASP_Testing_Project
>>
>> http://www.owasp.org/index.php/OWASP_Code_Review_Project
>>
>>
>
>




-- 
Eoin Keary OWASP - Ireland
http://www.owasp.org/local/ireland.html 
http://www.owasp.org/index.php/OWASP_Testing_Project
http://www.owasp.org/index.php/OWASP_Code_Review_Project 
