[Owasp-testing] CVSS v2

Jim Manico jim.manico at owasp.org
Thu May 9 11:45:34 UTC 2013


+1, sharp perspective, Andrew.

- Jim

> I suggest we refer to the calculation of business impact levels and the contribution of security testing results to a risk management process, and explain how it could be done, not how it should be done. 
> 
> Business impact can be calculated in as many ways as there are businesses, so suggestions and references may help, but anything more may result in the Test Guide process being difficult to integrate. Similarly for risk management. For example, you allude to quantitative risk analysis, Jim, but I've seen quite a lot of qualitative risk analysis in my travels. 
> 
> I believe the Test Guide needs to focus on what it does best (verifying and validating application security controls and finding vulnerabilities in applications); the rest is up to the business analysts and risk professionals. While the calculation of vulnerability severity metrics is currently flawed, as Eoin points out, attempts to fix it are on the way. This is unfortunate, but it is an industry standard, and I believe we should try to work with it as much as we can and change it when we can. 
> 
> I am not a fan of reinventing the wheel. The many, varied, and often failed attempts to write security testing methodologies drive me up the wall. Business impact levels, risk management processes, and vulnerability severity metrics have all been done before by many people in many ways. I believe in the approach of reusing, refining, and, when there is no other option, redoing. 
> 
> regards, 
> Andrew 
> 
> 
> ----- Original Message -----
> 
> From: "Jim Manico" <jim.manico at owasp.org> 
> To: "jm" <sysvar0 at gmail.com> 
> Cc: owasp-testing at lists.owasp.org 
> Sent: Thursday, May 9, 2013 5:54:44 PM 
> Subject: Re: [Owasp-testing] CVSS v2 
> 
> Keep in mind, all of the factors can be changed. 
> 
> I am most worried about the math used to compute overall business impact. It averages the business impact factors, which can improperly dilute the overall business impact. 
> 
> Check out the section on "repeatable process" under https://www.owasp.org/index.php/OWASP_Risk_Rating_Methodology#Business_Impact_Factors to see what I'm talking about. 
> 
> I think you want to set a specific weight for each business impact factor, 
> tuned to the context of your business, and let the highest weighted value 
> be the true overall business impact. 
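> 
> A minimal sketch of the difference in Python (the four factors are the 
> methodology's business impact factors; the scores and weights here are 
> made-up examples): 
> 
>     # Business impact factors, each rated 0-9 as in the methodology.
>     factors = {"financial_damage": 8, "reputation_damage": 2,
>                "non_compliance": 1, "privacy_violation": 1}
> 
>     # Averaging, as the methodology does, dilutes the one severe factor.
>     average_impact = sum(factors.values()) / len(factors)  # 3.0 -> MEDIUM
> 
>     # Weighting each factor for the business context (weights are made up)
>     # and taking the highest weighted value preserves the severe factor.
>     weights = {"financial_damage": 1.0, "reputation_damage": 0.5,
>                "non_compliance": 0.75, "privacy_violation": 0.75}
>     overall_impact = max(s * weights[n] for n, s in factors.items())  # 8.0 -> HIGH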
> 
> And really, when it comes to true risk mathematics, this is incredibly lightweight. I would not run a risk practice on this fuzzy math alone. 
> 
> Cheers, 
> Jim 
> 
>> I looked at the OWASP Risk Rating Methodology. 
>>
>> My comments are as follows. 
>>
>> It does not really distinguish between a specific occurrence and the 
>> prevalence or distribution of an affected application in a given context. 
>> It does not sufficiently factor in the value or role of the application 
>> to the organization, the execution environment, or the remediation cost. 
>>
>> In the Threat Agent factors: 
>> - The size of the threat agent group does not qualify its type. 
>> - Opportunity should speak to motive, not to a level of access. 
>>
>> In the Vulnerability factors: 
>> - Discoverability and exploitability are not very helpful in expressing 
>> the complexity of the attack. 
>> - Under awareness, I do not think that how well known a vulnerability is 
>> to a group of threat agents is a reasonable question to ask, and I am not 
>> certain of a practical scenario where "hidden" would be assigned. It 
>> presumes knowledge of the modus operandi of a threat profile, which is 
>> not typically vulnerability-centric, and this level of detail may be 
>> beyond the reach of the organization or of the assessor. Instead, a more 
>> reasonable question might center on any evidence of disclosure of the 
>> vulnerability in the public domain, perhaps with a qualifier describing 
>> the confidence level of the report and whether the issue was acknowledged 
>> by the vendor. 
>> - A qualifier describing the age of the vulnerability could also be useful. 
>> - Around intrusion detection, two properties seem to be implied: the 
>> detectability, or ease of detection, and the act of review. The ease of 
>> detection from the generation of a cyber observable may depend on other 
>> components, such as system-centric configuration settings, while the 
>> reviewing, frequency of review, etc. speak to potential components of 
>> another system. 
>>
>> On Wed, May 8, 2013 at 2:06 PM, alberto cuevas <beto.cuevas.v at gmail.com> wrote: 
>>
>>> Hello, 
>>>
>>> The OWASP Testing Guide proposes the use of the OWASP Risk Rating 
>>> Methodology (we actually use this for web pentest results). 
>>>
>>> - Could this be considered a standard methodology? (I guess the majority 
>>> of the community uses it.) 
>>> - Has anyone run into limitations of this methodology in certain situations? 
>>> - Which other methodologies would you recommend instead of the OWASP 
>>> Risk Rating Methodology? 
>>>
>>> Thanks in advance for your guidance. 
>>>
>>> Beto 
>>>
>>> On May 7, 2013, jm <sysvar0 at gmail.com> wrote: 
>>>
>>>> Vulnerability-centric 
>>>> Qualitative 
>>>> Coarse-grained 
>>>> Simple 
>>>> Concise 
>>>> Contextually extensible 
>>>> Open framework 
>>>> Inconsistent implementations 
>>>> In relatively wide use 
>>>> One-to-one mappings - does not scale well for multiple vulnerabilities 
>>>> On May 7, 2013, 7:46 p.m., "Eoin" <eoin.keary at owasp.org> wrote: 
>>>>
>>>>> CVSS is pretty much devoid of context. 
>>>>> It does not consider client attacks, IMHO. It's more of a traditional 
>>>>> security issue rating system. PCI mapping to CVSS v2 for appsec is pretty 
>>>>> poor. 
>>>>>
>>>>> Eoin Keary 
>>>>> Owasp Global Board 
>>>>> +353 87 977 2988 
>>>>>
>>>>>
>>>>> On 8 May 2013, at 00:34, alberto cuevas <beto.cuevas.v at gmail.com> wrote: 
>>>>>
>>>>> Hello, 
>>>>>
>>>>> Section 5.1, HOW TO VALUE THE REAL RISK, in the OWASP Testing Guide v3 notes: 
>>>>>
>>>>>
>>>>> "Ideally, there would be a universal risk rating system that would 
>>>>> accurately estimate all risks for all 
>>>>> organization. But a vulnerability that is critical to one organization 
>>>>> may not be very important to another. 
>>>>> So we're presenting a basic framework here that you should customize for 
>>>>> your organization. " 
>>>>>
>>>>> Hence, the following questions came to mind: 
>>>>>
>>>>> - Is it a good idea to use CVSS v2 to score web pentest results? (I 
>>>>> think so, since the temporal and environmental metrics can produce 
>>>>> different ratings, which determine how critical the vulnerability is 
>>>>> for one organization or another; see the sketch after these questions.) 
>>>>>
>>>>> - I read that CVSS v2 has some limitations when scoring combined 
>>>>> vulnerabilities. So, if CVSS v2 is used for scoring, is there some way 
>>>>> to address this issue? 
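>>>>>
>>>>> A minimal sketch of the CVSS v2 base and temporal equations in Python 
>>>>> (formulas and metric values are from the v2 specification; the example 
>>>>> vulnerability and the two organizations are hypothetical): 
>>>>>
>>>>>     # CVSS v2 base equation, using the spec's metric values.
>>>>>     def base_score(av, ac, au, c, i, a):
>>>>>         impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
>>>>>         exploitability = 20 * av * ac * au
>>>>>         f = 0 if impact == 0 else 1.176
>>>>>         return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)
>>>>>
>>>>>     # Temporal equation: the base score scaled by exploitability,
>>>>>     # remediation level, and report confidence multipliers.
>>>>>     def temporal_score(base, e, rl, rc):
>>>>>         return round(base * e * rl * rc, 1)
>>>>>
>>>>>     # Reflected XSS: AV:N/AC:M/Au:N/C:N/I:P/A:N
>>>>>     base = base_score(av=1.0, ac=0.61, au=0.704, c=0.0, i=0.275, a=0.0)  # 4.3
>>>>>
>>>>>     # Org A: functional exploit (E:F), official fix applied (RL:OF).
>>>>>     org_a = temporal_score(base, e=0.95, rl=0.87, rc=1.0)  # 3.6
>>>>>     # Org B: widespread exploitation (E:H), no fix available (RL:U).
>>>>>     org_b = temporal_score(base, e=1.0, rl=1.0, rc=1.0)    # 4.3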
>>>>>
>>>>> I wonder if there are opinions on the ups and downs of using CVSS v2 to 
>>>>> rate web pentest results. I appreciate in advance any help or 
>>>>> information you can give me. 
>>>>>
>>>>> Best Regards, 
>>>>>
>>>>> Beto 
>>>>>