[Owasp-codereview] Code Review Guide: Update

Eoin eoin.keary at owasp.org
Thu May 29 08:47:41 EDT 2008


Hi Marco,
do you have a link to your article regarding Taxonomy?
Also I propose a weighting factor from the vulnerability metrics perspective.

Severity (1-10) * Likelihood (1-10) = Risk (the higher the score, the greater the risk)

Severity * Effort required to fix = Win Factor (identifies the easy wins to fix)
 -> This maps to a grid of Severity vs. Effort (like a risk matrix); the shaded areas
should be addressed first.

Also, issues like (as you mentioned) "how many lines do I have to fix" or "is it
a fundamental architecture issue" would be factored into the effort rating.
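
To make this concrete, here is a rough, purely illustrative sketch in Java; the
grid cut-offs (severity >= 7, effort <= 3) are placeholder assumptions on my part,
not agreed values:

public class FindingScore {

    // Risk = Severity (1-10) x Likelihood (1-10), so the range is 1..100.
    static int risk(int severity, int likelihood) {
        return severity * likelihood;
    }

    // Places a finding on the Severity vs. Effort grid; the "shaded" cell
    // (high severity, low effort) is the easy win that should be fixed first.
    static String gridCell(int severity, int effortToFix) {
        boolean highSeverity = severity >= 7;     // assumed cut-off
        boolean lowEffort = effortToFix <= 3;     // assumed cut-off
        if (highSeverity && lowEffort) return "shaded: easy win, fix now";
        if (highSeverity) return "high severity, plan the remediation effort";
        if (lowEffort) return "quick fix, address opportunistically";
        return "lower priority";
    }

    public static void main(String[] args) {
        // e.g. a reflected XSS: severity 8, likelihood 7, effort to fix 2
        System.out.println("risk = " + risk(8, 7));      // 56
        System.out.println("grid = " + gridCell(8, 2));  // shaded: easy win, fix now
    }
}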

ek



On 29/05/2008, Marco M. Morana <marco.m.morana at gmail.com> wrote:
>
> Paolo
>
> I think there is certainly a need for vulnerability categorization; I would
> say a taxonomy of threats and vulnerabilities. From my point of view, the main
> drivers for a categorization that supports security finding analysis should be
> the root causes, the risks and the remediation effort. For example, you can
> categorize vulnerabilities by root cause (source code, configuration
> management, etc.) as well as by the security control that mitigates the
> threat/vulnerability (for example, the application security frame). I have
> elaborated some thoughts on the taxonomy on my blog in the past; you are
> welcome to take a look.
>
> On the scoring and the metrics for source code analysis, I think we need to
> rationalize the objectives of the metrics first. IMHO you need something
> that quantifies the effort to remediate as related to the severity of the
> issue. The severity is driven by the risk analysis (you can take the testing
> guide's likelihood x exposure factors). Source complexity metrics such as
> LOCs, function points and cyclomatic complexity help quantify the effort in
> source code analysis, but I think profiling tools for this already exist.
> Perhaps we can leverage a free one? What is missing is a tool that helps
> prioritize the remediation effort by correlating the root cause from the
> categorization (code, design, configuration), code complexity metrics (how
> many LOCs do I need to fix?), and the severity of the issue (which issues
> should I tackle first? Probably the ones with the highest risk, the least
> effort and the most limited scope to fix). The issue categorization in this
> case should drive the countermeasure, not the threat analysis, so I advocate
> the security framework as more suitable for this taxonomy.
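>
> As a minimal sketch of what I mean by correlating those inputs (the field
> names, the LOC figures and the severity-to-effort ratio used for ordering are
> only my assumptions for illustration):
>
> import java.util.*;
>
> class Finding {
>     String id;
>     String rootCause;   // code, design or configuration (from the taxonomy)
>     int severity;       // driven by the risk analysis, 1-10
>     int locToFix;       // complexity proxy for the remediation effort
>
>     Finding(String id, String rootCause, int severity, int locToFix) {
>         this.id = id; this.rootCause = rootCause;
>         this.severity = severity; this.locToFix = locToFix;
>     }
>
>     double priority() {  // bigger = tackle first
>         return (double) severity / Math.max(1, locToFix);
>     }
> }
>
> class RemediationQueue {
>     public static void main(String[] args) {
>         List<Finding> findings = Arrays.asList(
>             new Finding("XSS-1", "code", 8, 20),
>             new Finding("ARCH-1", "design", 9, 800),
>             new Finding("CFG-1", "configuration", 5, 5));
>         // Highest risk with the least effort floats to the top of the queue
>         Collections.sort(findings,
>             (a, b) -> Double.compare(b.priority(), a.priority()));
>         for (Finding f : findings)
>             System.out.println(f.id + " priority=" + f.priority());
>     }
> }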
>
> I am interested in your take on this... I would be happy to elaborate more on
> the details.
>
> Ciao
>
> Marco Morana
> OWASP Cincinnati Chapter Leader
> http://www.owasp.org/index.php/Cincinnati
>
>
>
>
> -----Original Message-----
> From: owasp-codereview-bounces at lists.owasp.org
> [mailto:owasp-codereview-bounces at lists.owasp.org] On Behalf Of Paolo
> Perego
> Sent: Thursday, May 29, 2008 6:03 AM
> To: Owasp-codereview at lists.owasp.org
> Subject: Re: [Owasp-codereview] Code Review Guide: Update
>
> Sure, I'll do that. Also, with Dinis in Ghent we drew the parallel: "the ESAPI
> project is the software counterpart to the Owasp Top 10 guide" and
> "Orizon is the software counterpart to the Owasp CR Guide".
>
> I'll also take charge of these two things:
> a) creating a category list to describe the security checks a
> CR tool or a code reviewer needs to perform. Something like:
>   O_CR_1: Input validation
>       O_CR_1_1: XSS
>            O_CR_1_1_1: On a Java servlet, perform input validation
> in the doGet() method
>       O_CR_1_2: Sql Injection
> And things like this. Of course this "Top 10" is really live and it
> will grow over time. I think it would be useful when evaluating a
> CR tool to be able to say "OK, I'll use this one because it performs 20
> checks out of the 40 described in the code review guide".
>
> Of course this will also be used in the Orizon project as the official
> OWASP security check library. A rough sketch of what check O_CR_1_1_1 might
> look like in code follows below.
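>
> A purely illustrative sketch of check O_CR_1_1_1 (the "name" parameter and
> the whitelist pattern are assumptions made up for the example, not part of
> the guide):
>
> import java.io.IOException;
> import java.util.regex.Pattern;
> import javax.servlet.ServletException;
> import javax.servlet.http.*;
>
> // Sketch of check O_CR_1_1_1: input validation in a servlet's doGet().
> public class GreetingServlet extends HttpServlet {
>     private static final Pattern SAFE = Pattern.compile("^[a-zA-Z0-9 ]{1,40}$");
>
>     protected void doGet(HttpServletRequest req, HttpServletResponse resp)
>             throws ServletException, IOException {
>         String name = req.getParameter("name");
>         // Whitelist validation before the value is used in the response
>         if (name == null || !SAFE.matcher(name).matches()) {
>             resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "invalid input");
>             return;
>         }
>         resp.setContentType("text/plain");
>         resp.getWriter().println("Hello, " + name);
>     }
> }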
>
> b) helping you, Eoin, to formalize some metrics to score a
> source file when performing a code review. We can use the
> aforementioned list to assign a score to a source file on the basis
> of how many "category checks" it fails. This will try to answer the
> question "how secure is this piece of code?".
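>
> For instance, something as naive as this sketch (the check IDs and the
> 0-10 scale are placeholder assumptions of mine):
>
> import java.util.*;
>
> // Naive sketch: score a source file by the fraction of category checks it
> // passes; 10 means no failed checks, 0 means every check failed.
> class SourceFileScore {
>     static double score(Set<String> allChecks, Set<String> failedChecks) {
>         if (allChecks.isEmpty()) return 10.0;
>         double passed = allChecks.size() - failedChecks.size();
>         return 10.0 * passed / allChecks.size();
>     }
>
>     public static void main(String[] args) {
>         Set<String> all = new HashSet<>(Arrays.asList(
>             "O_CR_1_1_1", "O_CR_1_2", "O_CR_2_1", "O_CR_3_1"));
>         Set<String> failed = new HashSet<>(Arrays.asList("O_CR_1_2"));
>         System.out.println("score = " + score(all, failed)); // 7.5 out of 10
>     }
> }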
>
> I'll put my ideas on the wiki ASAP. Of course, people... I need your help :)
>
> Claim: two s3xy guides are better than one.
>
> Ciao ciao
> thesp0nge
> 2008/5/29 Eoin <eoin.keary at owasp.org>:
> > Paolo,
> > this is great news, as it is the right place for Orizon documentation.
> > Think of the code review guide as a supporting book for the Orizon tool.
> > This is something other tools don't have: a complete guide discussing the
> > theory and practical aspects of code review, and also a guide on how to
> > use the Orizon tool.
> > It's a perfect interconnect between an OWASP tool and one of the trinity
> > of OWASP guides.
> >
> > Can you update the wiki as you go, so we can all see the progress and
> > help review?
> >
> >
> > thanks,
> >
> > Eoin
> >
> >
> > On 29/05/2008, Paolo Perego <thesp0nge at gmail.com> wrote:
> >>
> >> 2008/5/28 Eoin <eoin.keary at owasp.org>:
> >> > Hello my fellow security colleagues :)
> >> >
> >> > May I ask that anyone who is contributing to the OWASP Code Review
> >> > Guide please start updating the wiki with their work :)
> >> > This will help in reviewing the work and brainstorming.
> >> Hi Eoin and Hi everybody.
> >>
> >> As one of my Spoc 2008 goals is to improve the Owasp Orizon documentation,
> >> I'll take the "The Owasp Orizon Framework" section in the "Automating
> >> Code Reviews" chapter.
> >>
> >> In my opinion, in our guide we need to define a sort of "top 10" or
> >> "top 5" or "top something" list of vulnerabilities, in order to give
> >> people performing a code review some metrics for their reports.
> >>
> >> Let me explain further. When I perform an ethical hacking engagement, I
> >> prepare a report for my customer saying "hey, you missed Owasp Top 10
> >> points 1, 4 and 5. Your application and your application server are prone
> >> to this vuln and this one. Do something".
> >>
> >> It would be great to have a group of source code vulnerability
> >> categories (language independent), to give people the
> >> opportunity to write code review reports saying "hey, your code is
> >> missing Owasp CR Guide point 1.1 (Input Validation -> filter input in
> >> the Servlet doGet() method)".
> >>
> >> Maybe I'll try to create on the wiki a sort of TOP 10 of our own, and we
> >> can do some brainstorming about it.
> >> What do you think?
> >>
> >> Ciao ciao
> >> thesp0nge
> >>
> >>
> >> --
> >> Paolo Perego <thesp0nge at owasp.org>, Owasp Orizon Project leader
> >> orizon.sourceforge.net
> >> _______________________________________________
> >> Owasp-codereview mailing list
> >> Owasp-codereview at lists.owasp.org
> >> https://lists.owasp.org/mailman/listinfo/owasp-codereview
> >
> >
> >
> > --
> > Eoin Keary OWASP - Ireland
> > http://www.owasp.org/local/ireland.html
> > http://www.owasp.org/index.php/OWASP_Code_Review_Project
>
>
>
> --
> Owasp Orizon leader
> orizon.sourceforge.net
> _______________________________________________
> Owasp-codereview mailing list
> Owasp-codereview at lists.owasp.org
> https://lists.owasp.org/mailman/listinfo/owasp-codereview
>



-- 
Eoin Keary OWASP - Ireland
http://www.owasp.org/local/ireland.html
http://www.owasp.org/index.php/OWASP_Code_Review_Project