[Owasp-boston] 2 Questions about Jim's Presentation last week

Scott Matsumoto smatsumoto at cigital.com
Mon Nov 23 17:15:24 EST 2009


Neil,

I am only relaying that work is going on.  My personal opinion is that unless some unbiased third party (like NIST) validates the standard, the standard doesn't really achieve any level of interop.
________________________________
From: neil.smithline at gmail.com [neil.smithline at gmail.com] On Behalf Of Neil Smithline [owasp.org at smithline.net]
Sent: Monday, November 23, 2009 5:10 PM
To: Scott Matsumoto
Cc: james at architectbook.com; owasp-boston at lists.owasp.org
Subject: Re: [Owasp-boston] 2 Questions about Jim's Presentation last week

No offense, Scott, but I would guess that the number of useful standards is several orders of magnitude smaller than the number of standards, and several orders of magnitude smaller still if we count incomplete standards in the comparison.

IMO, the general question of whether standardizing a data format is useful depends on who you ask.

Microsoft spent years saying "no". Now they sometimes say "yes" (eg: .docx format) and sometimes say "yes, as long as we control the standard" (eg: some WS standards).

Ask RMS, OWASP.org or Dinis Cruz (of O2), and the answer will be "yes".

Clearly MS's responses are based on business issues. Nearly every time MS said "no", the EBM consortium (Everyone But Microsoft) argued strongly for a standard and frequently created some of those many standards that nobody uses.

The open-sourcerers argue yes, largely for philosophical reasons.

From a strictly technical viewpoint, I think that open (ie: publicly available and modifiable) standards frequently help. The trick is not to stifle progress by being forced to adhere to a standard that is not capable of meeting your requirements.

Hence, open, extensible standards seem like the big win. For example, X.509 and SAML both provide a means to support core functionality, come with common extensions, and allow extensions that a tool can ignore if it does not know how to handle them. Another example is how many Servlet containers have configuration extensions beyond those specified in the standard web.xml file.
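As a minimal sketch of that "ignore what you don't understand" behavior (the findings format and field names below are invented for illustration, not taken from any real standard):

CORE_FIELDS = {"id", "severity", "location", "description"}

def parse_finding(record):
    """Keep the core fields we understand; skip unrecognized extensions."""
    core = {k: v for k, v in record.items() if k in CORE_FIELDS}
    extensions = {k: v for k, v in record.items() if k not in CORE_FIELDS}
    if extensions:
        # Graceful degradation: note the extensions and move on,
        # rather than rejecting the whole record.
        print("ignoring unknown extension(s):", sorted(extensions))
    return core

record = {
    "id": "F-001",
    "severity": "high",
    "location": "login.jsp:42",
    "description": "Reflected XSS in 'user' parameter",
    "x-vendor-confidence": 0.93,  # a vendor extension this consumer doesn't know
}
print(parse_finding(record))

A consumer written this way keeps working as vendors bolt on extensions, which is the property the examples above share.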

Perhaps "open, extensible standards supporting graceful degradation" sums up my thoughts. I argue "open", in part on religious reasons, but more because closed standards hinder innovation. Not just for static analysis, but for data, API and communication formats of many types.

The only question is whether static analysis, and security-focused static analysis in particular, is mature enough to be standardized. I think the answer is unquestionably "yes", provided it is an open, extensible standard that supports graceful degradation.


Neil Smithline
OneStopAppSecurity.com
781-754-7628
Google Talk: neil_smithline at gmail.com | Y! messenger: smithln
LinkedIn: http://www.linkedin.com/profile?viewProfile=&key=2519386&trk=tab_pro
Facebook: http://www.facebook.com/home.php?#/profile.php?ref=profile&id=546173087
Twitter: http://twitter.com/emandab1


On Mon, Nov 23, 2009 at 15:57, Scott Matsumoto <smatsumoto at cigital.com> wrote:
I don't know the fate of O2, so I cannot comment on it.

SAFES is an emerging standard for expressing tool findings, so yes, there is value in standardizing them.  I think it's still very early days for SAFES, though.
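To illustrate what standardizing findings buys (purely hypothetical field names here, not the actual SAFES schema): two tools that report the same issue in different shapes can be normalized into one record, so downstream triage treats them uniformly.

def normalize_tool_a(raw):
    return {"rule": raw["checkId"], "file": raw["path"], "line": raw["lineNo"]}

def normalize_tool_b(raw):
    return {"rule": raw["vuln_class"], "file": raw["source_file"], "line": raw["line"]}

findings = [
    normalize_tool_a({"checkId": "xss.reflected", "path": "login.jsp", "lineNo": 42}),
    normalize_tool_b({"vuln_class": "xss.reflected", "source_file": "login.jsp", "line": 42}),
]
print(findings)  # both records now share one shape

With a real standard the normalizers live in the tools themselves, and consumers only ever see the common shape.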

________________________________
From: james at architectbook.com [james at architectbook.com]
Sent: Monday, November 23, 2009 3:45 PM
To: Scott Matsumoto
Cc: owasp-boston at lists.owasp.org<mailto:owasp-boston at lists.owasp.org>
Subject: RE: [Owasp-boston] 2 Questions about Jim's Presentation last week

OWASP Board member Dinis Cruz frequently discusses the O2 Platform. I am curious whether anyone knows if IBM will step up and rally behind it or let it die on the vine. I would also love to know if there is merit in having AppScan, WebInspect, and other dynamic analysis tools emit a findings file in a standard format.

-------- Original Message --------
Subject: Re: [Owasp-boston] 2 Questions about Jim's Presentation last week
From: Scott Matsumoto <smatsumoto at cigital.com>
Date: Mon, November 23, 2009 9:59 am
To: "Laverty, Patrick" <Patrick_Laverty at brown.edu<mailto:Patrick_Laverty at brown.edu>>,
"owasp-boston at lists.owasp.org<mailto:owasp-boston at lists.owasp.org>" <owasp-boston at lists.owasp.org<mailto:owasp-boston at lists.owasp.org>>

Patrick,

Regarding your first question: from our experience doing assessments that mix manual and tool-based techniques for both static and dynamic analysis, those percentages are roughly right. I don't have quantitative numbers to back up that claim, since our analysis isn't based solely on raw counts.

In terms of employing multiple scanners, we use both AppScan and HP (SPI), and I don't see enough difference in coverage to justify running both. However, using tools that look for different types of problems, or a mix of static and dynamic tools, does provide enough of a win to justify the extra cost (in both dollars and people).
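For a rough way to quantify "not enough difference in coverage", one could reduce each scanner's findings to a comparable key and measure set overlap. This is a sketch with made-up data, not output from any real tool:

def key(f):
    return (f["rule"], f["file"], f["line"])

scanner_a = [{"rule": "xss", "file": "login.jsp", "line": 42},
             {"rule": "sqli", "file": "search.jsp", "line": 10}]
scanner_b = [{"rule": "xss", "file": "login.jsp", "line": 42},
             {"rule": "csrf", "file": "account.jsp", "line": 7}]

a = {key(f) for f in scanner_a}
b = {key(f) for f in scanner_b}
jaccard = len(a & b) / len(a | b)
print("overlap (Jaccard): %.2f" % jaccard)  # 0.33 with this toy data

A value near 1.0 suggests the second scanner adds little; a value near 0.0 suggests the tools really do find different things.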

Using this mix of static and dynamic as well as manual and tool-based techniques is exactly what we do. In the end, however, I find the vulnerabilities with the highest business impact are those we find manually. I think that's because many of the defects the tools miss involve information disclosure, and the tools don't have enough intelligence to distinguish what data one should or should not see.

________________________________
From: owasp-boston-bounces at lists.owasp.org [owasp-boston-bounces at lists.owasp.org] On Behalf Of Laverty, Patrick [Patrick_Laverty at brown.edu]
Sent: Monday, November 23, 2009 8:34 AM
To: owasp-boston at lists.owasp.org
Subject: [Owasp-boston] 2 Questions about Jim's Presentation last week

And they’re both about the same statement.

Jim stated that scanners are “Only going to find 10 – 20% of vulns – low hanging fruit”.

My two questions are:

1. How do we know that they will only find that number? If we know they’re missing 80-90%, how were the missing vulnerabilities found and counted in the first place?

2. I’ve read that the most effective and thorough approach is to get 4-5 different scanners and use them all, as they will each find some different vulnerabilities. Has anyone done this? What did you find when you looked at all the data from multiple scanners, and how much do they vary in the true positives they find?

Thank you!

Patrick



_______________________________________________
Owasp-boston mailing list
Owasp-boston at lists.owasp.org
https://lists.owasp.org/mailman/listinfo/owasp-boston



