[Owasp-leaders] A neutral Benchmark

johanna curiel curiel johanna.curiel at owasp.org
Sat Nov 28 13:51:00 UTC 2015


*>>I asked Simon about this and he was unclear about what you were talking
about. We certainly aren’t trying to exclude any valid results from ZAP and
we are working closely with the ZAP team to get a good set of Benchmark
results for it.*

I think I answered this very clearly for you on October 18:
http://lists.owasp.org/pipermail/owasp-benchmark-project/2015-October/000034.html

Dave, any ZAP user is well aware that ZAP's reports do not cover all of
its automated testing: the active and passive scans are reported, but the
automated fuzzing attacks, for example, are not.

In fact, I mentored a GSoC project for ZAP back in 2013 to explore how to
improve ZAP's reporting:
https://www.google-melange.com/gsoc/project/details/google/gsoc2013/edil/5683633302011904
https://www.owasp.org/index.php/GSoC2013_Ideas#OWASP_ZAP:_Exploring_Advanced_reporting_using_BIRT
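
For anyone who wants to see the gap for themselves, here is a rough
sketch (an illustration only: it assumes ZAP is running locally with its
API enabled on port 8080, and the output file name is made up) that pulls
ZAP's XML report over its REST API. That export contains the alerts from
the active and passive scanners, but nothing from a fuzzing session.

    import java.io.InputStream;
    import java.net.URL;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    // Fetch ZAP's XML report via its local REST API and save it to disk.
    // Assumption: ZAP is running with its API enabled on localhost:8080.
    // The export holds active/passive scan alerts only; fuzzer results
    // are not included, which is exactly the reporting gap noted above.
    public class ZapXmlReport {
        public static void main(String[] args) throws Exception {
            URL reportUrl =
                new URL("http://localhost:8080/OTHER/core/other/xmlreport/");
            try (InputStream in = reportUrl.openStream()) {
                Files.copy(in, Paths.get("zap-report.xml"),
                        StandardCopyOption.REPLACE_EXISTING);
            }
            System.out.println("Saved zap-report.xml");
        }
    }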

*>>Regarding the tool’s native output format. We don’t care about the
format of the output. We’ll produce a parser for whatever the native tool
supports. XML, JSON, csv, zip file, whatever. And we’ll publish that parser
so anyone can use the Benchmark to score that tool.*

You need to create the parser before anyone can actually run the
Benchmark against a SAST/DAST tool. So no, you cannot just use it. And
what if the XML structure changes? The parser then needs constant
adaptation, which only someone who can develop software can do, so not
just anyone at any time can use the Benchmark. That is my point.
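
To make the maintenance cost concrete, here is a minimal sketch of the
kind of per-tool parser the Benchmark needs (the report file name and the
'alert'/'url'/'pluginid' element and attribute names are hypothetical;
every tool has its own schema). The moment a vendor renames or
restructures any of these, the parser stops matching findings and has to
be rewritten:

    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    // Minimal sketch of a Benchmark-style result parser for a
    // hypothetical scanner report. Every name below is schema-specific.
    public class ToolReportParser {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new File("tool-report.xml"));
            NodeList alerts = doc.getElementsByTagName("alert");
            for (int i = 0; i < alerts.getLength(); i++) {
                Element alert = (Element) alerts.item(i);
                String url = alert.getAttribute("url");       // tool-specific
                String rule = alert.getAttribute("pluginid"); // tool-specific
                // A real scorer would map the URL back to a Benchmark test
                // case and the rule id to a vulnerability category, then
                // compare against the expected results.
                System.out.println(url + " -> " + rule);
            }
        }
    }

None of this is hard, but it is developer work, and it has to be redone
for every tool and for every change to a tool's report format.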

*>>I agree with you that Contrast’s marketing claims have significantly
harmed the project’s reputation, and that’s why Josh talked to Jeff about
this to ask them to ‘cool it’. I think we need to clearly separate
Contrast’s behaviour from any issues with project itself as they are really
two different things.*

Cool it? All the endorsements should be taken down, videos and website
alike. As long as this material exists, it will damage the perception of
anyone who reads it, especially non-security experts looking for
automated solutions, a public unfamiliar with OWASP, and people who do
not understand how the Benchmark works or its maturity level so far.

So far, no hard conclusions should be drawn or deduced from the Benchmark
project. As you can see, you were not fully aware of how ZAP works and
that ZAP does not produce an output of all its results. How about Burp
and the rest? Can we actually conclude that we can benchmark tools based
on their output reports? Are those reports complete? Is this not a very
limited view of how effective a tool is?

Regards

Johanna

-------------------------------------------------------------------------------------------------------------------------------------------
A copy of the conversation with Simon where I clarified this:
http://lists.owasp.org/pipermail/owasp-benchmark-project/2015-October/000034.html
-------------------------------------------------------------------------------------------------------------------------------------------
johanna curiel curiel <johanna.curiel at owasp.org>
Oct 18
to Dave, psiinon, owasp-benchmar.
Hi Dave

My apologies for not answering sooner; here are my answers.

>I really don’t understand this comment. ZAP is fully able to output all of
its results so we can assess them in the Benchmark.  Simon and his team
having been working to get ZAP to do a full scan of the Benchmark and they
believe it now will (I just haven’t had time to test it yet, but will soon
– promise)[...] We haven’t seen a tool that can only output partial
results. In our experience its all or nothing.

I do not agree with this. ZAP does not produce complete output results
for all the things you can test with ZAP. Therefore, the Benchmark will
only present a 'limited' view of what ZAP is able to do. I know Simon has
been working with the Benchmark to improve the accuracy of the results
produced through the XML output, but I also know very well that this
output covers only a portion of all the testing you can do with ZAP.

>The Benchmark project has been VERY careful not to publish results for
individual commercial tools and I would caution anyone else to do the same
without their express permission. We are hoping to get that permission as
part of the Benchmark project but its been slow going.

My idea is to do separate, independent testing in coordination with the
commercial vendors. I do not belong to any company, and I have no
conflict of interest if I execute this research separately. Personally, I
think I can also evaluate the actual flaws of the Benchmark project and
provide a report that is free of any conflict of interest.

As you know, the discussions online and on Twitter between the company
you work for and other vendors put you and your project in an awkward
position as leader, especially after Jeff has been promoting the project
in a certain way without clarifying that the tool is still in development
and that no hard conclusions should be drawn from results obtained
through the Benchmark until it has reached community adoption and major
testing.

Regards



On Sat, Nov 28, 2015 at 8:22 AM, Rory McCune (OWASP) <rory.mccune at owasp.org>
wrote:

> Hi All,
>
>
>
> Just to add another 0.02 of your local currency to this, I think if you
> look at the page that Johanna linked (which is Contrast’s current marketing
> position), it’s pretty clear that there’s a problem with how this project
> is being used by Contrast.
>
>
>
> The page very strongly implies that the US Dept of Homeland
> Security and OWASP are stating that Contrast’s IAST solution is vastly
> superior to SAST and DAST software.  The page carries OWASP logos, and
> its title is “OWASP Benchmark project”.
>
>
>
> Now people familiar with exactly how OWASP operates (i.e. anyone can start
> a project and call it an OWASP project) may say that this is fine as it
> doesn’t represent an endorsement from OWASP.
>
>
>
> However, try reading that page as someone who doesn’t know this (i.e. as
> 99+% of people would).
>
>
>
> A person who is unfamiliar with how OWASP operates would, I think, take
> that page as OWASP the organisation endorsing Contrast’s IAST solution as
> being better than the alternatives.
>
>
>
> “*The results of the OWASP Benchmark Project – with its 21,000 test cases
> – are dramatic*” –
>
>
>
> This clearly reads as an endorsement by OWASP of their product, as does
>
>
>
> “*The 2015 OWASP Benchmark Project, sponsored by the US Department of
> Homeland Security (DHS), shows that existing SAST and DAST solutions are
> leaving businesses vulnerable to attack.*”
>
>
>
> As has been mentioned elsewhere in this thread, how are all the OWASP
> sponsor companies who make products in the SAST and DAST world going to
> react here?  I can’t imagine it will make the conversations that OWASP
> members may have with their employers about supporting OWASP any easier…
>
>
>
> Now I like the idea of a cross-tool comparison, although I think we’d be
> better off working with an existing project like sectoolsmarket, but
> OWASP needs to be very careful about allowing companies to give the
> appearance of an endorsement, given our position.
>
>
>
> Cheers
>
>
>
> Rory
>
>
>
>
>
> *From:* owasp-leaders-bounces at lists.owasp.org [mailto:
> owasp-leaders-bounces at lists.owasp.org] *On Behalf Of *johanna curiel
> curiel
> *Sent:* 28 November 2015 01:10
> *To:* Josh Sokol <josh.sokol at owasp.org>
> *Cc:* owasp-leaders at lists.owasp.org; Andre Gironda <andreg+owasp at gmail.com
> >
> *Subject:* Re: [Owasp-leaders] A neutral Benchmark
>
>
>
> I think that OWASP should not be publishing results.
>
>
>
> Agreed; the person publishing the results would be Johanna et al.
>
> Also with a disclaimer: Johanna's opinions do not in any way represent
> OWASP endorsing (or not endorsing) the tool. This initiative is carried
> out solely by Johanna, etc.
>
>
>
> The fact is that, due to the dependency on an XML output report of the
> findings, I can confidently assert that this tool cannot compare any
> SAST/DAST tools one-to-one against each other; therefore, the claims
> made by Contrast are totally false:
>
> Contrast dominates SAST & DAST in Speed and Accuracy?
>
>
>
> This is so false😂....
>
>
>
> http://www.contrastsecurity.com/owasp-benchmark
>
>
>
> [image: Inline image 1]
>
>
>
>
>
>
>
> On Fri, Nov 27, 2015 at 9:00 PM, Josh Sokol <josh.sokol at owasp.org> wrote:
>
> I really like this idea, Johanna, and it seems in line with Dave's
> suggestion of having an Advisory Board for the project.  The one thing
> that I do think we need to steer clear of, however, is publishing the
> results of the tests conducted with the Benchmark.  If others want to
> test and publish their personal results, that's not something we can
> stop, but in an effort to be vendor-neutral, I think that OWASP should
> not be publishing results.
>
> ~josh
>
>
>
> On Fri, Nov 27, 2015 at 4:16 PM, johanna curiel curiel <
> johanna.curiel at owasp.org> wrote:
>
> Hi Dave
>
>
>
> >>I don’t have licenses to any of these tools and so far, no one has
> stepped up and offered to run any of these tools against the Benchmark.
>
>
>
> I think that the Contrast marketing campaign hurt participation in a
> promising project before it could take off.
>
>
>
> For every specific XML output format, you need to create a parser in
> order to produce the score reports. Without the vendors' collaboration,
> or people with licenses to test, you won't get their input.
>
>
>
> As a neutral party with no conflict of interest in this project, I think
> we can request licenses from these vendors, with the participation of
> other volunteers who have no commercial ulterior motives. I have added
> Ali Ramzoo, who is also part of the OWASP Research initiative.
>
>
>
> We could indeed:
>
>    - Promote the project as being under a neutral research initiative
>    - Ask the vendors for licenses
>    - Deploy the tools in a VM we can all have access to
>    - Verify whether each tool can produce an XML output report (if not,
>    its results cannot be parsed)
>    - Discuss our findings with the vendors privately before publishing
>    them
>    - Be very conscious that if the XML report does not contain all the
>    findings from the tool (as in the case of ZAP with fuzzing), we need
>    to mention this very clearly; otherwise we can hurt the reputation of
>    the tool.
>
>
>
> This is how I can help this project and try to create a neutral, clean
> view of a tool that I believe has potential, but which needs to shake
> off all the publicity around Contrast.
>
>
>
> Regards
>
>
>
> Johanna
>
>
>