[Owasp-leaders] A neutral Benchmark

johanna curiel curiel johanna.curiel at owasp.org
Sat Nov 28 01:56:59 UTC 2015


Totally false claims (quoted below) found on this website:
http://www.contrastsecurity.com/owasp-benchmark

*The top commercial Static Application Security Testing (SAST) products had
an accuracy score of 33%, and the worst scored 17%. For Dynamic Application
Security Testing (DAST) products, the results were just as startling, with
the top product scoring 17% and the worst 1%. Contrast Enterprise, which
combines the best of Static, Dynamic and Runtime application security
testing technology, scored 92%!*

Fact is that, because of the dependency on an XML output report of the
findings, I can confidently assert that this tool cannot compare any
SAST/DAST tools one to one against each other. You must first make sure
that all of a tool's findings are in its output report. ZAP's most powerful
automated feature (in my opinion) is *fuzzing*, covering all sorts of
attacks, yet ZAP does not produce an automated XML output report of this
section.
Dave mentioned: 'I don't have licenses to any of these tools and so far,
no one has stepped up and offered to run any of these tools against the
Benchmark....' So how did this comparison take place?

*Anyone can use the OWASP Benchmark Project to evaluate the pros and cons
of current solutions*

Not anyone: a Java programmer with experience creating parsers for XML
files. And in order to use it...

   - The tool to be assessed must produce an XML output report of all its
   findings so it can be judged at all levels (for example, ZAP does not
   produce XML output of fuzzing results, and therefore those results
   cannot be analysed by the Benchmark).
   - You must program an XML parser for the output report produced by the
   tool under analysis. Every tool produces its own XML structure, so you
   need to create custom parsing for each one (see the sketch after this
   list).
   - If the Benchmark does not contain a parser for the tool you want to
   'benchmark', it won't work. No parser, no report.
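
To make concrete what that parsing work involves, here is a minimal
sketch (not the Benchmark's actual scorecard code) of a per-tool report
reader. The report layout (a <findings> root with
<finding file="..." cwe="..."/> entries) and the class name are invented
for illustration; every real tool uses its own schema, which is exactly
why each tool needs its own custom parser:

import java.io.File;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class HypotheticalToolParser {

    /** One finding: which Benchmark test case, and which CWE was reported. */
    static class Finding {
        final String testCase;  // e.g. "BenchmarkTest00001"
        final int cwe;          // e.g. 89 = SQL injection
        Finding(String testCase, int cwe) {
            this.testCase = testCase;
            this.cwe = cwe;
        }
    }

    /** Read every <finding> element out of a hypothetical tool report. */
    public static List<Finding> parse(File xmlReport) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(xmlReport);
        List<Finding> findings = new ArrayList<>();
        NodeList nodes = doc.getElementsByTagName("finding");
        for (int i = 0; i < nodes.getLength(); i++) {
            Element e = (Element) nodes.item(i);
            findings.add(new Finding(e.getAttribute("file"),
                    Integer.parseInt(e.getAttribute("cwe"))));
        }
        // Anything the tool found but never wrote into this XML file
        // (e.g. ZAP's fuzzing results) never reaches this list, so it can
        // never be credited in a scorecard built from it.
        return findings;
    }

    public static void main(String[] args) throws Exception {
        for (Finding f : parse(new File(args[0]))) {
            System.out.println(f.testCase + " -> CWE-" + f.cwe);
        }
    }
}

Swap in a different schema and different element names and you are writing
a new parser from scratch; that is the per-tool cost I am describing.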

*The 2015 OWASP Benchmark Project
<https://www.owasp.org/index.php/Benchmark>, sponsored by the US Department
of Homeland Security (DHS), shows that existing SAST and DAST solutions are
leaving businesses vulnerable to attack. *


How can he conclude that other tools are '*leaving businesses vulnerable
to attack*' when the only thing this tool can do is parse a limited view
of the results that the tool being 'benchmarked' produces?
Take ZAP: does no XML report of fuzzing results mean that ZAP cannot find
SQL injection attacks automatically, simply because ZAP does not write
those findings to an output report? This is so damaging to the rest of the
tools...

*The results of the OWASP Benchmark Project – with its 21,000 test cases –
are dramatic. *
This tool is a beta release. How do we know that the 21,000 test cases are
accurate and have been properly programmed to contain the vulnerabilities
they are meant to test? I would like to find out...

I feel that when a vendor leans so heavily on OWASP to downgrade other
vendors, making claims that are false or have not been verified, it is
very damaging for OWASP.



On Fri, Nov 27, 2015 at 9:10 PM, johanna curiel curiel <
johanna.curiel at owasp.org> wrote:

> I think that OWASP should not be publishing results.
>
> Agree, the person publishing the results is Johanna et al., and with a
> disclaimer: Johanna's opinions do not in any way represent OWASP
> endorsing the tool or not. This initiative is carried out solely by
> Johanna etc...
>
> Fact is that, because of the dependency on an XML output report of the
> findings, I can confidently assert that this tool cannot compare any
> SAST/DAST tools one to one against each other; therefore the claims made
> by Contrast are totally false:
> Contrast dominates SAST & DAST in Speed and Accuracy?
>
> This is so false😂....
>
> http://www.contrastsecurity.com/owasp-benchmark
>
> [image: Inline image 1]
>
>
>
> On Fri, Nov 27, 2015 at 9:00 PM, Josh Sokol <josh.sokol at owasp.org> wrote:
>
>> I really like this idea, Johanna, and it seems in line with Dave's
>> suggestion of having an Advisory Board for the project.  The one thing
>> that I do think we need to steer clear of, however, is publishing the
>> results of the tests conducted with the Benchmark.  If others want to test
>> and publish their personal results, that's not something we can stop, but
>> in an effort to be vendor-neutral, I think that OWASP should not be
>> publishing results.
>>
>> ~josh
>>
>> On Fri, Nov 27, 2015 at 4:16 PM, johanna curiel curiel <
>> johanna.curiel at owasp.org> wrote:
>>
>>> Hi Dave
>>>
>>> >>I don't have licenses to any of these tools and so far, no one has
>>> stepped up and offered to run any of these tools against the Benchmark.
>>>
>>> I think that the Contrast marketing campaign hurt the participation of
>>> a promising project before it could take off.
>>>
>>> For every tool-specific XML output report, you need to create a parser
>>> in order to produce the reports. Without the vendors' collaboration, or
>>> people with licenses to test, you won't get their input.
>>>
>>> As a neutral party with no conflict of interest in this project, I
>>> think we can request licenses from these vendors, with the participation
>>> of other volunteers who have no commercial ulterior motives. I have added
>>> Ali Ramzoo, who is also part of the OWASP Research initiative.
>>>
>>> We could indeed:
>>>
>>>    - Promote that the project is under a neutral research initiative
>>>    - Ask the vendors for licenses
>>>    - Deploy the tools in a VM we can all have access to
>>>    - Verify that the tools can produce an XML output report (if not,
>>>    the results cannot be parsed)
>>>    - Discuss our findings with the vendors privately before publishing
>>>    them
>>>    - Be very conscious that if the XML report does not contain all the
>>>    findings of the tool (as in the case of ZAP with fuzzing), we need to
>>>    mention this very clearly. Otherwise we can hurt the reputation of
>>>    the tool.
>>>
>>>
>>> This is how I can help this project and try to create a neutral, clean
>>> view of a tool that I believe has potential, but it needs to shake off
>>> all the publicity around Contrast.
>>>
>>> Regards
>>>
>>> Johanna
>>>

