[Owasp-board] Jeff doing marketing for Contrast using Benchmark

Josh Sokol josh.sokol at owasp.org
Fri Nov 27 19:13:54 UTC 2015


Thank you for the links to those articles.  The first one discusses the
strengths and weaknesses of the different methods of evaluating
applications for vulnerabilities.  The section on the Benchmark seems
wholly appropriate to me.  It reads like an excellent description of what
the project is designed to do.  I see some metrics in there about which
tools are more effective on which types of vulnerabilities, but I don't
see him straight up saying "The OWASP Benchmark proves that Contrast is
better".  These seem like statements based on some level of testing and
research.  Honestly, I don't see any OWASP brand abuse in that article.
Whether it's in good taste or not at this stage in the project is
certainly debatable, but if you look at the brand usage guidelines (
https://www.owasp.org/index.php/Marketing/Resources#tab=BRAND_GUIDELINES),
I don't see any violations.  We need to govern to policy here, which is
why Paul and Noreen are evaluating changes to the guidelines and our
enforcement policies to make abuse more difficult.

The second article is a competing vendor's reaction to the first.  He
makes some good points about the issues with the Benchmark, but he also
says that he hopes it will be improved over time, and Dave has committed
to that.  What I don't see is the vendor saying "...and Veracode has
committed resources to help make the Benchmark more accurate across all
tool sets".  The Benchmark page is pretty clear that it does its best to
provide a benchmark without working exactly like a real-world application.
Maybe some more disclaimer text about where the project stands today would
be in order to validate some of Chris' concerns, but I hardly see this as
"brand abuse" or a reason to demote the project.

Please consider that I have spoken with both Dave and Jeff on this topic
and read much of the discussion around it before formulating my opinion.
I doubt that you have done the same, so I'm not sure how you can claim
that you have researched the issues and all parties involved when you
haven't even spoken with the two people whom you are accusing of
impropriety.  I have no bias here.  I am simply speaking with the
individuals involved, looking at the current OWASP policies and
guidelines, and helping to determine our next steps.

~josh

On Fri, Nov 27, 2015 at 12:08 PM, johanna curiel curiel <
johanna.curiel at owasp.org> wrote:

> Josh, also take the time to read the reaction of Veracode
>
> Jeff doing marketing...
>
> https://www.veracode.com/blog/2015/09/no-one-technology-silver-bullet
>
> This week we’re all treated to watch this spectacle play out in the pages
> of Dark Reading, loosely disguised as a discussion about a new industry
> benchmark. While vendors sling arrows at each other, the benchmark itself
> isn’t getting much attention and I think it would benefit us all to focus
> on what’s important here: the benchmark
> <https://www.owasp.org/index.php/Benchmark>.
>
> .....
>
> If you haven’t been following the drama, over the past few days, the
> general manager of HP’s Fortify division, Jason Schmitt, and the CTO and
> Co-founder of Contrast Security, Jeff Williams, have been in a tit-for-tat
> argument over this question. In a post
> <http://www.darkreading.com/vulnerabilities---threats/why-its-insane-to-trust-static-analysis/a/d-id/1322274?> published
> yesterday, Williams points to a new benchmark from OWASP as a good way to
> objectively evaluate the strengths and weaknesses of different application
> security tools.
>
> *I have a concern with the OWASP benchmark scoring as well. I don’t agree
> with the scoring process where the score is the true positive rate minus
> the false positive rate (score = TP% - FP%).  It is much more important
> to be able to detect a vulnerability than to reject a false positive, to
> a point.  I am going to recommend to OWASP that TP% and FP% be reported
> and not combined into a final score.  This way there is more information
> presented, and customers can make up their own minds about the FP rate
> their risk posture and resources can tolerate.  For instance, if a test
> has a TP% of 65% and an FP% of around 35%, instead of just comparing a
> combined score of 30 across test results, look at both numbers.  That
> paints a more realistic picture of how a testing technology will perform.*
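
To make that scoring concern concrete, here is a minimal sketch in Python
(hypothetical tool numbers, not real Benchmark results, assuming the simple
score = TP% - FP% formula described above) of how two tools with the same
combined score can behave very differently:

# A minimal sketch (hypothetical numbers, not real Benchmark results)
# contrasting the combined score with reporting TP% and FP% separately.

def combined_score(tp_rate, fp_rate):
    # Benchmark-style score: true positive rate minus false positive rate.
    return tp_rate - fp_rate

# Two hypothetical tools with the same combined score but very different
# behavior: one finds far more issues and is also far noisier.
tools = {
    "Tool A": {"tp": 65.0, "fp": 35.0},  # aggressive: high detection, high noise
    "Tool B": {"tp": 30.0, "fp": 0.0},   # conservative: fewer findings, no noise
}

for name, rates in tools.items():
    score = combined_score(rates["tp"], rates["fp"])
    print(f"{name}: TP={rates['tp']:.0f}%  FP={rates['fp']:.0f}%  score={score:.0f}")

# Both tools print score=30, which is the argument above for reporting
# TP% and FP% side by side rather than collapsing them into one number.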
>
> On Fri, Nov 27, 2015 at 1:59 PM, johanna curiel curiel <
> johanna.curiel at owasp.org> wrote:
>
>> Josh
>>
>> Inform yourself better.
>>
>> Is Jeff now being forced to write articles in Dark Reading about the
>> Benchmark and Contrast?
>>
>>
>> http://www.darkreading.com/vulnerabilities---threats/why-its-insane-to-trust-static-analysis/a/d-id/1322274
>>
>> [image: Inline image 2]
>>
>
>