[Owasp-leaders] [Owasp-board] OWASP Benchmark project - potential conflict of interest

Justin Searle justin at meeas.com
Wed Dec 2 00:20:30 UTC 2015


As a neutral third party, let me try to answer some of the questions
people have raised.

Ian, since the Benchmark project is a tool, not a standard, comparing
it to NIST/ISO would not be correct.  Our document projects like the
Top 10 and the Testing Guide are much more aligned with NIST/ISO.  The
Benchmark tool simply pretends to be a web application and evaluates
how well various testing tools identify the vulnerabilities it
contains.  Think WebGoat, but with internal logic that scores how a
tool performs against it.
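To make the idea concrete, here is a minimal sketch of that kind of scoring: a fixed set of test cases whose true vulnerability status is known in advance, compared against what a tool reported. The test-case names, fields, and the true-positive-rate minus false-positive-rate formula are my own illustration of the general approach, not necessarily Benchmark's exact format or math.

```python
# Hypothetical Benchmark-style scoring sketch (illustrative names, not
# Benchmark's real test-case format).  Each case is either a real
# vulnerability the tool should flag, or safe code it should ignore.
expected = {
    "TestCase001": True,   # real vulnerability
    "TestCase002": False,  # safe code; flagging it is a false positive
    "TestCase003": True,
    "TestCase004": False,
}

# What the tool under test actually flagged.
reported = {"TestCase001", "TestCase004"}

tp = sum(1 for case, vuln in expected.items() if vuln and case in reported)
fp = sum(1 for case, vuln in expected.items() if not vuln and case in reported)
total_vuln = sum(1 for vuln in expected.values() if vuln)
total_safe = len(expected) - total_vuln

tpr = tp / total_vuln   # fraction of real vulnerabilities found
fpr = fp / total_safe   # fraction of safe cases wrongly flagged
score = tpr - fpr       # rewards finding real bugs, penalizes noise
print(f"TPR={tpr:.2f} FPR={fpr:.2f} score={score:.2f}")
```

The point is that, because the fake application's vulnerabilities are known ahead of time, the scoring is completely automatic, with no human judgment about whether a finding was real.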

Kostas, nowhere in the OWASP pages for Benchmark nor in its source
code do I see anything about publishing scores for vendors.  It is
simply a project to test how effective testing tools are.  I think of
it as similar to how ZAP scores a web application by reporting how
many vulnerabilities it has found.

Nikola, as mentioned above, the Benchmark project is a tool that
evaluates how effectively an assessment application works.  Looking at
the Benchmark code and the statements by Psiinon, I personally don't
see how this is any different from the various OWASP testing tools
like ZAP scoring how vulnerable commercial web applications are.

Tony, I don't see any logic in forcing multi-leader controls on a
project.  OSS is, and always will be, a meritocracy.  If someone is
capable and interested, they will start to contribute to the project,
and if their contributions are great enough that the project leader
agrees the individual shares the same vision, or perhaps an even
greater one, then that person is usually promoted to co-lead the
project.  Arbitrarily forcing another person to co-lead a project is
completely contrary to the OSS model and does nothing for the health
of an OSS project.  If we are worried that Benchmark is skewed toward
Contrast, any of us can evaluate it through use or through code review
to verify whether there is any truth to that claim.  To this point, I
don't see anyone stepping forward to point out real issues of that
kind in the Benchmark codebase.  The only thing I see is people
concerned that Contrast's tool scores higher than all the other,
unnamed tools in the marketing reports.

On the surface, I agree with Kostas that I feel I'm being played for
an idiot, since I didn't see anywhere in their report that they were
associated with Benchmark's creation, but that reflects on Contrast
and doesn't automatically imply that Benchmark is tainted.  As for why
Contrast's tool scores higher than SAST and DAST tools, I'd expect to
see similarly high results from ANY RASP or IAST tool run against
Benchmark.  As a penetration tester of 15 years, I never expect a SAST
or DAST application to find more than 30% of the vulnerabilities in
any of my reports.  By merging these technologies, it makes sense that
they are faster and more accurate than a standalone SAST or DAST tool.
That is why IBM and HP have been trying to fill out their suites of
DAST and SAST toolsets.  Alone, each paints only a partial picture.

Everyone, to simplify this mess, doesn't it make more sense to treat
these as two separate issues?  The first is evaluating Benchmark to
see if it really is tainted or skewed toward Contrast's tool.  So far
I see nothing substantial indicating there is a problem here, but if
someone can find something, I'd be the first to want to hear about it.
The second issue is OWASP's legal obligation to defend its trademark
against ANY company misusing its brand, which it seems the Board has
already been addressing; Contrast has acknowledged an issue and agreed
to address it, which of course should be followed up on by the Board.

As a side note, I really hope this incident does not discourage Dave
from continuing to develop Benchmark as OSS.  If the general OWASP
body decides it does not want to be associated with the Benchmark
tool, so be it.  However, there is a HUGE deficit in effective ways to
evaluate the effectiveness of testing tools.  There is currently no
good way to automatically benchmark a pair of DAST tools, or a pair of
SAST tools, or, even harder yet, a DAST tool against a SAST tool.  I
see no evidence that Benchmark's goal is to publish scores.  If they
were to do that, I would be the first to stand up against it.  But
having an open-source tool to benchmark the effectiveness of multiple
tools in a purely automated way is something I've dreamed of having
for myself, for my clients, and for the security community as a whole.
Until I see real indicators in the Benchmark code proving the tool is
skewed toward Contrast's tool over all others, I vote to keep
Benchmark as an OWASP project, and to ask the Board to continue to
work with Contrast to ensure OWASP and the Benchmark project are not
being misused by the company.

Dave / Jeff, sorry about slaughtering your company name in my last
post.  As you are reworking some of the reports and pages on your
website, I think it would be ethical to mention, perhaps even proudly
proclaim, the part you and/or your owners have played in making
Benchmark.  Thank you for giving us such a wonderful start on a tool.
The best part is that if any of us finds an issue we don't like, or
something that skews the tool in a direction we disagree with, any of
us can fork the tool and make it fit our needs, which is the reason I
contribute to OSS in general and OWASP in particular.  One other
feature request for Benchmark, Dave: I'd love to see a way to use it
to test actual penetration testers as well, which would require a
method for a person to enter his "report" by hand.

In summary, please don't throw the baby out with the bathwater.  We
need such a tool.  I need such a tool.  Let's just work to make it
better.


Justin Searle
Managing Partner - UtiliSec
+1 801-784-2052
justin at utilisec.com
justin at meeas.com
