[OWASP-benchmark-project] Project Questions
dave.wichers at owasp.org
Mon Oct 12 19:54:08 UTC 2015
Sorry for the long delay here. Between the list email issues and then my
crazy week last week, I haven't been able to respond until now.
Here are my answers to your questions:
1. You aren't the first to bring this up. Jim Manico has asked about this as
well, and it's a totally fair question.
The 11 test areas are just a start, so it's certainly not complete. We just
had to start somewhere so this is what we started with. We started with
vulnerability areas that static tools could find, because the 1.1 release
was purely static. The effort to develop the 1.2beta was purely to make the
existing test cases runnable and survive a scan, so we didn't change the
vulnerability areas at all.
After we get the 1.2 out of Beta we plan to add more test areas. To get it
out of Beta, I have a list of about 5 things to make the app either more
realistic or better able to survive external scans, so we can get more
accurate results from the dynamic tools.
For the next test areas, we are thinking about adding things like:
* Web Services with frameworks and XML/JSON parsers like Spring and Jackson
And more I'm sure, but I haven't thought about this that hard yet. I'm
looking for input, so if anyone on the list has suggestions or (even better)
test case contributions, please send them in!!
I've already asked Simon Bennetts to think up what test case areas he'd like
to see next. Since he's leading the project with (one of?) the best open
source dynamic scanners, he's a great resource for this, of course.
2. Objectivity/free from conflicts of interest.
Another totally fair question, and again not the first time this has been
brought up. I've answered this question before in other venues and am happy
to do so again.
I clearly have a vested interest in the success of Contrast, and I've
been very open about that. Aspect Security decided to develop the Benchmark
because we have an entire Application Security Automation practice
(http://www.aspectsecurity.com/application-security-automation) that does
LOTS of commercial work with many different vendors' tools, including IBM,
HP, PortSwigger, Contrast, OWASP and others, and we decided that we really
needed to understand the strengths and weaknesses of different tools through
real testing, not just our professional opinion.
And we felt that developing this through OWASP would allow everyone to
benefit from such research, and hopefully would spark the development of a
large community around this effort so we can share the effort as well as the
benefits.
I think the best way to avoid any actual conflicts of interest is for such a
community to grow and thrive, with contributions and observation from MANY
members of the OWASP community, including vendors, consultants, end users,
and more.
In answer to your next 5 questions:
1 and 2: Contributors: Aspect Security developed almost all the code to
date; the primary authors are me, Juan Gama, and Nick Sanidas.
I'm the release manager, so I push all the changes to OWASP, which is why you
only see my name in GitHub for the most part.
We have received significant feedback from vendors for tools such as ZAP,
Arachni, Fortify, and others and have fixed numerous issues in the Benchmark
based on their feedback to either fix bugs, or make the Benchmark more
realistic for dynamic scanners.
We have also accepted the small number of pull requests that have come in,
mostly in areas of improving the parsers for specific tools, like SonarQube
and FindSecBugs. Contrast Security contributed a small amount of code to
facilitate running Contrast against the Benchmark, which we incorporated as
well.
We would LOVE for other vendors to provide us with integration code to make
it easy to use their tools against the Benchmark. We've done what we can and
some vendors have helped us, but we've had to do most of this on our own. We
have started this page:
https://www.owasp.org/index.php/Benchmark#tab=Tool_Scanning_Tips to document
the techniques for getting the best results from each tool. We'd be very
happy if all the vendors (or other people who are expert with these tools)
would update this page with their best tips/tricks/configuration steps to
get the best/most accurate results for each tool.
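To give a rough idea of what such an integration contribution looks like: a tool-specific parser mostly just normalizes a scanner's raw findings onto the Benchmark's test case names and CWE numbers so they can be compared against the expected results. This is a minimal illustrative sketch, not the project's actual parser code, and the input findings format here is invented; real parsers read each tool's own XML or JSON report:

```python
# Hypothetical sketch of a tool-result parser's job: map raw findings onto
# Benchmark test case names plus CWE numbers, then classify each expected
# test case. The "findings" structure below is invented for illustration.

def parse_findings(findings):
    """Normalize raw findings into {test_name: set_of_reported_cwes}."""
    results = {}
    for f in findings:
        # Benchmark test cases are source files named like "BenchmarkTest00001"
        name = f["file"].rsplit("/", 1)[-1].removesuffix(".java")
        if name.startswith("BenchmarkTest"):
            results.setdefault(name, set()).add(f["cwe"])
    return results

def compare(actual, expected):
    """Classify each expected test case as TP/FP/TN/FN.

    `expected` maps test name -> (cwe, is_real_vulnerability), mirroring
    the idea of the Benchmark's expected-results data.
    """
    counts = {"tp": 0, "fp": 0, "tn": 0, "fn": 0}
    for test, (cwe, is_real) in expected.items():
        flagged = cwe in actual.get(test, set())
        if is_real:
            counts["tp" if flagged else "fn"] += 1
        else:
            counts["fp" if flagged else "tn"] += 1
    return counts
```

For example, a tool that flags SQL injection (CWE 89) in test 1 but stays quiet on test 2 would score one true positive and one true negative if test 1 is a real vulnerability and test 2 is a safe look-alike.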
We would LOVE for others to contribute and several vendors, including
Synopsys and Rapid7 approached me at OWASP AppSec USA and asked if they
could contribute and I said yes, of course. We want as many people as
possible to contribute with both code and feedback.
3. How to address the appearance of conflict of interest.
I think the best way is to get more people involved and I welcome such
involvement. There are probably a dozen or more people I've been
corresponding with over the past few months and we need to get that
correspondence to be more public by using the project mailing list AND more
importantly, we need to get more people/organizations involved.
4. Other IAST tools
We'd love to add more. We actually want to add as many tools as possible,
not just IAST. Synopsys has Seeker, and they approached me at AppSec USA as
I said. So hopefully they will join the project and we will soon see Seeker
added to the list, and also see significant contributions from them in terms
of test cases, or added complexity through the addition of frameworks,
parsers, or whatever will make the Benchmark test cases more diverse and
realistic. I am not actually aware of any other IAST vendors, but please
let me know if they exist so I can reach out to them. I am aware that a
number of dynamic scanners have IAST agents, like WebInspect for example,
and we want to add those too.
A RASP (Runtime Application Self Protection) vendor approached me after the
talk I gave at AppSec USA and asked how the Benchmark could be used with
those types of products. That is a great question that we don't currently
have an answer for. If anyone has any great ideas on this, please let me
know. Other RASP vendors have contacted me directly with similar questions.
We'd love for the Benchmark to provide value to them, and I'm sure it can.
What we don't yet have is an idea of how to use the Benchmark to
score RASP products, similar to how we score vulnerability detection
products. That's a really hard problem, in my opinion.
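For context on why RASP scoring is a different problem: for detection tools, the Benchmark scorecards essentially reduce to true and false positive rates, with a tool's score being its true positive rate minus its false positive rate, so a tool that flags everything scores no better than one that flags nothing. A minimal sketch of that arithmetic (illustrative only; the real scorecard generator does this per vulnerability category):

```python
def benchmark_score(tp, fn, fp, tn):
    """Benchmark-style score: true positive rate minus false positive rate.

    A tool that flags every test case gets TPR = FPR = 1.0 and scores 0,
    the same as a tool that flags nothing (TPR = FPR = 0.0).
    """
    tpr = tp / (tp + fn)   # detection rate on real vulnerabilities
    fpr = fp / (fp + tn)   # noise rate on safe (non-vulnerable) test cases
    return tpr - fpr

# A tool that finds 90% of real issues while flagging 30% of safe tests
# scores roughly 0.6:
score = benchmark_score(tp=90, fn=10, fp=30, tn=70)
```

A RASP product blocks attacks at runtime rather than reporting findings per test case, so these categories don't map onto it cleanly, which is what makes the scoring question hard.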
5) Diverse body with multiple points of view -
We definitely need this and do not have it yet. We need to develop a
community around this with a diverse set of both leaders and contributors.
We've been trying to get this going, and it's starting to go in that direction,
but anything the OWASP community and/or board can do to make that happen
faster and more effectively would be fantastic.
Thanks for bringing up these questions and I agree that we need to address
this concern as part of the effort to make this project the wildly
successful project I think it can be.
P.S. Simon brought up a similar thread on the leaders list, so I'm cc'ing him
here.
From: Michael Coates <michael.coates at owasp.org>
Date: Monday, October 5, 2015 at 9:20 PM
To: <owasp-benchmark-project at lists.owasp.org>
Subject: [OWASP-benchmark-project] Project Questions
OWASP Benchmark List,
I've heard more about this project and am excited about the idea of an
independent perspective of tool performance. I'm trying to understand a few
things to better respond to questions from those in the security & OWASP
communities.
In my mind there are two big areas for consideration in a benchmark process.
1. Are the benchmarks testing the right areas?
2. Is the process for creating the benchmark objective & free from conflicts
of interest?
I think as a group OWASP is the right body to align on #1.
I'd like to ask for some clarifications on item #2. I think it's important
to avoid actual conflict of interest and also the appearance of conflict of
interest. It's obvious why we must avoid the former; the latter is
critical so that others have faith in the tool, the process, and the outputs
of the process when viewing or hearing about the project.
1) Can we clarify whether other individuals have submitted meaningful code
to the project?
Nearly all the code commits have come from 1 person (project lead).
2) Can we clarify the contributions of others and their represented
organizations?
The acknowledgements tab lists two developers (Juan Gama & Nick Sanidas),
both of whom work at the same company as the project lead. It seems other
people have submitted small amounts of material, but overall it seems all
development has come from the same company.
3) Can we clarify in what ways we've mitigated the potential conflict of
interest and also the appearance of a conflict of interest? This seems like
the largest blocker for widespread acceptance of this project and its
results.
The project lead and both of the project developers work for a company with
very close ties to one of the companies that is evaluated by this project.
Further, it appears that company is performing very well on the benchmark.
4) If we are going to list tool vendors then I'd recommend listing multiple
vendors for each category.
The tools page only lists 1 IAST tool. Since this is the point of the
potential conflict of interest, it is important to list numerous IAST tools.
5) Diverse body with multiple points of view
There is no indication that multiple stakeholders are present to review and
decide on the future of this project. If they exist, a new section should be
added to the project page to raise awareness. If they don't exist, we should
reevaluate how we are obtaining an independent view of the testing process.
Again, I think the idea of the project is great. From my perspective,
clarifying these questions will help ensure the project is not only
objective, but also perceived as objective by someone reviewing the
material. Ultimately this will contribute to the success and growth of the
project.