[Owasp-leaders] OWASP Benchmark Project Releases Dynamic Scanning (DAST) Support with 1.2beta
jim.manico at owasp.org
Mon Aug 17 02:30:55 UTC 2015
This seems like a very important effort to me, and I'm glad you are
taking it on. There is a very large number of tests; however, they seem
spread over a very small set of AppSec categories:
Command Injection: 2708
Weak Cryptography: 1440
Weak Hashing: 1421
LDAP Injection: 736
Path Traversal: 2630
Secure Cookie Flag: 416
SQL Injection: 3529
Trust Boundary Violation: 725
Weak Randomness: 3640
XPATH Injection: 347
XSS (Cross-Site Scripting): 3449
I'm worried about this. This limited set of subjects seems far from a
complete way to evaluate DAST tools. Also, 416 test cases on the secure
cookie flag alone seems like a heavy concentration of tests on a *very*
narrow topic. I can think of hundreds of other *types* of tests for
authorization and authentication alone.
Do you have plans to dramatically increase the *types* of tests?
The high concentration of tests in a very narrow band of topics makes me
worry about the accuracy of this benchmark. Is that a fair concern?
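For context on what these categories test, several of them reduce to simple code-level patterns. As a hedged illustration (my own sketch, not an actual Benchmark test case), the "Weak Hashing" and "Weak Randomness" categories boil down to constructs like these in Java:

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Random;

public class WeakCryptoSketch {

    // Weak Hashing: MD5 is the classic construct a scanner should flag;
    // SHA-256 is the corresponding "safe" variant.
    static byte[] weakHash(byte[] data) throws Exception {
        return MessageDigest.getInstance("MD5").digest(data);     // should be flagged
    }

    static byte[] strongHash(byte[] data) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(data); // should pass
    }

    // Weak Randomness: java.util.Random is predictable and unsuitable for
    // security tokens; SecureRandom draws from a cryptographic source.
    static int weakToken()   { return new Random().nextInt(); }       // should be flagged
    static int strongToken() { return new SecureRandom().nextInt(); } // should pass

    public static void main(String[] args) throws Exception {
        System.out.println(weakHash("x".getBytes()).length);   // 16 (MD5 is 128-bit)
        System.out.println(strongHash("x".getBytes()).length); // 32 (SHA-256 is 256-bit)
    }
}
```

A benchmark needs both the flagged and the safe variant of each pattern so it can measure false positives as well as false negatives.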
On 8/15/15 2:57 PM, Dave Wichers wrote:
> I announced the OWASP Benchmark project
> (https://www.owasp.org/index.php/Benchmark) a few months ago with the 1.1
> release which supported analysis by static analysis tools (SAST) as the
> first major release. As I said in my initial email:
> "The OWASP Benchmark for Application Security Automation (OWASP Benchmark)
> is an open test suite designed to help organizations and practitioners
> evaluate the speed, coverage, and accuracy of automated application
> security testing tools and services."
> I'm now proud to announce the 1.2beta release, which is a fully running,
> exploitable web application ready for scanning by dynamic analysis (DAST) tools.
> Version 1.1 of the Benchmark has over 20,000 test cases, each an
> individual Java Servlet. We decided to make the 1.2beta version MUCH
> smaller (slightly under 3,000 tests) because of the length of time it
> takes DAST tools to scan the Benchmark, and because they frequently run
> out of memory, and sometimes database space. We are releasing this
> smaller version to give testers a first look and so they can provide
> us feedback more quickly.
> If anyone in the OWASP community has access to a DAST tool beyond ZAP and
> Burp Pro (which we have covered), PLEASE run a scan against the Benchmark
> with that tool, and send me the results file so we can build a scorecard
> generator for it. According to Shay Chen's Web Application Vulnerability
> Scanner Evaluation Project (WAVSEP) -
> ml, there are over 50 different free and commercial web application
> scanners out there. We'd LOVE to add support for ALL of them to the
> Benchmark.
> Now that we have support for both SAST and DAST in the Benchmark, we need
> to start gathering tool results so we can truly compare how these tools do
> against each other not only within their category, but across categories
> as well.
> I'm going to be presenting this tool at OWASP AppSec USA 2015 on Thursday
> from 3-4. Please come join me to learn more about the project and get
> involved!
> Thanks, Dave
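Comparing how tools do against each other usually comes down to per-category true-positive and false-positive rates. As a hedged sketch (my own illustration of the idea, not the project's actual scorecard generator), a score of the form TPR minus FPR rewards tools that find the real flaws without also flagging the safe test cases:

```java
public class ScorecardSketch {

    // Youden-style score: 1.0 is a perfect tool, 0.0 is no better than
    // flagging everything (or nothing). Tallies would come from matching a
    // tool's results file against the Benchmark's expected answers.
    static double score(int truePos, int falseNeg, int falsePos, int trueNeg) {
        double tpr = (double) truePos / (truePos + falseNeg);  // true-positive rate
        double fpr = (double) falsePos / (falsePos + trueNeg); // false-positive rate
        return tpr - fpr;
    }

    public static void main(String[] args) {
        // Hypothetical tool: finds 90 of 100 real flaws, flags 30 of 100 safe cases.
        System.out.printf("%.2f%n", score(90, 10, 30, 70)); // prints 0.60
    }
}
```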
> OWASP-Leaders mailing list
> OWASP-Leaders at lists.owasp.org