[OWASP-benchmark-project] Proposal for handling time taken for running test cases against DAST scans
Kevin W. Wall
kevin.w.wall at gmail.com
Thu Dec 10 06:43:47 UTC 2015
I was just reading more closely on the wiki page, under the 'Test Cases'
tab, where it mentions that the number of test cases has been reduced
from over 20k to ~3k (specifically, 2740). The wiki page states:
Version 1.0 of the Benchmark was published on April 15, 2015
and had 20,983 test cases. On May 23, 2015, version 1.1 of
the Benchmark was released. The 1.1 release improves on the
previous version by making sure that there are both true
positives and false positives in every vulnerability area.
Version 1.2beta was released on August 15, 2015.
Version 1.2 and forward of the Benchmark is a fully
executable web application, which means it is scannable by
any kind of vulnerability detection tool. The 1.2beta has
been limited to slightly less than 3,000 test cases, to make
it easier for DAST tools to scan it (so it doesn't take so
long and they don't run out of memory, or blow up the size of
their database). The final 1.2 release is expected to be at
least 5,000 or possibly 10,000 test cases, after we determine
that the popular DAST scanners can handle that size.
I was going to suggest that perhaps one way to deal with the
complexity and lengthy run times of DAST would be to allow whoever
is configuring the tests to have a "selector" that lets one
or more CWEs be chosen, and then deploy only those test cases
matching the selected CWEs into the mix for the DAST testing.
(SAST should probably be fast enough to run against all test cases.)
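To make the idea concrete, here is a rough sketch of what such a selector could look like. The function and the test-case-to-CWE mapping below are hypothetical, purely for illustration; the real Benchmark records each test case's expected CWE in its own metadata, which the selector would read instead:

```python
# Hypothetical sketch of a CWE "selector" for deploying a subset of test cases.
# The mapping of test case names to CWE numbers below is made up for
# illustration; the real Benchmark keeps expected CWEs in per-test metadata.

def select_test_cases(test_cases, selected_cwes):
    """Return only the test cases whose CWE is in selected_cwes."""
    selected = set(selected_cwes)
    return {name: cwe for name, cwe in test_cases.items() if cwe in selected}

# Example: deploy only SQL injection (CWE-89) and XSS (CWE-79) tests.
cases = {
    "BenchmarkTest00001": 79,   # XSS
    "BenchmarkTest00002": 89,   # SQL injection
    "BenchmarkTest00003": 327,  # weak crypto
}
subset = select_test_cases(cases, [79, 89])
```

A DAST run configured this way would only ever see the selected subset, which is what keeps scan time and memory bounded.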
Note that I'm not volunteering to write the code, but this goes
back to my first question about contributing to the test cases.
If we are not going to put all 20k of them out there, then it would be
difficult to tell if some of them are redundant. And if there is a desire
to put them all out there (which I believe should be the ultimate goal),
then we need some better way to organize them for people to contribute,
e.g., making 'cwe##' subdirectories for the test cases and
organizing them that way. (That would also make creating a 'selector'
a bit easier.)
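The subdirectory scheme could be sketched as a simple path-mapping function. The 'cwe##' naming and the .java suffix are assumptions for illustration, not the Benchmark's actual layout:

```python
import os

def target_path(test_name, cwe, root="testcases"):
    """Map a test case to a per-CWE subdirectory, e.g. testcases/cwe89/Foo.java.

    The 'cwe##' directory convention and flat root are hypothetical; they
    just illustrate how a selector could find all tests for a given CWE
    by listing one directory.
    """
    return os.path.join(root, f"cwe{cwe}", f"{test_name}.java")
```

With that layout, selecting the test cases for one CWE reduces to listing a single directory, and contributors know exactly where a new test case belongs.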
Just my $.02,
NSA: All your crypto bit are belong to us.