[OWASP-benchmark-project] Proposal for handling time taken for running test cases against DAST scans

Kevin W. Wall kevin.w.wall at gmail.com
Sun Dec 13 21:26:33 UTC 2015


Okay, that's one reason why I was looking for a wiki page or other docs
describing the steps to define new test cases. That wasn't obvious in the 2
or 3 test cases I looked at. Looking forward to seeing the details.

-kevin
Sent from my Droid; please excuse typos.
On Dec 13, 2015 4:18 PM, "Dave Wichers" <dave.wichers at owasp.org> wrote:

> Ah. But it isn’t any extra work for you. That’s the beauty of it. You just
> create a sink object with some XML describing it, and it gets thrown into
> the mix and automatically included in a bunch of test cases that each have
> a UI, source, propagator, and your new sink. The propagator doesn’t even
> need to connect to your new sink (crypto test) code. If that’s the case,
> then the source/propagator portion of the test is a red herring, and the
> only part that matters is the sink.
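>
> Purely for illustration (the element names below are made up for this
> sketch, not the Benchmark's actual sink schema, so check an existing sink
> definition for the real format), a new crypto sink might be described
> something like:
>
>   <sink>
>     <name>weak-cipher-sink</name>          <!-- hypothetical sink id -->
>     <cwe>327</cwe>                         <!-- CWE-327: broken/risky crypto -->
>     <import>javax.crypto.Cipher</import>   <!-- import the sink code needs -->
>     <body>
>       <!-- Java that consumes the (possibly tainted) input value -->
>       Cipher c = Cipher.getInstance("DES/ECB/PKCS5Padding");
>     </body>
>     <vulnerable>true</vulnerable>          <!-- true positive vs. false positive -->
>   </sink>
>
> The generator then mixes a definition like that with the existing UI,
> source, and propagator blocks to emit the actual test case code.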
>
> And I think it’s important to score SAST, DAST, and other tools against
> the same tests so the community learns what they can and cannot do.
> Sometimes I feel like only the professionals truly understand those
> differences. The Benchmark will help illustrate that more clearly.
>
> -Dave
>
> From: Kevin Wall <kevin.w.wall at gmail.com>
> Date: Sunday, December 13, 2015 at 2:13 AM
> To: Dave Wichers <dave.wichers at owasp.org>
> Cc: OWASP Benchmark Project <owasp-benchmark-project at lists.owasp.org>
> Subject: Re: [OWASP-benchmark-project] Proposal for handling time taken
> for running test cases against DAST scans
>
> Well, look at it from a test case writing PoV. If you know that something
> is going to only be detectable by SAST or IAST, why bother to go the extra
> mile to pass stuff back and forth over HTTP when DAST has zero chance of
> detecting it because it doesn't have visibility to the 'source' (as in
> "source / sink")? (I.e., there are times the black box view is simply not
> sufficient.) A lot of crypto test cases that I can conceive of are like
> this. Building a simple test class for SAST tools to scan is relatively
> easy; creating the additional test harness code just so DAST gets to play
> too (even though it can do no better than randomly guess if an issue is
> present) adds work and seems like a waste of time and resources to me.
> Sure, if there were *any* chance a black box test could detect it, it
> would be worth making the extra effort, but there will always be cases
> where only a white box or gray box test approach has any chance of
> detecting some specific issues.
>
> I _understand_ your point / philosophy, I just don't agree with it
> completely.
>
> -kevin
> Sent from my Droid; please excuse typos.
> On Dec 13, 2015 12:41 AM, "Dave Wichers" <dave.wichers at owasp.org> wrote:
>
>> All the test cases are for both. And the reason we don’t have some for
>> SAST and some for DAST is so we can make direct comparisons of SAST against
>> DAST tools, all with a single Benchmark. To me, that’s one of the key
>> features of the Benchmark: it is the first benchmark to allow such
>> direct comparisons of SAST to DAST to IAST to any other app vuln detection
>> tool, because it’s a real runnable app.
>>
>> Some will clearly not be detectable by DAST and others will (eventually)
>> not be detectable by SAST (most likely). And I don’t want to say in advance
>> what types of tools can find what, as vendors are getting more and more
>> clever with the use of hybrid analysis, instrumentation, and other cool
>> things, like involving other servers in the testing, etc.
>>
>> -Dave
>>
>> From: Kevin Wall <kevin.w.wall at gmail.com>
>> Date: Saturday, December 12, 2015 at 10:26 PM
>> To: Dave Wichers <dave.wichers at owasp.org>
>> Cc: OWASP Benchmark Project <owasp-benchmark-project at lists.owasp.org>
>> Subject: Re: [OWASP-benchmark-project] Proposal for handling time taken
>> for running test cases against DAST scans
>>
>> Dave,
>>
>> One useful extension would be to add something to the testcase###.xml
>> files that would distinguish whether a particular test case is intended
>> for SAST, DAST, or both. That would allow us to drastically increase the
>> # of SAST tests while keeping the longer-running DAST tests to a minimum.
>> Alternatively, they could be separated into distinct sub-directories.
>> Either way, that seems like a useful ability, because some 'sources' (as
>> in source-to-sink) don't really apply to (as in "are not visible to")
>> (web-based) DAST tools, since those tools only have a web / HTTP view.
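>>
>> For example (just a sketch; the <intended-for> element and the values
>> shown are invented here, not part of the current testcase XML schema),
>> each testcase###.xml could carry something like:
>>
>>   <test-case number="00042">
>>     <category>crypto</category>
>>     <cwe>327</cwe>
>>     <vulnerability>true</vulnerability>
>>     <!-- hypothetical new element: which kinds of tools this case targets -->
>>     <intended-for>SAST</intended-for>   <!-- or DAST, or BOTH -->
>>   </test-case>
>>
>> The build / deploy step (or the scorecard generator) could then filter on
>> that element to produce a DAST-sized subset.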
>>
>> -kevin
>> Sent from my Droid; please excuse typos.
>> On Dec 12, 2015 3:02 PM, "Dave Wichers" <dave.wichers at owasp.org> wrote:
>>
>>> Kevin,
>>>
>>> The test cases are generated from 3 primary building blocks:
>>>
>>> 1) The source of taint (e.g., request.getParameter(), or headers, or
>>> cookies, etc.)
>>> 2) Various kinds of propagation constructs (including none at all).
>>> 3) A sink (e.g., runtime.exec(), or an HTTP response, or an SQL query).
>>>
>>> Currently, there are about:
>>>
>>> 14 sources
>>> Over 100 sinks
>>> ~25 propagators
>>>
>>> Not all of these are compatible with each other. That said, producing ALL
>>> of the combinations would result in 100K or so test cases, but would
>>> involve massive redundancy. That's why a much smaller subset is reasonable
>>> to produce: each building block is already included between 25 and 100 or
>>> more times. And that's with the 2740 test cases.
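>>>
>>> Conceptually (this is a hand-written sketch of the shape of a generated
>>> test case, not the generator's literal output), each source / propagator
>>> / sink combination boils down to something like:
>>>
>>>   // inside an HttpServlet, with the usual javax.servlet.* imports
>>>   public void doPost(HttpServletRequest request, HttpServletResponse response)
>>>       throws ServletException, IOException {
>>>
>>>     // source: tainted data enters from the HTTP request
>>>     String param = request.getParameter("vector");
>>>
>>>     // propagator: the taint is passed along (possibly a no-op or a transform)
>>>     String bar = new StringBuilder(param).toString();
>>>
>>>     // sink: the possibly-tainted value is used, e.g. in an OS command (CWE-78)
>>>     Runtime.getRuntime().exec(bar);
>>>   }
>>>
>>> Swapping in a different source, propagator, or sink yields a new test
>>> case, which is why the raw combination count grows so quickly.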
>>>
>>> I'm OK with generating a bigger set, and in fact, it should naturally grow
>>> as we add more building blocks. So I think starting with 2740 dynamic test
>>> cases is a good balance between having a good / broad set and the time it
>>> takes to run a successful scan against the Benchmark with dynamic tools.
>>>
>>> -Dave
>>>
>>> P.S. We are working on a UI update for the Benchmark that will organize
>>> all the test cases by vulnerability category, and will even break up the
>>> tests within a category so we never have more than 80 test cases in a
>>> single directory in the URL space. This will make it easy for someone to
>>> test just one or a few specific types of vulns with their DAST scanner
>>> rather than having to scan the entire Benchmark each time.
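>>>
>>> As a rough illustration only (the final directory names are still being
>>> worked out), the URL space might end up looking something like:
>>>
>>>   /benchmark/cmdi-00/BenchmarkTest00001
>>>   /benchmark/cmdi-00/BenchmarkTest00002
>>>   ...
>>>   /benchmark/cmdi-01/BenchmarkTest00081    (next slice of <= 80 cmdi tests)
>>>   /benchmark/sqli-00/BenchmarkTest00412
>>>   /benchmark/xss-00/BenchmarkTest01893
>>>
>>> so a DAST scanner can be pointed at a single category, or a single slice
>>> of a category, at a time.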
>>>
>>> On 12/10/15, 1:43 AM, "Kevin W. Wall" <kevin.w.wall at gmail.com> wrote:
>>>
>>> >I was just reading, more closely, the wiki page under the 'Test Cases'
>>> >tab, where it mentions that the # of test cases has been reduced
>>> >from over 20k to ~3k (specifically, 2740). The wiki page states:
>>> >
>>> >    Version 1.0 of the Benchmark was published on April 15, 2015
>>> >    and had 20,983 test cases. On May 23, 2015, version 1.1 of
>>> >    the Benchmark was released. The 1.1 release improves on the
>>> >    previous version by making sure that there are both true
>>> >    positives and false positives in every vulnerability area.
>>> >    Version 1.2beta was released on August 15, 2015.
>>> >
>>> >    Version 1.2 and forward of the Benchmark is a fully
>>> >    executable web application, which means it is scannable by
>>> >    any kind of vulnerability detection tool. The 1.2beta has
>>> >    been limited to slightly less than 3,000 test cases, to make
>>> >    it easier for DAST tools to scan it (so it doesn't take so
>>> >    long and they don't run out of memory, or blow up the size of
>>> >    their database). The final 1.2 release is expected to be at
>>> >    least 5,000 or possibly 10,000 test cases, after we determine
>>> >    that the popular DAST scanners can handle that size.
>>> >
>>> >
>>> >I was going to suggest that perhaps one way to deal with the
>>> >complexity and lengthy run times of DAST would be to allow whoever
>>> >is configuring the tests to have a "selector" that lets one
>>> >or more CWEs be chosen, and then only deploy the test cases
>>> >matching the selected CWEs into the mix for the DAST testing.
>>> >(SAST should probably be fast enough to run against all test cases.)
>>> >
>>> >Note that I'm not volunteering to write the code, but this goes back
>>> >to my 1st question about contributing to the test cases.
>>> >
>>> >If we are not going to put all 20k of them out there, then it would be
>>> >difficult to tell if some of them are redundant. And if there is a
>>> >desire to put them all out there (which I believe should be the ultimate
>>> >goal), then we need some better way to organize them for people to
>>> >contribute, e.g., making 'cwe##' subdirectories for the test cases in
>>> >    src/main/java/org/owasp/benchmark/testcode
>>> >and organizing them that way; a possible layout is sketched below.
>>> >(That would also make creating a 'selector' a bit easier.)
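>>> >
>>> >Something along these lines (the directory names are just a strawman):
>>> >
>>> >    src/main/java/org/owasp/benchmark/testcode/cwe022/   (path traversal)
>>> >    src/main/java/org/owasp/benchmark/testcode/cwe078/   (OS command injection)
>>> >    src/main/java/org/owasp/benchmark/testcode/cwe079/   (XSS)
>>> >    src/main/java/org/owasp/benchmark/testcode/cwe089/   (SQL injection)
>>> >    src/main/java/org/owasp/benchmark/testcode/cwe327/   (weak crypto)
>>> >
>>> >Then the 'selector' could be little more than a build/deploy option that
>>> >copies only the chosen cwe### subdirectories into the deployed webapp.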
>>> >
>>> >Just my $.02,
>>> >-kevin
>>> >--
>>> >Blog: http://off-the-wall-security.blogspot.com/
>>> >NSA: All your crypto bit are belong to us.
>>>
>>>
>>>

