[Owasp-testing] editorial changes to intro

Marco Cova marco.cova at gmail.com
Fri Jul 18 11:06:54 EDT 2008


[I'm forwarding part of the discussion going on with Marco in case somebody
wants to jump in and give feedback]

Marco,

I think this discussion has brought up some good points. I'll answer briefly:

On Thu, Jul 17, 2008 at 6:29 PM, Marco M. Morana
<marco.m.morana at gmail.com> wrote:
> For this reason a taxonomy of tools is important for deciding what to
> test, with which tool, and how
> [...]
> Obviously depending on vulnerabilities
> and root causes, some tools are better at finding issues than others.
> [...]
> The fact remains that a tool given to a monkey will test only as well as
> the tool itself does, while given to an experienced tester it is a way to
> reduce complexity in the analysis and to point to hot spots where issues
> can be assessed in more depth.

I like this discussion: the taxonomy of tools, the fact that tools
perform differently on different test cases (vulnerabilities/targets),
that tool results have to be interpreted by a human, and that tools
allow testers to focus on the really interesting parts (hot spots) of
the system. I think it's a more valuable assessment than generically
saying that "automated tools are actually bad at automatically testing
for vulnerabilities" (which is in the guide and was the target of my
note :-)).

> From a different perspective, testing tools can be referred
> to as "badness-meters", as Gary McGraw calls them: the fact that you come
> out clean does not mean that your software or application is good.

True. However, I have two comments here:
1. If I'm not mistaken, when McGraw/Ranum/etc. talk about
badness-meters, they refer to penetration testing (tools + human), not
just testing tools. And while some issues clearly overlap, I think it's
good to keep the two separate.
2. More importantly, what McGraw/Ranum/etc. (re)discover is that,
generally, testing of any interesting system is not sound. In other
words, when testing you get no false positives, but you accept that
there will be false negatives (there is no guarantee of finding all the
vulnerabilities). Of course, this is relevant to this (testing) guide.
I don't know what the consensus is about this, but I think the real
value of a testing framework is that it removes (or can help remove)
much of the variability inherent in testing (in short, the outcomes
depend less on who is testing).
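
To make the false-positive/false-negative point concrete, here is a toy
sketch (the scanner, the application, and the probe list below are
entirely hypothetical, just to illustrate): a scanner that only fires
known probes reports nothing that isn't a real finding, but a
vulnerability that none of its probes triggers goes unreported.

    # Hypothetical illustration: a "scanner" that knows a fixed list of
    # SQL injection probes. Everything it flags is a true positive, but a
    # flaw outside the list is silently missed (a false negative).
    KNOWN_PROBES = ["' OR '1'='1", "1; DROP TABLE users"]

    def vulnerable_app(user_input):
        """Stand-in for the system under test: it only breaks on a probe
        the scanner does not know about."""
        if user_input == "'/**/OR/**/1=1--":  # evasion variant, not in the list
            return "SQL syntax error"
        return "ok"

    def scan(app):
        findings = [p for p in KNOWN_PROBES if "SQL syntax error" in app(p)]
        return findings          # an empty list does not mean a secure app

    print(scan(vulnerable_app))  # prints [] although the app is flawed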

> these tools test for known vulnerabilities. The issue is with the "unknown",
> or the security issues that are not tested.

Just playing devil's advocate here. Recently, there has been some
interesting research (at least in the academic camp, with which I'm
more familiar) on finding "unknown" vulnerabilities, for example by
using anomaly detection techniques or specification learning
approaches (one good example is "Bugs as Deviant Behavior" by D.
Engler and others; incidentally, these are some of the people behind
Coverity).
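
The intuition behind that line of work, in a toy sketch (again, a
hypothetical simplification of mine, not the actual analysis in the
paper): infer an implicit rule from how the majority of code sites
behave, and flag the deviant minority as candidate bugs, without
needing a signature for any known vulnerability.

    # Toy "bugs as deviant behavior" sketch (hypothetical data): if most
    # call sites check a function's return value, the unchecked minority
    # are reported as likely bugs.
    from collections import Counter

    # (site, return_value_checked) pairs, as a front end might extract them
    call_sites = [
        ("parser.c:120", True),
        ("parser.c:233", True),
        ("net.c:45",     True),
        ("net.c:98",     False),   # the deviant site
        ("util.c:10",    True),
    ]

    majority = Counter(checked for _, checked in call_sites).most_common(1)[0][0]

    for site, checked in call_sites:
        if checked != majority:
            print("possible bug (deviates from inferred rule):", site)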

Marco

