[OWASP-TESTING] Comments on the Draft Version 1.0 of the Testing Guide
jfernandez at germinus.com
Mon Aug 23 08:54:44 EDT 2004
Hi, I'm back from vacation with a number of comments related to the
Testing Guide. Since I have quite a few of them, I will just dump them
here and see if they are useful and/or spark some discussion.
- Chapter 1: The "SDLC". When talking about figure 1 it says "the
following figure shows a generic SDLC model". I believe the figure
actually says much more, since it also shows the increasing cost of
fixing bugs in each of the phases. It might be worthwhile stressing
that here (even if it is also said further along in the document):
generally speaking, the cost of fixing bugs increases the later they
are fixed.
- Chapter 2: It lacks an introduction to what will be covered in the
chapter. Something along the lines of: "There are some misconceptions
when developing a testing methodology to weed out security bugs in
software. This chapter covers some of the basic principles that should
be taken into account when testing for security bugs in software."
- Chapter 2: "Think Strategically..." could be improved in the
discussion about the bug/patch/fix cycle. Specifically, it could
include the usual "window of exposure" picture as developed by
Bruce Schneier at http://www.schneier.com/crypto-gram-0009.html#1.
Moreover, it could mention that, generally speaking, the time between
a vulnerability being discovered and an automated attack being
developed is continuously shrinking, leaving no time to apply patches
(worms have shown this fact over the years). Even when developing custom
software (i.e. software which will not be pushed out to the wide
public) it might be worth including a direct reference to @stake's
article (included in the references at the end) describing the cost to
deploy patches in environments vs. the cost of fixing them in the
design phase. I think it is also worth substituting (or enhancing)
the 'Wrong!' statement there with something along the lines of:
"There are several wrong assumptions in this line of thinking: patches
interfere with normal operations and might break existing
applications; not all of the product's consumers will apply patches,
precisely because of this issue or because they lack knowledge of the
patch's existence; and it has been demonstrated that the typical
window of vulnerability does not provide enough time for patch
installation between the moment a vulnerability is uncovered and the
moment an automated attack against it is developed and released."
- Chapter 2: "The SDLC is King". Maybe enhance the checklist with
something along the lines of:
"Security testing should be done within the framework of an existing
SDLC in order to produce software with as few security bugs as
possible. Thus, it must blend with all of the phases of a SDLC, from
the early stages of design to the last stages of operations and
maintenance."
- Chapter 2: "Test Early and often". Regarding developer education:
"Knowledge of typical security vulnerabilities is a big advantage for
developers since it will help them avoid common mistakes. Although new
libraries, tools or languages might help design better programs (with
fewer security bugs), new threats arise constantly and developers must
be aware of those that affect the software they are developing.
Education in security testing also helps developers acquire the
appropriate mindset to test an application from an attacker's
perspective."
- Chapter 2: "Mindset". Add the term 'thinking out of the box' to the
"Think like an attacker or cracker" checklist item.
- Chapter 2: "Use The Right Tools". I think the "Shouldn't bring a
knife to a gun fight" item should either be removed or improved with a
more detailed explanation there.
- Chapter 2: "Develop Metrics". Add the following items:
  - Create consistent metrics to determine the security level of your
code. The OWASP Metrics project can help you here.
  - Automate metrics extraction from available code.
  - Analyse the evolution of values derived from metrics between the
different versions of the code.
It might also be worth referencing the data that stems from
http://www.cyberpartnership.org/Software%20Pro.pdf, which talks about
the average number of defects per number of lines of code in a normal
software development process versus one that introduces a security
methodology in its SDLC.
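To make the "evolution of metrics between versions" idea concrete, here
is a minimal sketch of tracking defect density across releases. The
version names, defect counts and line counts are made-up illustration
values, not figures from the referenced report:

```python
# Minimal sketch of a defect-density metric tracked across releases.
# All numbers below are hypothetical illustration values.

def defect_density(defects: int, lines_of_code: int) -> float:
    """Defects found per thousand lines of code (KLOC)."""
    return defects / (lines_of_code / 1000.0)

# Hypothetical release history: (version, defects found, lines of code)
releases = [
    ("1.0", 42, 12000),
    ("1.1", 30, 15000),
    ("1.2", 18, 16000),
]

for version, defects, loc in releases:
    print(f"{version}: {defect_density(defects, loc):.2f} defects/KLOC")
```

A downward trend in a plot of these values over releases would be one
way to argue that a security methodology in the SDLC is paying off.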
- Chapter 4: It's also missing an introduction to the chapter.
Moreover, I don't understand why this chapter covers only reviews and
manual inspections instead of briefly describing the different
techniques (reviews, penetration testing, etc.) and leaving the detail
regarding reviews to a separate chapter. I believe the title
'Testing Techniques Explained' is misleading.
- Chapter 4: "Studies and research reports show that maximum failure
of a ..." This phrase is OK, but it is missing a reference to a paper
backing up the point.
- Chapter 4: "What is an inspection?" I don't think it really defines
what an inspection is. It talks about different roles in the
inspection process and how meetings should be conducted, but it does
not say: "An inspection is the process in which different
individuals, who do not belong to the development team producing
the software, inspect the software under development."
- Chapter 4: "Elaboration phase" The final paragraph talks about legal
requirements broadly. It would be nice to describe some common aspects
regarding legal requirements. For example, in the EU it is mandatory
for personal data to be treated with due care in applications.
For more information, Directive 95/46/EC says:
" (46) Whereas the protection of the rights and freedoms of data
subjects with regard to the processing of personal data requires that
appropriate technical and organizational measures be taken, both at
the time of the design of the processing system and at the time of the
processing itself, particularly in order to maintain security and
thereby to prevent any unauthorized processing;"
This introduces obligations for the software developer. Some countries
(such as Spain) oblige companies to determine the sensitivity level
of the personal data stored and to take appropriate measures based on
that sensitivity level, which includes encryption of stored data and
audits of access to that data.
In the US references to the HIPAA and similar laws might apply.
- Chapter 4: "Code Reviews" I find it confusing that code reviews are
included here as well as in Chapter 5.
- Chapter 4: "Code Reviews" I don't find the name "Scripting
vulnerabilities" appropriate for that checklist item; I believe that
"Source code integrity" might be more appropriate there.
- Chapter 5: "Source code review - Introduction" I would add, at the
end of the final paragraph: "as opposed to black box testing, also
called penetration testing, which is covered in chapter 6".
- Chapter 5: "Is source code review needed?" States that only source
code review can uncover trojans, which is not really true. See
"Reflections on Trusting Trust"
(http://cm.bell-labs.com/who/ken/trust.html). Actually, I don't
understand why this same example is used later as a reference, when
it is precisely this kind of security bug that cannot be uncovered by
source code review. Note that Ken Thompson explicitly says that the
source code of the compiler is removed so that no source code review
can detect the trojan, making use of the chicken-and-egg status of C
code vs. the compiler. I don't believe this example should be used as
a security bug that could be uncovered by source code review but,
rather, as one that would never be uncovered by it.
- Chapter 6: "Penetration Testing" Maybe it's worth adding in the
introduction that penetration testing is in many cases done against
the production environment. This has the disadvantage of putting the
production environment at risk (a pentester could screw it up and
remove the backend database, after all) but has the advantage of
testing the actual deployment of the application (and the
infrastructure it depends on).
- Chapter 6: "Advantages and disadvantages" I wouldn't count as an
advantage, but rather as a fact, that penetration testing results vary
with the effort (and knowledge) dedicated by the pentesting team.
Moreover, it does not usually scale: more effort might, or might not,
detect new vulnerabilities, and it usually gets to a point where you
will not detect more vulnerabilities regardless of the time dedicated
to it. It is true, however, that less effort will usually detect fewer
vulnerabilities. Still, this scaling is not something you can depend
on.
- Chapter 6: "Advantages and disadvantages" I would add as an
advantage that penetration testing usually reviews systems and
architectures that the application relies on (firewalls, web server
or application server setup, etc.).
- Chapter 6: "Advantages and disadvantages" says "Accuracy is a
problem (...) must rely on information sent from the application",
which is not 100% true. Pentesters also rely on the information sent
by the infrastructure that supports the application; think of a
misconfigured web server that returns detailed information on the
programming error that breaks an application, or on its failures when
accessing a backend.
- Chapter 6: "Advantages and disadvantages" I would add as a
disadvantage that pentesting, and especially automated scanners,
concentrate on common deployment mistakes or vulnerabilities and do
not necessarily concentrate on the company's concerns (or specific
risks).
- Chapter 6: "Advantages and disadvantages" I would also add as a
disadvantage that pentesting teams are not usually focused on
applications but on broader systems (think of the OSSTMM, for
example).
- Chapter 6: "Why is PenTesting Needed?" Besides server-level
vulnerabilities, pentesting also uncovers configuration
vulnerabilities and issues introduced when deploying the application.
For example, in some situations the code deployed into production
might not be the same code as the one developed, or the environment it
is deployed to varies. It might be worth noting that too.
- Chapter 6: "Approaches to Penetration testing" I think it's best to
order the different styles by order of preference (i.e. Prima Donna
last).
- Chapter 6: "Approaches to Penetration testing - Capture the flag" It
might be worth stressing there that in these tests a test that fails
does not mean anything. Moreover, you might never know whether the
pentesting team actually _did_ anything.
- Chapter 6: "Guessing the architecture" A penetration tester also
tries to determine how the application is implemented and what
technology is used, since some flaws are specific to a language or
technology.
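The kind of guessing involved could be illustrated with a toy sketch
like the following. The header-to-technology mapping is a simplified
assumption for illustration, not an exhaustive fingerprint database:

```python
# Toy sketch of technology fingerprinting from HTTP response headers,
# the kind of guessing a pentester does when mapping an application.
# The mapping below is illustrative, not an exhaustive database.

def guess_technology(headers: dict) -> list:
    """Return a list of technology hints inferred from the headers."""
    hints = []
    server = headers.get("Server", "")
    powered_by = headers.get("X-Powered-By", "")
    cookies = headers.get("Set-Cookie", "")

    if "Apache" in server:
        hints.append("Apache httpd")
    if "IIS" in server:
        hints.append("Microsoft IIS")
    if "PHP" in powered_by or "PHPSESSID" in cookies:
        hints.append("PHP")
    if "ASP.NET" in powered_by:
        hints.append("ASP.NET")
    if "JSESSIONID" in cookies:
        hints.append("Java servlet container")
    return hints

print(guess_technology({"Server": "Apache/1.3.29",
                        "X-Powered-By": "PHP/4.3.4"}))
# -> ['Apache httpd', 'PHP']
```

With a hint like PHP in hand, the tester would then try the flaw
classes specific to that technology rather than, say, ASP.NET checks.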
- Chapter 6: "Viewing Source Code to Better Understand..." It fails to
point out that some server vulnerabilities might disclose source code
(or fragments of it), which might help in the analysis. Also, in
penetration testing source code is sometimes obtained from side
channels used for publishing (think of an open FTP server).
- Chapter 6: "So Why Not Automate all of this?" It might be worth
adding a list of things an automated scanner is good at, for example:
brute forcing user accounts, finding URLs that are not published
(through dictionary attacks against an application), and finding
typical input handling errors and common server security bugs due to
misconfiguration or unpatched vulnerabilities.
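The dictionary-attack part of that list could be sketched as below.
The wordlist and the target URL are placeholders, and the HTTP probe
is injected as a callback so the discovery logic can be exercised
without a live target:

```python
# Sketch of the dictionary attack an automated scanner uses to find
# unpublished URLs. COMMON_PATHS is a tiny placeholder wordlist; real
# scanners ship lists with thousands of entries.
from typing import Callable, Iterable, List

COMMON_PATHS = ["admin/", "backup/", "test/", "old/", "config.bak"]

def discover_paths(is_present: Callable[[str], bool],
                   wordlist: Iterable[str] = COMMON_PATHS) -> List[str]:
    """Return every candidate path the probe reports as existing."""
    return [path for path in wordlist if is_present(path)]

def make_http_probe(base_url: str) -> Callable[[str], bool]:
    """Build a probe that treats any response other than 404 as a hit
    (a 403, for instance, still reveals that the path exists)."""
    import urllib.request, urllib.error

    def probe(path: str) -> bool:
        try:
            urllib.request.urlopen(base_url + path, timeout=3)
            return True
        except urllib.error.HTTPError as err:
            return err.code != 404
        except urllib.error.URLError:
            return False
    return probe

# Usage against a hypothetical target:
# hits = discover_paths(make_http_probe("http://localhost:8080/"))
```

This is exactly the kind of mechanical, exhaustive probing a scanner
does well and a human tester should not waste time on.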
- Chapter 7: "Example 2: Bad Cryptography" has a typo. It says:
"Clearly, aw we explain the scheme..."
it should say
"Clearly, as we explain the scheme..."
- Appendix A: Testing Tools does not include some open source black
box scanners such as Nessus (it does have some plugins to detect web
application vulnerabilities) and Nikto/Whisker. It might be worth
adding also some applications used as intermediate proxies for black
box scanning (SPIKE does some of this too), like httpush
(http://sourceforge.net/projects/httpush), Exodus, and Achilles.
Commercial: Paessler Site Inspector
(http://www.paessler.com/products/psi, formerly IEBooster) and maybe
some others might be relevant here...
That's more or less all the comments I had in mind. I hope they are
useful.