[OWASP-TESTING] Ideas for WAVA Testing Overview & Procedures Checklist documents

Stephen Venter stephen.venter at gmail.com
Mon Apr 18 13:11:37 EDT 2005


Hi all

I'd like to put forward another concept for consideration here. As
most people have been mentioning, there are two different perspectives
that you (as the application tester) normally adopt when you perform
your testing activities against the target system: black-box and
white-box.

However, I feel we need to consider that these perspectives (coined
when security testing was all about infrastructure reviews /
pentesting) don't adequately reflect or encompass the approaches
adopted during Web Application security reviews. (And here's another
phrase that a previous colleague of mine came up with, which I feel
might be more appropriate and helps get away from the word "pentest":
Web Application Vulnerability Assessment, or WAVA.)

My suggestion is that we consider defining an additional perspective:
"Translucent box" [or whatever name might be more appropriate here – I
was just trying to find one that conveys the idea of fitting in
between black & white], and that some or all of the actual testing
techniques might be appropriate (and some not) depending on which
point of view you are approaching the application from – and of
course, some testing procedures may need to be performed differently
depending on this perspective.

These testing perspective definitions would be best included in the
OWASPTesting_PhaseOne document (although I would like to offer an
alternative name for that too: something like the "Testing Overview",
"Approach" or "Guideline" document because, as far as I can see, it
actually covers a number of phases, not just one). The Web App Testing
Checklist table could then be expanded with a column noting which
phases from the Overview document each test belongs to, where it is
appropriate, whether it needs to be customised, etc. And then the
appendices to the checklist could be the place to include specific
examples of how to carry out those tests and what the expected results
are, etc.

I feel it would be appropriate to define the testing perspectives
something like:
1.	Black box - from the perspective of an anonymous, unauthenticated
user, with minimal knowledge about the target system
2.	Translucent box – from the perspective of a "normal" application
user, who has a login account (through whatever means a "normal" user
of the system gets assigned an "authorised" login to it)
3.	White box – from the perspective of a person with full system
access, and thus full access to the application code and servers, etc
[typically the level of access a system admin, application author or
system auditor might have]

You can then discuss how each of the testing procedures / techniques /
steps fits in with the above perspectives, whether it is appropriate
for a given perspective, and whether it needs to be modified to suit
that perspective.

In each perspective there are a number of activities to be performed
that fall into common categories like:
·	Planning
·	Information gathering
·	Target identification
·	Service enumeration & service information mining
·	Automated testing procedures
·	Manual testing procedures
·	Feedback / report writing

So, for example, in the traditional black box pentest, the pentester
might just be given a server name or IP address as the target (i.e.
the anonymous user approach, with minimal information up front), and
he will be required to go through the usual testing steps like:
·	Planning: understanding the security significance of the target and
why someone might want to compromise it;
·	Information gathering: querying public resources, whois records,
mailing lists, for information that may help, etc;
·	Target identification: ping sweeps, ICMP vs TCP, etc
·	Service enumeration (portscans) & service information mining
(fingerprinting, harvesting service version info from what is returned
when you connect to them, etc – see the rough sketch after this list)
·	Automated testing using vulnerability scanners
·	Manual testing procedures (weed out false-positives, perform
additional checks to complement the automated scanners, etc)
·	Feedback / report writing
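
To illustrate the service information mining step, here is a rough
Python sketch of the kind of banner grabbing I have in mind. The
target address and port list are just placeholders, and a real
engagement would use a proper scanner – this is only to show the idea:

import socket

TARGET = "192.0.2.10"          # example address only, not a real target
PORTS = [21, 22, 25, 80, 443]  # a small sample; a real sweep would be broader

for port in PORTS:
    try:
        with socket.create_connection((TARGET, port), timeout=3) as sock:
            sock.settimeout(3)
            try:
                banner = sock.recv(256)   # many services announce themselves
            except socket.timeout:
                banner = b""              # e.g. HTTP waits for a request first
            print("%d/tcp open, banner: %r" % (port, banner))
    except OSError:
        print("%d/tcp closed or filtered" % port)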

In an application security vulnerability assessment (WAVA) scenario,
the black box approach could be where the tester has been given only
the specific URL or web server address. Again the testing techniques /
steps / procedures might include:
·	Planning: understanding the security significance of the target and
why someone might want to compromise it;
·	Information gathering: querying public resources, connecting to the
server with a browser and reading what information is presented, etc;
·	Target identification: HTTP vs HTTPS ports, SSL certificate
information, etc. A lot of the work here could be considered more
traditional infrastructure pentest work, and sometimes we need to be
wary of being drawn too far into it when the primary target is the web
app, not necessarily the infrastructure [and yes, of course the
infrastructure needs to be secure, which is what pure pentests are
there for…]
·	Service enumeration & service information mining – observing what
web application specific information is obtainable, noting the input
fields and variables, including hidden fields, client side scripting,
etc.
·	Manual testing procedures – I'd say that an application tester is
likely to start with manual testing, and only move on to automated
testing once he has a better understanding of what functions the
application performs – testing areas like user authentication
processes (session manipulation / user escalation), input validation,
code injection, etc.
·	Automated testing – like sampling session management values (e.g.
cookies) to assess them for predictability (see the sampling sketch
after this list), brute force password guessing, code injection
testing of input fields (e.g. SQL injection), etc
·	Feedback / report writing
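
As an illustration of the session sampling idea, here is a rough
Python sketch. The login URL and cookie name (JSESSIONID) are just
assumptions for the example – substitute whatever the target
application actually issues:

import re
import urllib.request

LOGIN_URL = "https://app.example.com/login"   # placeholder URL
SAMPLES = 20

session_ids = []
for _ in range(SAMPLES):
    with urllib.request.urlopen(LOGIN_URL) as resp:
        set_cookie = resp.headers.get("Set-Cookie", "")
        match = re.search(r"JSESSIONID=([^;]+)", set_cookie)  # assumed name
        if match:
            session_ids.append(match.group(1))

for sid in session_ids:
    print(sid)
# A real assessment would run these samples through proper entropy /
# statistical analysis rather than relying on eyeballing alone.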

In a pentesting situation, translucent box testing might be testing
from the position of the DMZ or corporate network, where you have
access behind firewalls, etc. Not quite administrative access, but
closer to the level of access that a normal (a.k.a. "corporate") user
might have.

With the application (WAVA) tester, translucent box testing is where
the tester is given example user accounts at different levels of
privilege (from the application's perspective, not the operating
system perspective!). Your manual testing activities then take on a
whole new angle: you attempt to see whether you can call admin-level
menu functions while only logged in as a low-privileged user, whether
cross-site scripting might allow a low-privileged user to escalate
their privileges to those of the admin-level user, or even to gain
unauthorised access to records of other low-privileged users, etc. For
the translucent box perspective you can take the manual testing
procedures phase (as just one of the items on its own) and expand it
into sub-categories, like:
·	User Authentication and Authorisation mechanisms - test a user's
ability to perform unauthorised tasks such as: Access data or
resources that they are not entitled to; Escalate their current
privileges; Transact as another user; Access resources after logoff /
session timeouts; etc
·	Access Protection controls - test access protection mechanisms over
system resources, e.g. testing if a user can: Gain unauthorised access
to system files, data or other resources; Circumvent the access
protection controls using atypical methods such as alternate character
sets, or URL munging / manipulation / fuzzing, etc.
·	Data Validation - test the application for its vulnerability to data
injection and insertion techniques (buffer overflows, string
formatting problems, etc), i.e. test whether the application: Correctly
parses any data input by the user; Performs adequate bounds checking;
Performs adequate validation of any data inputs; etc (a rough probe
sketch follows after this list)
·	Session Management - assess whether a user can: Steal or hijack the
credentials or session of another user; Change their current identity
to that of another user (account or role "hopping"); Manipulate or
falsify the session management controls for any other purpose; etc
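
To give an idea of the Data Validation probing mentioned above, here
is a rough sketch. The URL, parameter name, payloads and session
cookie are placeholders – the point is simply to compare the responses
to hostile inputs against a known-good baseline:

import urllib.error
import urllib.parse
import urllib.request

TARGET = "https://app.example.com/search"             # placeholder URL
COOKIE = "JSESSIONID=<valid low-privilege session>"   # from a normal login
PAYLOADS = ["test", "test'", "test''", "test' OR '1'='1", "test;--"]

for payload in PAYLOADS:
    data = urllib.parse.urlencode({"query": payload}).encode()
    req = urllib.request.Request(TARGET, data=data,
                                 headers={"Cookie": COOKIE})
    try:
        with urllib.request.urlopen(req) as resp:
            body = resp.read()
            print("%r -> HTTP %d, %d bytes" % (payload, resp.status, len(body)))
    except urllib.error.HTTPError as err:
        # A 500 on the single-quote payload, for example, often hints that
        # the input reaches a query or parser without being validated.
        print("%r -> HTTP %d" % (payload, err.code))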

Also, while doing translucent box testing, the WAVA tester will gather
new information that is worth trying again from the black-box WAVA
perspective – like trying to call internal web app functions before you
are authenticated with the server. For example, I have encountered
situations where simply connecting to the login page / function and
being issued with a session ID / cookie allowed me to call a function
that lists transaction records – in other words, the authorisation
procedures were not being invoked when that function was called; all it
checked was that a session ID in the correct format was included within
the user's POST request…
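
A rough sketch of that pre-authentication check, with placeholder URLs
standing in for the login function and the "internal"
transaction-listing function:

import urllib.request

LOGIN_URL = "https://app.example.com/login"                    # placeholder
INTERNAL_URL = "https://app.example.com/account/transactions"  # assumed function

# Step 1: connect to the login page only to get issued a well-formed session ID
with urllib.request.urlopen(LOGIN_URL) as resp:
    cookie = resp.headers.get("Set-Cookie", "").split(";")[0]

# Step 2: present that unauthenticated session ID to the internal function
req = urllib.request.Request(INTERNAL_URL, data=b"",           # empty POST body
                             headers={"Cookie": cookie})
with urllib.request.urlopen(req) as resp:
    print(resp.status)
    print(resp.read()[:500])

# If this returns transaction data, authorisation is not being enforced on
# the function itself – only the presence of a correctly formatted session ID.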

And then you get on to the white-box testing. 

In the pentest situation, I'd be more inclined to call this the
infrastructure hardening review / audit – with admin-level access you
review operating system settings and installed applications, IT
department organisation, change control procedures, disaster recovery
planning, and all the other normal audit / compliance objectives…

The white-box testing in the WAVA situation could be where you include
categories of activities like code reviews / audits, architectural
design reviews, interviewing developers, threat modelling, DRP, change
control procedures, etc

What do you think?

Regards,
Steve

