1) Identify what information we need to hold about the site under test.
That ranges from the basics, such as URLs/links and the response codes
seen when they are requested, through to questions like: Does this page
contain a form? Where do the forms point? What parameters does each form
accept? What methods can we use to submit parameters? Does this
application require a cookie? What values have been (or can be) submitted
to this application? What error responses (e.g. SQL errors) have we seen
while requesting this URL or submitting values to it?

2) Build the database or data structures required to store this
information.
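
As a rough illustration of items 1 and 2, here is a minimal sketch (in
Java, since WebScarab is a Java tool) of the kind of record we might keep
per conversation. All class and field names are hypothetical, not
existing WebScarab code:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    /** Hypothetical record of one request/response observed for a URL. */
    public class ConversationRecord {
        String url;            // the URL requested
        String method;         // GET, POST, ...
        int responseCode;      // HTTP status seen for this request
        boolean requiresCookie; // did the application demand a cookie?
        Map<String, String> parameters = new HashMap<>(); // submitted name/value pairs
        List<String> formActions = new ArrayList<>();     // where any forms point
        List<String> errorsSeen = new ArrayList<>();      // e.g. SQL error strings seen
    }

A real datastore would then key these records by URL, so the full history
for each URL can be queried.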

3) Build the proxy that lets a human direct traffic via their own web
browser; the proxy parses what it sees and feeds the database.
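
To make the shape of item 3 concrete, a minimal sketch of such a proxy.
This handles only plain GET requests (no CONNECT/HTTPS, no header
relaying) and omits the parsing/logging step; the port number is
arbitrary:

    import java.io.*;
    import java.net.*;

    public class TinyProxy {
        public static void main(String[] args) throws IOException {
            ServerSocket server = new ServerSocket(8008); // browser proxy port
            while (true) {
                try (Socket client = server.accept()) {
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(client.getInputStream()));
                    // Browsers send the absolute URL to a proxy, e.g.
                    // "GET http://host/path HTTP/1.1"
                    String requestLine = in.readLine();
                    if (requestLine == null) continue;
                    // ---- here the parsed request would be written to the DB ----
                    URL url = new URL(requestLine.split(" ")[1]);
                    String path = url.getFile().isEmpty() ? "/" : url.getFile();
                    try (Socket origin = new Socket(url.getHost(),
                            url.getPort() == -1 ? 80 : url.getPort())) {
                        Writer out = new OutputStreamWriter(origin.getOutputStream());
                        out.write("GET " + path + " HTTP/1.0\r\n"
                                + "Host: " + url.getHost() + "\r\n\r\n");
                        out.flush();
                        // ---- and here the response would be parsed/logged ----
                        origin.getInputStream().transferTo(client.getOutputStream());
                    }
                }
            }
        }
    }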

4) Build an interface that presents the results to the user in a
meaningful manner, in real time.
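
For example (a sketch only, names hypothetical): the proxy thread could
push each recorded conversation into a Swing table model, so results
appear as the user browses:

    import javax.swing.*;
    import javax.swing.table.DefaultTableModel;

    public class ResultsPanel {
        private final DefaultTableModel model =
                new DefaultTableModel(new Object[] {"Method", "URL", "Status"}, 0);

        public JComponent component() {
            return new JScrollPane(new JTable(model));
        }

        /** Called by the proxy for every conversation it records. */
        public void conversationAdded(String method, String url, int status) {
            // Swing models must only be touched on the event-dispatch thread
            SwingUtilities.invokeLater(
                    () -> model.addRow(new Object[] {method, url, status}));
        }
    }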

5) Start adding functions to the interface that do things that cannot be
done well from the browser itself, e.g. the spider, URL fuzzing, cookie
collection. Let the user decide when and how to execute them: for the
spider, let the user spider a particular subtree, or provide an exclude
regex to prevent the spider from logging itself out.
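
A sketch of those item-5 spider controls (class and field names are
hypothetical):

    import java.util.regex.Pattern;

    /** The user picks a subtree to crawl and may supply an exclude regex. */
    public class SpiderScope {
        private final String subtree;  // e.g. "http://target/app/"
        private final Pattern exclude; // user-supplied, may be null

        public SpiderScope(String subtree, String excludeRegex) {
            this.subtree = subtree;
            this.exclude = excludeRegex == null
                    ? null : Pattern.compile(excludeRegex);
        }

        /** Should the spider fetch this link? */
        public boolean inScope(String url) {
            return url.startsWith(subtree)
                    && (exclude == null || !exclude.matcher(url).find());
        }
    }

So new SpiderScope("http://target/app/", "logout") crawls only the app
subtree and never follows the logout link.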

6) ONLY THEN start working on automating the functions implemented above.
For the spider, perhaps we could provide a default exclude that skips all
URLs containing exit, logoff, logout, finish, etc. Or we could look at
the text that accompanies the links for similar words, or even perform
pattern recognition on the images that accompany the links. My point here
is that this, in itself, is not an easy task. Why try to accomplish it up
front, when it is actually quite easy for a human to perform, and trying
to solve it detracts from the more useful work that is more difficult for
a human to do?
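
The first two of those heuristics are cheap. A sketch of a default
exclude that checks both the URL and the link text (the word list is
illustrative only):

    import java.util.regex.Pattern;

    public class DefaultExcludes {
        private static final Pattern SESSION_ENDERS = Pattern.compile(
                "(exit|logoff|logout|finish|sign\\s*out)",
                Pattern.CASE_INSENSITIVE);

        /** True if following this link would likely end the session. */
        public static boolean looksLikeLogout(String url, String linkText) {
            return SESSION_ENDERS.matcher(url).find()
                    || (linkText != null
                        && SESSION_ENDERS.matcher(linkText).find());
        }
    }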

I'll admit right now that this sounds a lot like Exodus in its current
incarnation, and that this could be interpreted as an attempt to "hijack"
WebScarab and turn it into Exodus, or to make Exodus into WebScarab. It
was always my intention, in participating in WebScarab, to contribute any
ideas that I developed into code, or indeed any ideas at all, that could
improve the tool. I just think that WebScarab has stalled because people
are not really seeing how it can benefit them at this point, so there is
no development on it. Once we get it to the point that people can start
using it and deriving some benefit, additional features will be
forthcoming, and WebScarab will flourish.

I would be quite happy for people to rip whatever they find useful out of
Exodus, if it will help to kickstart WebScarab. I know that my design skills
are sadly limited, so I'm sure that large parts would need to be rewritten,
but whatever is useful is available.

Rogan

-----Original Message-----
From: Ingo Struck [mailto:ingo at ingostruck.de] 
Sent: 25 May 2003 05:41 PM
To: rdawes at deloitte.co.za
Subject: Fwd: Re: Scrab rises from the ashes......no thats Pheonix..oh well.


Hi Rogan,

sorry, somebody messed up your email address in the mail before...

----------  Forwarded Message  ----------

Subject: Re: Scrab rises from the ashes......no thats Pheonix..oh well.
Date: Sun, 25 May 2003 17:33:59 +0200
From: Ingo Struck <ingo at ingostruck.de>
To: "S. Rohit" <s.rohit at usa.net>, <mark at curphey.com>, 
<david.raphael at ceterum.net>, <jpoteet at tech-partners.com>, 
<admin at mokshafaced.com>, <rdawes at delloite.co.za>, <mcmahon at sprintmail.com>, 
<jeff.williams at aspectsecurity.com>
Cc: owasp-webscarab at lists.sourceforge.net

Hi...

> but more important will be the ability to do the attacks manually, as
> web apps, I believe, need some human intervention to perform attacks,
> because some of the attacks need on-the-fly thinking and innovating.
> What works for one web app may not work for another. Also, the tool
> should preferably be modular, so that we can switch components in and
> out to strike the perfect balance... oops, hope I haven't bitten off
> more than I can chew... and I believe people in the web security area
> will generally be pretty keen on something like this.

Well, the picture you draw here matches our old WebScarab vision, and in
fact some of the structure already laid down in WebScarab's CVS, pretty
well.

:o)

- the WebScarab code is not only intended to be, but in fact is, highly
  modular
- from our last discussions and use cases, WebScarab is mainly an
  interactive tool (cf. some of the UI mock-ups)
- automating the tests ranks very low in the development plans

We already have a pretty well-working spider, some more or less useful
ideas for the UI implementations, and a proxy UI mock-up (alas, without
sources yet!).

I guess what needs to be done is:
- browse through the CVS and get familiar with what we have already put
  there
- make a new priority list of what needs to be done; our last consensus
  was to finish the spider code first (session tracking, form-based
  authentication, some protocol specifics), then write an attack engine
  with pluggable attack classes (sketched below) and a proxy for
  injecting hand-crafted attacks
- check in the code of the proxy UI mock-up, since it seems to be a good
  approach for inserting completely hand-crafted requests (attacks)
- code the stuff

Most important is the last point.
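
As a sketch of what an attack engine with pluggable attack classes might
look like (these names are hypothetical, not from the CVS; Request and
Response are placeholders for whatever the engine would actually define):

    /** Contract each pluggable attack class would implement. */
    interface Attack {
        String name();                            // e.g. "SQL quote injection"
        Request mutate(Request original);         // derive an attack request
        boolean isInteresting(Response response); // e.g. body contains an SQL error?
    }

    class Request  { String method, url, body; }  // placeholder type
    class Response { int status; String body; }   // placeholder type

The engine would then just iterate over recorded requests and registered
attacks, calling mutate() and flagging any interesting responses for the
user.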

