[Owasp-testing] Plan for OTGv5

Tomas Zatko tomas.zatko at citadelo.com
Sat Mar 4 11:39:06 UTC 2017


I cc vanderaj at owasp.org to point him to ASVS-OTG sync discussion.

> Now that there are so many volunteers the obvious question is how to proceed.
> As I told Matteo, a good entry point would be to collect issues with v4.

Identifying and listing the issues with v4 is surely a good start. We have collected notes on this over the last two years at our company. Now is the right time to review them and post them here; we can do that in the coming days.

> 1. OTG should get in line with OWASP ASVS Level 1 (according to "them" they are currently working on 3.1 with the goal to make every Level 1 item pentestable, see http://lists.owasp.org/pipermail/owasp-application-security-verification-standard/2017-January/001017.html ) Thus, a mapping like OTG-uvw = ASVS Vx.yz would be an obvious thing to do (and not doing this is yet another missed chance to foster consistency - yes, in the past OTG and ASVS were contradicting at some points). Specifically, let's talk to the ASVS folks!

Is anyone here on this list (owasp-testing) actively involved in ASVS? Maybe better to ask directly: vanderaj (Andrew van der Stock), are you involved in the Testing Guide project as well?
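The OTG-to-ASVS mapping proposed in point 1 could live as simple machine-readable data alongside the guide. A minimal sketch, assuming such a mapping exists; the ID pairings below are placeholders for illustration, not an agreed or verified mapping:

```python
# Illustrative only: a machine-readable OTG -> ASVS mapping kept as plain data.
# The pairings below are placeholders, NOT an agreed mapping between the projects.
OTG_TO_ASVS = {
    "OTG-AUTHN-001":  ["V2.16"],  # hypothetical pairing
    "OTG-SESS-001":   ["V3.2"],   # hypothetical pairing
    "OTG-CONFIG-002": ["V19.1"],  # hypothetical pairing
}

def asvs_items_for(otg_id):
    """Return the ASVS Level 1 requirements mapped to an OTG item (empty if unmapped)."""
    return OTG_TO_ASVS.get(otg_id, [])
```

Keeping the mapping as data rather than prose would make it easy to detect unmapped items on either side when ASVS 3.1 lands.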

> 2. OTG is quite unbalanced at some items. Compare INFO-002 (almost trivial) and INPVAL-005 (epic) for instance. Some items appear (to me) sub-items of something larger, while especially INPVAL-005 is simply too deep. Other items like AUTHN-004 or AUTHZ-004 have sub-items.

I agree.

> The questions to me are: How much practical guidance should each item contain? And how should that be presented? And how balanced should the items of OTG be?
> At this point I want to throw in another invaluable source for web app pentesting, namely the Web Application Hacker's Handbook (WAHH). Chapter 21 there presents a methodology similar to what OTG (or ASVS L1) try to achieve, while the preceding chapters are actually the technical guidance. The ASVS is the exact opposite: There they say things like "verify that this and that" and leave it to the reader to find out what to do in order to verify. What I envision for OTG is something similar like WAHH, but reverse: present the methodology first and postpone technical guidance (at reasonable depth) to supplementing appendices. This way the main text body becomes more crisp and clear, and methodology ("what do I need to do, and what is expected") is separated from technical guidance ("how do I get to the verdict”).

I think OTG should set the minimal standard for what to test and point the reader in the direction of how to test. I don't think it should have the ambition to be the primary source of technical guidance for everything.
Application testing is very complex, and it changes quickly. Updating such a document once every two years is not enough.
I think we should rather refer to more actively maintained documents.

Take INPVAL-005, for example: it is good.
However, there are other good sources elsewhere. For example:
And others.

One should combine them and add one's own experience gained from various war games to do truly good tests.

I think leaving people to believe that INPVAL-005 alone is enough is harmful.

> 3. Tools: OTG mentions tools as part of almost every item, and in many cases it is a proxy, where sometimes it is ZAP, sometimes it is Burp, sometimes it is WebScarab (Huh? Is that maintained, or is that OWASP archeology?). See e.g. AUTHN-004. I guess the mentioning of specific tools can be factored out and moved to a section on tools.

Let's remove WebScarab. (With all due respect to the people who made it.)

In general I find mentioning tools a good thing, both free/OSS and commercial.

> 4. Now I get to the core: the actual items are sometimes overlapping, unclear, or not consistent. For instance, AUTHN-004 is a mess. Why on earth is session ID prediction there, if SESS-001 would be a more obvious place? And many other items that I would like to discuss later. Moreover, both ASVS and WAHH have items that are not covered by OTG. (For instance, directory browsing, which I usually cover under CONFIG-002 - not because it belongs there but because I didn't find a better place.) Further, there are some pentesting cheat sheets whose wisdom should be included in OTG, e.g. XSS Filter Evasion and REST Assessment.

Agreed. Let's reference the wisdom from external sources.

Also, in our practice we sometimes use CITADELO-001, CITADELO-002, etc. when we want to cover items that don't fit into other parts. It would be good to mention a similar practice for such situations, e.g. CUSTOM-001, EXTRA-001, etc.

> 5. Most items start with "Test for xyz" which gives at least a vague idea of what to do, and in most cases, what to expect. So later you can assign a verdict like "ok" or "not ok because...". INFO seems to be an exception, but even there you can give a verdict, except for INFO-006 and INFO-007. These two are pure reconnaissance, while the others can result in a verdict (e.g. INFO-008: Using a framework with known issues? - not ok - INFO-002: webserver leaks detailed version information - not ok - and so on). Reconnaissance should be drawn out of the main testing chapter (cf. Sections 1 and 2 of Chapter 21 in WAHH); my take on OTG is that each testing item should - if applicable - result in a verdict.


> 6. Perusing several items of the OTG and the corresponding examples, the items and examples seem to reflect the good old monolithic PHP application as was customary ten years ago, but for modern applications OTG appears less of a guide on pentesting. This should be addressed for OTGv5. Yes, the CLIENT section has some stuff in that direction, but there is still room for improvement.

Yes. Many consider OTG a dinosaur because of this.

Tomas Zatko
