[Owasp-webcert] OWASP Evaluation and Certification Criteria

Mark Curphey mark at curphey.com
Thu Aug 2 08:56:59 EDT 2007


The previous mail was written using my phone. Please ignore the grammar and
typos. I just reread it on my laptop, apologies. 

-----Original Message-----
From: owasp-webcert-bounces at lists.owasp.org
[mailto:owasp-webcert-bounces at lists.owasp.org] On Behalf Of Mark Curphey
Sent: Thursday, August 02, 2007 12:07 PM
To: andre at operations.net; owasp-webcert at lists.owasp.org
Subject: Re: [Owasp-webcert] OWASP Evaluation and Certification Criteria

It's all helpful, Andre. All good and very much appreciated!

What I am trying to do is create the framework and define the controls we
can measure. I want people to be able to configure those controls based on
their own beliefs / religion / dogma. I bet if we polled everyone on this
list about what a reasonable password length is, we would get a bell curve
distribution, but those in the upper quartile would argue strongly, as would
those in the lower. The arguments may all be valid, as the risk profiles of
their domains would likely be different. What I want is to create the
criteria framework so people can configure the values but agree on the
controls that are needed. You may have seen a placeholder in the back for a
reference implementation. This is where OWASP Leaders as a group can say "we
think it's 12 characters" and "we think it's 16 bytes of entropy for the
username". The PayPal Industry Association (I have no idea if they really
exist, BTW) may say "7 chars and 12 bytes", but if the base part is
consistent, people can build up processes, tools and services around the
core. 
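One way to picture "fixed controls, configurable values" is as a reference profile that schemes override. This is purely my own modern-day sketch; the control keys and all the numbers below are placeholders, not agreed reference values, and the profile names are invented.

```python
# Hypothetical sketch: the controls themselves are fixed by the framework,
# but their parameters are configurable per scheme. Control IDs follow the
# draft's UM-00X naming; every value here is a placeholder.
REFERENCE_PROFILE = {
    "UM-004.min_password_length": 12,    # "we think it's 12 characters"
    "UM-001.username_entropy_bytes": 16,
}

# A (fictional) scheme org overrides only the values it cares about.
PAYPAL_INDUSTRY_PROFILE = {
    "UM-004.min_password_length": 7,     # "7 chars and 12 bytes"
    "UM-001.username_entropy_bytes": 12,
}

def effective_profile(scheme_overrides: dict) -> dict:
    """Merge a scheme's overrides onto the reference implementation."""
    profile = dict(REFERENCE_PROFILE)
    profile.update(scheme_overrides)
    return profile
```

Tools and auditors would then evaluate the same control set against whichever effective profile a scheme has published.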

The 256-bit SSL requirement should be abstracted, and the scheme user should
choose the cipher suite etc. I'll make sure that's changed or made clear.
The control should be that the data is sent over an encrypted channel. A
whizz-bang AJAX interface may use a service which may use WS-Security. We
can all imagine that it could be secure, and just because they choose not to
implement TLS doesn't mean they should fail a security evaluation. That
said, a scheme org like our PayPal Industry Association may want to specify
TLS with a set of cipher suites only. Hopefully this approach allows for the
best of all worlds without watering it down to a wishy-washy lowest common
denominator, which is what I think a lot of other standards do. 

On your last point, I want to fold the process part back into the technology
part, so it could be thought of like this:

Technology - Has the company built a secure web site as defined by the
criteria?
People - Does the company have the people who understand how to design,
build and maintain secure web sites as defined by the technology section of
the criteria?
Process - Does the company implement a process which will likely mean they
will design, build and maintain secure web sites as defined by the
technology section of the criteria?

I am envisioning the process part as being a Rev 1; it seems clear this area
is quite a way behind the times for most folks, and some first steps would
be pragmatic. 

Does the company have a threat modeling process?
Do they use it in design, development, etc.?
Does the company ensure all code is checked for security?
Extended - Does the company ensure that security is part of continuous
integration?
Etc

Of course, one thing I am sure many others are already seeing come from this
is a plan that can bind many different OWASP projects around a common
purpose. I may be overstepping the mark here, but I think it would be great
if we could use something like this to tie together the testing guide, i.e.
do we have a documented way to provide all the assurance levels for each
issue? It would provide a neat way of bringing things together to work
alongside each other in a cohesive way. But that may be just my thought for
now. 

I want the likes of the CSOs on this list to be able to take this doc,
configure it to their tastes (7-char passwords, x for this, y for that) and
be able to use it at the core of their app sec program. They can use it to
set expectations of business partners who build sites for them or that they
do business with. They may use it as their internal standards and internal
testing criteria. If groups of like-minded folks can get together and agree
on a specific configuration, they can share economies of scale; everyone in
industry X could agree this is good and reduce the cost of multiple audits
to satisfy multiple people. 

One key thing, I think, will be making sure the assurance levels are simple.
That may mean compromise for a rev one. By that I mean we have to stick to a
few definitions (automated versus manual, and code review versus penetration
testing), but I am not quite there yet. I plan to be by the end of today or
tomorrow. 

-----Original Message-----
From: owasp-webcert-bounces at lists.owasp.org
[mailto:owasp-webcert-bounces at lists.owasp.org] On Behalf Of Andre Gironda
Sent: Wednesday, August 01, 2007 8:01 PM
To: owasp-webcert at lists.owasp.org; Mark Curphey
Subject: Re: [Owasp-webcert] OWASP Evaluation and Certification Criteria

On 8/1/07, Mark Curphey <mark at curphey.com> wrote:
> Today's update. I have not been able to make as much progress as I wanted
> yesterday and today. I now expect to finish the technology section
> tomorrow.

Mark,

You're doing a great job so far!  Keep up the momentum.

I wanted to try and nitpick on the username and password examples you
included before jumping into any of the others, and to get some feedback
on my feedback.  I'm not sure if I'm coming across as annoying or
helpful, but I want to point out some potential flaws and areas for
improvement.

> strong password should be 6 or 16 chars then well never get anywhere fast.

Passwords aside, I think UUID strings (i.e. your UM-00X?) should be
at least 16 bytes with average entropy, e.g. the equivalent of `cat
/dev/urandom | tr -cd '[:digit:]' | fold -w 16 | head -1`.  UUIDs are
typically used for session IDs or user-based parameters.
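A minimal Python sketch of the same idea, using the standard-library CSPRNG (a modern-day equivalent of the shell one-liner above; the function name is mine):

```python
import secrets
import string

def random_digit_token(length: int = 16) -> str:
    """Roughly the Python equivalent of
    `cat /dev/urandom | tr -cd '[:digit:]' | fold -w 16 | head -1`:
    a CSPRNG-backed string of decimal digits. Note that 16 decimal
    digits carry about 16 * log2(10), roughly 53 bits of entropy,
    so digits-only IDs need to be longer than hex or base64 ones
    for the same strength."""
    return "".join(secrets.choice(string.digits) for _ in range(length))
```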

PCI DSS states that passwords should be at least 7 characters in
length.  I think that's fine for most systems using most modern-day
password storage routines.  However, on Windows systems still
shipping with the default LM hash storage technique (everything except
Vista), 7 characters should not be the minimum, as this cuts the
entropy of the hash in half.  In all seriousness, they should be at
least 15 characters, preventing LM hashing entirely (unless it is
turned off by some other means).  My point here is that every system
has its own inherent issues; if you work around the issues of each
particular system, you gain the assurance needed.

For UM-002, you mention, "Email addresses should NOT be allowed as
usernames if email is one of the password reset options".  I think at
higher levels of both application security (admin at mark@curphey.com
type attacks) and reputation / user verification attacks
(dre at mailinator.com), most higher-assured systems will not use email
addresses for almost any sort of trust relationship.  Password reset is
OK, but we'll get to that at a much later time ;>

For UM-003, you say, "All user account management activity including
account maintenance, and password resets should take place using 256
bit SSL (with valid certificates)".  I think that the 256-bit matters
less than the rest of it.  By default, in most popular browsers that
I'm aware of, the default is AES-128 for encryption using certificates
that employ MD5 with a 1024-bit RSA public key.  I think AES-128 is
fine for almost all applications, but that the RSA keys should be
2048-bit (the MD5 part is fine as long as it is used along with the
RSA key).  This is actually a PKI issue, not an SSL configuration
issue, however.
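To make the point concrete, here is one way a scheme could express "encrypted channel, with the protocol floor and cipher family chosen by the scheme rather than baked into the base criteria". This is a present-day sketch using Python's `ssl` module (which postdates this thread); the function name and the particular floor and cipher family are illustrative choices, not anything from the draft.

```python
import ssl

def scheme_tls_context() -> ssl.SSLContext:
    """Illustrative only: a scheme org like the (fictional) PayPal
    Industry Association could pin a minimum protocol version and a
    cipher family here, while the base control just says 'encrypted
    channel with valid certificates'."""
    ctx = ssl.create_default_context()            # verifies certificates by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # protocol floor set by the scheme
    ctx.set_ciphers("ECDHE+AESGCM")               # e.g. forward-secret AES-GCM suites only
    return ctx
```

Swapping in a different scheme's choices changes only this configuration object, not the underlying control.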

In UM-004, you suggest, "10 characters, at least 1 upper or lower case
and one numeric and one special character".  What about repeating
characters?  Maybe something that includes "same character not
repeated 3 times or more"?  E.g. isn't aA-1111111 (10 chars) very
similar to aA-1 (4 chars)?

There are also common attack patterns for character combinations in
usernames and passwords that should probably be blacklisted, such as
"\x".
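A small sketch of the UM-004 suggestion plus the repeat rule and blacklist idea above, in Python (my own illustration; the draft's "1 upper or lower case" wording is ambiguous, so I require both here, and the blacklist is a placeholder for scheme-specific banned patterns):

```python
import re

def password_ok(pw: str, min_length: int = 10) -> bool:
    """Check length, mixed case, a digit, a special character, no
    character repeated 3 or more times in a row, and a (placeholder)
    blacklist of attack patterns such as literal '\\x' sequences."""
    blacklist = [r"\\x"]  # matches a literal backslash followed by 'x'
    if len(pw) < min_length:
        return False
    if not (re.search(r"[a-z]", pw) and re.search(r"[A-Z]", pw)):
        return False
    if not re.search(r"\d", pw):
        return False
    if not re.search(r"[^a-zA-Z0-9]", pw):
        return False
    if re.search(r"(.)\1\1", pw):  # same character 3+ times in a row
        return False
    return not any(re.search(p, pw) for p in blacklist)
```

Under this rule, aA-1111111 fails on the repeated-character check even though it meets the raw length and character-class requirements.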

> Also please remember this is a discussion document, proposing a better way
> to do web evaluation and certification than is out there today. This is not
> a build standard or a complete guide. Once I get this rev out of the door we
> can all sit around a table face to face at the next OWASP conference and
> work out the next steps to turn this into something real. There are great
> opportunities to hook this into the testing guide for issues and have them
> all dynamically update and keep current.

So, by this - you mean that all the little details of what provides
which level of assurance will get worked out at a later time?  It does
seem like you are trying to populate the fields with some information
(although you're using copy/paste a lot - you'll probably need a
Copy-Paste Detector at some point!).

> Andre's points about wanting to test throughout the lifecycle will be
> addressed by the Process part that will follow. In there we may have
> different techniques such as Threat Modeling. This way people can also just
> adopt the Technology part now and work towards the real deal later. This is
> a gentler learning curve and will likely see faster adoption as it will be
> less initial pain.

I think it's important to understand the overlap between penetration
testing for applications and QA testing.  Threat modeling applies to
both ITO and Development equally well - interestingly enough.
Continuous testing and inspection mostly applies to just Dev.
Although there are SaaS vendors who would have you believe it also
applies to IT/Ops (and I'm not going to argue the point here, instead
I'll let them do it).

A lot of dev shops already have "some sort" of testing in place, just
like a lot of ITO shops have "some sort" of firewall already in place.
It's just a matter of using the right "capital" (your PPT) to
configure/verify/troubleshoot (CVT) the technology to a set of working
standards that allow for defined levels of assurance.  In other words,
if you can get somebody to audit a firewall to certain controls - why
can't you get somebody to audit a QA functional test to certain
controls?

With basic building-blocks (you described the entire process as OOP at
the beginning of Part 1, which I thought was brilliant) in place, you
can generically assign assurance and areas for improvement to ITO or
Dev as if they are the same thing.  It seems like this is what you are
trying to accomplish with this sort of criteria - and I'm all in favor
of it.  The hard part that I see coming is relating the ideas behind
the Process to an organization that is very mature on the IMM scale,
but very immature on the CMM scale (or vice-versa).

Cheers,
Andre
_______________________________________________
Owasp-webcert mailing list
Owasp-webcert at lists.owasp.org
https://lists.owasp.org/mailman/listinfo/owasp-webcert

