[Owasp-board] Flagship Code Products

Kevin W. Wall kevin.w.wall at gmail.com
Mon Mar 31 02:41:21 UTC 2014


Since this is a board list and I'm not a board member, apologies in
advance for taking up your precious time.

On Sun, Mar 30, 2014 at 8:36 PM, Jim Manico <jim.manico at owasp.org> wrote:
> On Mar 30, 2014, at 2:23 PM, Dennis Groves <dennis.groves at owasp.org> wrote:
>>
>> That is why OpenSAMM is only one of several questionnaires developed for
>> rating project maturity and quality.
>> The other questionnaires address your concerns.
>
> Yes but the OpenSAMM was the only one widely distributed and it confused all
> the project teams. Something serious is amiss here, not to mention Joannes
> comments that the advisory board is not really happening and there is little
> actionable guidance being given. Time for the board to step in and clean up
> this mess.

At AppSec USA 2013, I filled out 2 or maybe 3 questionnaires on several
projects that I had volunteered for (including ESAPI). I believe one was
for the Dev Guide and one was definitely for ESAPI. I do not recall them
being different questionnaires, and I even remarked to someone
(I think it was either Denis, Martin, or Samantha) that I thought it was
strange that we were using a one-size-fits-all questionnaire to try to
evaluate all OWASP projects regardless of the type of project.
(I might have even made some comments like that on the questionnaires
that I filled out.)

So, my assumption was that the OpenSAMM-based questionnaire was the one
distributed at AppSec USA in NYC last November. Was this not the case?
Perhaps the alternate questionnaires were not yet complete at that time?

Regardless of the timing and which one has been used, as an OWASP volunteer
who has been involved in both code projects (ESAPI, and to a lesser degree,
AppSensor and ZAP) and documentation projects (Dev Guide and Cheat Sheets),
I strongly agree with Jim and Andrew that using the same questions to evaluate
both (*especially* if the intent is to compare the results of those two types
of projects based on responses to the same questionnaire) is at best less
than optimal, and at worst almost pointless. I think Andrew's comment that
this is an exercise in expecting a "fish to climb a tree" was apropos.
(Well, maybe in one of the low-budget sci-fi movies that you are only likely
to find on the SyFy channel, like Frankenfish, you might find that, but
nowhere else. ;-)

The biggest thing I think the survey I took (which seemed to be based
exclusively on OpenSAMM) was lacking was that, for code projects, it did
not address things like project quality (e.g., how many outstanding bugs
there are and how long those bugs stay open), code architectural issues
(the specifics of whether there were problems and what they were, not just
whether the design was documented or a design review was followed, etc.),
and project commitment (i.e., some measure of project activity: is there
evidence that lots of volunteers are participating, or has the project
seemed to languish? Etc.). Sure, most users of these code projects probably
can't answer those things, but the project leaders and volunteers could.
(And I would like to think that they would be as honest and objective as
possible when doing so.) The users of those projects could answer things
like "How well does the project seem to be supported?" While this is more
subjective than the number of open bugs, the average time to close a bug,
etc., it is something that is important to the OWASP user community.

For instance--and this is just an observation, not necessarily a comment
on how our formal support structure is or is not working--for ESAPI, I have
noticed that during the past 2 or 3 years, users have been posting less and
less to the ESAPI-Users mailing list (which is our official forum for
providing assistance) and have moved more toward other means, such as
posting ESAPI questions to Stack Overflow or sending me, Chris, Dave Wichers,
or some other individual ESAPI-related questions. I'm not sure I know the
reason for this, but I found it disturbing enough that I set up some Google
and Stack Overflow filters that run once a month so I can try to find those
questions. (Of course, it is also easier to ignore them when they are posted
on some other forum, because I generally have to log into it rather than
just replying to an email, so admittedly I sometimes forget to respond or
just can't make the time to respond. But I digress.)
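
For anyone curious, here is a minimal sketch of what the Stack Overflow
side of such a monthly filter could look like, using the public Stack
Exchange API. The "esapi" tag and the 30-day window are just illustrative
assumptions, not a description of the actual filters I set up:

    #!/usr/bin/env python3
    # Sketch: list Stack Overflow questions tagged "esapi" created in the
    # last ~30 days, via the public Stack Exchange API. The tag name and
    # time window are assumptions for illustration only.
    import time
    import requests

    THIRTY_DAYS = 30 * 24 * 60 * 60
    resp = requests.get(
        "https://api.stackexchange.com/2.3/questions",
        params={
            "order": "desc",
            "sort": "creation",
            "tagged": "esapi",        # assumed tag for ESAPI questions
            "site": "stackoverflow",
            "fromdate": int(time.time()) - THIRTY_DAYS,
        },
        timeout=30,
    )
    resp.raise_for_status()
    for q in resp.json().get("items", []):
        print(q["title"])
        print("  " + q["link"])

Run once a month (e.g., from cron), something like this would at least
surface the questions that never reach the ESAPI-Users list.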

What ultimately raised a red flag for me as one of the ESAPI project leaders
is that when I answered the questions as honestly and objectively as I was
able, the survey results did not make it obvious (to me at least; it was a
sample size of 1, after all) that ESAPI was in dire straits, when I really
thought that it was. As an analogy, if the survey was meant to be like a home
inspection, intended to point out things that were amiss with the project
being reviewed, it failed to do that. It was like a home inspection that
failed to discover a crumbling foundation due to termite or water damage.
And my concern was: if my honest survey results hadn't made that obvious to
me, how would it be obvious to others not as closely involved with ESAPI?
I can't say whether others had this perception or not, but I think it was in
large part because the survey was too generic and not *code*-specific enough.

Anyway, I wanted to bring up one last point. Since many (most?) of the
OWASP projects (or at least the code projects) are now registered with
Ohloh (http://www.ohloh.net/p?query=owasp&sort=relevance), I would think
that we should also be able to use that as a more objective input for
measuring the ongoing project commitment aspect.
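
To make that a bit more concrete, the kind of commitment metric Ohloh
surfaces (commits and contributors over time) can also be computed locally
from a project's own repository. Here is a rough, hedged sketch of that
idea -- the repository path is a placeholder, and this is a local proxy for
the Ohloh numbers, not a use of Ohloh's own API:

    #!/usr/bin/env python3
    # Sketch: per-month commit counts and distinct committers from git log,
    # as a rough proxy for an "ongoing project commitment" metric.
    # REPO is a placeholder path, not a real checkout location.
    import collections
    import subprocess

    REPO = "/path/to/owasp-project-checkout"  # placeholder

    log = subprocess.check_output(
        ["git", "-C", REPO, "log", "--date=format:%Y-%m", "--pretty=%ad %ae"],
        text=True,
    )

    commits = collections.Counter()
    committers = collections.defaultdict(set)
    for line in log.splitlines():
        month, email = line.split(" ", 1)
        commits[month] += 1
        committers[month].add(email)

    for month in sorted(commits):
        print("%s  commits=%3d  committers=%d"
              % (month, commits[month], len(committers[month])))

Whether we pull these numbers from Ohloh or compute them ourselves, they
would give the reviewers something objective to weigh alongside the
questionnaire answers.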

That's all for now. Thanks for listening.

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
NSA: All your crypto bit are belong to us.
