[Owasp-cmm] An intro and an idea
steingra at gmail.com
Thu Oct 2 12:27:24 EDT 2008
On Tue, Aug 5, 2008 at 9:31 AM, Pravir Chandra <chandra at list.org> wrote:
> Hey everyone.
> So, without further ado, meet the Software Assurance Maturity Model (SAMM).
> To avoid the evil of attachments posted to lists, download it here:
OK, so I've been reviewing this and I'm about halfway through it. Overall I
think the document is excellent: the model is quite readable and
understandable. I have a few questions/quibbles outlined below. Since I'm
not all the way through yet this is just a first pass, but I thought I'd
share my feedback with the whole list anyway.
A few comments on the structure of the model.
1. I like the overall structure. As was pointed out earlier, breaking this
into a finite set of 4 disciplines and 3 functions per discipline makes a
nice, workable set.
2. The maturity model scale of 1-3 is perhaps a bit too coarse-grained.
3. The metrics for judging maturity level don't differentiate between
maturity of process, results, and compliance/adoption. You can solve this
in an assessment by measuring each team, line of business, etc. separately.
At the same time, there is something to be said for differentiating between
the existence of processes and compliance with them, or, more concretely,
for monitoring/measuring the activities being done by each responsible
party. One of the things you need in order to implement an assurance
program is an understanding and tracking of how well each group/team/person
is doing their job/role. I understand that the metrics are for measuring
the success of the program as a whole, but to gain adoption/traction you
need to be able to see who is and isn't doing their job. So, some metrics
that allow that sort of breakdown are going to be necessary for
implementation; a rough sketch of what I mean follows this list.
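To make that last point concrete, here's a rough sketch in Python (purely
illustrative: the team names, activities, and data below are hypothetical,
not anything defined in SAMM) of the kind of per-party breakdown I have in
mind:

    # Purely illustrative: hypothetical teams and activities, not SAMM's.
    # The idea is to track which prescribed activities each responsible
    # party actually performs, so adoption gaps become visible.
    activity_status = {
        "payments-team": {"threat_modeling": True,  "code_review": True,  "training": False},
        "mobile-team":   {"threat_modeling": False, "code_review": True,  "training": False},
        "platform-team": {"threat_modeling": True,  "code_review": True,  "training": True},
    }

    for team, activities in sorted(activity_status.items()):
        done = sum(1 for performed in activities.values() if performed)
        print("%s: %d/%d prescribed activities performed"
              % (team, done, len(activities)))

Something this simple already tells you which teams are and aren't doing
their part, independent of how the program as a whole is scoring.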
Now, a few comments on some of the actual material.
1. In Standards and Compliance you say that SC1 requires assessment of
external compliance and SC2 requires compliance with internal
policies/standards. Without getting into a big discussion of risk
assessments, I'm not convinced this will always be the right order of
operations. I'm not sure it needs correcting, but whether this is the right
approach will vary tremendously by industry and by what threats/risks/etc.
a given organization faces.
2. In SC2, you say that "each active project should undergo an audit at
least biannually." I think this statement and several others in the
document presuppose a certain development model and perhaps a certain
business model. Maybe I'm just interpreting the word "project" differently
than you meant it in this context. Is it worth spending more time
discussing what a project is, or am I overanalyzing this statement?
3. Small nit in SP1. When you say to assess a program, you say to give a
1, 2, or 3 based on whether a certain level is achieved, and a 0 if no
metrics are met. But what if only some metrics are met? Do we rate people
1.5? Or are these whole-number ratings only? If so, then I think this
points to needing a slightly wider scoring/maturity scheme than 1-3. (A toy
sketch of what a fractional scheme might look like follows at the end of
this list.)
4. In SP3, I'm not sure I understand what the goal of researching security
costs is. Is this just related to third-party tools (static analyzers,
dynamic testing tools, threat modeling tools, etc.), or is it related to
all costs of software assurance? For example, assessing whether those tools
are effective, whether you should be using them, what others are doing,
etc.? It isn't even clear to me why calculating these costs versus others
is necessarily relevant. I think I might need some more convincing on this
point.
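Going back to point 3 above, here's a toy illustration (my own hypothetical
scheme, not something the document proposes) of how a fractional score
could be computed: credit each fully achieved level as a whole point, then
add the fraction of the next level's metrics that are met.

    # Hypothetical scoring scheme (my invention, not SAMM's): a level that
    # is fully met counts as 1.0; the first partially met level adds the
    # fraction of its metrics met, and scoring stops there.
    def maturity_score(levels):
        """levels: list of (metrics_met, metrics_total) for levels 1..3."""
        score = 0.0
        for met, total in levels:
            if met == total:
                score += 1.0
            else:
                score += float(met) / total
                break
        return score

    # Level 1 fully met, half of level 2's metrics met -> 1.5
    print(maturity_score([(4, 4), (2, 4), (0, 3)]))

I'm not wedded to that particular formula; it's just to show that partial
achievement is easy to express if the scale allows it.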
I'm only through the first few sections of the document, so I'll send along
some more thoughts to the list when I have time.
steingra at gmail.com