[Owasp-leaders] 2012 Rugged Summit

Jerry Hoff jerry at owasp.org
Tue Sep 4 13:45:19 UTC 2012


Hi Jeff...

On Sep 3, 2012, at 10:32 PM, Jeff Williams <jeff.williams at owasp.org> wrote:


>> >> there are clearly activities which result in reproducible, demonstrable, measurable improvements in software security
> > 
> > No.  BSIMM shows what some large companies are doing, not whether it
> > actually works.  It's interesting, but certainly not time to stop
> > thinking about new ways to do things.  What Jeremiah said about BSIMM
> > is that there is "little to no supporting data." (See
> > https://www.owasp.org/images/5/53/OWASP_AsiaPac_04.14.pdf).  He
> > proposes to test whether BSIMM actually works by using his own
> > service, attack metrics from Akamai and others, and breach data from
> > Verizon et al.

I wasn't referring to BSIMM here - I just meant there are activities that clearly impact the security of an organization: manual/automated code reviews, education, policy, and so on. I'm fully aware of BSIMM's implications.


> > That's a start towards the science I'd love to see.  I'd be far more
> > interested in data showing that threat modeling or hands-on training
> > actually reduce vulnerabilities than who is doing them.  You can't
> > really do these experiments without a model to rate the results
> > against.  
This is my point in a nutshell. As for the model, even comparing vulns found after training/threat modeling against the same group's historical averages would be illuminating.
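
To be concrete about the comparison I have in mind, a rough sketch in Python - the numbers are invented, and "vulns per release" is just one possible unit of measure:

    # Compare a team's vuln rate after training against its own historical baseline.
    # All figures below are made up for illustration.
    historical_vulns_per_release = [14, 11, 17, 13, 15, 12]   # releases before training
    post_training_vulns_per_release = [9, 7, 10, 8]           # releases after training

    baseline = sum(historical_vulns_per_release) / len(historical_vulns_per_release)
    post = sum(post_training_vulns_per_release) / len(post_training_vulns_per_release)
    change_pct = (post - baseline) / baseline * 100

    print("baseline: %.1f vulns/release, post-training: %.1f" % (baseline, post))
    print("change: %+.1f%% (negative means fewer vulns per release)" % change_pct)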


> > Jeremiah's model is what is in his company's scanner.  I'd
> > prefer something positive that matches up against the actual threats
> > to that business (like a security story).  But now at least we're
> > talking experimental design.
Good point. Sentinel has traditionally had a particular scope: vulns that can be found with a dynamic scanner or a manual pentest.


> > I think Jeremiah's basic intuition that we're on the wrong track is
> > correct.
I think his newest report shows overall improvement, but again that gets into the model issue you pointed out above.


> > Most organizations are riddled with simple but critical
> > vulnerabilities. And most, even those with highly developed
> > application security programs, are introducing more vulnerabilities
> > each month than they are finding and eliminating (vFlow positive).
> > That tells me we need some radical innovation.
That's very interesting. A lot of the clients I see have little to no appsec policy or activities. Is it common to find an org with an advanced appsec program but no noticeable effect on security?

I think most of the time it's a complete disregard for appsec that causes the vuln proliferation we witness daily. If you are right that even orgs with mature appsec programs have little to no success in preventing vulnerabilities, I will deeply change my outlook. My working assumption, based on personal observation, is that most orgs are doing nothing for appsec.

This is exactly why I would love to see a study that *proves* what you just said: that current activities have little to no effect. If that truly is the case, I wholeheartedly agree we should start thinking about security in a completely different way. So we're back to my point: the data should come first, then the theoretical solution (Rugged), which should be measured, tested, and evaluated.

Does any data exist on orgs' adoption of Rugged and their corresponding vFlow delta?
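
For what it's worth, the measurement I'm picturing looks something like the sketch below, where "vFlow" is read as vulns introduced minus vulns eliminated per month (that's my reading of the term, and all numbers are invented):

    # Monthly "vFlow": vulns introduced minus vulns eliminated.
    # Positive means the backlog is growing. All data invented for illustration.
    monthly_counts = [
        ("2012-05", 42, 30),   # (month, introduced, eliminated)
        ("2012-06", 38, 31),
        ("2012-07", 35, 36),   # e.g. first full month after a process change
    ]

    for month, introduced, eliminated in monthly_counts:
        vflow = introduced - eliminated
        trend = "backlog growing" if vflow > 0 else "backlog shrinking or flat"
        print("%s: vFlow %+d (%s)" % (month, vflow, trend))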

> And even if you think our current activities do work -- there's no way
> IMHO that they can keep up with increasing complexity,
> interconnectivity, and attacker ability.  Check out Josh Corman's "HD
> Moore's law" --
> http://blog.cognitivedissidents.com/2011/11/01/intro-to-hdmoores-law.


Same as before: let's measure and find out what actually works and what doesn't.


> > I think there *are* considerably better ways to do application
> > security. We tried to capture a first cut at what it might look like
> > in Rugged.  But there's still a lot to do.  And you're absolutely
> > right -- our entire field needs some science.
Agreed. Any chance you'll be in Dublin so we can continue this discussion?


Thank you for your thoughtful, reasoned responses, Jeff,

Jerry



> > 
> > On Sat, Sep 1, 2012 at 5:15 PM, Jerry Hoff <jerry at owasp.org> wrote:
>> >> Hi Jeff,
>> >> 
>> >> As stated on page 2 of the handbook, "This handbook is a strawman version
>> >> intended to generate discussion, comment, controversy, and argument.",
>> >> so in that spirit, please allow me to refine the point I'm trying to make.
>> >> 
>> >> From page 6 of "The Rugged Handbook, v4":
>> >> 
>> >> "We have no proof that any of the things that we recommend in this
>> >> handbook will work. Of course, nobody has any proof that what anyone is
>> >> currently doing works either. What we do know is that we need to try new
>> >> approaches, for it is certain that the problem is scaling up much faster
>> >> than our ability to apply our current techniques."
>> >> 
>> >> Fair enough - however, I would argue that "nobody has any proof that
>> >> what anyone is currently doing works either" is not accurate.  There may
>> >> not be a universal methodology to building secure software, but there
>> >> are clearly activities which result in reproducible, demonstrable,
>> >> measurable improvements in software security.
>> >> 
>> >> Are there currently organizations / divisions / groups that have
>> >> actually implemented "Rugged" and have measurable improvements in
>> >> security?  If there are, I would love to hear about it!  That is the
>> >> "meat" I would have loved to see in this document.
>> >> 
>> >> My point from the beginning is: lead with and spotlight that data if it
>> >> exists.
>> >> 
>> >> If there is no field data to corroborate the viability of "Rugged", then
>> >> I would simply remark, in my opinion, that should be the goal for V5.
>> >> For this reason, I find the ongoing BSIMM survey to be extremely
>> >> compelling.  It gives us insight into what is actually happening in the
>> >> field, rather than mere hypothesis.
>> >> 
>> >> If this field data already exists for Rugged, I apologize in advance.
>> >> 
>> >> Thank you for considering,
>> >> Jerry
>> >> 
>> >> 
>> >> 
>> >> On 8/31/12 5:20 PM, Jeff Williams wrote:
>>> >>> Hi Jerry,
>>> >>> 
>>> >>> I think we're in agreement about supporting recommendations with facts
>>> >>> -- "it's not what you know."  But it's impossible without providing
>>> >>> the whole context.  By that I mean something (a model really) that
>>> >>> identifies business level concerns, maps defenses, and drills all the
>>> >>> way down to evidence?  That's the purpose of the security story.  A
>>> >>> user-friendly model of all the information relevant to making security
>>> >>> decisions. By making a security story explicit, we hope that
>>> >>> organizations can make informed decisions about what risks to address
>>> >>> and what to tolerate.  Apparently it wasn't clear that Rugged doesn't
>>> >>> imply hyper-security -- I'll have to see what I can do about that.
>>> >>> 
>>> >>> In the broader context, I think you're angling for a more scientific
>>> >>> approach to appsec.  I share this desire, but it's impossible to
>>> >>> experiment with security unless you can do the kind of tradeoffs that
>>> >>> a security story enables.  This is a little meta, but perhaps you
>>> >>> could think of Rugged as trying to establish an organizational culture
>>> >>> that enables experimentation.  That's what that language about
>>> >>> builders and breakers collaborating and competing is all about.  If
>>> >>> you have ideas about making this more clear in the Rugged materials,
>>> >>> I'd love to hear them.
>>> >>> 
>>> >>> I encourage you to give it a try.  Take an application and sketch out
>>> >>> a security story from the information you have available.  The Rugged
>>> >>> implementation guide is a good starting point.  You should be able to
>>> >>> link together your business level concerns with a set of defenses, map
>>> >>> in evidence that those defenses are present and actually work, pentest
>>> >>> results, etc... I'd be willing to bet that you find some major gaps
>>> >>> that are easy to explain in the context of the story. I've found this
>>> >>> to be a super-productive way to communicate with development teams.
>>> >>> And that's the start of the culture we're after.
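
As a concrete illustration of the linkage Jeff describes above - business concern, to defenses, to evidence - a bare-bones security story might be sketched like this (the structure and names are invented for the example; Rugged does not prescribe this format):

    # Hypothetical security-story skeleton: a business concern mapped to defenses,
    # and each defense mapped to the evidence that it is present and actually works.
    security_story = {
        "concern": "Customer card data must not be exposed",
        "defenses": [
            {"name": "Parameterized queries only (no dynamic SQL)",
             "evidence": ["static analysis: injection rules clean on last 3 builds",
                          "pentest 2012-06: no injection findings"]},
            {"name": "Card numbers encrypted at rest",
             "evidence": []},   # a gap -- exactly what the story is meant to surface
        ],
    }

    for defense in security_story["defenses"]:
        status = "supported" if defense["evidence"] else "GAP: no evidence"
        print("%s -> %s" % (defense["name"], status))
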
>>> >>> 
>>> >>> Thanks for the feedback,
>>> >>> 
>>> >>> --Jeff
>>> >>> 
>>> >>> 
>>> >>> 
>>> >>> On Fri, Aug 31, 2012 at 4:22 PM, Jerry Hoff <jerry at owasp.org> wrote:
>>>> >>>> Hi Jeff,
>>>> >>>> 
>>>> >>>> The main point I wanted to get across in this public forum is: as an industry and as an organization WE should have the culture of analytically proving what we recommend.
>>>> >>>> 
>>>> >>>> I might be alone here, but I believe too little and too much security are equally harmful.
>>>> >>>> 
>>>> >>>> Simply promoting an unbridled "culture of security" can hamper an organization's productive ability and put it at a disadvantage against competitors.
>>>> >>>> 
>>>> >>>> Too little security obviously comes with its own inherent set of risks.
>>>> >>>> 
>>>> >>>> I'm looking for us as an industry to  promote the *right* amount of measured security, in relation to the project and organization. In this way the overall business risk can be properly measured and hedged.
>>>> >>>> 
>>>> >>>> The Rugged way, as outlined in the PDF in question, seems to serve the same function as a Tony Robbins talk - hyperbolic demagoguery aimed at rousing the apathetic.
>>>> >>>> 
>>>> >>>> Engineering security in a cost effective way, however, requires relevant metrics, reproducible best practices, and quantifiable results.
>>>> >>>> 
>>>> >>>> So my plea is simply for more precision in the form of data upon which business decisions can be made.
>>>> >>>> 
>>>> >>>> Jerry
>>>> >>>> 
>>>> >>>> 
>>>> >>>> On Aug 30, 2012, at 11:19 PM, Jeff Williams <jeff.williams at owasp.org> wrote:
>>>> >>>> 
>>>>> >>>>> Hi John,
>>>>> >>>>> 
>>>>> >>>>> Confusion here is my fault. Of course all those practices have been
>>>>> >>>>> around for a long time.  If we're going to get all historical, I would
>>>>> >>>>> direct your attention to the Systems Security Engineering CMM (ISO
>>>>> >>>>> 21827) that I (and others) wrote back in the mid-to-late 90's.  We had
>>>>> >>>>> hundreds of large organizations implementing the model and it's now an
>>>>> >>>>> international standard.
>>>>> >>>>> 
>>>>> >>>>> I think you *massively* underestimate (insult, actually) OWASP if you
>>>>> >>>>> think we're all about "Top-10 lists, penetration testing, and
>>>>> >>>>> training." There is more information about implementing the practices
>>>>> >>>>> in the BSIMM at OWASP than in any other place. More presentations,
>>>>> >>>>> articles, talks, videos, and experts... including the guys who drafted
>>>>> >>>>> up the Rugged Handbook (Strawman).
>>>>> >>>>> 
>>>>> >>>>> There's an important lesson I learned creating the SSE-CMM: process
>>>>> >>>>> models don't change culture. There are plenty of good practices listed
>>>>> >>>>> in BSIMM and OpenSAMM, but I'm a little surprised that your "it's a
>>>>> >>>>> survey, it's a survey, it's a survey" mantra doesn't make the case for
>>>>> >>>>> itself that we need something more.
>>>>> >>>>> 
>>>>> >>>>> I know of many organizations that do the things in the BSIMM yet make
>>>>> >>>>> no progress because they're just going through the motions.  They
>>>>> >>>>> don't really believe. As I look forward, I don't think this approach
>>>>> >>>>> can possibly keep pace with advances that bring more complexity, more
>>>>> >>>>> communications, more critical information, and are increasingly
>>>>> >>>>> difficult to verify.
>>>>> >>>>> 
>>>>> >>>>> On the other hand, I've worked with a handful of organizations who I
>>>>> >>>>> DO consider Rugged.  But it's not clear how they got there or how it
>>>>> >>>>> works.  They don't do half of the BSIMM activities, but they get
>>>>> >>>>> consistently great results.  For example, I had a great conversation
>>>>> >>>>> today with the CEO of CabForward, a small ROR shop.  They build
>>>>> >>>>> software that's important and they committed to being Rugged because
>>>>> >>>>> they believe in it and want it. It's not being forced on them.
>>>>> >>>>> 
>>>>> >>>>> We're trying to figure it out.  We spent quite a lot of time studying
>>>>> >>>>> what might bootstrap this type of culture in organizations that are,
>>>>> >>>>> frankly, stuck in a negative, reactive hamster-wheel-of-pain with
>>>>> >>>>> regard to application security.  It's not about the list of
>>>>> >>>>> activities, but how you apply them to get to an organization that
>>>>> >>>>> WANTS to produce rugged code.
>>>>> >>>>> 
>>>>> >>>>> I think BSIMM and Rugged are perfectly suited to live side-by-side,
>>>>> >>>>> and I'm looking forward to our next get-together so we can argue about
>>>>> >>>>> it :-)
>>>>> >>>>> 
>>>>> >>>>> --Jeff
>>>>> >>>>> 
>>>>> >>>>> 
>>>>> >>>>> 
>>>>> >>>>> On Thu, Aug 30, 2012 at 10:11 PM, John Steven <John.Steven at owasp.org> wrote:
>>>>>> >>>>>> Jeff,
>>>>>> >>>>>> 
>>>>>> >>>>>> I think at best you've overstated.
>>>>>> >>>>>> 
>>>>>> >>>>>> [Role Ideas]
>>>>>> >>>>>> Cursory consideration shows Rugged's ideas for each role are covered
>>>>>> >>>>>> by BSIMM3 activities, and that overwhelming precedent often exists. A
>>>>>> >>>>>> notable exception I found is "Model your data instead of using
>>>>>> >>>>>> strings" [*MS]. Those unfamiliar with BSIMM should take note: several
>>>>>> >>>>>> BSIMM respondents implement the activities they're credited with by
>>>>>> >>>>>> using developers, as implied by the Rugged Handbook. When the study
>>>>>> >>>>>> refers to QA, it references the function, not necessarily the staff-in-role.
>>>>>> >>>>>> 
>>>>>> >>>>>> [Prairie Dogs]
>>>>>> >>>>>> Ken Van Wyk (and others) have written extensively on the concept of
>>>>>> >>>>>> integrating post-deployment personnel and monitoring in a feedback
>>>>>> >>>>>> loop affecting policy, architecture, and development. Several
>>>>>> >>>>>> organizations in multiple verticals do this effectively now. I'm glad
>>>>>> >>>>>> this notion has gained buzz under the term DevOps. I, as well as many
>>>>>> >>>>>> of the organizations I've had the pleasure to work with, believe that
>>>>>> >>>>>> several advantages to both development and operations come from their
>>>>>> >>>>>> interaction.
>>>>>> >>>>>> 
>>>>>> >>>>>> [Ant Colonies]
>>>>>> >>>>>> I covered, published, and presented the concept of creating a
>>>>>> >>>>>> multi-stakeholder security initiative as early as 2003 at S3-con under
>>>>>> >>>>>> the name of ESSF (*link no longer available). Look at material I
>>>>>> >>>>>> published again in '06 [*EF], and McGraw's "Software Security" [*SS]
>>>>>> >>>>>> and you'll find the five (5) roles described on page 11 of the Rugged
>>>>>> >>>>>> Handbook [*RR]. Many organizations which have progressed past the
>>>>>> >>>>>> "one-man-shop" model in multiple verticals integrate these
>>>>>> >>>>>> stakeholders effectively now.
>>>>>> >>>>>> 
>>>>>> >>>>>> [Musk Oxen]
>>>>>> >>>>>> Again, easily tied to BSIMM activities and a key component of
>>>>>> >>>>>> organizations' strategies. Though, this topic's coverage in the
>>>>>> >>>>>> handbook is naive for organizations with more than one web app.
>>>>>> >>>>>> 
>>>>>> >>>>>> [Beavers]
>>>>>> >>>>>> Pravir Chandra fought vehemently (and in my opinion knowledgeably and
>>>>>> >>>>>> presciently) for what became "Software Environment" in the BSIMM SSF
>>>>>> >>>>>> and in a form I prefer: "Deployment" in OpenSAMM.
>>>>>> >>>>>> 
>>>>>> >>>>>> [Honeybadgers, BigHorns, and Humans]
>>>>>> >>>>>> Again, well-trodden territory.
>>>>>> >>>>>> 
>>>>>> >>>>>> [Sum]
>>>>>> >>>>>> I'm excited folk within the OWASP community have come to the
>>>>>> >>>>>> conclusion that Application Security is more than just Top-10 lists,
>>>>>> >>>>>> penetration testing, and training. I'm thrilled that a focus on
>>>>>> >>>>>> developers has emerged and that this focus has not precluded
>>>>>> >>>>>> architectural, operational, and executive stakeholder outreach.
>>>>>> >>>>>> 
>>>>>> >>>>>> For those who like the ideas prescribed to each role in the Rugged
>>>>>> >>>>>> Handbook, I'd suggest considering all the activities in BSIMM (*CC). In
>>>>>> >>>>>> its SSF you may find some other pursuits well-suited to your
>>>>>> >>>>>> particular organization, maturity, and strengths. You won't find
>>>>>> >>>>>> an effective explanation of HOW to do these things, though -- not there. The
>>>>>> >>>>>> good news is that each activity has been found in *at least* three (3)
>>>>>> >>>>>> organizations. This means that someone has likely written something
>>>>>> >>>>>> public you can find and consume to help you with the more specific
>>>>>> >>>>>> "who", "what", and "how".
>>>>>> >>>>>> 
>>>>>> >>>>>> The notion of a security story may be a solid organizing framework for
>>>>>> >>>>>> those just beginning their journey from one-man-shop to three-man
>>>>>> >>>>>> practice. I expect that if it's to gain traction, Rugged's notion of
>>>>>> >>>>>> "proving it" is going to have to mature dramatically and explicitly
>>>>>> >>>>>> weave itself amongst the "ideas", "budget", "measurement", and
>>>>>> >>>>>> "outreach" dimensions. Personal experience has shown this is where
>>>>>> >>>>>> many trip up. So, to me, fleshing these interactions out further is
>>>>>> >>>>>> essential.
>>>>>> >>>>>> 
>>>>>> >>>>>> [Assurance - Bonus]
>>>>>> >>>>>> And this leads us back to the Handbook's largely implied notion of
>>>>>> >>>>>> "assurance" that Jeff mentions specifically in his email. His quote
>>>>>> >>>>>> from Training Day aptly hits why most "mature" and "expensive"
>>>>>> >>>>>> security practices (and vendors, for that matter) have much room for
>>>>>> >>>>>> improvement. Yet, despite a large body of existing software assurance
>>>>>> >>>>>> work *[SA], I don't see much reference at all in the Handbook to what
>>>>>> >>>>>> we can do to 1) specifically and concretely improve traditional
>>>>>> >>>>>> activities or 2) augment/replace those activities with ones that will
>>>>>> >>>>>> produce a more risk-driven and iron-clad assurance case. This, I
>>>>>> >>>>>> believe, is a "brass ring" worth reaching for. However,
>>>>>> >>>>>> higher-assurance software typically comes with a level of formality
>>>>>> >>>>>> beyond the appetite of commercial software.
>>>>>> >>>>>> 
>>>>>> >>>>>> [insert Mars Lander analogy here].
>>>>>> >>>>>> 
>>>>>> >>>>>> -jOHN
>>>>>> >>>>>> --
>>>>>> >>>>>> Phone: 703.727.4034
>>>>>> >>>>>> Rss: http://feeds.feedburner.com/M1splacedOnTheWeb
>>>>>> >>>>>> 
>>>>>> >>>>>> * [CC] - The documentation is open source and you can download it,
>>>>>> >>>>>> edit it, and fork it for your own use as you see fit.
>>>>>> >>>>>> * [EF] - https://buildsecurityin.us-cert.gov/bsi/568-BSI/version/2/part/4/data/Steven_IEEE_SP.pdf?branch=main&language=default
>>>>>> >>>>>> * [MS] - Developer idea #2 - A solid suggestion, and one I've made and
>>>>>> >>>>>> actually implemented in several enterprises. This, however, is less of a
>>>>>> >>>>>> cultural change and more of a secure design pattern. I prefer an earlier
>>>>>> >>>>>> name: "Maintain type-safety of data". Strings are, of course, only one
>>>>>> >>>>>> of many offensive catch-all types that defeat such safety (a minimal
>>>>>> >>>>>> sketch follows these notes).
>>>>>> >>>>>> * [RR] - technology execs, testers, developers, architects, and security folk
>>>>>> >>>>>> * [SA] - Here I'm referring to the "Software Assurance" work of the
>>>>>> >>>>>> '70's and 80's and even 90's, not the US Government's SwAF.
>>>>>> >>>>>> * [SS] - Chapter 10, in particular, though throughout.
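
To make the [MS] note above concrete - "model your data instead of using strings" / "maintain type-safety of data" - here is a minimal sketch; the AccountId type and its validation rule are invented for illustration:

    # Instead of passing a raw string around, wrap the value in a small domain type
    # that validates on construction, so invalid data can't flow through the system.
    # AccountId and its rule (alphanumeric, 8-12 chars) are invented for this example.
    import re

    class AccountId:
        _PATTERN = re.compile(r"^[A-Za-z0-9]{8,12}$")

        def __init__(self, raw):
            if not self._PATTERN.match(raw):
                raise ValueError("invalid account id")
            self.value = raw

        def __str__(self):
            return self.value

    def lookup_balance(account_id):
        # The type guarantees account_id was validated; no string re-checking here.
        assert isinstance(account_id, AccountId)
        return 0  # placeholder

    lookup_balance(AccountId("ABC12345"))        # ok
    # lookup_balance(AccountId("1; DROP TABLE"))  # raises ValueError at the boundary
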
>>>>>> >>>>>> 
>>>>>> >>>>>> 
>>>>>> >>>>>> On Thu, Aug 30, 2012 at 4:41 PM, Jeff Williams <jeff.williams at owasp.org> wrote:
>>>>>>> >>>>>>> Hi Jerry,
>>>>>>> >>>>>>> 
>>>>>>> >>>>>>> I don't see Rugged as a reformulation of *anything* people are already doing.  Many traditional activities are valuable, but there are an awful lot that don't end up producing any value.  That is, they're disconnected from what the business needs.
>>>>>>> >>>>>>> 
>>>>>>> >>>>>>> Rugged is focused on generating a software development culture that produces security in a tangible defensible way. As Denzel said in Training Day, "it's not what you know -- it's what you can prove"
>>>>>>> >>>>>>> 
>>>>>>> >>>>>>> Think about the security story for any website on the planet.  I guarantee if you capture it, it will reveal gaps.  And over time it will drive the communication you need to improve your organization.  This is the "visible" that OWASP should champion.
>>>>>>> >>>>>>> 
>>>>>>> >>>>>>> I hope you'll try it.  I'm happy to help anyone interested create a security story and start "Thinking in Rugged"
>>>>>>> >>>>>>> 
>>>>>>> >>>>>>> --Jeff
>>>>>>> >>>>>>> 
>>>>>>> >>>>>>> 
>>>>>>> >>>>>>> 
>>>>>>> >>>>>>> On Aug 30, 2012, at 4:18 PM, "Tom Brennan" <tomb at owasp.org> wrote:
>>>>>>> >>>>>>> 
>>>>>>>> >>>>>>>> Good feedback -- cc to the guys leading the effort.
>>>>>>>> >>>>>>>> 
>>>>>>>> >>>>>>>> A suggestion was already floated on getting it up on a wiki -- hmmm, I know a group that has a wiki *cough* https://www.owasp.org/index.php/Category:OWASP_RuggedSoftware -- for community review and contribution to sections, to take inbound feedback for Version 5.0, or for another working group that is open to anyone who wants to attend and contribute.
>>>>>>>> >>>>>>>> 
>>>>>>>> >>>>>>>> Will relay what we hear to the list; poke the badgers, bears and ice-T @ http://www.ruggedsoftware.org/about.html
>>>>>>>> >>>>>>>> 
>>>>>>>> >>>>>>>> 
>>>>>>>> >>>>>>>> 
>>>>>>>> >>>>>>>> -----Original Message-----
>>>>>>>> >>>>>>>> From: Jerry Hoff [mailto:jerry at owasp.org]
>>>>>>>> >>>>>>>> Sent: Thursday, August 30, 2012 2:24 PM
>>>>>>>> >>>>>>>> To: tomb at owasp.org
>>>>>>>> >>>>>>>> Cc: <owasp-leaders at lists.owasp.org>
>>>>>>>> >>>>>>>> Subject: Re: [Owasp-leaders] 2012 Rugged Summit
>>>>>>>> >>>>>>>> 
>>>>>>>> >>>>>>>> Hello all,
>>>>>>>> >>>>>>>> 
>>>>>>>> >>>>>>>> Nice work! Although my first reaction was: Honey badgers? Ice-T? Seriously?
>>>>>>>> >>>>>>>> 
>>>>>>>> >>>>>>>> I think this is an interesting document - but I hope that, as an organization and as an industry, we focus on reproducible best practices, quantitative metrics, and real data behind works such as this one, rather than yet another reformulation / restatement of the same basic advice we have been preaching for years.
>>>>>>>> >>>>>>>> 
>>>>>>>> >>>>>>>> A guide such as this coming out of actual metrics and real-world best practices would be much more appealing. The blurbish case studies at the end should have driven the document, instead of the other way around.
>>>>>>>> >>>>>>>> 
>>>>>>>> >>>>>>>> Not trying to be antagonistic - just food for thought.
>>>>>>>> >>>>>>>> 
>>>>>>>> >>>>>>>> Jerry
>>>>>>>> >>>>>>>> 
>>>>>>>> >>>>>>>> 
>>>>>>>> >>>>>>>> 
>>>>>>>> >>>>>>>> 
>>>>>>>> >>>>>>>> 
>>>>>>>> >>>>>>>> On Aug 30, 2012, at 1:49 PM, "Tom Brennan" <tomb at owasp.org> wrote:
>>>>>>>> >>>>>>>> 
>>>>>>>>> >>>>>>>>> A Software Security Philosophy *RELEASED* 2012-Aug is creating quite a buzz in a very short time -- this was a HOT TOPIC at last week's DHS / US-CERT event in the USA.
>>>>>>>>> >>>>>>>>> 
>>>>>>>>> >>>>>>>>> http://www.ruggedsoftware.org/docs/RuggedHandbookv4.pdf
>>>>>>>>> >>>>>>>>> 
>>>>>>>>> >>>>>>>>> In summary, a group of well-known participants spent a week together developing the details; kudos to them for volunteering their time, with attribution to OWASP:
>>>>>>>>> >>>>>>>>> 
>>>>>>>>> >>>>>>>>>  Justin Berman
>>>>>>>>> >>>>>>>>>  John Bernero
>>>>>>>>> >>>>>>>>>  Nick Coblentz
>>>>>>>>> >>>>>>>>>  Josh Corman
>>>>>>>>> >>>>>>>>>  Gene Kim
>>>>>>>>> >>>>>>>>>  Jason Li
>>>>>>>>> >>>>>>>>>  John Pavone
>>>>>>>>> >>>>>>>>>  Ken van Wyk
>>>>>>>>> >>>>>>>>>  John Wilander
>>>>>>>>> >>>>>>>>>  Jeff Williams
>>>>>>>>> >>>>>>>>>  Chris Wysopal
>>>>>>>>> >>>>>>>>> 
>>>>>>>>> >>>>>>>>> If you would like to get involved see:  http://www.ruggedsoftware.org/about.html
>>>>> >>>>> _______________________________________________
>>>>> >>>>> OWASP-Leaders mailing list
>>>>> >>>>> OWASP-Leaders at lists.owasp.org
>>>>> >>>>> https://lists.owasp.org/mailman/listinfo/owasp-leaders
>> >> 

