[Owasp-leaders] SecDevOps Risk Workflow Book (please help with your feedback)

Jason Johnson jason.johnson at p7n.net
Sun Nov 6 21:33:34 UTC 2016

I agree with all that.

On November 5, 2016 12:41:12 PM CDT, Sherif Mansour <sherif.mansour at owasp.org> wrote:
>I am, and I think I know a few others who might be interested.
>Although this needs a bit more thought, as it has many moving parts:
>   - Looking for the same vulnerability types across different languages
>   - Looking for language-specific and framework-specific vulnerabilities
>   (MVC frameworks like Spring, Django, RoR, and front-end frameworks like
>   React, AngularJS, etc.)
>   - Having the relevant automation functionality + rule suppression
>   conditions in place for it to work for different teams.
>One simple (and I do mean very simple) rule is to look for common
>strings like "Begin Private Key" etc., which indicate hardcoded
>keys/API secrets. I have seen many static code analysers miss those
>and instead flag "password =", which raises a ridiculous number of
>false positives.
>On Sat, Nov 5, 2016 at 5:03 PM, Jason Johnson <jason.johnson at p7n.net>
>wrote:
>> True, I'm trying to make it a standard at work. I would like to help
>> anyone who is interested with the rules.
>> On November 5, 2016 11:29:41 AM CDT, Sherif Mansour <
>> sherif.mansour at owasp.org> wrote:
>>> +1, an OWASP rule pack + plugin for Sonar feels like a worthwhile
>>> effort, given that it is open source and scans many languages, which
>>> is the issue with many static code analyzers, i.e. they are
>>> language-specific.
>>> On Saturday, 5 November 2016, Jason Johnson <jason.johnson at p7n.net>
>>> wrote:
>>>> Yeah, we mostly use the defaults and write some of our own. Would be
>>>> good to make an OWASP Sonar plugin, with something like the dependency
>>>> plugin. We don't pay for the VB.NET stuff. Mainly the work is
>>>> improving the poor descriptions of the default rules.
>>>> On November 5, 2016 10:59:48 AM CDT, Sherif Mansour <
>>>> sherif.mansour at owasp.org> wrote:
>>>>> @Jason, do you have security rules for Sonar?
>>>>> On Sat, Nov 5, 2016 at 3:27 PM, Jason Johnson
><jason.johnson at p7n.net>
>>>>> wrote:
>>>>>> We are currently working through this now. I am curious how the
>>>>>> workflow is set up in Jira. We use that also, and I would like to
>>>>>> capture security results and track the changes. We use Sonar and the
>>>>>> OWASP dependency checker to scan code. Sonar tracks all the
>>>>>> vulnerable code crap. DevOps is a struggle on the security tracking
>>>>>> side for sure. I too am looking for input.
>>>>>> On November 5, 2016 10:08:08 AM CDT, Sherif Mansour <
>>>>>> sherif.mansour at owasp.org> wrote:
>>>>>>> +Francois
>>>>>>> Hey guys,
>>>>>>> So we are now hitting on some important and amazing points, and
>>>>>>> thanks for sharing all this.
>>>>>>> In order to help, and in the spirit of sharing, I have attached the
>>>>>>> deck "Security in a Continuous Delivery World"; see slides 20 & 21
>>>>>>> of the attached slide deck. This was from the first OWASP London
>>>>>>> chapter meeting this year.
>>>>>>> What I will do is respond to some great points made here, and
>>>>>>> propose a few things we might work on.
>>>>>>> *So first, Mario:*
>>>>>>> Yes, and a thousand times yes, we need fields like the ones you
>>>>>>> added in order to do metrics and provide enough information to the
>>>>>>> developer for the ticket to be useful.
>>>>>>> For each ticket I wrote down 3 guiding principles to use as a "North
>>>>>>> Star":
>>>>>>>    - *Unique* - No duplicate tickets
>>>>>>>    - *Useful* - Improves the security and quality of the application
>>>>>>>    - *Actionable* - All necessary information is in the ticket
>>>>>>> So I had custom fields that looked like this:
>>>>>>> [image: Inline image 1]
>>>>>>> *In order to create metrics like this:*
>>>>>>> [image: Inline image 2]
>>>>>>> I did not realise you could add detailed fields like that, including
>>>>>>> PoC fields, which is perfect for this.
>>>>>>> Where possible I wanted to add URL, domain, subdomain, and the
>>>>>>> impacted parameter(s).
>>>>>>> For static code analysis you need to know the app, file, and line
>>>>>>> number, plus alternate paths/flows for the same issue (i.e. the
>>>>>>> sources and sinks for the vulnerability).
>>>>>>> @Mario, on the point that there should never be FPs raised as
>>>>>>> tickets: I agree these should be vetted and tweaked so that never
>>>>>>> happens. However, there is no guarantee that mistakes will not be
>>>>>>> made, and in security more often than not mistakes are made, so it
>>>>>>> would help to have a resolution state for false positives. It is
>>>>>>> also an acknowledgment of cooperation between the devs and the
>>>>>>> security team, and a commitment to improvement.
>>>>>>> I.e. we know crap happens; in security crap/mistakes will happen and
>>>>>>> we need to improve on it.
>>>>>>> *Issue #1*
>>>>>>> @Dinis @Mario @Simon, the challenge is when you have, say, 334x XSS
>>>>>>> findings: you do not want to create hundreds of tickets, so you want
>>>>>>> to group them into one.
>>>>>>> On the other hand, you need a way of tracking which issues have
>>>>>>> already been raised as a unique ticket or as part of a grouped
>>>>>>> ticket, so that you do not constantly spam the developers.
>>>>>>> *Possible solution: *The tool that found the results needs to have
>>>>>>> the option to "group" issues into a single ticket, but also to track
>>>>>>> each issue over time so it can inform the bug tracker whether the
>>>>>>> issue has been resolved or not.
>>>>>>> Additionally, it needs to NOT raise an issue in the bug tracker if
>>>>>>> it is already raised and the developer is working on it (a rough
>>>>>>> de-duplication sketch follows below).
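>>>>>>> A rough sketch of that de-duplication (the fingerprint scheme and
>>>>>>> the local "already raised" store are illustrative assumptions):
>>>>>>>
>>>>>>>     import hashlib
>>>>>>>     import json
>>>>>>>     from pathlib import Path
>>>>>>>
>>>>>>>     SEEN_FILE = Path("raised_issues.json")  # illustrative store
>>>>>>>
>>>>>>>     def fingerprint(finding):
>>>>>>>         # Stable identity for a finding: the same rule + file +
>>>>>>>         # parameter collapses re-scans into one tracked issue.
>>>>>>>         key = "|".join([finding["rule"], finding["file"],
>>>>>>>                         finding.get("param", "")])
>>>>>>>         return hashlib.sha256(key.encode()).hexdigest()
>>>>>>>
>>>>>>>     def new_findings(findings):
>>>>>>>         seen = (json.loads(SEEN_FILE.read_text())
>>>>>>>                 if SEEN_FILE.exists() else {})
>>>>>>>         fresh = [f for f in findings if fingerprint(f) not in seen]
>>>>>>>         for f in fresh:
>>>>>>>             seen[fingerprint(f)] = f["rule"]
>>>>>>>         SEEN_FILE.write_text(json.dumps(seen, indent=2))
>>>>>>>         # Only these get raised (grouped or not) as new tickets.
>>>>>>>         return fresh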
>>>>>>> *Issue #2*
>>>>>>> @Mario, each org is a bit different, so they might not score or
>>>>>>> track the same attributes; we might want to consider the lowest
>>>>>>> common denominator of what should be in there in order for the
>>>>>>> tickets to be unique, useful, and actionable.
>>>>>>> *Possible solution: *Document a set of guiding principles and
>>>>>>> requirements. Publish an ideal/boilerplate Jira project that meets
>>>>>>> these requirements so that 1) tech teams have something ready-made
>>>>>>> to customize from, and 2) they have a set of principles to know what
>>>>>>> to customize towards.
>>>>>>> *Issue #3*
>>>>>>> @Simon, I have been thinking about the false positive thing for
>>>>>>> about a year now. In order to get false positive data, the tool (I
>>>>>>> am just going to use ZAP in this example to make things easier)
>>>>>>> would need to do one of two things:
>>>>>>>    1. Have a facility for the user to mark false positives from
>>>>>>>    within ZAP, or...
>>>>>>>    2. Connect to the bug tracker and identify which of the issues
>>>>>>>    ZAP raised are (or are not) marked as false positives there.
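>>>>>>> For option 2, a minimal sketch of that bug-tracker query (the JQL,
>>>>>>> the "zap" label, and the "False Positive" resolution name are
>>>>>>> assumptions about how the Jira project would be set up):
>>>>>>>
>>>>>>>     import requests
>>>>>>>
>>>>>>>     JIRA = "https://jira.example.org"  # hypothetical instance
>>>>>>>     AUTH = ("svc-zap", "app-password")
>>>>>>>
>>>>>>>     # Assumes ZAP-raised tickets carry a "zap" label and the
>>>>>>>     # project defines a "False Positive" resolution, as above.
>>>>>>>     jql = 'labels = zap AND resolution = "False Positive"'
>>>>>>>     resp = requests.get(f"{JIRA}/rest/api/2/search",
>>>>>>>                         params={"jql": jql, "fields": "summary"},
>>>>>>>                         auth=AUTH)
>>>>>>>     resp.raise_for_status()
>>>>>>>     for issue in resp.json()["issues"]:
>>>>>>>         print(issue["key"], issue["fields"]["summary"])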
>>>>>>> *Now that you have the data, then what do you do with it?*
>>>>>>> To @Mario's point: do I really want to ship my security issue data
>>>>>>> to somewhere else? In this case there are a few things that can be
>>>>>>> done:
>>>>>>>    1. Keep the data local to the org, and simply use the info as
>>>>>>>    rules to suppress future false positives (see the sketch after
>>>>>>>    this list):
>>>>>>>       1. e.g. The following cookies do not need to be set to
>>>>>>>       Secure/HttpOnly, etc.
>>>>>>>       2. e.g. The following pages/subdomains can be iframed
>>>>>>>       3. e.g. The following domain is a static domain and we can
>>>>>>>       have CORS set to the "*" wildcard
>>>>>>>       4. Ok I'll stop now :-)
>>>>>>>    2. Ask the user if it is OK to collect diagnostic data, and make
>>>>>>>    it explicit what we are asking for, e.g.:
>>>>>>>       1. we will only ask for how many times a specific rule has
>>>>>>>       triggered a false positive (but not the actual content of the
>>>>>>>       request/response)
>>>>>>>    3. Finally, you can give the tech team the option to send more
>>>>>>>    verbose information, if they are happy to do so. Academics and
>>>>>>>    open-source tools might be an example.
>>>>>>>       1. There has to be a very clear feature that carefully
>>>>>>>       explains to them what they are actually doing, so they can't
>>>>>>>       turn it on by accident.
>>>>>>>    4. I have been thinking about Machine Learning and other AI
>>>>>>>    techniques in this use case to improve the quality of ZAP; there
>>>>>>>    are two areas where it can work:
>>>>>>>       1. Filtering false positives:
>>>>>>>          1. Create a baseline model where ZAP takes all the data
>>>>>>>          contributed by the community, leverages a machine learning
>>>>>>>          algorithm such as logistic regression, and uses that to
>>>>>>>          "auto-filter" findings that it thinks are false positives
>>>>>>>          (see the toy sketch after this list).
>>>>>>>          2. Create a local model which takes the individual
>>>>>>>          organisation's data and does pretty much the same thing,
>>>>>>>          only in this case the data doesn't leave the organisation.
>>>>>>>          3. I think Spark can be useful for the baseline model, and
>>>>>>>          I have played around with it a little bit.
>>>>>>>       2. Improving the scanner's ability to find issues:
>>>>>>>          1. Ahhh... this is going to be tough; my first thought was
>>>>>>>          to leverage neural networks such as TensorFlow's deep
>>>>>>>          learning, but I have never used it.
>>>>>>>          2. I can see it working for SQLi and a few others fairly
>>>>>>>          well, but this will require a lot of thought.
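>>>>>>> On point 1, a minimal sketch of what such local suppression rules
>>>>>>> could look like (the rule names and matching scheme are
>>>>>>> illustrative):
>>>>>>>
>>>>>>>     # Illustrative suppression rules, kept local to the org.
>>>>>>>     SUPPRESSIONS = [
>>>>>>>         {"rule": "cookie-without-secure-flag", "cookie": "lang-pref"},
>>>>>>>         {"rule": "missing-anti-clickjacking", "path": "/embed/widget"},
>>>>>>>         {"rule": "cors-wildcard", "domain": "static.example.org"},
>>>>>>>     ]
>>>>>>>
>>>>>>>     def is_suppressed(finding):
>>>>>>>         # A finding is suppressed when a rule of the same name
>>>>>>>         # matches on every other attribute it specifies.
>>>>>>>         for s in SUPPRESSIONS:
>>>>>>>             if s["rule"] != finding.get("rule"):
>>>>>>>                 continue
>>>>>>>             if all(finding.get(k) == v
>>>>>>>                    for k, v in s.items() if k != "rule"):
>>>>>>>                 return True
>>>>>>>         return False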
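>>>>>>> And on point 4.1, a toy sketch of the logistic-regression filter
>>>>>>> (scikit-learn; the alert texts and labels are made-up training
>>>>>>> data, and real features would need to be far richer):
>>>>>>>
>>>>>>>     from sklearn.feature_extraction.text import TfidfVectorizer
>>>>>>>     from sklearn.linear_model import LogisticRegression
>>>>>>>     from sklearn.pipeline import make_pipeline
>>>>>>>
>>>>>>>     # Toy corpus: alert descriptions, label 1 = false positive.
>>>>>>>     alerts = [
>>>>>>>         "password = found in unit test fixture",
>>>>>>>         "BEGIN PRIVATE KEY found in src/keys/prod.pem",
>>>>>>>         "reflected parameter echoed unencoded in response",
>>>>>>>         "password = found in documentation example",
>>>>>>>     ]
>>>>>>>     labels = [1, 0, 0, 1]
>>>>>>>
>>>>>>>     model = make_pipeline(TfidfVectorizer(), LogisticRegression())
>>>>>>>     model.fit(alerts, labels)
>>>>>>>
>>>>>>>     # "Auto-filter": score new alerts; a high P(FP) gets suppressed
>>>>>>>     # from triage rather than raised as a ticket.
>>>>>>>     print(model.predict_proba(["password = in a unit test"])[:, 1])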
>>>>>>> *Next steps?*
>>>>>>> *@Dinis*, I think you have got quite a bit of info to think about
>>>>>>> and to incorporate into the draft, so you might want to take some
>>>>>>> time to work out what you think about all this.
>>>>>>> *@all*, do you think it makes sense to 1) set some guiding
>>>>>>> principles, and 2) build a Jira project with all this info to
>>>>>>> leverage, with the goal of enabling tech teams to:
>>>>>>>    - Have something ready-made to customize from
>>>>>>>    - Have a set of principles to know what to customize towards.
>>>>>>> *@Simon*, this might be a bit further in the future, but it would be
>>>>>>> great if there were a way to configure ZAP to query a bug tracker
>>>>>>> for such information and use the info to improve either the local
>>>>>>> instance of ZAP or (with permission) gather some statistics to help
>>>>>>> improve the overall quality of ZAP.
>>>>>>> -Sherif
Sent from my phone. Please excuse my brevity.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.owasp.org/pipermail/owasp-leaders/attachments/20161106/5e1af04d/attachment-0001.html>

More information about the OWASP-Leaders mailing list