[Owasp-leaders] SecDevOps Risk Workflow Book (please help with your feedback)

Dinis Cruz dinis.cruz at owasp.org
Sun Nov 6 23:54:15 UTC 2016


Amazing feedback, Sherif (for reference I'm tracking it here
https://github.com/DinisCruz/Book_SecDevOps_Risk_Workflow/issues/170 )

On the point you made below on the need to consolidate issues, YES,
absolutely. I always find that the key elements of those JIRA tickets are:

a) common sense
b) custom scripts that perform some transformations and prevent a salad of
issues from being created in bulk (in Jira)

I'm finding that it is actually better to create really small and
actionable issues (linked to a bigger one that is 'Risk Accepted' until all
variations have been fixed)

Then, as those small issues are fixed, more are added to the Kanban queue.
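
For reference, here is a minimal sketch of the kind of transformation
script I mean, assuming the Python 'jira' client; the project key, field
values and grouping key are illustrative, not the exact scripts I use:

    from collections import defaultdict
    from jira import JIRA

    jira = JIRA(server='https://jira.example.com',
                basic_auth=('sec-bot', 'secret'))

    def consolidate(findings):
        # Group raw scanner findings by rule and app, so a bulk import
        # doesn't create a salad of near-identical issues
        groups = defaultdict(list)
        for f in findings:
            groups[(f['rule_id'], f['app'])].append(f)

        for (rule_id, app), items in groups.items():
            # One bigger parent issue per group, 'Risk Accepted' until
            # all variations have been fixed
            parent = jira.create_issue(fields={
                'project': {'key': 'SEC'},
                'issuetype': {'name': 'Task'},
                'summary': '%s in %s (%d variations)' % (rule_id, app, len(items)),
                'description': 'Risk Accepted until all variations are fixed',
            })
            # Small, actionable sub-tasks for the first few variations;
            # as these get fixed, more are added to the Kanban queue
            for f in items[:5]:
                jira.create_issue(fields={
                    'project': {'key': 'SEC'},
                    'issuetype': {'name': 'Sub-task'},
                    'parent': {'key': parent.key},
                    'summary': '%s: %s' % (rule_id, f['url']),
                })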

Dinis


On 5 November 2016 at 15:08, Sherif Mansour <sherif.mansour at owasp.org>
wrote:

> +Francois
>
> Hey guys,
>
> So we are now hitting on some important and amazing points, and thanks for
> sharing all this.
> In order to help, and in the spirit of sharing, I have attached my deck
> "Security in a Continuous Delivery World"; see slides 20 & 21. This was
> from the first OWASP London chapter meeting this year.
> What I will do is respond to some great points made here, and also propose
> a few things we might work on.
>
> *So first Mario:*
> Yes, and a thousand times yes: we need fields like the ones you have added
> in order to do metrics and to give the developer enough information for
> the ticket to be useful.
> For each ticket I wrote down 3 guiding principles to use as a "North Star":
>
>    - *Unique* - No duplicate tickets
>    - *Useful* - Improves the security and quality of the software
>    - *Actionable* - All necessary information is in the ticket
>
> So I had custom fields that looked like this:
> [image: Inline image 1]
>
> *In order to create metrics like this:*
> [image: Inline image 2]
>
>
> I did not realise you could add detailed fields like request/response and
> PoC, which is perfect for this.
>
> Where possible I wanted to add URL, domain, subdomain, and impacted
> parameter(s). For static code analysis you need to know the app, file, and
> line number, plus alternate paths/flows for the same issue (i.e. the
> sources and sinks for the vulnerability).
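>
> (As a rough sketch, a ticket payload carrying those fields might look like
> the following; the customfield ids are hypothetical and would differ per
> Jira instance:)
>
>     dast_ticket = {
>         'project': {'key': 'SEC'},
>         'summary': 'Reflected XSS on /search',
>         'customfield_10100': 'https://app.example.com/search',  # URL
>         'customfield_10101': 'app.example.com',                 # domain/subdomain
>         'customfield_10102': 'q',                               # impacted parameter
>         'customfield_10103': 'GET /search?q=... -> 200 OK',     # request/response PoC
>     }
>
>     sast_ticket = {
>         'project': {'key': 'SEC'},
>         'summary': 'SQL injection in OrderDao',
>         'customfield_10110': 'orders-service',                  # app
>         'customfield_10111': 'src/dao/OrderDao.java',           # file
>         'customfield_10112': 142,                               # line number
>         'customfield_10113': ['controller -> service -> dao'],  # alternate flows (source -> sink)
>     }
>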
> @Mario, on the point that there should never be FPs raised as Jira
> tickets, I agree the scans should be vetted and tweaked to never do that.
> However, there is no guarantee that mistakes will not be made, and in
> security more often than not mistakes are made, so it would help to have a
> resolution state for false positives. It is also an acknowledgment of
> cooperation between the devs and the security team, and a commitment to
> improvement.
> I.e. we know crap happens, in security crap/mistakes will happen, and we
> need to improve on it.
>
> *Issue #1*
> @Dinis @Mario @Simon, the challenge is when you have, say, 334x XSS: you
> do not want to create hundreds of tickets, so you want to consolidate them
> into one.
> On the other hand, you need a way of tracking which issues have already
> been raised, either as a unique ticket or as part of one, so that you do
> not constantly spam the developers.
> *Possible solution: *The tool that found the results needs to have the
> option to "group" issues into a single ticket, but also to track each
> issue over time so it can inform the bug tracker whether the issue has
> been resolved or not.
> Additionally, it needs to NOT raise an issue in the bug tracker if it is
> already raised and the developer is working on it.
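>
> (A minimal sketch of that tracking logic, assuming each finding can be
> reduced to a stable fingerprint; the two helper functions are
> hypothetical:)
>
>     import hashlib
>     import json
>
>     def fingerprint(finding):
>         # Stable identity for a finding across scans
>         key = [finding['rule_id'], finding['url'], finding['parameter']]
>         return hashlib.sha256(json.dumps(key).encode()).hexdigest()
>
>     def sync(findings, tracker):
>         # tracker maps fingerprint -> ticket key: only raise what is new,
>         # and tell the bug tracker about anything that stopped appearing
>         seen = set()
>         for f in findings:
>             fp = fingerprint(f)
>             seen.add(fp)
>             if fp not in tracker:
>                 tracker[fp] = raise_or_group(f)   # hypothetical helper
>         for fp in set(tracker) - seen:
>             mark_resolved(tracker[fp])            # hypothetical helper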
>
> *Issue #2*
> @Mario, each org is a bit different, so they might not score or want the
> same attributes. We might therefore want to consider the lowest common
> denominator of what should be in there in order for the tickets to be
> unique, useful, and actionable.
> *Possible solution: *Document a set of guiding principles and
> requirements. Publish an ideal/boilerplate Jira project that meets these
> requirements, so that 1) tech teams have something ready-made to customize
> off of, and 2) they have a set of principles to know what to customize
> towards.
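>
> (One way to pin that down is a tiny schema that any org's Jira project has
> to satisfy, mapping each principle to a minimum set of fields; this is
> just a sketch, the field names are illustrative:)
>
>     REQUIRED_FIELDS = {
>         'unique':     ['fingerprint'],                     # no duplicate tickets
>         'useful':     ['severity', 'vulnerability_type'],  # improves security/quality
>         'actionable': ['location', 'evidence', 'remediation_advice'],
>     }
>
>     def is_compliant(ticket):
>         # True if the ticket carries every lowest-common-denominator field
>         return all(field in ticket
>                    for fields in REQUIRED_FIELDS.values()
>                    for field in fields)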
>
> *Issue #3*
> @Simon, I have been thinking about the false positive thing for about a
> year now. In order to get false positive data, the tool (I am just going
> to use ZAP in this example to make things easier) would need to do one of
> two things:
>
>    1. Have a facility for the user to mark false positives in ZAP itself,
>    or..
>    2. The tool would need to be able to connect to the bug tracker and
>    identify which of the issues ZAP raised are marked as false positives
>    there.
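>
> (Option 2 could be as simple as this sketch, assuming the project has a
> "False Positive" resolution and tags ZAP-raised issues with a label; both
> are assumptions about the Jira setup:)
>
>     from jira import JIRA
>
>     jira = JIRA(server='https://jira.example.com',
>                 basic_auth=('sec-bot', 'secret'))
>
>     # Issues ZAP raised that the devs resolved as false positives
>     fps = jira.search_issues(
>         'project = SEC AND labels = zap AND resolution = "False Positive"')
>     fp_rules = [issue.fields.summary for issue in fps]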
>
> *Now that you have the data, what do you do with it?*
>
> To @Mario's point: do I really want to ship my security issues data
> somewhere else? In this case there are a few things that can be done:
>
>
>    1. Keep the data local to the org, and simply use the info as rules to
>    suppress future false positives:
>       1. e.g. The following cookies do not need to be set to SECURE etc..
>       2. e.g. The following pages/sub domain can be iframed
>       3. e.g. The following domain is a static domain and we can have
>       CORS set to "*" wildcard
>       4. Ok I'll stop now :-)
>    2. Ask the user if it's ok to collect diagnostic data, and make it
>    explicit what we are asking for, e.g.
>       1. we will only ask for how many times a specific rule triggered a
>       false positive (but not the actual content of the request/response)
>    3. Finally, you can give the tech team the option to send more verbose
>    information, if they are happy to do so. Academic and open source
>    projects might be an example.
>       1. There has to be a very clear feature that carefully explains to
>       them what they are actually doing so they can't turn it on by accident.
>    4. I have been thinking about Machine Learning and other AI techniques
>    for this use case to improve the quality of ZAP; there are two areas
>    where it can work:
>       1. Filters false positives:
>          1. Create a baseline model where ZAP takes all the data
>          contributed by the community, leverages a machine learning
>          algorithm such as logistic regression, and uses that to
>          "auto filter" findings that it thinks are false positives
>          (see the sketch after this list)
>          2. Create a local model which takes the individual
>          organisation's data and does pretty much the same thing, only in this case
>          the data doesn't leave the organisation.
>          3. I think Spark can be useful for the baseline version, and I
>          have played around with it a little bit.
>       2. Improves the scanner's ability to find issues:
>          1. Ahhh.... this is going to be tough. My first thought is to
>          leverage neural networks, such as deep learning with TensorFlow,
>          but I have never used it.
>          2. I can see it working for SQLi and a few others pretty well,
>          but this will require a lot of thought.
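>
> (For 4.1, the local version of that model could start as small as this,
> assuming you already have historical findings labelled as FP / not-FP
> from the bug tracker; the feature choice is purely illustrative:)
>
>     from sklearn.feature_extraction.text import TfidfVectorizer
>     from sklearn.linear_model import LogisticRegression
>     from sklearn.pipeline import make_pipeline
>
>     # One text per historical finding (e.g. rule id + evidence), and a
>     # label of 1 where the bug tracker resolved it as a false positive
>     train_texts = ['xss reflected param q ...', 'cookie missing secure flag ...']
>     train_labels = [0, 1]
>
>     model = make_pipeline(TfidfVectorizer(), LogisticRegression())
>     model.fit(train_texts, train_labels)
>
>     # Auto-filter new findings, but only when the model is very confident
>     new_texts = ['cookie missing secure flag ...']
>     fp_probability = model.predict_proba(new_texts)[:, 1]
>     suppress = fp_probability > 0.95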
>
> *Next steps?*
> *@Dinis*, I think you've got quite a bit of info to think about and try to
> incorporate into the draft, so you might want to take some time to work
> out what you think about all this.
> *@all *do you think it makes sense to 1) set some guiding principles, and
> 2) build a Jira project with all this info to leverage, with the goal that
> tech teams are able to:
>
>    - Have something ready made to customize off of
>    - Have a set of principles to know what to customize towards.
>
> *@Simon *This might be a bit further in the future, but it would be great
> if there were a way to configure ZAP to query a bug tracker for such
> information and use it either to improve the local instance of ZAP or
> (with permission) to collect some statistics to help improve the overall
> quality of ZAP.
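>
> (A rough sketch of what that query loop might look like with the ZAP
> Python API; the bug-tracker lookup and the feedback helpers are
> assumptions, not existing ZAP features:)
>
>     from zapv2 import ZAPv2
>
>     zap = ZAPv2()  # assumes a local ZAP on the default 127.0.0.1:8080
>
>     for alert in zap.core.alerts(baseurl='https://app.example.com'):
>         # Look the alert up in the bug tracker (hypothetical helper)
>         ticket = lookup_ticket(alert['pluginId'], alert['url'])
>         if ticket is not None and ticket.resolution == 'False Positive':
>             # Feed back locally as a suppression rule, and (only with
>             # explicit permission) count it towards anonymous statistics
>             record_false_positive(alert)  # hypothetical helper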
>
> -Sherif
>
>
>
> On Thu, Nov 3, 2016 at 1:59 PM, Mario Robles OWASP <mario.robles at owasp.org
> > wrote:
>
>> The workflow I use is actually very simple, because it needs to be adapted
>> to different teams with different SDLC models in different countries; it's
>> more generic, I would say:
>> Fixing: The issue is assigned to someone working on fixing it (linked to
>> the issue in their own Agile board). If they challenge the issue and the
>> risk is accepted, the issue is sent to Done using Risk Accepted or Not an
>> Issue as the resolution
>> Testing: When security tests the fix as part of the QA process
>> Deploying: Security accepts or rejects the fix, sending it back to Fixing
>> or providing approval and moving it to the Deploying queue
>> Acceptance: The dev team moves the issue to Acceptance when it's ready on
>> UAT for final tests
>> Done: Security will send the issue back to Fixing if something went wrong;
>> otherwise it will provide sign-off by moving it to Done using resolution
>> Fixed
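>>
>> (In other words, roughly this transition map; a sketch only:)
>>
>>     TRANSITIONS = {
>>         'Fixing':     ['Testing', 'Done'],      # Done = Risk Accepted / Not an Issue
>>         'Testing':    ['Fixing', 'Deploying'],  # security rejects or approves the fix
>>         'Deploying':  ['Acceptance'],           # dev team moves it when ready on UAT
>>         'Acceptance': ['Done', 'Fixing'],       # sign-off (Fixed) or send back
>>     }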
>>
>> I use Jira dashboards, but also some custom macro-based metrics built on
>> Jira exports
>>
>> I do really like your workflow; however, in my experience dev teams start
>> getting hesitant to follow your process when more clicks are needed from
>> their end
>>
>> btw, false positives are not included in my workflow because we should
>> never have a FP included in a list of issues; everything should be
>> validated before it is included as an issue. If I had to add it, I think
>> it would be as a Resolution type
>>
>> Mario
>>
>> On Nov 3, 2016, at 06:42, Dinis Cruz <dinis.cruz at owasp.org> wrote:
>>
>> Mario that is really nice, thanks for sharing
>>
>> What workflow do you use to track the changes? Is it something like the
>> (Kanban-like) right-hand side of:
>>
>> <image.png>
>>
>> What about reporting? How do you visualise the data and stats you
>> collect? (in Jira dashboards or in Confluence?)
>>
>> Dinis
>>
>>
>>
>

