[Owasp_defectdojo_project] Owasp_defectdojo_project post from jay.paz at gmail.com requires approval

Greg Anderson greg.anderson at owasp.org
Fri Jul 29 21:18:50 UTC 2016


At the Test level we also wanted to preserve what the scanner has found,
rather than throw out findings.

On Fri, Jul 29, 2016 at 4:15 PM, Greg Anderson <greg.anderson at owasp.org>
wrote:

> Hi Jay,
>
> Great points.
>
> 1.  What do you do if the existing finding is already mitigated or
> inactive - does it get reactivated because an incoming finding matches it,
> or does a new finding get created instead?
>
> My thought would be to compare the dates. If a duplicate finding is later
> added as 'active', then I believe it should take on the status of the
> latest round of tests.
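>
> A minimal sketch of that latest-scan-wins rule (plain Python; the dict
> fields and the `resolve_status` helper are hypothetical stand-ins for the
> real Finding model, not Dojo code):

```python
from datetime import date

def resolve_status(existing, incoming):
    """Return the (active, mitigated) status the stored finding should
    carry when a duplicate arrives: whichever finding is from the later
    scan wins.  Findings are plain dicts here, standing in for the model."""
    winner = incoming if incoming["date"] >= existing["date"] else existing
    return {"active": winner["active"], "mitigated": winner["mitigated"]}

# An old, mitigated finding reappears as active in a newer scan:
old = {"date": date(2016, 6, 1), "active": False, "mitigated": True}
new = {"date": date(2016, 7, 29), "active": True, "mitigated": False}
print(resolve_status(old, new))  # the later scan reactivates the finding
```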
>
> 2.  How will you compare them and decide they are dupes?  What if the
> incoming finding is for a different endpoint - will the endpoint be added to
> the existing finding, or a new finding created with its own list of
> endpoints?
>
> How TF de-duplicates is by comparing the URL, parameter, and CWE. For
> Dojo, my initial thought was to compare endpoints and titles, although we
> could instead look at / include the CWE, though Dojo findings often don't
> have CWEs.
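>
> That comparison could be keyed roughly like this (plain Python; the field
> names and `dedup_key` helper are assumptions for illustration, not Dojo's
> actual schema):

```python
def dedup_key(finding, use_cwe=False):
    """Candidate identity for a finding: normalized title plus its
    endpoints (order-insensitive), optionally including the CWE."""
    key = (finding["title"].strip().lower(),
           frozenset(finding["endpoints"]))
    if use_cwe:
        key += (finding.get("cwe"),)
    return key

a = {"title": "Reflected XSS", "endpoints": ["https://x.test/q"], "cwe": 79}
b = {"title": "reflected xss", "endpoints": ["https://x.test/q"], "cwe": None}
print(dedup_key(a) == dedup_key(b))              # True: title + endpoints match
print(dedup_key(a, True) == dedup_key(b, True))  # False once CWE is included
```

> Including the CWE in the key shows the trade-off: it makes matching
> stricter, but findings without a CWE then never match ones that have it.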
>
> 3. What if the severity is different, or the environment, etc...
>
> Per 2, a different environment would be a different finding, since the
> endpoint would be different. For severity, I would defer to the most
> recent, as in 1.
>
>
> My thought on why it should be a separate model was filtering. For
> metrics I thought it would be easier to aggregate. Otherwise how would you
> tell them apart for the Product or Product Type metrics? What do you think
> of the idea of adding a 'vetted' field to the Finding model instead?
>
>
>
> On Fri, Jul 29, 2016 at 4:07 PM, <
> owasp_defectdojo_project-owner at lists.owasp.org> wrote:
>
>> As list administrator, your authorization is requested for the
>> following mailing list posting:
>>
>>     List:    Owasp_defectdojo_project at lists.owasp.org
>>     From:    jay.paz at gmail.com
>>     Subject: Re: DefectDojo De-duplication of Findings
>>     Reason:  Post by non-member to a members-only list
>>
>> At your convenience, visit:
>>
>>     https://lists.owasp.org/mailman/admindb/owasp_defectdojo_project
>>
>> to approve or deny the request.
>>
>>
>> ---------- Forwarded message ----------
>> From: Jay Paz <jay.paz at gmail.com>
>> To: Greg Anderson <greg.anderson at owasp.org>
>> Cc: owasp_defectdojo_project at lists.owasp.org, cneill09 at gmail.com, Matt
>> Tesauro <mtesauro at gmail.com>
>> Date: Fri, 29 Jul 2016 16:02:37 -0500
>> Subject: Re: DefectDojo De-duplication of Findings
>> I think it is a worthy addition, but I don't believe you need an additional
>> 'VettedFindings' model.  The Engagement already holds a list of all its
>> findings, and it is easy to query them because of the reverse-lookup
>> capabilities of Django.  To find all findings associated with a Product, all
>> you have to do is query for them:
>>
>> all_findings = Finding.objects.filter(test__engagement__product=product)
>>
>> From there you can compare the incoming findings to the existing ones and
>> weed out the dupes.  My questions are:
>>
>> 1.  What do you do if the existing finding is already mitigated or
>> inactive - does it get reactivated because an incoming finding matches it,
>> or does a new finding get created instead?
>> 2.  How will you compare them and decide they are dupes?  What if the
>> incoming finding is for a different endpoint - will the endpoint be added to
>> the existing finding, or a new finding created with its own list of
>> endpoints?
>> 3.  What if the severity is different, or the environment, etc...
>>
>>
>> I think adding a new model complicates the process and is not necessary.
>> By using Django's reverse querying capabilities you can handle this without
>> affecting anything else that depends on the Finding model.  Because of the
>> search capabilities, Findings are already indexed really well, and I don't
>> believe you will have a performance issue.
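>>
>> The weed-out step over that queryset could be sketched like this (plain
>> Python stand-ins for the queryset results; field names and the
>> `weed_out` helper are assumed, not Dojo code):

```python
def weed_out(existing, incoming):
    """Drop incoming findings whose (title, endpoints) already appear
    among the existing ones -- a stand-in for comparing against
    Finding.objects.filter(test__engagement__product=product)."""
    seen = {(f["title"], frozenset(f["endpoints"])) for f in existing}
    return [f for f in incoming
            if (f["title"], frozenset(f["endpoints"])) not in seen]

existing = [{"title": "SQLi", "endpoints": ["/login"]}]
incoming = [{"title": "SQLi", "endpoints": ["/login"]},
            {"title": "SQLi", "endpoints": ["/search"]}]
print(weed_out(existing, incoming))  # only the /search finding survives
```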
>>
>> my two cents,
>>
>> Jay
>>
>> On Fri, Jul 29, 2016 at 3:52 PM, Greg Anderson <greg.anderson at owasp.org>
>> wrote:
>>
>>> Hi Everyone!
>>>
>>> *My Question:*
>>> Should this be in core?
>>>
>>> I think so, but I'm also not sure I could reasonably do this as a plugin,
>>> because it touches so many things.
>>>
>>> *Okay the details:*
>>>
>>> A big item that Pearson needs for Dojo is de-duplication of findings.
>>> This is something that ThreadFix does that Dojo does not. The idea is that
>>> you can upload multiple scans and Dojo will automatically try to remove
>>> duplicates. I was thinking of implementing this by adding a ManyToMany
>>> relationship from Engagements to a new model called VettedFindings. The
>>> engagement would hold a list of findings that would be compared when new
>>> ones are added. If a match doesn't exist, the finding would be added to the
>>> list rather than being filtered on the fly. I think this would be best for
>>> performance. However, it has far-reaching impact on the metrics, e.g. all
>>> Finding filters would have to be replaced with VettedFinding filters
>>> (although they would be identical).
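>>>
>>> A rough, framework-free sketch of that add-if-no-match idea (here
>>> VettedFindings is just a keyed store; in the real proposal it would be a
>>> Django model behind a ManyToMany on Engagement, and the key fields are
>>> assumptions):

```python
class VettedFindings:
    """Per-engagement store of vetted findings, keyed on
    (title, endpoints).  New findings are compared at upload time, and
    only non-matches are stored -- no on-the-fly filtering later."""

    def __init__(self):
        self._by_key = {}

    @staticmethod
    def _key(finding):
        return (finding["title"], frozenset(finding["endpoints"]))

    def add(self, finding):
        """Return True if the finding was new and stored, False if it
        matched an already-vetted finding (a duplicate)."""
        key = self._key(finding)
        if key in self._by_key:
            return False
        self._by_key[key] = finding
        return True

vetted = VettedFindings()
print(vetted.add({"title": "XSS", "endpoints": ["/q"]}))  # True: first upload
print(vetted.add({"title": "XSS", "endpoints": ["/q"]}))  # False: duplicate scan
```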
>>>
>>>
>>>
>>
>>
>> ---------- Forwarded message ----------
>> From: owasp_defectdojo_project-request at lists.owasp.org
>> To:
>> Cc:
>> Date: Fri, 29 Jul 2016 21:07:13 +0000
>> Subject: confirm 28a5fe55861e8974ab290acb4a2db9c52bb7891e
>> If you reply to this message, keeping the Subject: header intact,
>> Mailman will discard the held message.  Do this if the message is
>> spam.  If you reply to this message and include an Approved: header
>> with the list password in it, the message will be approved for posting
>> to the list.  The Approved: header can also appear in the first line
>> of the body of the reply.
>>
>
>

