[Owasp_defectdojo_project] DefectDojo De-duplication of Findings
jay.paz at gmail.com
Fri Jul 29 21:02:37 UTC 2016
I think it is a worthy addition, but I don't believe you need a new
model. The Engagement already holds a list of all its findings, and it is
easy to query them because of Django's reverse lookup capabilities.
To find all findings associated with a Product, all you have to do is:
all_findings = Finding.objects.filter(test__engagement__product=product)
From there you can compare the incoming findings to the existing ones and
weed out the dupes. My questions are:
1. What do you do if the existing finding is already mitigated or inactive
- does it get reactivated because an incoming finding matches it, or does a
new finding get created instead?
2. How will you compare them and decide they are dupes? What if the
incoming finding is for a different endpoint - will the endpoint be added to
the existing finding, or a new finding created with its own list of
endpoints?
3. What if the severity is different, or the environment, etc...?
I think adding a new model complicates the process and is not necessary.
By using Django's reverse querying capabilities you can handle this without
affecting anything else that depends on the Finding model. Because of the
search capabilities, Findings are already indexed really well, and I don't
believe you will have a performance issue.
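To make the compare-and-weed-out step concrete, here is a minimal sketch of it in plain Python. The `dedupe` function, the dict representation of a finding, and the title/severity match key are all hypothetical simplifications for illustration - in Dojo the existing set would come from the `Finding.objects.filter(test__engagement__product=product)` query above, and the real Finding model has many more fields. It also sketches one possible answer to question 2: merging the new endpoint into the existing finding.

```python
# Hypothetical sketch of de-duplication against a product's existing
# findings. Each finding is represented here as a dict with 'title',
# 'severity', and 'endpoints' keys (a simplification of the real model).

def dedupe(existing, incoming):
    """Merge incoming findings into existing ones, dropping duplicates.

    A duplicate is an incoming finding whose title and severity match an
    existing finding; its endpoints are merged into the match instead of
    creating a new finding. Returns the list of genuinely new findings.
    """
    # Index existing findings by a (title, severity) match key.
    by_key = {(f["title"], f["severity"]): f for f in existing}
    new_findings = []
    for f in incoming:
        key = (f["title"], f["severity"])
        match = by_key.get(key)
        if match is not None:
            # Duplicate: merge any new endpoints into the existing finding.
            match["endpoints"] = sorted(set(match["endpoints"]) | set(f["endpoints"]))
        else:
            # No match: treat it as a brand-new finding.
            by_key[key] = f
            new_findings.append(f)
    return new_findings
```

This is only one possible matching policy; the open questions above (mitigated findings, differing severity or environment) would each change what the match key and merge step should do.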
my two cents,
On Fri, Jul 29, 2016 at 3:52 PM, Greg Anderson <greg.anderson at owasp.org>
> Hi Everyone!
> *My Question:*
> Should this be in core?
> I think so, but I'm also not sure I could reasonably do this as a plugin
> because it touches so many things:
> *Okay the details:*
> A big item that Pearson needs for Dojo is de-duplication of findings. This
> is something that ThreadFix does that dojo does not. The idea is that you
> can upload multiple scans and Dojo will automatically try to remove
> duplicates. How I was thinking of implementing this was adding a ManyToMany
> relationship to Engagements with a new model called VettedFindings. The
> engagement would hold a list of findings that would be compared when new
> ones are added. If a match doesn't exist it would be added to the list
> rather than being filtered on the fly. I think this would be best for
> performance. However, it has far reaching impact on the metrics, e.g. all
> findings filters would have to be replaced with vettedfinding filters
> (although they would be identical).