[Owasp-leaders] OWASP Top 10 Methodology

Paweł Krawczyk pawel.krawczyk at hush.com
Tue Mar 5 21:00:10 UTC 2013


For example: Say that we find 50k SQLi in our data corpus, and that
this number makes SQLi the number one issue in our data sets; and so we
make it the #1 issue in the new OWASP Top 10.

At this point we should ask the question: what does the data set
describe, and what does the Top 10 description say? Are these two things
consistent?
	Let's further assume that I am using a SQL database in my application
and that my application is vulnerable to SQLi. We know that having a
SQLi correlates to getting hacked via SQLi, but we actually do not know
that any given attacker will use that particular attack vector in my
case.
This is a very good scenario, as it describes a number of key elements of
quantitative analysis:

* Q: what MUST happen for the SQLi to be possible? A: the app must have a
matching vulnerability AND use SQL. This is described using logical
operators in a fault tree.
* Q: what is the likelihood that it will actually get exploited (given the
ANDs are satisfied)? A: this is calculated using a Bayes network applied on
top of the fault tree.

It's not trivial, as it requires that you identify the key factors that
influence a website's hackability - and this can be done only by data
analysis.
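
To make the arithmetic concrete, here is a minimal Python sketch of the
AND-gate step with purely illustrative numbers; a real analysis would use a
full fault tree and a Bayes network fed by the kind of data discussed above,
so treat every probability below as an assumption.

    # Minimal sketch: the AND gate from the fault tree above, with invented
    # probabilities. A real model would be a Bayes network over the whole tree.

    def and_gate(*probabilities):
        """P(all independent preconditions hold) = product of their probabilities."""
        result = 1.0
        for p in probabilities:
            result *= p
        return result

    p_uses_sql = 1.0               # known: the app uses a SQL database
    p_has_sqli = 0.3               # assumed: chance a matching SQLi bug exists
    p_exploited_given_ands = 0.2   # assumed: chance it gets exploited once the ANDs hold

    p_compromise = and_gate(p_uses_sql, p_has_sqli) * p_exploited_given_ands
    print(f"P(compromise via SQLi) ~ {p_compromise:.2f}")   # ~0.06 with these inputs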

	It all depends. Maybe the attacker wants to steal from me; in that
case perhaps the attacker would not do something so obvious, because he
fears it will be detected and cut him off from his income. Maybe the
attacker wants to deface my website, and SQLi will not help him reach
that goal.
Probably the key thing to understand is that we're not talking
about any specific attacker here. We are trying to predict the statistical
outcome of a large number of repeated probes, to find out how many of
them (or how frequently they) will actually end up in a compromise.
The most frequent counterargument I hear when talking about
vulnerability management is "we had this app for 10 years now and it
had all the bugs and nothing happened, so what's your problem".
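
As a hedged illustration of why "10 years and nothing happened" says very
little: if each probe has a small, independent chance of ending in a
compromise, the cumulative probability over many probes grows quickly. The
per-probe probability here is an invented figure.

    # Probability of at least one compromise after n independent probes,
    # each with a small per-probe success chance p (p is a made-up number).
    p = 0.001
    for n in (100, 1_000, 10_000):
        print(f"{n} probes -> P(at least one compromise) = {1 - (1 - p) ** n:.3f}")
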
	All I am saying is that this is not a metric. We can not measure the
likelihood that I will be attacked via SQLi because we can not predict
if, let alone how, I will be hacked.
As long as the number of attack methods is not infinite, it can be
estimated and predicted. And measured to some extent - honeypots (and
poor, hacked production systems) are our primitive measurement tools.
In addition to that, you can safely discard a large number of unlikely
attacks. For example, in terms of prevalence, 99.9% of web servers on the
Internet are vulnerable to TEMPEST eavesdropping - but their owners don't
care, because it's not in their risk profile.
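
A purely hypothetical sketch of that kind of filtering: rank attack vectors
by a rough expected loss and drop whatever falls outside the risk profile.
All names, likelihoods, losses and the threshold are invented for
illustration.

    # Discarding unlikely attacks by rough expected annual loss (all figures invented).
    attacks = {
        "SQL injection":         (0.20, 100_000),  # (annual likelihood, loss if it happens)
        "Stolen admin laptop":   (0.05, 50_000),
        "TEMPEST eavesdropping": (1e-6, 10_000),
    }
    threshold = 100  # ignore anything with expected annual loss below this

    for name, (likelihood, loss) in attacks.items():
        expected = likelihood * loss
        verdict = "keep" if expected >= threshold else "discard"
        print(f"{name}: expected annual loss ~ {expected:,.2f} -> {verdict}")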

	I am in agreement with you, I think we have the data (and plenty of
it) and the talent to do science, but I think right now we are doing
more astrology than astronomy.

Most of current risk management (high-medium-low, heat maps used as a risk
calculation tool) indeed is astrology. We make some statements, apply some
controls and are proud of "at least doing something". Astronomy makes a
hypothesis, builds a model, verifies it experimentally and iterates that
until the model works pretty well. In the security industry we are in the
phase before even making a hypothesis, so it's a good time to start doing it.
There's a really good book on risk management by Doug Hubbard with the
rather pessimistic title "The Failure of Risk Management"
(http://amzn.to/Vuwy2x), but it's actually rather optimistic in the end
- it shows how to move from astrological heat maps to a more
rational approach.
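
For a flavour of that more rational approach (a sketch in the spirit of
Hubbard's argument, not his exact method): replace high/medium/low labels
with a simple Monte Carlo simulation over estimated ranges. Every number
below is an assumption.

    # Monte Carlo estimate of expected annual loss for one hypothetical risk,
    # instead of a heat-map cell. Probability and loss range are assumptions.
    import random

    def expected_annual_loss(p_event, loss_low, loss_high, trials=100_000):
        total = 0.0
        for _ in range(trials):
            if random.random() < p_event:
                total += random.uniform(loss_low, loss_high)
        return total / trials

    # Assumed: 10% chance per year, loss between 20k and 200k if it happens.
    print(f"Expected annual loss ~ {expected_annual_loss(0.10, 20_000, 200_000):,.0f}")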