[Owasp-leaders] HP NEWS: Briefing on the First Real-time Application Security Analysis Solution

John Steven John.Steven at owasp.org
Thu Apr 28 19:32:05 EDT 2011


Jim, Dre,

I've not always had the best personal or professional experience with
folks over at Fortify (now HP), though there are a lot of bright and
high-class people who work at Fortify (and more broadly at HP). Many
of them support OWASP, while others haven't intersected with this group.

I'm explicitly asking you not to bring up general or specific
limitations of HP's static suite with respect to Cigital's solutions,
especially not without hands-on experience with our solutions. Expect
Cigital (or any good firm or individual, for that matter) to outperform
the tool vendors in helping organizations produce results better
suited to driving meaningful remediation. IMO, tools (and the scant
consulting efforts sold with them) simply don't provide the necessary
depth in terms of code understanding. We saw an effect similar to what
Cigital obtains with ESP from Dinis and his O2 work on top of IBM's
solution. I've seen at least two firms do incredible things with their
own staff, without help from Dinis or me. After about six years of
pretty poor improvement version over version, I think you'll finally
detect improvement.

Both HP and IBM have made dramatic strides forward in their static
tools' visibility into the Java EE stack of open source frameworks in
recent months. Smaller gains have been made on the .NET side, in my
experience, but improvement has occurred. I've verified improved
performance out of HP's SCA 5.10 and, while it hasn't hit production
yet, I speak highly of the quality and extensibility of IBM's coming
solution to <this problem>. Now that the groundwork is better laid,
expect incremental progress from both shops.

To me, the important questions facing static analysis (and all
whitebox code assessment more generally) are about the perception of
coverage versus the actual ability of the tools to 'see into'
implementations enough to consider them covered. As such:

1) Though vendors are making strides to support framework visibility
and data-flow through collections and MVC frameworks, they're doing so
in a linear fashion (a minimal sketch of the kind of collection-routed
flow I mean follows these sub-questions). Framework evolution,
unfortunately, proceeds more quickly than "linear". How will HP:

1a) Quantify its framework visibility for <given platforms> for its
customers--so they know how much to trust the tool and where to focus
'manual SCR' (source code review) efforts?

1b) Accelerate support for framework modeling (crowd-sourcing,
customer contributions, partners, MadResearch+automation-fu?) to keep
pace with framework development?

1c) Support the inevitable: organizations craft their own proprietary
frameworks, and they need tools that can support those as well?
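
To make question 1 concrete, here's a minimal, hypothetical Java
sketch of a collection-routed flow (the servlet, map, and query are
mine for illustration, not any vendor's test case). A tool that
doesn't model Map.put()/Map.get() semantics loses the source-to-sink
trace at the get():

    import java.io.IOException;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import java.util.HashMap;
    import java.util.Map;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class CollectionFlowServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // Taint enters the application and is parked in a collection.
            Map<String, String> attrs = new HashMap<String, String>();
            attrs.put("name", req.getParameter("name"));   // source -> map

            // The analyzer must model put()/get() to keep the trace alive here.
            String name = attrs.get("name");
            try {
                // (Assumes an H2 driver on the classpath; any JDBC source works.)
                Connection c = DriverManager.getConnection("jdbc:h2:mem:test");
                Statement s = c.createStatement();
                s.executeQuery(
                    "SELECT * FROM users WHERE name = '" + name + "'"); // SQLi sink
            } catch (Exception e) {
                throw new IOException(e);
            }
        }
    }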

2) Support for MVC-style frameworks is one thing, but how will tools
effectively test IoC (Inversion of Control) frameworks, particularly
those (such as Spring) that support complex DI (dependency injection)
at runtime?

This is a huge problem, which Dre alluded to--inevitably, as a result
of our last conversation. This problem cross-cuts both static and
dynamic analysis. Static tools will be unable to figure out
relationships between components (and thus complete data flows)
without modeling injection (or actually executing it, which makes the
analysis dynamic). Even dynamic approaches will suffer from what
becomes, at best, a confusing notion of 'coverage' when testing. Those
designing dynamic analysis may assert, "We don't care about code
coverage--at any level--we care about trying requests that will
uncover vulnerabilities." But the engines implemented in dynamic tools
in some sense "encode" their designers' understanding of the web
frameworks they're testing. DI makes this much more difficult. The
more parametric the construction of the code path that services a
request, the larger the state space (to test) represented by the
combinations of functionality that can provide a response. In
practice, this means that finding an injection context, for instance,
may be more difficult (or assessing the impact of a proven injection
across other contexts may prove more difficult).
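
As a minimal sketch of why runtime wiring frustrates a static trace
(the Renderer interface, implementations, and beans.xml here are
hypothetical, not from any product): which render() body receives the
tainted value is decided by configuration the container reads at
runtime, so the analyzer must either model the container or give up on
completing the flow.

    import org.springframework.context.ApplicationContext;
    import org.springframework.context.support.ClassPathXmlApplicationContext;

    interface Renderer {
        String render(String userInput);
    }

    class RawRenderer implements Renderer {        // reflects input verbatim: an XSS sink
        public String render(String userInput) {
            return "<div>" + userInput + "</div>";
        }
    }

    class EscapingRenderer implements Renderer {   // encodes input: safe
        public String render(String userInput) {
            return userInput.replace("<", "&lt;").replace(">", "&gt;");
        }
    }

    public class DiFlowExample {
        public static void main(String[] args) {
            // beans.xml picks the implementation at runtime, e.g.:
            //   <bean id="renderer" class="RawRenderer"/>
            // A purely static trace through ctx.getBean() can't tell which
            // render() the tainted value reaches without modeling the container.
            ApplicationContext ctx = new ClassPathXmlApplicationContext("beans.xml");
            Renderer r = ctx.getBean("renderer", Renderer.class);
            String tainted = (args.length > 0) ? args[0] : "<script>alert(1)</script>";
            System.out.println(r.render(tainted));
        }
    }

Swap the bean definition and the same code path flips from vulnerable
to safe--without a single line of application code changing.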

Finally,

3) With the amount of code executed in the browser growing,
"cross-tier" data-flow makes static tools increasingly blind to
system behavior, and dynamic tools increasingly look like integration
tests rather than system tests. For me, 1) the amount of client code
generated dynamically on the server, 2) AJAX, and 3) a resurgence of
object remoting represent the "last straws" breaking tools' backs in
terms of usefulness. Yet leading vendors increasingly bid
"portal-style" full-portfolio solutions to the security testing
problem. With light-/no-touch application on-boarding, I'm just not
sure how to quantify the value of what's being provided... ...or how
to advise security managers about "how much" more there is to do...
...or how much is "enough".
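
A minimal, hypothetical illustration of the cross-tier problem (the
servlet is mine, not drawn from any tool's corpus): the server
manufactures client-side script at runtime, so a server-side tool sees
only string concatenation into a response writer, while a client-side
tool never sees the code that generated the page.

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class GeneratedJsServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String user = req.getParameter("user");   // taint enters server-side
            resp.setContentType("text/html");
            // Taint crosses the tier boundary inside generated JavaScript;
            // the actual sink (document.write in the browser) is invisible
            // to any analysis confined to a single tier.
            resp.getWriter().println(
                "<html><body><script>" +
                "var u = '" + user + "';" +
                "document.write('Hello ' + u);" +
                "</script></body></html>");
        }
    }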

Assessment tech is in real trouble when you look beyond the ole
"badness-o-meter" (which still does its job well). And, bear in mind,
I don't have solutions to the above problems. If I were asked these
questions, I'd not have good answers. _Yes_, I'm actively working on
these issues, but twinkles in my eye don't exactly qualify as a
prototype--let alone a workable solution to anyone's problems.

I ask these questions because we need to upgrade the assessment
conversation comprehensively. Otherwise, we'll all be forced back into
the boutique-style manual testing we enjoyed doing years ago. While
initially enticing for individual players at the high end, this will
marginalize the field as a whole, sending corporations elsewhere for
their testing approaches. HP and IBM need to do more to "build and
meet" the market for assessment of modern apps. We'll all need to
help.


-jOHN

***This, of course, sets aside the "assess it versus fix it"
discussion... but I'm not prepared to support jumping into that fray.
I _would_ like to know whether HP has anything on their roadmap to
support, as I frequently ask of organizations, the ability to build an
assurance case with testing tools--showing developers to have done the
right thing--rather than just finding vulns.

-- 
Phone: 703.727.4034
Rss: http://feeds.feedburner.com/M1splacedOnTheWeb


On Thu, Apr 28, 2011 at 4:21 PM, Andre Gironda <andreg at gmail.com> wrote:
> On Thu, Apr 28, 2011 at 12:46 PM, Jim Manico <jim.manico at owasp.org> wrote:
>> Can anyone help me come up with **tough** questions to ask HP for an OWASP
>> Podcast? Conversation with HP PR below….
>
> What, you mean besides, "Why pay for something that costs easily half
> of a million dollars for a DAST-only solution when I can get it for
> free, better, using w3af emailReport, Arachni webui, or Burp Suite Pro
> in headless mode with sodapop.sh ?"
>
> Or more along the lines of "How did Cigital figure out how to deal with
> source code and code patterns that have multiple levels of indirection
> in their ESP solution, but Fortify can't figure out how to get basic
> Spring DI (or any other DI enabled framework) entry points mapped to
> sources, let alone link those sources to sinks in a forward or
> backwards tracing direction?"
>
> Or how about "Why doesn't Fortify SCA provide lost sink results like
> Appscan Source Edition does?"
>
> I would start with those, and move on to some more spiteful questions
> later in the interview.
>
> -Andre
> _______________________________________________
> OWASP-Leaders mailing list
> OWASP-Leaders at lists.owasp.org
> https://lists.owasp.org/mailman/listinfo/owasp-leaders

