[Esapi-user] Implementation of Global Output Encoder with ESAPI

Kevin W. Wall kevin.w.wall at gmail.com
Fri May 7 23:56:13 EDT 2010


[Moving this thread *exclusively* to ESAPI-User list and CC'ing Ramesh in
case he doesn't subscribe to that list.

Jim: Is there any way we can have posts to owasp-esapi generate a bounce with
a message to post to the ESAPI-Users or ESAPI-Developers list instead? -kevin]

Chris Schmidt wrote:
> On Fri, May 7, 2010 at 7:12 PM, Kevin W. Wall <kevin.w.wall at gmail.com>wrote:
>
>> Jim Manico wrote:
>>> Jim you are absolutely right - but there are some cases where you need
>>> the *big hammer* approach.
8<---snip--->8
> One of the things about manifestos, philosophies, and coding paradigms is
> that they are fantastic out of the gate, but if you are working with an
> established enterprise application, in a relatively agile environment, for a
> conversion-based corp - it is very difficult to justify the risk vs. reward
> of changing a lot of existing *working* code that is making money to fully
> integrate a security framework that is *unproven* in your environment.

My assumption, which could very well be wrong, is that 99% of the development
teams are NOT going to change their existing *working* code *unless*
it has been shown to have vulnerabilities.  So this is *my starting point*
for this discussion: A team with "working" code is deciding to use ESAPI to
address known existing vulnerabilities. (Presumably there could also be
compliance or software assurance reasons for this as well.)

> This
> is the very situation that I am basing a good deal of my ESAPI training on,
> because while I know we *all* dream of having the luxury of creating an
> application from the ground up, the reality is that more often than not you
> come into an existing project, with a ton of legacy code, and a lot of
> policies and procedures that limit your ability to do things the "right way".

For greenfield projects, ESAPI might very well be chosen
out of the gate to prevent vulnerabilities in the first place. (Most of
us certainly hope so.) But I didn't think you or Ramesh were referring
to using ESAPI in a brand new project, but rather to using it to secure
legacy code.

> In these situations, I have found that doing something that provides
> short-term gain with very little risk is an effective strategy for opening
> the door to further changes.

I agree. You at least need to be able to get to the point where you are
successfully blocking attacks attempted by automated worms and script
kiddies. It will take a lot more to block experts.  But some is better
than none, and one can't let the perfect be the enemy of the good.

>> That means that even if a filter will work _today_ in your code, there is
>> no guarantee that it will work in the future. As soon as someone starts
>> putting user input in some other context, such as CSS, then your filter
>> will not work and your XSS problems are back.
>>
>>
> You are absolutely correct, and in most applications, those places where it
> absolutely does not work (and in some cases makes things worse) are the
> places where you will get your opportunity to do it the "right way" and give
> the framework the opportunity to prove itself in terms of effectiveness,
> ease of implementation, and reward.

A reasonable approach, I think. Part of managing risks is assigning priorities
to them. One rarely has the luxury of having the business shut down the
application while vulnerabilities are being remediated, and often there
are so many of them that it will clearly take several release cycles to
address them all.

>> It takes a lot of effort to write the code right...but not nearly as
>> much effort as it does if you write the code wrong.
>>
> That is one of the differences, I think. I don't really consider the big
> hammer approach to be *wrong* per se. I consider it to be a starting point
> and a point to build off of.

I think that you misunderstood my statement. First, I was referring to the
LONG term effort, and secondly (as mentioned above) I was assuming that one
had already identified *specific vulnerabilities* that needed to be
addressed.

So what I was referring to might be something like this: you already know
that the HTML form input parameters at lines mmm-nnn in Xyz.jsp are vulnerable
to XSS. If you know that parameter A is being used in the context
between HTML tags, parameter B in a (say) JavaScript context, and
parameter C in a CSS context, then I would contend that it is a rather
simple matter to use ESAPI the way that Jim describes, rather than
starting with a simple Java Servlet filter like the one Ramesh described and
trying to extend the filter in arbitrary ways to correctly eliminate
all the vulnerabilities.
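
To make this concrete, the fix for that hypothetical Xyz.jsp looks roughly
like the following with the ESAPI 2.x Encoder (just a sketch; the parameter
names A, B, and C are the placeholders from above and the helper class is
made up):

    import javax.servlet.http.HttpServletRequest;
    import org.owasp.esapi.ESAPI;
    import org.owasp.esapi.Encoder;

    // Illustrative helper for the hypothetical Xyz.jsp: each parameter is
    // encoded for the specific output context it will be emitted into.
    public class XyzEncodingHelper {
        public static String[] encodeParams(HttpServletRequest request) {
            Encoder enc = ESAPI.encoder();
            String a = enc.encodeForHTML(request.getParameter("A"));       // between HTML tags
            String b = enc.encodeForJavaScript(request.getParameter("B")); // inside a script block
            String c = enc.encodeForCSS(request.getParameter("C"));        // inside a style context
            return new String[] { a, b, c };
        }
    }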

OTOH, I think it's a very different case if you don't have a clue
where in your code you have vulnerabilities, and so you just have to
assume that they're pretty much everywhere.  Then a filter approach--especially
one aware of the context like you describe below--is a workable solution.
That's because in a large legacy codebase, trying to identify all the
places where you need to instrument the code with ESAPI can be a
very large effort in itself. However, I think that if you have already
identified these places, then unless you have thousands of them (quite
possible), Jim's approach is better for the reasons described below.
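
For anyone who missed Ramesh's original post, the kind of filter being
discussed starts out roughly like this (a deliberately naive sketch of my
own, not Ramesh's actual code):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletRequestWrapper;
    import org.owasp.esapi.ESAPI;

    // The "big hammer": HTML-encode every parameter, regardless of where it
    // will eventually be emitted. This only holds up while all output lands
    // between HTML tags; JavaScript and CSS contexts break it.
    public class GlobalEncodingFilter implements Filter {

        public void init(FilterConfig config) { }
        public void destroy() { }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest wrapped =
                new HttpServletRequestWrapper((HttpServletRequest) req) {
                    @Override
                    public String getParameter(String name) {
                        String value = super.getParameter(name);
                        return (value == null) ? null
                                               : ESAPI.encoder().encodeForHTML(value);
                    }
                };
            chain.doFilter(wrapped, res);
        }
    }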

> The filter that I use in our application did
> indeed start out as the basics of what was above. As
> the filter was mapped over various components of our application, we
> extended the filter itself to provide additional functionality and to consider
> the context and state of the request when encoding. What we ended up with
> was a filter that wrapped our request with a powerful and extensible wrapper
> that in the end was used by tags to render the encoded information in the
> correct context.
>
> The tags are now the de facto way that information on the request and session
> is accessed, and they are intelligent enough to use the request wrapper if it is
> present and to consider the state of the request itself (as well as the source of
> the request - i.e. AJAX, jsp:include, GET/POST, etc.), so that we now have a
> correctly implemented encoding strategy in place that is tuned to our
> application.

This is fine, but it sounds like YOUR filter is a great deal more
sophisticated than the one that Ramesh originally described.  You've
probably been working on that filter to get it to that level of
context awareness for quite a while now. It also sounds as though, even though
your filter is working, in order to make the filter less complex
you still had to have the developers use some special tags
to provide clues to the filter as to the encoding context.
(And perhaps that raises the question: why not just have them use a
tag library by itself? If they have to go through the effort of
inserting tags into the code, why not have the tags do all the work?)
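
By "have the tags do all the work" I mean something like the following
(a hypothetical sketch; the class name and tag usage are made up):

    import java.io.IOException;
    import javax.servlet.jsp.JspException;
    import javax.servlet.jsp.tagext.SimpleTagSupport;
    import org.owasp.esapi.ESAPI;

    // Hypothetical tag handler: the page author picks the tag (and thereby
    // the output context) and the tag performs the encoding itself, with no
    // request wrapper or filter involved.
    public class EncodeForHtmlTag extends SimpleTagSupport {

        private String value;

        public void setValue(String value) { this.value = value; }

        @Override
        public void doTag() throws JspException, IOException {
            getJspContext().getOut().write(ESAPI.encoder().encodeForHTML(value));
        }
    }

With a matching TLD entry, a page author would just write something like
<enc:forHtml value="${param.name}"/> (the "enc" prefix is made up) and never
touch the encoder directly.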

Another question is: how long did all this take? While your approach might
not be the "big hammer" approach, it took more than a trivial amount
of effort to get it to where it is today.

Another thing that I don't like about the filter approach is the same thing that
I find objectionable about WAFs. I fear that developers will start relying
on their WAFs, or in this case on your filter, rather than learning the
proper way to address these vulnerabilities. (Note: the day the filter is
unavailable might not come until they leave their current company and go
work somewhere else.) So what do they do when it is not available?

This WAF / filter approach is like applying a tourniquet. Sometimes it is
just what the patient needs, but we all know that if you leave a tourniquet
on for too long, that very tourniquet can cause the patient to lose the very
limb that it originally saved.

In fact, Jeff and I were discussing this today with some members of my team
when I mentioned that we had replaced the CSRFGuard Java Servlet filter
with a tag library because we felt that involving the developers in
the fixes makes it easier for them to understand the dangers of CSRF
vulnerabilities as well as how to protect against them. We preferred that
over the more "invisible" (as in "magic happens here") CSRFGuard filter.
Plus the tag library approach is visible to code inspections and not (as)
susceptible to configuration errors. (I've lost count of how many times dev
teams have delivered production code without properly configuring a servlet filter.)
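
The core of such a tag can be quite small. Something along these lines (a
hypothetical sketch, NOT our actual tag library; the tag class, field name,
and session key are all made up):

    import java.io.IOException;
    import javax.servlet.http.HttpSession;
    import javax.servlet.jsp.JspException;
    import javax.servlet.jsp.PageContext;
    import javax.servlet.jsp.tagext.SimpleTagSupport;
    import org.owasp.esapi.ESAPI;

    // Hypothetical CSRF-token tag: emits a hidden form field carrying the
    // session's anti-CSRF token, so the protection is visible right in the
    // JSP where the form is defined.
    public class CsrfTokenTag extends SimpleTagSupport {

        @Override
        public void doTag() throws JspException, IOException {
            PageContext ctx = (PageContext) getJspContext();
            HttpSession session = ctx.getSession();
            // Assumes a random per-session token was stored at login time.
            String token = (String) session.getAttribute("csrfToken");
            ctx.getOut().write("<input type=\"hidden\" name=\"csrfToken\" value=\""
                    + ESAPI.encoder().encodeForHTMLAttribute(token) + "\"/>");
        }
    }

Of course, the server still has to verify the submitted token on every
state-changing request; the tag only handles the rendering half.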

Both WAFs and filters have their place, and virtual patching
is one place where I am convinced that they are very beneficial. To continue
the earlier analogy, they stop the patient's immediate bleeding. But I do not
think that relying on them should be the long term strategy.

> Now, while this may not be the answer for *everyone*, it was the answer for
> me - and it works well, and more importantly it proved itself iteratively,
> which was important to management.

Each situation is somewhat different, but just let me say that I was
making these comments based on the context of Ramesh's original email, not
based on needs of yours that I wasn't aware of at all. If it works
for you, that's great. I guess I should have included the

    Standard disclaimer: YMMV

> At the end of the day, we all have our own visions of "The Perfect Codebase",
> and anyone who has been an engineer for any length of time knows that the
> perfect codebase is a unicorn. We constantly have to adapt and make
> decisions, and sometimes we have to take shortcuts to make sure that
> something more important gets completed. These are just the realities of the
> job, and it is just as important to make sure that your application security
> practices can fit the bill for *any* application, not just the great ones.
> :)

Actually, my vision for "The Perfect Codebase" is all the code that you
can remove. If you delete code from your codebase, recompile it (if
applicable), and redeploy it, I'm pretty sure that an attacker will
not be able to exploit the removed code. (If they did, you probably
forgot to clear your Java EE or Servlet container's cache. ;-)

-kevin
-- 
Kevin W. Wall
"The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents."        -- Nathaniel Borenstein, co-creator of MIME


