[Owasp-phoenix] [Owasp-dotnet] Application and Execution Context Identities
email at iseric.com
Mon Aug 7 20:27:10 EDT 2006
Is it possible to locally install another
domain's SSL certificate and effectively impersonate the domain without
receiving certificate errors?
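A minimal sketch of why the answer is "no" for a correctly validating client (Python standard library; the host name is a placeholder): a client verifies both that the certificate chains to a trusted CA and that it names the host being contacted, and possessing another domain's public certificate without its private key cannot complete the handshake. Impersonation without errors requires getting a rogue CA or the forged certificate into the victim's own trust store.

```python
import socket
import ssl

def verified_connect(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TLS connection that verifies the peer's certificate.

    An attacker who merely copies a site's certificate (but lacks the
    matching private key) cannot complete this handshake; impersonation
    only works if the client's local CA store has been tampered with.
    """
    ctx = ssl.create_default_context()   # loads the system CA bundle
    # These are the defaults, shown explicitly for clarity:
    ctx.check_hostname = True            # certificate must name `host`
    ctx.verify_mode = ssl.CERT_REQUIRED  # chain must reach a trusted CA
    sock = socket.create_connection((host, port), timeout=10)
    return ctx.wrap_socket(sock, server_hostname=host)
```

The weak point is therefore the client's trust store, not the certificate file itself.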
Storing an encryption key on the server in a hosted environment usually
means that it is written to a file, which can be located within the
web root itself (for many hosts). Hosts may provide a separate, additional directory
that is not relative to the website's root for storage. However, this situation
still allows your application to be copied with the appropriate key
file and executed elsewhere. For shared Windows-based hosting, access to the server's registry is rarely allowed. The separate, private, and secure local directory is an excellent additional measure of securing keys.
*I am trying to identify multiple solutions that would allow an application to reliably and securely identify itself and its execution context -- as many options as possible between "store a key somewhere, somehow" and "read the server's physical serial number".
From: mikeiscool <michaelslists at gmail.com>
Sent: Monday, August 07, 2006 5:01 PM
To: email at iseric.com
Subject: SPAM-LOW: Re: [Owasp-dotnet] Application and Execution Context Identities
On 8/8/06, Eric Swanson wrote:
> I have been researching issues with validating application identity and
> execution context (environment) recently.
> What are your thoughts/comments/suggestions/etc?
> (interested in platform-independent solutions as well as platform-specific
> One of the most difficult scenarios I have brainstormed has been:
> A client has a Microsoft-based website hosted by a 3rd party. The host
> exposes a SQL Server database so that the client can connect remotely. The
> client's website stores a single encrypted connection string to communicate
> with the database. Now, if an attacker were to obtain all of the website's
> files, they could execute the application from a completely different
> domain, including writing their own code to interface with the application.
> *This example can be extended to any publicly exposed service that
> requires authentication parameters, but relies on the application to provide
> them without requiring user interaction (web services, XML-RPC, AJAX,
> remoting, etc.).
> I have identified a couple of ways to help protect an application from
> out-of-context execution, but none of them are fail-safe:
> Store and communicate an application identifier somewhere other than the
> website file system that the website code can access. This may not be
> possible in many hosted environments. This also does not protect your
> application from someone with access to this "secret" location, or a
> compromised system.
> Store and communicate a list of trusted domains and IPs. These can be
> spoofed. Web farms and network communications can also introduce elements
> of complexity.
> Additionally, many programmers do not take the time to secure execution of
> their code in a web environment (i.e. once someone obtains the website's
> code they often have "free rein" over the code's execution). As you can
> see, the concern isn't simply the application-supplied authentication
> credentials (like a database connection string), but the direct manipulation
> of an application's code execution (like writing a custom class to
> initialize the website's data access layer and carry out a series of
> unsavory commands).
> Ideally, all of a website's support services (databases, application server
> APIs, etc.) would only be exposed on an internal network not available to
> the public so that communications inherently depend on internal execution
> context. However, the reality is much different.
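The "trusted domains and IPs" option quoted above can be sketched as follows (Python; the host name and network are illustrative placeholders). As the post itself notes, both signals can be spoofed and both break down behind proxies and web farms, so this is a weak context check, not authentication:

```python
import ipaddress

# Hypothetical allowlist -- Host headers and source addresses can both be
# forged or rewritten by intermediaries, so treat a match as one weak
# signal of "in-context" execution, never as proof.
TRUSTED_HOSTS = {"www.example.com"}
TRUSTED_NETS = [ipaddress.ip_network("203.0.113.0/24")]

def request_looks_in_context(host_header: str, remote_ip: str) -> bool:
    """Return True if the request claims a trusted host AND arrives
    from a trusted network."""
    if host_header.lower() not in TRUSTED_HOSTS:
        return False
    addr = ipaddress.ip_address(remote_ip)
    return any(addr in net for net in TRUSTED_NETS)
```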
As usual, the answer to securing a hardcoded secret is to keep the
password off the server. If your attack scenario is someone stealing
the source code [i.e. some file download attack, or stolen backup tapes
or something], then it's trivial to protect it with a required 'server
key'. That is, until you have visited https://foo/setKey.whatever and
entered the application key, the system cannot run.
The app will then use the key, decrypt the connection string or
whatever else, and then release the key. We can then narrow the attack
scenario down to someone accessing a live running server and copying
the runtime to disk, or finding the connection string in memory. A lot
harder than simply getting source files ...
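The enter-the-key-after-startup flow described above could look roughly like this (Python; `/setKey` is the hypothetical admin page from the post, and the XOR "cipher" is a deliberate stand-in for real authenticated encryption such as AES-GCM, which would come from a crypto library rather than the standard library):

```python
import base64
import hashlib
from typing import Optional

class KeyVault:
    """Holds the application key in memory only.

    An operator supplies the key once after each (re)start -- e.g. via an
    admin-only /setKey page -- so it never appears in the deployed files.
    Stealing the source tree alone then yields only ciphertext.
    """

    def __init__(self) -> None:
        self._key: Optional[bytes] = None

    def set_key(self, passphrase: str) -> None:
        # Derive a fixed-length key from the operator's passphrase.
        self._key = hashlib.sha256(passphrase.encode()).digest()

    def decrypt(self, token: str) -> str:
        if self._key is None:
            raise RuntimeError("application key not set; service unavailable")
        data = base64.b64decode(token)
        # Placeholder XOR stream -- NOT real cryptography; a production
        # system would use authenticated encryption here.
        plain = bytes(b ^ self._key[i % len(self._key)]
                      for i, b in enumerate(data))
        return plain.decode()
```

This matches the narrowed attack scenario in the reply: the secret exists only in the memory of the live process, so an attacker must reach a running server rather than a copy of its files.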
> Eric Swanson