[Owasp-leaders] Password Storage and Hash Chaining

John Steven John.Steven at owasp.org
Tue Jul 3 11:47:27 UTC 2012


I applaud Adrian for positing a solution. He makes some compelling and
concrete points that I believe I've covered in my problem
definition/solution design for a 'password storage module' (shared
earlier at: https://docs.google.com/document/d/1_XDb4O84VGW95NFHFQhr87Bq6PnKhmNE3RkxEzJnYnU/edit )

Note that my previous post covers this topic:

( http://lists.owasp.org/pipermail/owasp-leaders/2012-June/007423.html )

Item #2: (quoting)  "Protocols leveraging a secret key, applied to a
transform on S (appserver), maintain several advantages over [only]
custom salted schemes IFF properties of compartmentalization (S
separate from D) and separation of privilege (between access to S and
D) hold."
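
The keyed-transform property quoted above can be made concrete with a minimal sketch (my illustration, not the author's actual engine design): a server-side secret key S, held apart from the credential database D, is applied over a salted, stretched hash, so an attacker who steals D alone cannot mount an offline guessing attack without also obtaining S. The `protect` function and parameters here are assumptions for illustration.

```python
import hashlib, hmac, os

# S: a secret key that lives on the app server (or an HSM), never stored in D.
SERVER_KEY = os.urandom(32)

def protect(password: bytes, salt: bytes) -> bytes:
    # Salted, stretched hash first; then a keyed transform with the server secret.
    stretched = hashlib.pbkdf2_hmac("sha256", password, salt, 64000)
    return hmac.new(SERVER_KEY, stretched, hashlib.sha256).digest()

salt = os.urandom(16)
stored = protect(b"example-password", salt)  # (salt, stored) go into D; S does not
```

The security benefit holds only if the compartmentalization the quote describes actually holds: once S leaks alongside D, this degrades to an ordinary salted, stretched hash.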

A) The second secret, referred to as a "system salt", has negative
implications: it will be misused/abused later during system
implementation and maintenance. How would such a "salt" be created (is it
SPRNG-generated, administrator-defined, key-derived)? How would it be
protected? If it were compromised--salts, as de-duplication agents, are often
not protected--how would that affect the security posture? Will
others, having identified this 'system salt', remember that this
special-case salt can't be shared when other salts can?

B) The construct demonstrated in the email below, [S1 || S2 || PW], may
leak information in the resulting digest and provide an opportunity for
cryptanalysis. Specifically, I'm concerned about the site-specific
salt, which is where the major benefit lies. This may not bother you
if you believe that the simplest/most successful attack vector will
continue to be brute force rather than more complex analyses.

C) An issue with bcrypt() and pbkdf2(), both as formally specified and
as implemented, is that they derive the HMAC key from existing user-specified
data (the password) or system-provided entropy (the salt). This means
that the construct below--as specified--isn't going to provide
additional search space over a plain salted hash in defense against
brute-force attack through the PBKDF scheme.
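
Item C can be seen by unrolling PBKDF2's definition (RFC 2898): the HMAC key *is* the password and the message is the salt plus a block index, so there is no input slot for an independent secret key. A sketch in Python (illustrative, single 32-byte output block only, not a production implementation):

```python
import hashlib, hmac, struct

def pbkdf2_first_block(password: bytes, salt: bytes, iterations: int) -> bytes:
    # PBKDF2 (RFC 2898): U1 = HMAC(password, salt || INT(1)); Uj = HMAC(password, U(j-1));
    # T1 = U1 xor U2 xor ... xor Uc. The HMAC key is the password itself --
    # every input is something an attacker who holds D can search over.
    u = hmac.new(password, salt + struct.pack(">I", 1), hashlib.sha256).digest()
    result = u
    for _ in range(iterations - 1):
        u = hmac.new(password, u, hashlib.sha256).digest()
        result = bytes(a ^ b for a, b in zip(result, u))
    return result

# Matches the stdlib for a single 32-byte block:
pw, salt = b"hunter2", b"per-user-salt"
assert pbkdf2_first_block(pw, salt, 1000) == hashlib.pbkdf2_hmac("sha256", pw, salt, 1000)
```

Folding a "system salt" into the password input, as in the quoted construct, just changes what the HMAC is keyed with; it does not introduce a key held separately from the data being attacked.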

D) This leaves one aspect of the design posited below: the iteration
count. Adrian already states that the iteration count provides only
"CPU-hard" and not "MEM-hard" operation. But 1,000 to 5,000 iterations
will not be sufficient in the face of GPU-based attack. Papers I've
recently read indicate that 64,000 is a more appropriate iteration
count for modern hardware. Experimentation with my home-based
servers indicates that if an implementation design wants to serve a
similar purpose to DES's performance from the mid-'70s (a.k.a. one-second
login times) you'll need far more iterations.
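
To see how an iteration count translates into wall-clock cost on a given box, one can calibrate empirically rather than hard-code a number. A rough sketch (my own, hardware-dependent; the target and starting count are assumptions, not recommendations from this thread):

```python
import hashlib, time

def calibrate_iterations(target_seconds: float, start: int = 1000) -> int:
    # Double the iteration count until a single PBKDF2 call takes at least
    # target_seconds on *this* hardware. Results vary widely by machine,
    # and say nothing about an attacker's GPU throughput.
    iterations = start
    while True:
        t0 = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", b"password", b"salt", iterations)
        if time.perf_counter() - t0 >= target_seconds:
            return iterations
        iterations *= 2

# Example: aim for ~100 ms per verification (hypothetical target)
# calibrate_iterations(0.1)
```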

***My opinion as an architect is this: when your design relies on a
for loop and you've amped the guard up to tens of thousands, it's time
to rely on another--more deterministic--approach.

***As a second (more cheeky) aside, the bcrypt vs. pbkdf argument
reminds me of a bus ride home from school as a child. Kid #1: "I'm
infinity smarter than you..." Kid #2: "Well, I'm infinity smarter than
you + 1...": when you have an iteration count in both cases, when in
both cases the system entropy is fixed (salt + pw), and when the
relative baseline performance of one vs. the other is a small multiple
(i.e., 5-64x... hell, 1000x), does the relative performance differ
appreciably as the iteration count "c" increases? ;-)

***scrypt, for instance, is an interesting alternative to bcrypt or
pbkdf2. It requires a fair bit more configuration and tuning to
implement effectively. I don't believe an scrypt-based engine would
suit large commercial web portals, because of its lack of enterprise
support, its load and performance properties, and its (relative)
immaturity, but I remain committed to determining effective settings for
scrypt's N, r, & p parameters for the more adventurous.
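
For the adventurous: Python's standard library (3.6+, built against OpenSSL 1.1+) exposes scrypt directly. A minimal sketch using commonly cited interactive-login parameters (N=2^14, r=8, p=1 -- an assumption for illustration, not a tuned recommendation from this thread):

```python
import hashlib, os

# scrypt's cost parameters: n = CPU/memory cost (power of 2), r = block size,
# p = parallelization. Memory use is roughly 128 * r * n bytes (~16 MiB here),
# which is what buys resistance to GPU-style parallel attack.
salt = os.urandom(16)
digest = hashlib.scrypt(b"correct horse battery staple", salt=salt,
                        n=2**14, r=8, p=1, dklen=32)
```

Raising n increases both CPU and memory cost together; r and p let you tune the memory footprint and parallelism independently, which is exactly the extra configuration burden noted above.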

I don't mean to be obtuse... I am working on solutions to this problem
that fit the various constraints (FIPS-compliant or older
environments, currently supported and well-reviewed crypto providers, and
the--in my opinion--problematic adaptive-hash approaches). Remember,
the reason I blogged an unsatisfactory spot-fix for this situation
(properly caveated) is that thoughtful 'green-field' design takes
far longer than the news cycle... I continue to apologize to those
(such as Jim) who await its completion. Adrian, I'll share my
existing design, including the engine specification, with you in advance
of release so you can both comment on it and benefit from it.
Finally, I'd like to remind readers of a key design point covered by
my original post but by neither this email's analysis nor the
previously quoted thread: the notion that--even if you use an adaptive
hash--you need a series of mechanisms that will be responsible for
actually rotating your storage scheme. Simply placing an iteration
count in the core 'engine' of your PSM algorithm is not sufficient to
handle 1) a barrier to fraudulent users while under attack, 2) rotation,
and 3) backup/retention of previous schemes. As my first email on the
topic stated: planning for these features may be more valuable in
practice than perfecting the core algorithm itself.
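
One way to sketch such a rotation mechanism (my illustration, not the PSM design itself): tag each stored record with a scheme version, verify against the scheme that produced it, and re-hash under the current scheme on the next successful login. Scheme numbers and iteration counts here are assumptions.

```python
import hashlib, hmac, os

# Hypothetical scheme registry: each stored record carries a version tag so
# the engine knows how to verify it and whether it needs rotating.
SCHEMES = {
    1: lambda pw, salt: hashlib.pbkdf2_hmac("sha256", pw, salt, 5000),
    2: lambda pw, salt: hashlib.pbkdf2_hmac("sha256", pw, salt, 64000),
}
CURRENT = 2

def store(pw):
    # Hash under the current scheme; a record is (version, salt, digest).
    salt = os.urandom(16)
    return (CURRENT, salt, SCHEMES[CURRENT](pw, salt))

def verify_and_rotate(pw, record):
    version, salt, digest = record
    if not hmac.compare_digest(SCHEMES[version](pw, salt), digest):
        return False, record
    # Successful login under an old scheme: re-hash under the current one,
    # since the plaintext password is available only at this moment.
    return True, (record if version == CURRENT else store(pw))
```

Rotation only happens when users log in, which is why backup/retention of previous schemes (point 3 above) still has to be planned for the records that never get touched.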

Phone: 703.727.4034
Rss: http://feeds.feedburner.com/M1splacedOnTheWeb

On Sun, Jul 1, 2012 at 11:45 PM, Adrian Hayes <adrian.hayes at owasp.org> wrote:
> The hashing algorithm you've described is (afaik) perfectly reasonable. It
> is similar to PBKDF2 ( https://en.wikipedia.org/wiki/PBKDF2) but with a
> system salt. PBKDF2 has the advantage that it's been well reviewed and used
> for a number of years and is (maybe?) less easy for developers to mess up. How
> about something like this?
> hash = PBKDF2('HMAC-SHA-256', user_password + system_salt, user_salt, 5000,
> 160)
> 5000 = number of rounds
> 160 = output hash length
> I understand PBKDF2 is FIPS compliant
> (http://csrc.nist.gov/publications/nistpubs/800-132/nist-sp800-132.pdf) if
> used with more than 1000 rounds, proper random salt generation, and a
> compliant hashing algorithm.
> However, PBKDF2 is not a memory hard algorithm. If you can use a memory hard
> algorithm you can help remove the GPU cracking advantage (which can be a big
> deal, for SHA-1 GPUs are around 60 - 300 times faster depending on the GPU).
> I understand bcrypt is memory hard enough to foil GPU-based crackers (for
> now), or at least I haven't seen one yet (if you know one, let me know).
> There seem to be many reasonable password hashing algorithms and it seems
> like ideally you should have these things:
> A well reviewed/tested algorithm
> Adjustably high computational complexity
> Both per user and per system salts
> Relatively high memory usage (compared to say, SHA-1)
> Which I'm not sure you can do while staying FIPS compliant.
> Adrian Hayes
> OWASP New Zealand Chapter Leader (Wellington)
> On 02/07/12 08:53, Jim Manico wrote:
> A lot of people are looking at the password storage cheat sheet due to
> recent breaches.
> One of the recommendations we make is regarding "hash chaining" which is
> to repeat the hash algorithm many times to slow down cracking of hashed
> passwords.
> Bcrypt does this (and in general is a good choice for password storage),
> but is often not permitted in some organizations due to its Blowfish
> roots.
> So when we cannot use B/S-Crypt, we recommend (ie: via ESAPI and the
> Password Storage Cheat-sheet) the use of one of the SHA family of
> algorithms. In addition to hashing we of course recommend salting. SHA
> variants are insanely fast, and to slow down password cracking we
> recommend hash iteration (which is really formally called "Hash Chaining").
> One of the big mistakes we have made in ESAPI and in the Password
> Storage cheat-sheet is that we recommend iterating just the hash.
> There is no cryptographic benefit to this mechanism in the face of large
> rainbow tables, which do indeed exist.
> So this pseudo code from ESAPI is a (total) failure in terms of slowing
> down SHA-based password storage:
> // rehash a number of times to help strengthen weak passwords
> bytes = hash(user-salt + system-salt + password)
> for (int i = 0; i < iterations; i++) {
>     bytes = hash(bytes)
> }
> And the more cryptographically sound way to store a password based on
> SHA is something along the lines of:
> // rehash a number of times to help strengthen weak passwords
> bytes = hash(user-salt + system-salt + password)
> for (int i = 0; i < iterations; i++) {
>     bytes = hash(bytes + user-salt + system-salt + password + hash(i))
> }
> I advise that the ESAPI team consider strengthening the current Java
> implementation. The cheat-sheet team will make changes to the Password
> Storage Cheatsheet as well in the near future. For more information,
> please read up on the Merkle theorem (Merkle's time-memory trade-off to
> be more specific).
> I am admittedly not a cryptographer, or even an applied cryptographer.
> :) Comments are greatly appreciated. I think we have a long way to go as
> a community to clarify the best way to store a password.  John Steven -
> looking forward to the next iteration of your threat model. :)

More information about the OWASP-Leaders mailing list