[Esapi-user] ESAPI Random Number Generation Broken

Jim Manico jim.manico at owasp.org
Tue Apr 8 05:37:31 UTC 2014

Ok, first of all, this is (another) clear confirmation of Rook's original
report. This is a serious problem. I suggest you create a second
implementation of this (a fork) and change the default config to point
to it. That way the more secure option is the default, and those who need
backwards compatibility have an option to roll back to. Just an idea;
I'm not 100% clear on this.

Can you also update
https://code.google.com/p/owasp-esapi-java/issues/detail?id=323 and label
it critical, or whatever severity you think is appropriate?

*sigh* I am so sorry I didn't say something earlier.

PS: Frequent reseeding is still important, Kevin... fair?

Jim Manico
(808) 652-3805

> On Apr 7, 2014, at 10:09 PM, "Kevin W. Wall" <kevin.w.wall at gmail.com> wrote:
> Okay, I ran some more tests tonight. I thought: I'm going to make this
> worse than what we have now... I decided to use java.util.Random
> rather than java.security.SecureRandom. I wanted to see how much
> worse it was.
> I ran it using the ESAPI encoding method that DefaultRandomizer
> uses. I got an effective 2 bits of entropy at the 1% significance level.
> That was for 32-character length strings consisting solely of A-Z, a-z,
> and 0-9 and it used Random.nextInt(int) similar to how DefaultRandomizer
> uses SecureRandom.nextInt(int).
> Next I changed it to call Random.nextBytes() on a 32-byte array and base64 encoded that.
> I ran through the Burp test again, only telling it to first base64-decode
> the "tokens". This time it said there were 218 bits of entropy (out of
> a possible 256 bits) at a 1% significance level. Much better, right?
> Then I used the exact same strings, but told it NOT to base64-decode
> them. Because I wanted to compare 32-character strings and
> the base64-encoded strings were 44 characters, I truncated all 20K
> lines after the first 32 characters. Running through Burp this way
> gave 174 bits at the 1% significance level. Lower than before, but better
> than the ESAPI encoding.
> So, either we are dealing with a rather biased Random.nextInt(int)
> method (the Javadoc for this method alludes to some bias, but it
> is not clear how much), or... my guess is we are doing something stupid
> with the way we are encoding it. (Well, I know we are doing something
> stupid, because we are making way more calls than we need to.)
> The call in DefaultRandomizer is normally to SecureRandom.nextInt(32)
> or 64 [in David Rook's tests; I've been using 32 just to make Burp run
> faster], but the way we are creating that string is one character position
> at a time, so for a 32 char "random" string, we are calling
> SecureRandom.nextInt(32) 32 times, then using the resulting 'int'
> to index into a character set array that consists of all upper case alpha,
> lower case alpha, and digits. From an efficiency standpoint, this of
> course is rather insane.
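> In code, that one-character-at-a-time scheme looks roughly like this
> (a sketch only, not the actual DefaultRandomizer source; the class and
> method names here are illustrative):

```java
import java.security.SecureRandom;

public class CharAtATimeRandomizer {
    // Hypothetical reconstruction of the per-character scheme described
    // above: one nextInt() call per output character, each result used
    // to index into an alphanumeric character set.
    private static final char[] CHAR_SET =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"
            .toCharArray();
    private static final SecureRandom RANDOM = new SecureRandom();

    public static String getRandomString(int length) {
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            // One SecureRandom call per character: 32 calls for a
            // 32-character "random" string.
            sb.append(CHAR_SET[RANDOM.nextInt(CHAR_SET.length)]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(getRandomString(32));
    }
}
```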
> So here is a question for all of you... How important to you is it
> to maintain an exact backward compatibility of format in terms of
> characters used?
> I can "fix" this in a secure manner by changing it to call
>    byte[] b = new byte[32];
>    secureRandom.nextBytes(b);
> and then just base64 encode 'b'. That gets us pretty close to
> what we want. (For a 32-byte array [44 chars after base64
> encoding], Burp shows 231 bits of entropy [of a possible 256
> bits].) And while that may not be "fixed" (myself, I still consider
> it broken, as would every cryptographer), it's still within the levels
> of acceptable security for most things. [Note: If I create a new
> SecureRandom instance each time it is called, the entropy
> increases from 231 bits to 239 bits, so reseeding seems to
> help... A LITTLE, but it is not the root cause of this problem.]
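> Concretely, the fix I'm proposing amounts to something like this
> (again a sketch; java.util.Base64 stands in here for whatever codec
> ESAPI would actually bundle, and the class name is illustrative):

```java
import java.security.SecureRandom;
import java.util.Base64;

public class TokenGenerator {
    // Proposed approach: fill a byte array with a SINGLE nextBytes()
    // call, then base64-encode the result -- rather than making one
    // nextInt() call per output character.
    private static final SecureRandom RANDOM = new SecureRandom();

    public static String getRandomToken() {
        byte[] b = new byte[32];
        RANDOM.nextBytes(b);  // 32 random bytes in one call
        // 32 bytes base64-encode to a 44-character string.
        return Base64.getEncoder().encodeToString(b);
    }

    public static void main(String[] args) {
        System.out.println(getRandomToken());
    }
}
```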
> Now, I suppose there may be things that break if the format of
> these "random" strings changes from what we are using today
> in DefaultRandomizer to base64-encoded strings (e.g.,
> someone might be doing regex matching), so I don't toss this
> idea out there casually. But IMO, I would never have invented my
> own encoding method to start with. Use hex encoding, or any
> radix-R encoding, or URL encoding, or whatever. But it seems
> that the current ESAPI encoding is a contributor to why this
> is an epic fail in terms of the Burp statistical analysis. While it
> looks good in theory, it apparently fails in practice. I wish I had
> a better explanation of why, but I don't.
> But to me, better to salvage something rather than have a
> critical component remain broken. But, OTOH, I wouldn't
> be breaking my code, so I'm a little biased. (I intentionally
> didn't use the ESAPI.randomizer() stuff in the ESAPI crypto
> because I didn't want to tightly couple the implementations,
> or even the interfaces, needlessly.) If that decision had been made
> in other ESAPI components, we would have an easier job
> of getting rid of some of these dependencies, but that's another
> story for another time.
> Thoughts?
> -kevin
> P.S.- If any of you have a firm grounding in statistics, perhaps you
>        could help us analyze this. I had some theories, but eventually
>        shot them down by further experimentation.
> --
> Blog: http://off-the-wall-security.blogspot.com/
> NSA: All your crypto bit are belong to us.
