
Re: Hash functions

From: Boris Zbarsky <bzbarsky@MIT.EDU>
Date: Mon, 20 Dec 2010 20:49:48 -0500
Message-ID: <4D1007BC.1020308@mit.edu>
To: Glenn Maynard <glenn@zewt.org>
CC: public-webapps@w3.org
On 12/20/10 7:42 PM, Glenn Maynard wrote:
> Has a hash functions API been considered, so browsers can expose, for
> example, a native SHA-1 implementation?  Doing this in JS is possible,
> but painfully slow, even with current JS implementations.

Before we go further into this, can we quantify "painfully slow" so we 
have some idea of the magnitude of the problem?

Using the testcase at 
https://bugzilla.mozilla.org/attachment.cgi?id=487844, but modifying the 
input string to be 768 chars, I see a current JS implementation on 
modern laptop hardware take around 7 seconds to hash it 50,000 times.

Or if I modify it to only calculate one hash, but have that be the hash 
of a 3,840,000 character string, I get times around 400ms.

Running a command-line shasum utility over the same 3,840,000 characters 
(as ASCII in a file) on the same hardware is about 8x faster than that 
for me (50ms or so).
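A hypothetical reconstruction of that comparison (the file name and the way the file is generated are made up; sha1sum is the coreutils counterpart of shasum):

```shell
# Build a 3,840,000-byte ASCII file and time a native SHA-1 over it.
head -c 3840000 /dev/zero | tr '\0' 'a' > input.txt
time sha1sum input.txt
```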

I have not looked at the actual JS SHA-1 implementation involved here, 
by the way; chances are it's far from optimal for a JS 
implementation....  Chances also are it will get faster as efforts like 
Crankshaft, Mozilla's type inference work, etc. get deployed.

And of course for a different hashing algorithm all bets would be off in 
terms of JS performance; it might be more or less JS-amenable....

So I guess the question is how much data we want to be pushing through 
the hash function and what throughput we expect, and whether we think JS 
engines simply won't get there or will take too long to do so.

-Boris
Received on Tuesday, 21 December 2010 01:50:24 GMT
