Re: [Bug 11351] New: [IndexedDB] Should we have a maximum key size (or something like that)?

From: Jonas Sicking <jonas@sicking.cc>
Date: Fri, 19 Nov 2010 19:18:21 -0800
Message-ID: <AANLkTi=EG3JezO5FREcmiOfq+VUpM-LArPUqxUdgpBZv@mail.gmail.com>
To: Bjoern Hoehrmann <derhoermi@gmx.net>
Cc: Pablo Castro <Pablo.Castro@microsoft.com>, "public-webapps@w3.org" <public-webapps@w3.org>
On Fri, Nov 19, 2010 at 7:03 PM, Bjoern Hoehrmann <derhoermi@gmx.net> wrote:
> * Pablo Castro wrote:
>>>> Just looking at this list, I guess I'm leaning towards _not_ limiting the
>>>> maximum key size and instead pushing it onto implementations to do the hard
>>>> work here.  If so, we should probably have some normative text about how bigger
>>>> keys will probably not be handled very efficiently.
>>
>>I was trying to make up my mind on this, and I'm not sure this is a good idea.
>>What would be the options for an implementation? Hashing keys into smaller values
>>is pretty painful because of sorting requirements (we'd have to index the data
>>twice, once for the key prefix that fits within limits, and a second one for a
>>hash plus some sort of discriminator for collisions). Just storing a prefix as
>>part of the key under the covers obviously won't fly...am I missing some other option?
>>
>>Clearly, consistency in these things is important so people don't get caught off
>>guard. I wonder if we just pick a "reasonable" limit, say 1K characters (yeah,
>>trying to do something weird to avoid details of how stuff is actually stored),
>>and run with it. I looked around at a few databases (from a single vendor :)),
>>and they seem to all be well over this but not by orders of magnitude (2KB to
>>8KB seems to be the range of upper limits for this in practice).
>
> No limit would be reasonable; the general, and reasonable, assumption is
> that if it works for X it will work for Y, unless Y is ridiculous. There
> is also little point in saying that for some values of Y performance will
> be poor: implementations will cater for what is common, which is usually
> not a constant, and when you do unusual things, you already know that it
> is not entirely reasonable to expect the "usual" performance.

The question is in part where the limit for "ridiculous" lies. 1K keys
are sort of ridiculous, though I'm sure they happen.

Note that "unusual" performance here means linear search times rather
than logarithmic, which in the case of a join could easily mean
quadratic. So it's quite commonly not "unusual" performance, but
"unacceptable" performance.
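[To make the cost concrete, here is a hypothetical sketch, not IndexedDB code; the data and step counting are invented for illustration. A join that probes a sorted index does roughly n log n comparisons, but if oversized keys force each probe to degrade to a linear scan, the same join does on the order of n^2 comparisons:]

```javascript
// Illustrative sketch: count comparisons for n index probes (one per
// row of an "outer" table) against a table of n sorted keys.

function binarySearch(sortedKeys, key) {
  // Logarithmic probe: halve the search range each step.
  let lo = 0, hi = sortedKeys.length - 1, steps = 0;
  while (lo <= hi) {
    steps++;
    const mid = (lo + hi) >> 1;
    if (sortedKeys[mid] === key) return { found: true, steps };
    if (sortedKeys[mid] < key) lo = mid + 1; else hi = mid - 1;
  }
  return { found: false, steps };
}

function linearSearch(keys, key) {
  // Linear probe: scan until the key is found.
  let steps = 0;
  for (const k of keys) {
    steps++;
    if (k === key) return { found: true, steps };
  }
  return { found: false, steps };
}

const n = 1024;
const keys = Array.from({ length: n }, (_, i) => i); // already sorted

let indexedSteps = 0, scannedSteps = 0;
for (const k of keys) {
  indexedSteps += binarySearch(keys, k).steps;  // totals ~n * log2(n)
  scannedSteps += linearSearch(keys, k).steps;  // totals ~n^2 / 2
}
console.log(indexedSteps, scannedSteps);
```

[For n = 1024, the linear version does 524,800 comparisons against at most about 11,000 for the indexed version, which is the logarithmic-versus-linear gap Jonas is pointing at.]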

> Note that, since JavaScript does not offer key-value dictionaries for
> complex keys, and now that JSON.stringify is widely implemented, it's
> quite common for people to emulate proper dictionaries by using it to
> work around this particular JavaScript limitation, which would likely
> extend to more persistent forms of storage.

I don't understand what you mean here.
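[A guess at the pattern Bjoern may have in mind, as an illustrative sketch only: pre-ES6 JavaScript object keys must be strings, so a compound key is serialized with JSON.stringify and the resulting string is used as the dictionary key. The `put`/`get` helpers and the example keys here are invented:]

```javascript
// Emulating a dictionary keyed by a compound value: serialize the key
// parts to a string, since plain object properties must be strings.
const dict = {};

function put(keyParts, value) {
  dict[JSON.stringify(keyParts)] = value;
}

function get(keyParts) {
  return dict[JSON.stringify(keyParts)];
}

put(["user", 42, "profile"], { name: "example" });
console.log(get(["user", 42, "profile"]).name); // "example"
```

[If people carry this pattern over to IndexedDB, the serialized strings become the stored keys, and they can get long, which is presumably why it bears on a key-size limit.]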

/ Jonas
Received on Saturday, 20 November 2010 03:19:29 GMT