
RE: [Bug 11351] New: [IndexedDB] Should we have a maximum key size (or something like that)?

From: Pablo Castro <Pablo.Castro@microsoft.com>
Date: Tue, 15 Feb 2011 07:27:54 +0000
To: Jeremy Orlow <jorlow@chromium.org>
CC: Jonas Sicking <jonas@sicking.cc>, "public-webapps@w3.org" <public-webapps@w3.org>
Message-ID: <F108E2F6BA743C4696146F0B7111C26103662C@TK5EX14MBXC242.redmond.corp.microsoft.com>

> From: jorlow@google.com [mailto:jorlow@google.com] On Behalf Of Jeremy Orlow
> Sent: Sunday, February 06, 2011 12:43 PM
>
> On Tue, Dec 14, 2010 at 4:26 PM, Pablo Castro <Pablo.Castro@microsoft.com> wrote:
>
>> From: jorlow@google.com [mailto:jorlow@google.com] On Behalf Of Jeremy Orlow
>> Sent: Tuesday, December 14, 2010 4:23 PM
>>
>>> On Wed, Dec 15, 2010 at 12:19 AM, Pablo Castro <Pablo.Castro@microsoft.com> wrote:
>>>
>>>> From: public-webapps-request@w3.org [mailto:public-webapps-request@w3.org] On Behalf Of Jonas Sicking
>>>> Sent: Friday, December 10, 2010 1:42 PM
>>>>
>>>>> On Fri, Dec 10, 2010 at 7:32 AM, Jeremy Orlow <jorlow@chromium.org> wrote:
>>>>>
>>>>>> Any more thoughts on this?
>>>>>
>>>>> I don't feel strongly one way or another. Implementation-wise I don't
>>>>> really understand why implementations couldn't use keys of unlimited
>>>>> size. I wouldn't imagine implementations would want to use fixed-size
>>>>> allocations for every key anyway, right? (That would be a strong
>>>>> reason to keep the maximum size down.)
>>>>
>>>> I don't have a very strong opinion either. I don't quite agree with the guideline of "having something working slowly is better than not working at all", as having something not work at all can sometimes help developers hit a wall and think differently about their approach to a given problem. That said, if folks think this is an instance where we're better off not having a limit, I'm fine with it.
>>>
>>> My only concern is that the developer might not hit this wall, but then some user (doing things the developer didn't fully anticipate) could hit it. I can definitely see both sides of the argument, though. Elsewhere we've headed more in the direction of forcing the developer to think about performance, but this case seems more non-deterministic than any of those.
>>
>> Yeah, that's a good point for this case; avoiding data-dependent errors is probably worth the perf hit.
>
> My current thinking is that we should have some relatively large limit... maybe on the order of 64k? It seems like it'd be very difficult to hit such a limit with any legitimate use case, and the chances of some subtle data-dependent error would be much smaller. But a 1GB key is just not going to work well in any implementation (if it doesn't simply OOM the process!). So despite what I said earlier, I guess I think we should have some limit... but keep it an order of magnitude or two larger than what we expect any legitimate usage to hit, just to keep the system as flexible as possible.
> Does that sound reasonable to people?

I thought we were trying to avoid data-dependent errors and thus shooting for having no limit (which may translate into very large limits in actual implementations, but not the kind of thing you'd typically hit).

Specifying an exact size may be a bit weird... I guess an alternative could be to spec the minimum size UAs need to support. A related problem is what units this is specified in; if it's bytes, then developers need to make assumptions about how strings are stored.
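To make the units problem concrete, here is a small sketch (mine, not from the thread) of how the same string key measures differently depending on whether a UA counts UTF-8 bytes or UTF-16 code units:

```javascript
// Illustrative only: the byte size of one string key under two encodings.
// "é" (U+00E9) is 2 bytes in UTF-8 but one 2-byte UTF-16 code unit; the key
// emoji (U+1F511) is 4 bytes in UTF-8 and a surrogate pair (4 bytes) in UTF-16.
const key = "cl\u00e9-\u{1F511}";

const utf8Bytes = new TextEncoder().encode(key).length; // 9
const utf16Bytes = key.length * 2;                      // 12 (6 UTF-16 code units)

console.log(utf8Bytes, utf16Bytes); // prints: 9 12
```

So a byte-denominated limit would admit different sets of string keys in a UTF-8-backed store than in a UTF-16-backed one, which is exactly the assumption developers would be forced to make.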

Received on Tuesday, 15 February 2011 07:28:32 UTC
