Re: [Bug 11351] New: [IndexedDB] Should we have a maximum key size (or something like that)?

On Tue, Dec 14, 2010 at 4:26 PM, Pablo Castro <Pablo.Castro@microsoft.com> wrote:

>
> From: jorlow@google.com [mailto:jorlow@google.com] On Behalf Of Jeremy
> Orlow
> Sent: Tuesday, December 14, 2010 4:23 PM
>
> >> On Wed, Dec 15, 2010 at 12:19 AM, Pablo Castro <
> Pablo.Castro@microsoft.com> wrote:
> >>
> >> From: public-webapps-request@w3.org [mailto:
> public-webapps-request@w3.org] On Behalf Of Jonas Sicking
> >> Sent: Friday, December 10, 2010 1:42 PM
> >>
> >> >> On Fri, Dec 10, 2010 at 7:32 AM, Jeremy Orlow <jorlow@chromium.org>
> wrote:
> >> >> > Any more thoughts on this?
> >> >>
> >> >> I don't feel strongly one way or another. Implementation wise I don't
> >> >> really understand why implementations couldn't use keys of unlimited
> >> >> size. I wouldn't imagine implementations would want to use fixed-size
> >> >> allocations for every key anyway, right (which would be a strong
> >> >> reason to keep maximum size down).
> >> I don't have a very strong opinion either. I don't quite agree with the
> guideline of "having something working slowly is better than not working at
> all"...as having something not work at all sometimes may help developers hit
> a wall and think differently about their approach for a given problem. That
> said, if folks think this is an instance where we're better off not having a
> limit I'm fine with it.
> >>
> >> My only concern is that the developer might not hit this wall, but then
> some user (doing things the developer didn't fully anticipate) could hit
> that wall.  I can definitely see both sides of the argument though.  And
> elsewhere we've headed more in the direction of forcing the developer to
> think about performance, but this case seems a bit more non-deterministic
> than any of those.
>
> Yeah, that's a good point for this case, avoiding data-dependent errors is
> probably worth the perf hit.


My current thinking is that we should have some relatively large
limit... maybe on the order of 64 KB?  It seems like it'd be very difficult
to hit such a limit with any sort of legitimate use case, and the chances of
some subtle data-dependent error would be much smaller.  But a 1 GB key is
just not going to work well in any implementation (if it doesn't simply OOM
the process!).  So despite what I said earlier, I guess I think we should
have some limit... but keep it an order of magnitude or two larger than what
we expect any legitimate usage to hit, just to keep the system as flexible
as possible.

Does that sound reasonable to people?

J

Received on Sunday, 6 February 2011 20:43:25 UTC