
Re: Large content size value

From: Jamie Lokier <jamie@shareable.org>
Date: Sat, 6 Jan 2007 17:40:17 +0000
To: Larry Masinter <LMM@acm.org>
Cc: ietf-http-wg@w3.org
Message-ID: <20070106174016.GA28198@mail.shareable.org>

Larry Masinter wrote:
> One use case for large content size files that aren't downloaded
> in entirety are JPEG2000 image files using only range retrieval,

I've also seen ranges used to stream video in a home video display
system.  It actually fetches about a frame's worth of data with each
request.  (Why they don't just stream it starting from a particular
position, I've no idea).
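(To make the per-frame fetching concrete, here is a minimal sketch of how such a client might build its Range header for each frame. The frame size and the whole-frames-per-request assumption are mine for illustration; the original message doesn't say how that system actually sizes its requests. Note that HTTP byte ranges are inclusive on both ends.)

```python
def frame_range_header(frame_index, frame_size):
    """Build the Range header for one frame's worth of bytes.

    HTTP byte ranges are inclusive at both ends, so a frame of
    frame_size bytes starting at offset start spans
    start .. start + frame_size - 1.
    """
    start = frame_index * frame_size
    end = start + frame_size - 1
    return "Range: bytes=%d-%d" % (start, end)

# e.g. frame_range_header(0, 65536) -> "Range: bytes=0-65535"
```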

The streamed files are vastly larger than the client could possibly store.

> It's realistic to expect implementations to use 64-bit integers for
> quantities that reasonably exceed 32-bit representations,

I don't agree; many implementations couldn't possibly store anything
that big (e.g. a mobile phone), and if they have no reason to do range
requests from a large resource, there's no point in them handling
64-bit size values.


> it's realistic to expect implementations to check (and fail
> gracefully) when any received protocol value exceeds its
> representation capacity.

On that I wholeheartedly agree.  It's not hard; there's no good
excuse.  I'm astonished and disappointed that Microsoft screwed that up.
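(For illustration, a minimal sketch of that check-and-fail-gracefully behaviour. The limit constant and function name are my own inventions, not from any real implementation: a client whose native size type is 32-bit should reject a larger Content-Length outright rather than silently truncate or wrap it.)

```python
MAX_REPRESENTABLE = 2**32 - 1  # assumed capacity of a small 32-bit client

def parse_content_length(value):
    """Parse a Content-Length field value.

    Returns the length as an int, or None to signal graceful failure
    (malformed value, or one exceeding this client's capacity).
    """
    if not value.isdigit():        # reject signs, whitespace, garbage
        return None
    n = int(value)
    if n > MAX_REPRESENTABLE:      # would overflow our size type:
        return None                # fail gracefully instead of wrapping
    return n
```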

-- Jamie
Received on Saturday, 6 January 2007 17:40:43 UTC
