
RE: Large content size value

From: Travis Snoozy (Volt) <a-travis@microsoft.com>
Date: Thu, 4 Jan 2007 15:50:22 -0800
To: Larry Masinter <LMM@acm.org>
CC: "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
Message-ID: <86EDC3963F04D546BED8996F77D290F6049D11818D@NA-EXMSG-C138.redmond.corp.microsoft.com>

Larry Masinter said:
> > if a client can't download (or a server can't serve) a
> > file bigger than the FS can handle
>
> It is possible for a client to use range retrieval
> to get parts of a large file, even if the client couldn't
> store the whole thing because of file size limitations.
> (This can happen with JPEG2000 image files, for example.)
>
> It's quite possible for a server to serve a dynamically
> generated resource that is bigger than can fit into a
> single file on the file system.
>
> So I don't think the protocol limits and the underlying
> operating system file size limits should be linked
> in any way.
>

Excellent point! I completely ignored dynamic content (though I would think that would almost always be served with a chunked encoding in practice). You've also brought up a great solution.

Since it's possible for the client to detect when a Content-Length or a chunk length is too large, SHOULD the client then attempt a series of byte-range requests instead? This would solve all the prior problems I've mentioned, assuming the server implements that part of the protocol (does anyone know which servers do, in practice?).
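As a rough illustration of the fallback I'm suggesting (names and the 2 GiB limit are my own assumptions, not anything from the spec), the client-side decision might look like:

```python
# Hypothetical sketch: decide when an announced Content-Length exceeds
# what the client can store locally, and generate the Range header
# values for a series of partial (byte-range) requests instead.

MAX_LOCAL_SIZE = 2**31 - 1  # assumed local file-size limit, e.g. 2 GiB


def needs_range_fallback(content_length, limit=MAX_LOCAL_SIZE):
    """True if the announced entity is too large to fetch in one piece."""
    return content_length > limit


def range_headers(total, chunk=2**20):
    """Yield Range header values covering bytes [0, total) in
    chunk-sized pieces, e.g. 'bytes=0-1048575', 'bytes=1048576-...'."""
    for start in range(0, total, chunk):
        end = min(start + chunk, total) - 1
        yield "bytes=%d-%d" % (start, end)
```

Each yielded value would go into a separate GET with a `Range` header, and the client would expect 206 (Partial Content) responses back, or a 200 with the full entity if the server ignores ranges.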

Also, in regard to connection handling: as far as I can tell, the client is going to have to close the connection if an oversized Content-Length shows up, since the client won't be able to read through the body reliably to reach the next response. If this is the case, is it specified anywhere? It might make for a nice suggestion (not a requirement).
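To make the close decision concrete, here is a small sketch under my own assumptions (a 64-bit signed size type as the client's limit; the function name is made up): if the header value can't be represented, the client has no way to count down the body, so the only safe choice is to drop the connection.

```python
# Hypothetical sketch: parse a Content-Length header value and decide
# whether the connection can be kept alive. If the value cannot be
# represented in the client's size type, the body boundary is unknown
# and the client must close rather than try to read the next response.

OFF_T_MAX = 2**63 - 1  # assume a 64-bit signed size type (like off_t)


def handle_content_length(header_value):
    """Return (length, keep_alive). keep_alive is False whenever the
    value is malformed, negative, or too large to represent."""
    try:
        n = int(header_value)
    except ValueError:
        return None, False
    if n < 0 or n > OFF_T_MAX:
        return None, False  # can't skip the body: must close
    return n, True
```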


Thanks,

-- Travis
Received on Thursday, 4 January 2007 23:50:34 GMT
