- From: Chuck Shotton <cshotton@biap.com>
- Date: Mon, 13 Nov 1995 17:19:13 -0600
- To: Lou Montulli <montulli@mozilla.com>
- Cc: Benjamin Franz <snowhare@netimages.com>, Gavin Nicol <gtn@ebt.com>, fielding@avron.ICS.UCI.EDU, masinter@parc.xerox.com, ari@netscape.com, john@math.nwu.edu, http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
At 2:49 PM 11/13/95, Lou Montulli wrote:
>If your server wishes to compute data on the fly then that's fine.
>Ari's byterange proposal allows the server to specify explicitly
>which objects are byterange seekable and those that are not.
>Since your server is not smart enough to be able to compute a
>byterange, you can simply keep byteranges off.

It's not a matter of being "smart enough". It's a matter of whether or
not there is a BETTER solution that doesn't impose additional
computational overhead on the server side. For generated content, the
content length isn't known, so you are saying that in addition to the
normal operations a server has to do, when a byte range request is made
it also has to count all the bytes it intends to emit and emit only the
ones that fall within a given range. (A sketch at the end of this
message illustrates what that entails.)

This whole proposal made great sense when it was motivated by serving
unmodified binary PDF files out of a static file system. The server
implementation was trivial. But in trying to generalize this into a
URL-oriented solution, we run across many content types where it would
be nice for byte ranges to work, but where the implementation of the
currently proposed scheme is non-trivial, compute intensive, and of
dubious value for anything other than static binary files.

>> If the issue is to deliver portions of an entire document because that
>> portion is a recognizably distinct object that the browser can deal
>> with, I say let the server specify how those parts are to be requested
>> and delivered. This is a much more rational, useful reason for byte
>> range extensions to exist. Trying to justify them with some specious
>> argument about resumed file transfers is perhaps the biggest red
>> herring of all.
>
>You seem to be content with ignoring the fact that users interrupt
>transfers on nearly every heavily laden graphical page they visit. This
>causes many many valid partial transfers to occur. If you want to sit
>around and ignore the issues users are having trouble with that's fine,
>but don't try and block a perfectly valid solution to a very serious
>problem.

First of all, I *don't* understand the "seriousness" of the problem. If
a user chooses to terminate a transfer, that was a conscious decision on
their part. As far as I've seen, the trouble with partial transfers lies
with certain Web client applications that can't distinguish between a
partial and a complete file transfer, resulting in corrupted client-side
caches. I see no aspect of that problem that is solved by byte ranges.

Let's get the goal straight once and for all. This whole byte range
thing originally came up last summer around the desire to serve portions
of large PDF documents. Are you saying that the issue now is the ability
to resume interrupted file transfers? If so, then why are we dorking
around with a URL-oriented solution? This screams for a modification to
the GET method.

What you are saying is that you still want to retrieve a given URL (from
the client's perspective), but you'd like to GET only the portion you
don't already have. Much as the If-Modified-Since header affects server
responses, a Byte-Range: header seems more appropriate than convoluting
the URL itself. I guess it's really just a matter of semantics, but all
the special punctuation, separators, and other ca-ca that hang off the
ends of URLs could in many cases be more easily represented as header
fields in a GET or POST request.
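To make the comparison concrete, here is roughly what the two styles of
request might look like on the wire. The "Byte-Range:" header name and
the URL punctuation below are purely illustrative inventions; I am not
quoting the exact syntax of Ari's draft:

   URL-oriented (the range convolutes the URL itself):

      GET /docs/spec.pdf;byterange=10000-19999 HTTP/1.0

   Header-oriented (the URL stays untouched and the range rides in a
   request header, analogous to If-Modified-Since):

      GET /docs/spec.pdf HTTP/1.0
      Byte-Range: 10000-19999

Either way the client asks for the same resource; only the second form
keeps the name of the resource separate from the instructions about how
much of it to send.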
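And to put the earlier overhead argument in concrete terms, here is a
rough sketch in C of the filtering a server would have to do to honor a
byte-range request against content it generates on the fly. The
range_write() function and the [first, last] byte-offset representation
are my own inventions for illustration, not anything from the draft:

   #include <stdio.h>
   #include <string.h>

   /* Hypothetical sketch: emit only bytes [first, last] of a
      dynamically generated response.  Every byte must still be
      generated and counted, even though most are discarded. */

   static long bytes_seen = 0;

   static void range_write(const char *buf, long len,
                           long first, long last)
   {
       long i;
       for (i = 0; i < len; i++) {
           if (bytes_seen >= first && bytes_seen <= last)
               putchar(buf[i]);   /* inside the requested range */
           bytes_seen++;          /* counted whether sent or not */
       }
   }

   int main(void)
   {
       char line[256];
       int  i;

       /* Stand-in for a CGI script or server module whose output
          length isn't known until it has all been generated. */
       for (i = 0; i < 100000; i++) {
           sprintf(line, "generated record %d\n", i);
           range_write(line, (long) strlen(line), 10000L, 19999L);
       }
       return 0;
   }

Note that range_write() does no less work than sending the whole
response; the generation cost is paid in full either way, which is
exactly the overhead I am objecting to.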
--_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-
Chuck Shotton                                 StarNine Technologies, Inc.
chuck@starnine.com                            http://www.starnine.com/
cshotton@biap.com                             http://www.biap.com/
                "Shut up and eat your vegetables!"
Received on Monday, 13 November 1995 15:23:57 UTC