W3C home > Mailing lists > Public > ietf-http-wg-old@w3.org > September to December 1995

Re: Comments on Byte range draft

From: Brian Behlendorf <brian@organic.com>
Date: Mon, 13 Nov 1995 15:15:17 -0800 (PST)
To: Chuck Shotton <cshotton@biap.com>
Cc: Benjamin Franz <snowhare@netimages.com>, http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
Message-Id: <Pine.SGI.3.91.951113144433.4932S-100000@fully.organic.com>

On Mon, 13 Nov 1995, Chuck Shotton wrote:
> I want to see some
> constructive discussion about why a CGI-based implementation of byte ranges
> is unacceptable. 

To use your Xmodem/Zmodem analogy, it's like suggesting that download
parameters be tied into the name of the object to be transferred.  Using a CGI
script to deliver parts of files is definitely not "wrong".  However, writing
a PDF viewer which presumes that any server serving PDF files also has a
"/cgi-bin/pdfsplitter" CGI script or something like it in its namespace goes
beyond the bounds of what a transport or communications protocol should
define.  The point of trying to decide on a "standard" for this is not just
so that server authors can agree to implement common support - proxy servers
need to know that a request for part of a thing can be immediately fulfilled
if the whole thing is in the cache.  This is presumably why John and Ari
brought it to the table in the first place - so that Ari can support this
kind of functionality in the Netscape proxy.  A CGI-based implementation
makes this impossible, unless you suggest that the proxy recognize and
intercept requests to /cgi-bin/pdfsplitter or whatever it is called.
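(In modern terms, the proxy optimization described above might be sketched like this. The "Range: bytes=..." syntax is the HTTP/1.1 form this draft eventually grew into, which postdates this thread, and the helper names here are hypothetical; this is a minimal illustration, not any particular proxy's implementation.)

```python
# Sketch: a proxy holding a complete cached object can answer a
# byte-range request locally, without contacting the origin server.
# parse_byte_range / serve_from_cache are hypothetical helper names.

def parse_byte_range(header, size):
    """Parse a single 'bytes=start-end' spec against an object of `size` bytes."""
    unit, _, spec = header.partition("=")
    if unit.strip() != "bytes":
        return None
    start_s, _, end_s = spec.strip().partition("-")
    if start_s == "":                       # suffix form 'bytes=-N': last N bytes
        length = int(end_s)
        return max(size - length, 0), size - 1
    start = int(start_s)
    end = int(end_s) if end_s else size - 1
    return (start, min(end, size - 1)) if start <= end else None

def serve_from_cache(cached_body, range_header):
    """Return (status, body): 206 for a satisfiable range, else the full object."""
    if range_header:
        r = parse_byte_range(range_header, len(cached_body))
        if r:
            start, end = r
            return 206, cached_body[start:end + 1]
    return 200, cached_body                 # no usable range: send everything
```

The key point is that the cache needs no application knowledge at all - slicing bytes out of a stored object works for any media type, which is exactly what a CGI-script-in-the-namespace scheme cannot give a proxy.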

I believe what you are really asking for is a way for the client and 
server to have something of a conversation about how to access sub-parts 
of content - i.e., frames of a movie, time ranges of audio, a page 
out of a PDF file.  Obviously byte ranges are inadequate for these kinds of 
application-level queries.  Let's <em>separate</em> this from the 
byte-range proposal for the time being - there are definitely better 
answers to this problem (HyTime is/was one solution), and I think it's 
ill-advised to hold everything up until it's solved.  Focus on byte 
ranges as being specifically for transmission recovery for incomplete 
files - the fact that some companies will be using them to access 
individual pages out of a PDF file should just be considered an abuse.

> I would like to see some rationale behind why the URL standard needs to be 
> changed to support an application-specific form of query for a specific
> subset of data types. 

You're right, it doesn't, I'm with you there.  

> I'd like to see a well-reasoned discussion on your
> part about the valid concerns that server implementors have regarding the
> need to re-render entire documents to serve you a byte range for low-value
> reasons like interrupted file transfers, etc. and why you think it is OK to
> ignore these concerns.

The algorithm I posted yesterday allows servers to return full 
objects even when only a part is requested - the choice is up to them, 
since that point may be hard to compute, dynamic, or whatever.  So I 
would presume that in your case the server would just always return full 
objects.  No sweat - that's what the current Netscape implementation 
presumes anyway.  Recognizing byte ranges is just a window for 
optimization.  If you don't want to do it, fine - you lose nothing.  
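(The client side of that fallback can be sketched in a few lines. A client resuming an interrupted transfer asks for a range but must accept a full response if the server opts to send the whole object; the 200/206 status codes follow later HTTP/1.1 practice, and `resume_download` is a hypothetical helper name, not code from the draft.)

```python
# Sketch: combining a partial local copy with whatever the server sends
# back. A server that ignores the range request simply returns the full
# object with status 200, and the client starts over - no one loses.

def resume_download(already_have, status, body):
    """Merge a partial local copy with the server's response."""
    if status == 206:        # server honored the range: append the tail
        return already_have + body
    if status == 200:        # server sent the whole object: replace our copy
        return body
    raise ValueError("unexpected status %d" % status)
```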

> You seem to want to trivialize the function of the Web and associated
> applications to the lowest common denominator of moving the contents of a
> file from point A to point B, a byte at a time. This is an absurdly
> low-level of abstraction. It's like talking about building a house by
> aligning protein and sugar molecules so that they form 2x4s, lining up iron
> atoms into nails, etc.

I hear Eric Drexler is making a comeback....


brian@organic.com  brian@hyperreal.com  http://www.[hyperreal,organic].com/
Received on Monday, 13 November 1995 16:08:57 UTC
