RE: indefinite server-push (was 'Last-Modified in chunked footer')

On Monday, September 15, 1997 11:32 AM, Larry Masinter [] wrote:

> but seriously, shouldn't there be an expectation that a single
> HTTP request should get a complete reply, properly terminated,
> within a relatively small amount of time, and that a continuous
> entity body without termination (delivered through chunked encoding,
> perhaps) is not a valid HTTP response?

No, there has never been this assumption and it is impossible to
put one in now.

Servers already commonly deliver far more content than is practical
to store to disk. I regularly use HTTP to transfer files of several hundred
MB. It is far faster using HTTP than NFS on our LAN, and I suspect on
many others. TCP/IP's stream connection has great performance
advantages over UDP. If you don't believe me, set up two Alphas running
Digital UNIX on an entirely separate LAN and benchmark them.
Incidentally, HTTP also outperforms FTP if the server providing the
data is a Mac, and in any case vastly reduces the probability of
the data being corrupted by braindamaged character conversion.

A perfectly reasonable use for HTTP is to use it to transfer backups
across a LAN.

> If we don't disallow such things, a proxy implementation which attempted
> to buffer complete responses before sending them on would be
> non-compliant.

Such proxies are simply broken. There are plenty of good ones available.
I see no reason to cripple the spec to make it easy for people with a
broken O/S to write code. It is a trivial matter to implement a
pass-through buffering system with threads; libwww even does this
without threads. At this point, however, I would regard any system
without threads as 'legacy' and not deserving of having the spec
mangled to pander to it.
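To make the point concrete, here is a minimal sketch (mine, not
libwww's actual code) of pass-through buffering with two threads: a
reader feeds a bounded queue while a writer drains it, so data is
forwarded as it arrives and the proxy never needs to hold a complete,
possibly endless, response. The `read_piece`/`write_piece` callables
are assumed placeholders for the upstream and downstream sockets.

```python
import threading
import queue

def pass_through(read_piece, write_piece, bufsize=8):
    """Forward pieces from read_piece() to write_piece() as they arrive.
    read_piece() returns None at end-of-stream. The bounded queue means
    only bufsize pieces are ever held, regardless of total body length."""
    q = queue.Queue(maxsize=bufsize)

    def reader():
        while True:
            piece = read_piece()
            q.put(piece)
            if piece is None:  # upstream finished (or closed)
                break

    t = threading.Thread(target=reader)
    t.start()
    while True:
        piece = q.get()
        if piece is None:
            break
        write_piece(piece)
    t.join()

# Example: forward three pieces; None marks end-of-stream.
pieces = iter([b"a", b"b", b"c", None])
out = []
pass_through(lambda: next(pieces), out.append)
```

The bounded queue is the whole trick: the proxy applies backpressure
instead of accumulating the response, so an indefinite entity body
costs constant memory.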

A continuous entity body allows the creation of interactive, chat-like
services that work much better: simply send the data as it is
generated. The big problem is that although the clients can almost
all handle this mode of use, there is no way of making the window
scroll down to show the most recent material.
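For illustration, a minimal sketch of what "send the data as it is
generated" looks like on the wire with the chunked coding: each piece
is framed as "&lt;hex length&gt;CRLF&lt;data&gt;CRLF" the moment it is
produced, and a zero-length chunk terminates the body (the function
names here are mine, not from any particular implementation).

```python
def encode_chunk(data: bytes) -> bytes:
    """Frame one piece of generated data as a single chunk:
    hex length, CRLF, the data, CRLF."""
    return b"%x\r\n%s\r\n" % (len(data), data)

def stream_chunked(pieces):
    """Yield wire bytes for each piece as it becomes available,
    then the terminating zero-length chunk."""
    for piece in pieces:
        yield encode_chunk(piece)
    yield b"0\r\n\r\n"

# Example: two lines of a chat feed, sent as they are produced.
wire = b"".join(stream_chunked([b"hello\n", b"world\n"]))
```

Nothing in this framing requires the sender to know the total length
in advance, which is exactly what makes the continuous-body use
possible.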

I really don't think that any change in this area is acceptable when
the spec is meant to be progressing to Proposed Standard. Arbitrary
limits on data sizes are almost always bad.

It is worth pointing out that during the development of the chunked 
spec we considered the problem of sending chunks of more than 
2^64 bytes. 
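The reason this is even possible is that the chunk-size line is
hexadecimal text, so the framing itself imposes no cap at 2^32 or
2^64 bytes; any limit lives in what a parser chooses to accept. A
hedged illustration (the function is mine, for demonstration only):

```python
def parse_chunk_size(line: bytes) -> int:
    """Parse a chunk-size line: hex digits, optionally followed by
    ';'-delimited chunk extensions, then CRLF. Python integers are
    arbitrary precision, so no size limit is introduced here."""
    size_field = line.split(b";", 1)[0].strip()
    return int(size_field, 16)

# 2^64 expressed as a 17-digit hex chunk size -- parses fine.
huge = parse_chunk_size(b"10000000000000000\r\n")
```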


Received on Monday, 15 September 1997 10:00:07 UTC