Re: indefinite server-push (was 'Last-Modified in chunked footer')

On Mon, 15 Sep 1997, Larry Masinter wrote:

> In reply to:
> > I think that "indefinitely long server push" should be explicitly
> > disallowed. What's a robot to do, for example? I suppose this
> > is a different topic.
> 
> Ben Laurie said:
> > Try ringing your local TV station and telling them they have to stop
> > broadcasting after 3 hours, because your VCR runs out of tape. :-)
> 
> but seriously, shouldn't there be an expectation that a single
> HTTP request should get a complete reply, properly terminated,
> within a relatively small amount of time, and that a continuous
> entity body without termination (delivered through chunked encoding,
> perhaps) is not a valid HTTP response?
> 
> If we don't disallow such things, a proxy implementation which attempted
> to buffer complete responses before sending them on would be
> non-compliant.

The problem can be forced on a proxy if it receives two pipelined requests
on a single persistent connection and needs to fan the requests out to two
different origin servers. If, for some reason, the first request is very
slow to process, the second request, which could be for an arbitrarily
large object, could choke a proxy that has to queue its response while
waiting for the end of the first.
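
To make that concrete, here is a rough sketch of the scenario (the host
names and URLs are invented for illustration):

    Client -> Proxy, one persistent connection, two pipelined requests:

        GET http://slow.example.com/report HTTP/1.1
        Host: slow.example.com

        GET http://fast.example.com/huge-archive HTTP/1.1
        Host: fast.example.com

    The proxy fans these out to the two origin servers but must return
    the responses in request order, so if slow.example.com takes minutes
    to answer, the proxy has to hold the response from fast.example.com,
    however large, until the first response has been completely relayed.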

I'm inclined to share the concern, but I'm having difficulty conceiving
of a solution that can be stated in a way that doesn't break other
normal usage.

Possibilities I've thought of:
1.  Provide a proxy response code which essentially says that the
    response to the request was impossible to proxy/buffer. Then add
    a new Connection: token such as "nobuffer", which could be used
    ONLY on a non-persistent connection and would advise the proxy
    that the data stream should be routed directly to the client;
    in other words, almost tunnel mode. This would allow for backoff
    and retry (a rough sketch follows this list).

2.  In any case, there must be some kind of normative permission
    which allows a proxy to reject any response that requires too
    many resources to handle. The real flaw in chunked encoding is
    that a proxy has no clue, at the beginning of the response, how
    large it will be, so an escape must be allowed. What counts as
    "too much" is quite subjective: what can we say about the
    minimum-sized object a proxy should be able to handle? I don't
    see much difference between a push stream and a large object.

3.  The general problem of push and proxies would seem to be that,
    for at least some applications, having the proxy cache the data
    would be contrary to the needs of the application. Solving this
    seems well out of scope if the WG is to close on a timely basis.
    And perhaps the problem is somewhat self-limiting: the PUSH
    server can do its 'chunking' at a higher level and send multiple
    objects of moderate size. Such objects would arrive on a timely
    basis, would not choke proxies, and would fit well with the
    persistent connection model we already have.
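
As a rough illustration of possibility 1 (the "nobuffer" token is the
one proposed above; the status code and reason phrase are purely
hypothetical, not existing protocol):

    First attempt; the proxy declines to buffer an unbounded response:

        C -> P:  GET http://push.example.com/ticker HTTP/1.1
                 Host: push.example.com

        P -> C:  HTTP/1.1 5xx Cannot Buffer Response

    Retry, asking for pass-through on a non-persistent connection:

        C -> P:  GET http://push.example.com/ticker HTTP/1.1
                 Host: push.example.com
                 Connection: nobuffer, close

        P -> C:  HTTP/1.1 200 OK
                 Transfer-Encoding: chunked

                 ... chunks relayed to the client as they arrive,
                 without being buffered at the proxy ...

Presumably the connection has to be non-persistent because a proxy in
almost-tunnel mode no longer delimits messages itself, so closing the
connection is its only way to mark the end of the response.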

Anyway those are some thoughts.

Dave Morris

Received on Monday, 15 September 1997 10:11:05 UTC