Re: Connection Header

john@math.nwu.edu (John Franks) wrote:

> According to Henrik Frystyk Nielsen:
> > 
> > Using Allow has the same problem as if a Connection header is passed
> > through a proxy. Imagine the following setup
> > 
> > 	new client      ->      old proxy       ->      new server
> > 
> > On the first request, the new server will then send back a
> > 
> > 	Allow: GET, MGET, HEAD, etc.
> > 
> > On the second request, the new client will try a MGET, but the proxy
> > doesn't understand this and then we are back to the old problem having
> > lost one roundtrip time. 
> 
> I don't see this as a very big problem.  We have lost one roundtrip
> time, but only a roundtrip between client and proxy.  In normal use
> client and proxy are very close with low latency.  I don't think this
> is a very big price to pay.  This proposal then has the big advantage
> that it is backwards compatible.  It isn't necessary to try to change
> HTTP1.0 proxies to get them to strip out anything. (I am assuming that
> current proxies will return a standard error status/message for an
> unrecognized method.)

YEP, but they will also close the connection and leave the socket in
the TIME_WAIT state for about 240 seconds, so that it can't be
reused. Then we get quite the opposite result of what we wanted to
begin with. I don't think the question "which idea is the most
backwards compatible" is the most important one. In practice, both
require some kind of modification to current implementations in order
to work well. Another factor is that there is often `human
interaction' between proxy clients and proxy servers: they are under
the same sysadmin, etc.
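
To make the failure concrete, here is a sketch of the exchange (the
MGET request line and the 501 status are my invention for
illustration; HTTP/1.0 only requires that the proxy report some error
for an unknown method):

	new client -> old proxy:    MGET /a.html /b.html HTTP/1.0

	old proxy -> new client:    HTTP/1.0 501 Not Implemented
	                            (proxy closes the TCP connection)

The client then has to open a fresh connection and fall back to plain
GETs, while the closed socket lingers in TIME_WAIT.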

> The criticism of MGET that it is not as general as keep-open is true,
> but I think there is serious danger of performance degradation if that
> generality is used.

I don't quite follow; could you please give some examples?

> More likely, unless those concerns can be met, keep-open
> would not be widely implemented.  
>
> I would assume that an MGET addition to the protocol would be accompanied
> by a parallel MHEAD method.  It would still not be possible to mix 
> GETs and POSTs or even do multiple POSTs. I don't see any pressing demand
> for these, but perhaps I am just not aware of it.

I can see several things that I would like to do. First, it would be
nice to use the Web for distributed code development. When you check
your files into a remote code management system, you can use PUT.
Now, as you often have a lot of files, it would be great to do all of
this over one connection. Or to use many POSTs over one TCP
connection instead of FTP. In the spec we have done a great deal of
work trying to give the client the same capabilities as the server,
and we should not limit this with MGET, MHEAD, MPUT, etc.
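
As a sketch, assuming a keep-open style connection (the Connection:
keep-alive header and its reuse semantics are an assumption here, not
settled protocol, and the paths, lengths, and bodies are
placeholders), two check-ins could share one TCP connection:

	PUT /src/parser.c HTTP/1.0
	Connection: keep-alive
	Content-Length: 4012

	<body of parser.c>

	PUT /src/lexer.c HTTP/1.0
	Connection: keep-alive
	Content-Length: 2790

	<body of lexer.c>

Both requests ride on the same connection, and a GET or POST could be
mixed in just as easily; a method-specific MPUT could not do that.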

In the next six months you will see many clients become a lot more
interactive, and I will do everything I can to support this in the
Library of Common Code.

> There is an important principle to keep in mind here.  Any proposal
> that can't at least match the user's perceived performance which
> Netscape obtains with multiple connections is not viable. It's a
> competitive world out there and browser writers will have to go with
> multiple connections if nothing else matches their performance.  Any
> proposed standard, MGET or stay-open, which can't measure up to
> multiple connection performance just won't be implemented.

As I wrote in an earlier mail: this is OK if one client does it, but
if everybody does it then they all lose. In other words, it must be
scalable!

-- cheers --

Henrik

Received on Monday, 19 December 1994 01:06:03 UTC