Re: Sticky stuff.

>The Gauntlet (TM) Internet firewall V3.2 has an http proxy/gateway that
>does maintain connections between the client and the proxy. But since this
>is over Ethernet or faster, sticky headers wouldn't be much of a win. The
>proxy does NOT do onward persistant connections yet. 

This is not the case Roy is citing. Say you have two clients, Alice and
Bob. Alice goes to the CNN site and the proxy establishes a connection;
Bob then comes along and requests a page from CNN. The proxy re-uses the
existing connection to CNN.

In the absence of any other considerations this would be a useful
optimisation which might save a measurable amount of time, but
since Bob can only make his request after Alice has finished,
I doubt it is a very worthwhile optimisation.

Unless you keep the TCP/IP socket open for a while after Alice finishes,
the chance that Bob's request reaches the proxy in the window when Alice
is finished but the connection is still around is very low. If you do
hold the socket open, both the proxy and the remote server pay for that.
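To make the trade-off concrete, here is a rough sketch (in Python; the
names and the 15-second window are my own invention, nothing from the
protocol) of what such a fan-in pool at the proxy would look like. The
idle window is exactly where the proxy and the origin server both pay:

    import socket
    import time

    IDLE_TIMEOUT = 15.0   # seconds the proxy holds an idle upstream socket

    _idle_pool = {}       # origin host -> (socket, time it went idle)

    def get_upstream(host, port=80):
        # Reuse an idle upstream connection if one is still around,
        # otherwise open a fresh one.
        entry = _idle_pool.pop(host, None)
        if entry is not None:
            sock, idle_since = entry
            if time.monotonic() - idle_since < IDLE_TIMEOUT:
                return sock    # Bob arrived inside the window: reuse
            sock.close()       # idle too long; both ends paid for nothing
        return socket.create_connection((host, port))

    def release_upstream(host, sock):
        # Return the connection once the current response is complete.
        # Until then a second client is simply queued behind the first.
        _idle_pool[host] = (sock, time.monotonic())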

I simply don't see any worthwhile benefit here; at best I see more
benefit in sticky headers than in the fan-in optimisation.


I think we have to be realistic about what caching can and cannot 
achieve. Caching is only going to be effective if the response
is insensitive to the request.
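To put it another way: the moment the cache key has to take the request
into account, two clients asking for the same page stop helping each
other. A toy illustration (Python; the key scheme is invented for the
example):

    import hashlib

    def cache_key(url, request_headers):
        # If the response depends on request headers, they must be part
        # of the key -- and then near-identical requests no longer share
        # a cache entry.
        h = hashlib.md5(url.encode())
        for name, value in sorted(request_headers.items()):
            h.update(("%s:%s" % (name, value)).encode())
        return h.hexdigest()

    alice = cache_key("http://cnn.com/", {"Accept": "text/html, image/gif"})
    bob   = cache_key("http://cnn.com/", {"Accept": "text/html, image/jpeg"})
    assert alice != bob    # same page, but no cache hit for Bob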

I suggested a form of header compression a while ago ('93), in the
days when Mosaic spewed forth about 2K of Accept headers. The (not
so good) idea then was to hash the headers with MD5 so that
servers could quickly build up a database of browser "profiles"
describing their capabilities. In those days 2K was a worthwhile
saving because it brought the Accept list down to a reasonable
size. I consider multi-circuit content negotiation on the
Spero or Holtman model a better solution, especially since these
days we have proxies that can fiddle with the headers (!).
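For the record, the hack looked roughly like this (a sketch only; the
function names and the fallback protocol are invented for illustration):

    import hashlib

    profiles = {}    # digest -> full header block, built up server-side

    def profile_digest(accept_headers):
        return hashlib.md5(accept_headers.encode()).hexdigest()

    # The failure mode: any proxy that rewrites headers in transit changes
    # the digest, so the server sees an endless stream of "new" profiles.
    def lookup(digest, full_headers=None):
        if digest in profiles:
            return profiles[digest]
        if full_headers is None:
            # Unknown digest: the client must resend the full ~2K Accept
            # list once; after that the 32-character digest suffices.
            raise LookupError("unknown profile; resend full headers")
        profiles[digest] = full_headers
        return full_headers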


I think that before accepting any optimisation we should set
a fairly hefty threshold for it to cross before it is allowed
to constrain the protocol. If we had adopted my compression hack
in '93 we would have been stuck when it came to proxies, and
tokenising HTTP may well fall into the same category. We are in
danger of missing out on major optimisations if we worry overmuch
about minor ones. Numbers have to rule here, and Koen's numbers
seem to rule out a significant advantage.


		Phill

Received on Friday, 9 August 1996 11:31:07 UTC