
Sticky stuff.

From: <hallam@etna.ai.mit.edu>
Date: Thu, 08 Aug 96 16:07:07 -0400
Message-Id: <9608082007.AA03503@Etna.ai.mit.edu>
To: http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
X-Mailing-List: <http-wg@cuckoo.hpl.hp.com> archive/latest/1253

Some comments on the sticky headers draft:-

1) The asymmetry between the client/server responses may be due
more to the current limited way in which HTTP is used than to a
feature of the problem itself. If we get servers which can implement
the PUT and POST mechanisms then I think that the situation might
well change.

2) I'm not sure how much these procedures buy us. It
would be nice to have figures. Jim G. has made good points about
the importance of getting as much useful information as possible
into the first packet sent out (i.e. before we hit the slow start
throttle). This mechanism appears to be aimed more at increasing
the efficiency of later packets.

I suspect that the control data is not a substantial fraction of
the total message size. It may be more effective to push people
to implement compression/decompression of message data
rather than to worry overmuch about the size of the control data.
Or at the very least point out this issue in the draft.

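
The comparison above can be sketched with a back-of-the-envelope
calculation; the header block and page body below are invented but
plausible for the period, so the figures are illustrative only, not the
empirical measurements asked for:

```python
import gzip

# Hypothetical mid-90s response: a small control (header) block and a
# repetitive HTML body (sizes are made up for illustration).
headers = (
    "HTTP/1.0 200 OK\r\n"
    "Date: Thu, 08 Aug 1996 20:07:07 GMT\r\n"
    "Server: CERN/3.0\r\n"
    "Content-Type: text/html\r\n"
    "Content-Length: 10240\r\n"
    "\r\n"
).encode("ascii")
body = (b"<p>" + b"some page text " * 40 + b"</p>\n") * 16

# Fraction of the message taken up by control data.
control_fraction = len(headers) / (len(headers) + len(body))
# Compressing the body attacks the much larger share of the bytes.
compressed_body = gzip.compress(body)

print(f"control data: {len(headers)} bytes "
      f"({control_fraction:.1%} of the message)")
print(f"body: {len(body)} -> {len(compressed_body)} bytes gzipped")
```

If the control data really is only a few percent of the message, the
sticky-header saving is bounded by that few percent, while body
compression works on the remaining bulk.
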
3) Section 2.2 asserts that proxies typically multiplex server
connections across multiple clients. Is this in fact the case?
What is the actual benefit of doing this? How often do two
people from the same site wish to connect to the same remote site?

I am very skeptical about people having implemented such a
feature in a multi-process server, where the interprocess communication
overhead would be very large for the payoff. Certainly
the word "typical" does not seem appropriate. I could see it
being possible in a single-process, multi-threaded server. I
would like to see figures showing how often this case comes up
before compromising other optimisations to adapt to it.

I point out this problem because much of the complication of
the spec appears to be working around this convergence of
independent sessions into a single stream.

On the other hand there may well be a number of proxies performing
useful work undoing the effects of simultaneous connections
for image downloads. In this situation combining 4 streams from
one anti-social browser into one is quite plausible, but note that
in this case the headers will probably be compressible!

4) The sticky header itself, and possibly the Connection header,
should be explicitly excluded from the set of sticky headers (!)

5) Section 6.

This section should note that the "replay attack" problem will always
be present whenever the compression technique is possible. If
an authentication technique authenticates the message itself then
it will have to be a function of the message body and hence not
sticky.

6) Some mechanism for flushing the header cache would be useful.
This would help in the multiplexing proxy server case. After
it is finished receiving input from one source the proxy can send
a "flush" message and reset the stream.
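
The flush idea can be sketched as a small receiver-side cache; the
class, method names, and "FLUSH" control message are all invented for
illustration and are not the draft's actual wire format:

```python
class StickyHeaderCache:
    """Receiver-side cache of headers carried over from earlier
    messages on the same connection (a sketch only)."""

    def __init__(self):
        self.cache = {}

    def receive(self, headers):
        # A bare "flush" control message resets the stream, e.g. when
        # a multiplexing proxy switches to a different client's input.
        if headers == "FLUSH":
            self.cache.clear()
            return {}
        # Headers present in this message override the cached ones;
        # absent ones are filled in from the cache (the sticky part).
        self.cache.update(headers)
        return dict(self.cache)

proxy = StickyHeaderCache()
first = proxy.receive({"Accept": "text/html", "Host": "a.example"})
second = proxy.receive({"Host": "b.example"})  # Accept stays sticky
proxy.receive("FLUSH")                         # new source: reset
third = proxy.receive({"Host": "c.example"})   # no stale carry-over
```

Without the flush, the third request would silently inherit the Accept
header cached from an unrelated client.
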

Overall I would like to see an awful lot of numbers based on
empirical measurement before deciding whether this is a
worthwhile scheme or not. Although it looks OK to me, I know
from experience that without hard numbers it is very easy to
over-optimise for corner cases that almost never occur.

Received on Thursday, 8 August 1996 13:03:33 UTC

This archive was generated by hypermail 2.3.1 : Wednesday, 7 January 2015 14:40:17 UTC