W3C home > Mailing lists > Public > ietf-http-wg-old@w3.org > May to August 1996

Re: Sticky stuff.

From: Jeffrey Mogul <mogul@pa.dec.com>
Date: Thu, 08 Aug 96 13:55:29 MDT
Message-Id: <9608082055.AA07581@acetes.pa.dec.com>
To: hallam@etna.ai.mit.edu
Cc: http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
X-Mailing-List: <http-wg@cuckoo.hpl.hp.com> archive/latest/1255

    1) The asymmetry between the client/server responses may be due
    more to the current limited way in which HTTP is used than to a
    feature of the problem itself. If we get servers which can implement
    the PUT and POST mechanisms then I think that the situation might
    well change.

I basically agree.  If there is a straightforward way to make
sticky headers work in both directions, it seems to make more sense
to define that now, rather than assume that the current asymmetry
will hold forever.  If there is a true asymmetry in the complexity
of supporting sticky headers, then maybe the request-only approach
is best, but so far I haven't seen any suggestion that this is the case.

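To make the idea concrete, here is a minimal sketch of how sticky headers might work in either direction (a hypothetical delta encoding for illustration, not the draft's actual syntax): each side keeps a table of the headers last sent on the connection, and only headers whose values changed go on the wire.

```python
def delta_encode(headers, sticky):
    """Return only the headers that changed since the last message on
    this connection, updating the sender's sticky table in place.
    (Simplified: removing a previously sent header is not handled.)"""
    changed = {k: v for k, v in headers.items() if sticky.get(k) != v}
    sticky.update(changed)
    return changed

def delta_decode(changed, sticky):
    """Reconstruct the full header set from the delta on the receiver."""
    sticky.update(changed)
    return dict(sticky)

# Two requests on one connection; the second repeats most headers.
req1 = {"User-Agent": "Mozilla/3.0", "Accept": "text/html",
        "Host": "w3.org"}
req2 = {"User-Agent": "Mozilla/3.0", "Accept": "text/html",
        "Host": "w3.org",
        "If-Modified-Since": "Thu, 08 Aug 1996 13:55:29 GMT"}

tx = {}
wire1 = delta_encode(req1, tx)   # first request: all three headers sent
wire2 = delta_encode(req2, tx)   # second request: only the new header sent

rx = {}
assert delta_decode(wire1, rx) == req1
assert delta_decode(wire2, rx) == req2
```

Nothing in this sketch cares which end is client and which is server, which is why the symmetric version seems no harder in principle.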
    2) I'm not sure how much these procedures buy us. It
    would be nice to have figures. Jim G. has made good points about
    the importance of getting as much useful information into
    the first packet sent out (i.e. before we hit the slow-start
    throttle). This mechanism appears to be aimed more at increasing
    the efficiency of later packets.

    I suspect that the control data is not a substantial fraction of
    the total message size. It may be more effective to push on
    people to implement compression/decompression of message data
    rather than to worry overmuch about the size of the control data.
    Or at the very least point out this issue in the draft.

Actually, headers are the predominant source of data bytes flowing
from client to server (i.e., in the request direction), at least as
far as I am aware.  This is not such a significant fraction of the
bytes on the Internet backbone, perhaps, but when people are using
low-bandwidth links, the request-header-transmission delays
directly contribute to the user's perceived response time, and
reducing them would seem valuable.  Also, if the home market becomes
widely served by asymmetric-bandwidth systems (such as Hybrid
Network's product; see http://hybrid.com/), then request-header
bytes become proportionally more expensive.
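A back-of-the-envelope calculation makes the point about low-bandwidth links (the 400-byte header size and 28.8 kbit/s modem rate below are assumptions for illustration, not measurements from the draft):

```python
# Rough delay contributed by request headers on a slow upstream link.
# Both numbers are assumed for illustration.
header_bytes = 400           # typical-looking request header block
link_bits_per_sec = 28800    # 28.8 kbit/s modem

delay = header_bytes * 8 / link_bits_per_sec
print("%.0f ms per request just for headers" % (delay * 1000))
```

On those assumptions the headers alone cost on the order of a tenth of a second per request, which lands directly in the user's perceived response time; on an asymmetric link with a much slower upstream channel, the same byte count costs proportionally more.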

    3) Section 2.2 asserts that proxies typically multiplex server
    connections across multiple clients. Is this in fact the case?

(Almost?) nobody yet uses proxies that support persistent connections.
So it's hard to provide data from experience.  At least, I have not
seen any.

    Overall I would like to see an awful lot of numbers based on
    empirical measurement before deciding whether this is a
    worthwhile scheme or not. Although it looks OK to me, I know
    from experience that without hard numbers it is very easy to
    over-optimise corner cases that almost never occur.

I think it would be nice to get a trace of the actual bytes
carried by a real proxy (not just the URLs), and apply the
proposed compression schemes to see how successful they are.
Of course, this would have to be done carefully to avoid
breaching the privacy of the proxy's users.
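The measurement itself is cheap once the (suitably anonymized) trace exists. A sketch of the idea, using zlib as a stand-in compressor and an assumed sample header block rather than real trace data:

```python
import zlib

# An assumed sample request header block, standing in for one record
# of a real (anonymized) proxy trace.
headers = (b"GET /pub/WWW/TheProject.html HTTP/1.0\r\n"
           b"User-Agent: Mozilla/3.0 (X11; I; SunOS)\r\n"
           b"Accept: text/html\r\n"
           b"Accept: image/gif\r\n"
           b"Accept: image/jpeg\r\n"
           b"If-Modified-Since: Thu, 08 Aug 1996 13:55:29 GMT\r\n\r\n")

compressed = zlib.compress(headers, 9)
print(len(headers), "->", len(compressed), "bytes")
```

Running each candidate scheme over the whole trace and summing the before/after byte counts would give exactly the kind of hard numbers asked for above.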

Received on Thursday, 8 August 1996 14:04:16 UTC

This archive was generated by hypermail 2.3.1 : Wednesday, 7 January 2015 14:40:17 UTC