- From: Roy T. Fielding <fielding@liege.ICS.UCI.EDU>
- Date: Thu, 08 Aug 1996 17:06:41 -0700
- To: Jeffrey Mogul <mogul@pa.dec.com>, Paul Leach <paulle@microsoft.com>
- Cc: http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
> Overall I would like to see an awful lot of numbers based on
> empirical measurement before deciding whether this is a
> worthwhile scheme or not.  Although it looks OK to me I know
> from experience that without hard numbers it is very easy to
> overoptimise corner cases that almost never occur.
>
> I think it would be nice to get a trace of the actual bytes
> carried by a real proxy (not just the URLs), and apply the
> proposed compression schemes to see how successful they are.
> Of course, this would have to be done carefully to avoid
> breaching the privacy of the proxy's users.

Yes, and I'd also like to see it compared to a complete tokenization
of the protocol (also just a translation table to implement), and to
the perceived benefits of multiplexing requests on a single
connection.  I doubt that there will be any perceptible performance
improvement when using modern user agents (ones without 1k worth of
Accept headers) which do not generate GET requests larger than 2
packets, since the multi-connection overhead is worse for any
congested network.  But I would like that confirmed (or, even better,
disproven) by actual performance data over real TCP connections, and
I have no time to do that.

The feedback I received while gathering design issues for HTTP/1.x
was that piddly performance improvements are not worth the added
complexity to implementations.  The only changes that were considered
worthwhile were, in order:

   1) persistent connections (mostly to enable a continuous connection
      to an organization's proxy);

   2) multiplexing (mostly to enable non-jerky rendering of the
      visible page without opening extra connections); and

   3) tokenization of both header fields and their contents (to reduce
      bandwidth consumption).

The design of HTTP/1.1 is purposely geared towards tokenization --
that was the plan all along: once the semantics of HTTP were
well-defined, it would be very easy to make an extremely efficient
HTTP/2.0.  Anything less is, IMHO, a waste of time.  I don't mind if
people want to waste their own time, but I do mind when it is done
under the banner of a proposed standard (which ends up wasting
everyone's time even when the idea is a great one).

I don't see why this can't be done as an experiment first -- I created
the Upgrade header field specifically to allow such experiments to
occur *outside* the standardization arena.  Right now we are arguing
over the wording of a proposal which hasn't even been shown to do
anything useful, let alone to be something worth standardizing.

 ...Roy T. Fielding
    Department of Information & Computer Science    (fielding@ics.uci.edu)
    University of California, Irvine, CA 92697-3425    fax:+1(714)824-4056
    http://www.ics.uci.edu/~fielding/
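As a rough illustration of the "just a translation table" remark above,
here is a minimal sketch in Python of header-field tokenization.  The
token values and field list are invented for this example; they are not
taken from any proposal in this thread or from any specification.

    # Illustrative sketch only: a hypothetical single-byte token table
    # for a few common HTTP header field names.  The values are made up
    # for this example and carry no standard meaning.
    HEADER_TOKENS = {
        "Accept": 0x01,
        "Accept-Encoding": 0x02,
        "Host": 0x03,
        "User-Agent": 0x04,
        "If-Modified-Since": 0x05,
    }
    HEADER_NAMES = {token: name for name, token in HEADER_TOKENS.items()}

    def encode_field_name(name):
        """Replace a known field name with its one-byte token; pass others through."""
        token = HEADER_TOKENS.get(name)
        return bytes([token]) if token is not None else name.encode("ascii")

    def decode_field_name(data):
        """Map a single known token byte back to its field name; pass others through."""
        if len(data) == 1 and data[0] in HEADER_NAMES:
            return HEADER_NAMES[data[0]]
        return data.decode("ascii")

    # Example: "Accept-Encoding" (15 bytes on the wire) becomes one byte.
    assert encode_field_name("Accept-Encoding") == b"\x02"
    assert decode_field_name(b"\x02") == "Accept-Encoding"

Tokenizing field contents as well (item 3 in the list above) would be
the same table idea applied to common field values.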
Received on Thursday, 8 August 1996 17:18:21 UTC