- From: Jim Gettys <jg@w3.org>
- Date: Thu, 10 Aug 95 14:26:10 -0400
- To: Paul Leach <paulle@microsoft.com>
- Cc: http-wg-request%cuckoo.hpl.hp.com@hplb.hpl.hp.com, janssen@parc.xerox.com, http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com, blampson@microsoft.com
I was about to make a comment about why UDP for basic HTTP was inappropriate, but Jeff Mogul beat me to it, and as usual has said it better than I could have. The congestion collapse of the Internet happened just when we were building X11. The moral imperative to protect the Internet from disaster is emblazoned on the scars some of us have from getting up in the middle of the night to get code across the country, when the network didn't work.

DCE RPC has several other problems, the most important of which is that it isn't available universally, and it would be very difficult to make it so quickly, independent of other attributes that are needed, some of which are below.

Here are some requirements for technology used to build a new protocol:

* ubiquitous: anything we depend on has to either be available universally, or available and easy to port.

* needs to support non-blocking or streaming (often called "batching") of requests; if there is no error from a request and the request has no return value, the system shouldn't generate network traffic. Round trips are the death of performance on a world-wide network, with round-trip times often measured in hundreds of milliseconds (or more). (See the sketch below.)

* needs to support streaming and interleaving of return values; entities need to be able to be multiplexed on the return connection from the server so we can do appropriate prioritization and prefetching of objects.

* bit efficient: the web should be more usable over high-latency and low-bandwidth links. Too much bandwidth is going into protocol and metadata transport in current HTTP. Think of dialup modems, or use over cellular modems, and you immediately get round trips from 150 milliseconds to ~1 second before you start transiting a global network.

* fast, at least for the server; think of servers connected to truly high-speed networks, which may be delivering many video streams, or millions of requests/hour, for example. Think of the load generated by a television ad for a neat product on a web server 5 years from now, when that advertisement airs during the World Cup (world-wide interest, outside of the U.S.) or the Super Bowl (U.S. only). Everyone may turn around at the next commercial break and hit the same server. Copies of data must be avoided. Cycles and bandwidth on clients are much less dear, but clients may be much slower systems, so speed on the client side can't be ignored either.

* doesn't suffer from error 33: basing work on top of someone else's research work while it is still research is a good way to cause a project (research or not) to never be completed.

* reliable, dependable, and problems can get resolved quickly so forward progress can be made.

* the usual comments about size, memory consumption, etc.

Not all of these requirements necessarily have to be fulfilled by the transport protocol itself, but the transport system certainly affects what you can do with what is built on top. My suspicion is that rolling one from scratch directly on top of TCP is likely the best choice, but I feel we need to understand whether any system out there might in fact be useful. I'd be VERY happy to be proved wrong. I've rolled more protocol stubs by hand than I care to think of, for two different systems both in widespread use, and I certainly do not love doing so. So I intend to spend some time looking around at what is out there that may fulfill the requirements (ILU from Xerox, and others that may be suggested to me).
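A minimal sketch of the batching point, in Python against a hypothetical server (example.org and the paths are placeholders, and it assumes the server accepts pipelined requests on one TCP connection, as HTTP/1.1 pipelining later allowed): every request is written before any reply is read, so the whole burst pays a single round trip instead of one per object.

    import socket

    # Placeholders for illustration only; not from the original message.
    HOST = "example.org"
    PATHS = ["/a.html", "/b.gif", "/c.gif"]

    sock = socket.create_connection((HOST, 80))
    try:
        # Batch: write every request before reading any reply, so the
        # burst shares one network round trip instead of paying one
        # round trip per object.
        for i, path in enumerate(PATHS):
            last = (i == len(PATHS) - 1)
            request = ("GET %s HTTP/1.1\r\n"
                       "Host: %s\r\n"
                       "%s"
                       "\r\n" % (path, HOST,
                                 "Connection: close\r\n" if last else ""))
            sock.sendall(request.encode("ascii"))

        # Responses stream back in order on the same connection; a real
        # client would parse headers and bodies here to split them apart.
        # The final "Connection: close" makes recv() eventually return
        # b"", so the drain loop terminates.
        received = b""
        while True:
            data = sock.recv(4096)
            if not data:
                break
            received += data
    finally:
        sock.close()

    print("%d bytes back for %d pipelined requests"
          % (len(received), len(PATHS)))

Note the limitation this sketch shares with strict pipelining: the replies come back strictly in order. A framing that tagged each chunk with the entity it belongs to could interleave and prioritize objects on the return connection, which is the multiplexing requirement above.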
Butler Lampson also pointed out to me that the exercise of looking around at existing systems will likely pay off in good ideas worth stealing, even if the RPC system itself isn't appropriate to this application. (If you can't steal good code, at least steal good ideas; a good motto, methinks.)

- Jim Gettys
  W3C
Received on Thursday, 10 August 1995 11:29:44 UTC