- From: Shel Kaphan <sjk@amazon.com>
- Date: Tue, 12 Sep 1995 12:02:44 -0700
- To: Paul Leach <paulle@microsoft.com>
- Cc: mogul@pa.dec.com, http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
Paul Leach writes:
> Jeff said:
> ] When I gave my talk at SIGCOMM last month, John Wroclawski of MIT
> ] insisted that if the HTTP protocol allows pipelining, it really needs
> ] to provide a means for aborting requests in progress in case the user
> ] hits the "stop" button. Otherwise, the user would have to wait for the
> ] upload of (potentially) huge files.
>
> It also needs a way to interleave responses from the pipelined
> requests, else long ones will delay shorter ones. Think about the
> concurrent "progressive rendering" of multiple GIF and JPEG files that
> currently is done using multiple connections, and what it would take to
> do it with pipelining.
>

This is beginning to remind me of something else -- interleaved MPEG streams (and other related technology). I detect a wheel beginning to be reinvented. Soon (a year or so) I bet we'll be worrying about bandwidth requirements on the different interleaved media. Maybe it would be worth looking into work that has already been done.

It is also not obvious to the casual observer that TCP will remain the protocol of choice when all is said and done with this sort of thing. The question is how much can and should be wedged into it, and when we should start thinking about fundamental reengineering.

If I had to guess at ways the web might fail in the future, especially in comparison to other technology that now seems not to be competitive with it but might be in the future, it would be because we do an inadequate job now of preparing the protocol(s) for higher bandwidth and real-time requirements.

Bla bla, etc.
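[Editor's note: purely as an illustration of the interleaving Paul describes, and not anything proposed in this thread, here is a minimal Python sketch of frame-based multiplexing: each response is chopped into small tagged chunks and the chunks are round-robined onto a single byte stream, so a short response is not stuck behind a long one. The frame layout (request id, length, payload) and the chunk size are assumptions invented for the sketch, not part of HTTP.]

```python
# Illustrative sketch only: round-robin frame multiplexing of several
# responses over one byte stream, so short responses are not delayed
# behind long ones.  The frame layout (id, length, payload) is invented
# for this example and is not part of HTTP.
import struct
from collections import OrderedDict

FRAME_HDR = struct.Struct("!HI")   # 2-byte request id, 4-byte payload length
CHUNK = 1024                       # bytes of each response sent per turn

def interleave(responses):
    """Yield frames, taking one CHUNK from each pending response in turn.

    responses: dict mapping request id -> response body (bytes).
    A zero-length frame marks the end of that response.
    """
    pending = OrderedDict((rid, memoryview(body)) for rid, body in responses.items())
    while pending:
        for rid in list(pending):
            body = pending[rid]
            chunk = body[:CHUNK]
            yield FRAME_HDR.pack(rid, len(chunk)) + chunk.tobytes()
            if len(body) <= CHUNK:
                yield FRAME_HDR.pack(rid, 0)     # end-of-response marker
                del pending[rid]
            else:
                pending[rid] = body[CHUNK:]

def demultiplex(frames):
    """Reassemble the per-request bodies from a stream of frames."""
    bodies = {}
    for frame in frames:
        rid, length = FRAME_HDR.unpack(frame[:FRAME_HDR.size])
        if length:
            bodies.setdefault(rid, bytearray()).extend(frame[FRAME_HDR.size:])
    return {rid: bytes(b) for rid, b in bodies.items()}

if __name__ == "__main__":
    originals = {1: b"A" * 5000, 2: b"B" * 100, 3: b"C" * 2500}
    assert demultiplex(list(interleave(originals))) == originals
```

[In this toy run the 100-byte response (id 2) finishes in the first round while the longer ones are still in flight, which is the behavior the quoted message is asking pipelining to allow.]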
Received on Tuesday, 12 September 1995 12:08:58 UTC