- From: <touch@ISI.EDU>
- Date: Fri, 1 Dec 1995 17:59:28 -0800
- To: touch@ISI.EDU, ses@tipper.oit.unc.edu
- Cc: mogul@pa.dec.com, marc@ckm.ucsf.edu, www-talk@www0.cern.ch, www-speed@tipper.oit.unc.edu
> Note - the server rules imply that cache updates
> arrive on a different IP port than direct requests,
> and that the cache loads come on different IP ports
> than direct responses.
>
> Using these rules avoids the mistake you observe with the proposal
> in HTTP-NG - using the same port requires an RTT to preempt a connection
> and enable a direct response.

PS - note that this *still* takes a penalty of up to two packet times,
worst case.

Even if you have different ports for messages to arrive on, they still
come over the same wire. Unless (as in Ethernet) you can preempt a
packet in transit, there's always the possibility that:

  - a cache update packet is in transit to the server
    before your direct request,

  - transmitting your cache update packet would have to be
    preempted by your transmitting the direct request,

  - transmitted speculation packets would have to be preempted
    at the server as well,

and few protocol implementations provide for preemption.

Joe
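[Editor's note: for concreteness, below is a minimal sketch of the two-port arrangement the quoted rules describe - the port numbers, the socket setup, and the select() loop are assumptions for illustration, not anything specified in the thread.]

    import select
    import socket

    # Hypothetical port numbers - the thread does not name any.
    DIRECT_PORT = 8080        # direct requests / direct responses
    CACHE_UPDATE_PORT = 8081  # cache updates / cache loads

    def make_listener(port):
        """Create a TCP listening socket bound to the given port."""
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", port))
        s.listen(5)
        return s

    direct_sock = make_listener(DIRECT_PORT)
    cache_sock = make_listener(CACHE_UPDATE_PORT)

    # Demultiplex by port: the server tells a direct request from a
    # cache update by which socket it arrived on, with no per-message
    # negotiation and no extra RTT.  Both sockets still sit behind the
    # same interface, though, so arrival order on the wire is unchanged,
    # which is the residual penalty Joe describes above.
    while True:
        readable, _, _ = select.select([direct_sock, cache_sock], [], [])
        for s in readable:
            conn, addr = s.accept()
            if s is direct_sock:
                print("direct request from", addr)
            else:
                print("cache update from", addr)
            conn.close()

For a rough sense of scale of that residual penalty: assuming 1500-byte packets on 10 Mb/s Ethernet (an assumption, not a figure from the thread), one packet time is about 1.2 ms, so the two-packet-time worst case is roughly 2.4 ms no matter how the ports are split.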
Received on Saturday, 2 December 1995 04:36:59 UTC