Re: (IPng 6517) Packet Loss (was Re: Host Fragmentation)

In a message dated 9/24/98 10:17:22 AM Eastern Daylight Time, kasten@argon.com
writes:

 >At 08:28 AM 9/24/98 EDT, Volsinians@aol.com wrote:

 >>If someone provided some real data on today's network, and reasoning
 >>that does not depend on the assumption of old-fashioned, poorly
 >>designed or misdesigned hardware and very old, very buggy software,
 >>there would be some reason to pay attention to his analysis.  Paying
 >>attention to the <A HREF="http://members.aol.com/Telford001/#ANALYZE">Mogul
 >>paper</A> to design the protocol of the future is ludicrous.

 >Packet loss in today's network is a not insignificant problem.
 
 >About a year ago I was doing some web browsing from home
 >looking at some pages that had several moderate to large
 >pictures on them. Although I have a 28.8 modem, I was getting
 >only a couple hundred bytes/second aggregate goodput
 >downloading the pages, much less than the 2-3 KB/s
 >I should see (doing simple arithmetic). At first I
 >thought it was just some odd property of the compression
 >in the modem. Soon I noticed that the aggregate goodput
 >was much higher for pages with only one or two pictures
 >than, say, 5 or 6. I turned on a packet tracer, downloaded
 >a page with several pictures and discovered some truly odd
 >packet reception patterns. Lots of retransmissions, occasionally
 >I'd get the same packet (by TCP sequence number) several times
 >back to back, etc, etc. 

 >To make a long story short, the problem was that I had multiple TCP
 >connections open (one per image) and they all were imploding onto the
 >same dialup server at my ISP. The server's queue was filling and
 >packets were being discarded left and right. Basically I was getting
 >randomly selected packets from the stream. The server TCPs were not
 >behaving well -- either they weren't doing good congestion avoidance
 >and backoff, or the traffic loss was so variable that the server TCP
 >just couldn't make sense out of the acks it was getting from me. (once
 >I saw what was happening, I reduced the number of simultaneous
 >connections to 2 and goodput went up to about 2.5KB).

 >So, if these packets were IP fragments rather than individual TCP
 >segments, the odds would be good that I would have received 0 complete
 >TCP segments.
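
For reference, the arithmetic behind that last quoted claim is
straightforward. A small Python sketch (illustrative drop rate and
fragment counts of my own choosing, not numbers from the trace above):

    # Probability a TCP segment survives when split into n IP fragments,
    # each dropped independently with probability p. Losing any one
    # fragment loses the whole segment, because IP fragments are not
    # individually retransmitted -- only the full segment is.
    def segment_survival(p, n):
        return (1 - p) ** n

    for n in (1, 2, 4, 8):
        print(n, round(segment_survival(0.3, n), 3))
    # With a 30% random drop rate: 1 fragment -> 0.7, 2 -> 0.49,
    # 4 -> 0.24, 8 -> 0.058. At 8 fragments per segment almost nothing
    # reassembles, which is the "0 complete TCP segments" point.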

Whenever something is retransmitted on the Internet, users assume that
the retransmission results from a lost packet. Under the assumption
that all delays are a result of packet loss, a large file could never
be sent across the network in less than a day :->

This example makes the argument for HTTP 1.1, not for path MTU
discovery. I believe that a large piece of the problem seen in this
example is related to the queuing delay from the network to the PC over
the 28.8k phone line. The observed problem is a result of a limitation
of the TCP congestion control algorithm. This algorithm cannot deal
effectively with multiple concurrent streams from or to a single host,
for it is connection-oriented rather than host-oriented.
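
To make that concrete, here is a minimal sketch (assumed round numbers,
not a model of any particular TCP implementation) of how per-connection
control scales load with the number of connections:

    # Each of N connections runs slow start independently, doubling its
    # window every round trip. TCP's control state is per connection, so
    # the aggregate load on the shared dialup queue grows N times faster
    # than a single sender's, and no connection knows about the others.
    MSS = 536            # common 1998-era default segment size, bytes

    def aggregate_inflight(n_conns, rtts):
        cwnd = 1                        # segments, per connection
        for _ in range(rtts):
            cwnd *= 2                   # slow start: double each RTT
        return n_conns * cwnd * MSS     # bytes in flight toward one host

    for n in (1, 2, 6):
        print(n, aggregate_inflight(n, 4), "bytes after 4 RTTs")
    # 1 conn -> 8576 bytes; 6 conns -> 51456 bytes: six independent
    # slow starts implode onto the same queue, exactly as described.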

As a result, each connection queues at least one packet to the slow
interface. Then, while the packets are waiting to be sent serially
across the 28.8k connection, the TCP retransmit timer fires and the
packet is sent again. It is possible that reducing the size of the
packets by using IP fragmentation instead of including the extra
TCP headers might have made 3 concurrent streams workable rather than
only 2. The observed phenomenon was certainly not a result of lost
packets.
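
A back-of-the-envelope calculation (assumed packet size and link rate,
chosen for illustration) shows how easily that queuing delay can exceed
a retransmit timeout:

    # Serialization delay of queued packets on a 28.8 kbit/s link. If
    # each of N connections has one full-sized packet queued ahead of
    # yours, your packet waits N serialization times before it even
    # starts down the wire.
    LINK_BPS = 28800
    PKT_BYTES = 576                 # common dialup-era packet size

    def queue_wait(n_queued):
        return n_queued * PKT_BYTES * 8 / LINK_BPS   # seconds

    for n in (2, 6, 12):
        print(n, round(queue_wait(n), 2), "s")
    # 2 packets -> 0.32 s, 6 -> 0.96 s, 12 -> 1.92 s. With several
    # connections queuing more than one packet apiece, the wait can
    # exceed an RTO computed before the queue built up, so the sender
    # retransmits a packet that was never lost.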

This example underscores why 
<A HREF="http://members.aol.com/Telford001/#SERVICES">The Critical Review of
Internet Technology and Intranet Computing (The
C.R.I.T.I.C)</A>
is useful and worthwhile for network administrators and designers as
well as for packet switch implementers, for it provides analysis and
tools to help understand this sort of situation.

Joachim Martillo
<A HREF="http://members.aol.com/Telford001/">Telford Tools, Inc.</A> 

Received on Thursday, 24 September 1998 22:22:14 UTC