
Re: Improve the traffic condition on the Internet (fwd)

From: MegaZone <megazone@livingston.com>
Date: Thu, 21 Mar 1996 00:01:43 -0800 (PST)
Message-Id: <199603210801.AAA10579@server.livingston.com>
To: www-talk@w3.org
Once upon a time Gao Hong shaped the electrons to say...
>Actually, graphics and audio files are not the main traffic on a
>low-speed network.  When the transfer rate is very slow, no one dares
>to fetch those files.  The main traffic on such a net is text files
>(including HTML files, of course), and the compression rate for text
>files is almost 70%, especially files with JavaScript.  We can foresee
>that as the Internet reaches homes and offices, there will be three
>kinds of network: one providing fast connections at a high fee, one
>providing medium connections, and one providing slow connections at a
>cheap fee.  On the cheap connections the traffic will be so crowded
>that only text files can be transferred.


Working in the industry, I very much disagree with this view of where
the nets are going.

ISDN links are *exploding* for home use, and ever faster systems are in
the pipeline.  And costs continue to fall for HW while usability
increases.  Monitoring even small sites shows that nearly all users
still download the graphics, and text is *not* the majority of traffic.
None of the sites I have worked with, and that goes from tiny startups
to Netcom, reflects your hypothesis.  ISPs are moving towards one tier
of access in general, sometimes two (async and ISDN); they are
consolidating levels, not expanding them.

>Based on such thoughts, I think compression of the text (HTML) files
>is necessary and feasible.

I disagree here again.

>But my idea about compression is not the traditional one, where the
>compressed file can only be expanded after the transfer is complete.
>To make the reader feel no difference when reading the compressed
>file, the expansion process must be able to output the expanded
>information little by little, so that during the transfer the reader
>can read the part of the file that has already arrived while waiting
>for the remaining parts to come over the network.
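[For the record: the incremental "expand little by little" behavior the
quoted proposal describes is exactly what stream-oriented decompressors
do.  A minimal sketch in modern Python, an anachronism used purely for
illustration - the data, chunk size, and variable names are all
invented for the example:]

```python
import zlib

# Hypothetical stand-in for an HTML file being sent compressed.
text = ("<html><body>" + "<p>Some HTML text to transfer.</p>" * 50
        + "</body></html>").encode("ascii")
compressed = zlib.compress(text)

# A streaming decompressor can emit expanded text as each compressed
# chunk arrives, without waiting for the whole transfer to finish.
decomp = zlib.decompressobj()
recovered = b""
chunk_size = 64  # pretend each 64-byte slice is one network delivery
for i in range(0, len(compressed), chunk_size):
    # Each call may yield some plain text before the transfer is done,
    # which is what would let a reader start reading mid-transfer.
    recovered += decomp.decompress(compressed[i:i + chunk_size])
recovered += decomp.flush()
assert recovered == text
```

[This only shows the decompression side behaves incrementally; it says
nothing about the protocol-level problems raised below.]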

How does it deal with TCP fragmentation?  What about load-balanced
connections?  Dual B-channel ISDN pipes?  How will you address packet
reassembly?  What about the latency compression *always* involves?
Compressing each packet is not feasible - what if the packet is
fragmented in transit?  How will high-use servers deal with
compression?  It *will* be a high processor load for them.  Or will
you force users to store files precompressed?  (That will never
happen.)
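[A rough illustration of why per-packet compression is wasteful:
compressing each small packet independently resets the compressor's
dictionary every time, so the total output is far larger than
compressing the whole stream once.  A sketch in modern Python, where
the sample data and the 536-byte packet size are illustrative
assumptions:]

```python
import zlib

# Repetitive markup, the kind of text that compresses well when the
# compressor can see the whole stream at once.
text = b"<p>Repeated markup compresses well across a whole stream.</p>" * 200

# Whole-stream compression: one shared dictionary over all the data.
stream = zlib.compress(text)

# Per-packet compression: each ~536-byte "packet" compressed on its
# own, so every packet pays header overhead and starts with an empty
# dictionary that never learns from earlier packets.
packet = 536
per_packet = sum(len(zlib.compress(text[i:i + packet]))
                 for i in range(0, len(text), packet))

# Whole-stream output comes out much smaller than the per-packet total.
assert len(stream) < per_packet
```

[And this still ignores the fragmentation and reassembly questions
above, which per-packet schemes would have to answer on top of the
size penalty.]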

It just doesn't hold up to real-world experience, as nice as it might
sound in theory.

-MZ
--
Although I work for Livingston Enterprises Technical Support, I alone am
responsible for everything contained herein.  So don't waste my managers'
time bitching to them if you don't like something I've said.  Flame me.
Phone: 800-458-9966  support@livingston.com  <http://www.livingston.com/> 
FAX: 510-426-8951    6920 Koll Center Parkway #220, Pleasanton, CA 94566
Received on Thursday, 21 March 1996 03:01:43 GMT
