W3C home > Mailing lists > Public > ietf-http-wg-old@w3.org > September to December 2000

RE: Http overhead

From: Joris Dobbelsteen <joris.dobbelsteen@mail.com>
Date: Fri, 1 Dec 2000 15:28:21 +0100
To: "WWW WG (E-mail)" <http-wg@cuckoo.hpl.hp.com>
Message-ID: <001501c05ba3$28a32030$01ff1fac@Thuis.local>
Many Apache servers send data chunked. Chunking costs a few bytes of
framing (around 4 on average) for every block transferred, so maybe a
total overhead of an additional 20-40 bytes per transfer (maybe less).
This is just a guess...
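To put a rough number on that guess, here is a quick Python sketch (the
function name and sizes are mine, purely for illustration) that frames a
body the way HTTP/1.1 chunked transfer-coding does and counts the extra
bytes: each chunk carries a hex size line plus two CRLF pairs, and the
message ends with a zero-length last chunk.

```python
# Sketch of HTTP/1.1 chunked transfer-coding framing overhead.
# Each chunk costs: hex size + CRLF before the data, CRLF after it,
# plus the final "0\r\n\r\n" last-chunk terminator.

def chunk_body(data, chunk_size=1024):
    """Frame `data` as a chunked message body; return (body, overhead_bytes)."""
    body = b""
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        body += b"%x\r\n" % len(chunk) + chunk + b"\r\n"
    body += b"0\r\n\r\n"          # last-chunk + trailing CRLF
    return body, len(body) - len(data)

payload = b"x" * 4096             # hypothetical 4 KiB response body
framed, overhead = chunk_body(payload)
print(overhead)                   # -> 33 framing bytes for four 1 KiB chunks
```

So for a body a few KiB long, sent in 1 KiB chunks, the framing comes to
a few tens of bytes, which is consistent with the 20-40 byte estimate
above.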


- Joris


> -----Original Message-----
> From: francis@localhost.localdomain
> [mailto:francis@localhost.localdomain]On Behalf Of John Stracke
> Sent: Thursday 30 November, 2000 16:45
> To: http-wg@cuckoo.hpl.hp.com
> Subject: Re: Http overhead
> 
> 
> dillon@hns.com wrote:
> 
> >      The latest standard (HTTP 1.1) has provisions for 
> compression and "chunked"
> > transfers which change this, but I haven't seen these used 
> in any real-world
> > situations
> > yet.
> 
> Apache will recognize a file with a ".gz" extension as 
> gzipped, and send the
> Content-Encoding: x-gzip line.  Netscape will recognize 
> Content-Encoding: x-gzip,
> and uncompress the file.  Unfortunately, at least in my 
> installation (Apache
> 1.3.14, Red Hat 7), Apache doesn't look at the extension 
> before the ".gz" to get
> the content-type; "foo.txt.gz" gets marked as Content-Type: 
> application/x-gzip.
> 
> --
> /=================================================================\
> |John Stracke    | http://www.ecal.com |My opinions are my own.   |
> |Chief Scientist |================================================|
> |eCal Corp.      |But how do we know destroying the Van Allen belt|
> |francis@ecal.com|will kill all life on Earth if we don't try it? |
> \=================================================================/
> 
> 
> 
> 
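For what it's worth, the client-side behavior John describes can be
sketched in a few lines of Python (the function name is mine, just for
illustration): inflate the body only when the Content-Encoding header
says gzip, accepting the older "x-gzip" token as well.

```python
import gzip

# Sketch of gzip Content-Encoding handling: the server compresses the
# payload and labels it; the client inflates only when the header says
# gzip (or the legacy "x-gzip" token that Netscape also recognized).

def decode_body(headers, body):
    """Undo a gzip Content-Encoding on a response body, if present."""
    if headers.get("Content-Encoding") in ("gzip", "x-gzip"):
        return gzip.decompress(body)
    return body

original = b"Hello, HTTP/1.1 world.\n" * 40
response_headers = {"Content-Type": "text/plain",   # what foo.txt.gz should get
                    "Content-Encoding": "x-gzip"}
compressed = gzip.compress(original)
assert decode_body(response_headers, compressed) == original
print(len(compressed), "<", len(original))
```

Note the Content-Type in the sketch: as John points out, the right
answer for foo.txt.gz is text/plain plus the gzip Content-Encoding, not
application/x-gzip.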


Received on Friday, 1 December 2000 14:31:44 EST

This archive was generated by hypermail pre-2.1.9 : Wednesday, 24 September 2003 06:33:40 EDT