ietf-http-wg-old@w3.org mailing list archive — September to December 1994

content-length vs. boundary markers

From: Larry Masinter <masinter@parc.xerox.com>
Date: Fri, 16 Dec 1994 15:07:50 PST
To: http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
Message-Id: <94Dec16.150754pst.2760@golden.parc.xerox.com>
John Ludeman <johnl@microsoft.com> writes:

> I disagree.  In the case of HTTP, byte counting is *not* doing a 
> strncpy of bytes.  When receiving a chunk of data, you generally give 
> the buffer directly to the network layer which fills in the buffer.  To 
> then have to scan this buffer *does* significantly add a performance 
> hit to the server.

Uh, the server doesn't have to scan the data. It just sends the data
with a random boundary marker. The client has to scan the data, but
the client's scanning the data anyway.
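[Editorial note: the scheme described above can be sketched as follows. This is a hypothetical illustration, not the actual framing Masinter proposed; the header name `Boundary:` and the helper names are invented. The point is that the sender never scans the payload, it only generates a long random token, while the receiver does the scan it would be doing anyway.]

```python
import secrets

def frame_with_boundary(payload: bytes) -> bytes:
    # Sender side: pick a long random boundary and append it without
    # scanning the payload. With a 16-byte random token, an accidental
    # collision inside the payload is astronomically unlikely.
    boundary = b"--" + secrets.token_hex(16).encode("ascii")
    header = b"Boundary: " + boundary + b"\r\n\r\n"
    return header + payload + b"\r\n" + boundary

def parse_framed(message: bytes) -> bytes:
    # Receiver side: read the declared boundary from the header,
    # then scan the body for it -- the scan happens here, not on
    # the sending server.
    head, _, rest = message.partition(b"\r\n\r\n")
    boundary = head[len(b"Boundary: "):]
    end = rest.find(b"\r\n" + boundary)
    return rest[:end]
```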

Content-length is either impractical or inefficient when the data is
computed or being translated on-the-fly, and unreliable when serving
files for which there might be asynchronous updates. 
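[Editorial note: the inefficiency claimed here can be made concrete with a small sketch. `send_with_length` is an invented name for illustration; the structure is not from the message. To emit a correct Content-Length for computed output, the sender must buffer the entire body before the first byte of the response can go out.]

```python
def send_with_length(generate):
    # Hypothetical sketch: when the body is computed on-the-fly, its
    # total length is unknown until generation finishes, so the whole
    # output must be accumulated in memory before the header is sent.
    body = b"".join(generate())
    return b"Content-Length: %d\r\n\r\n" % (len(body),) + body
```

A boundary-marker (or, later, chunked) framing avoids this buffering: each piece can be sent as soon as it is produced.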

> Byte counts are good.  Real protocols use byte counts. Let's make sure 
> we move in that direction.

As far as I know, real protocols only use small byte counts. 128.
1024. Not 1023523.
Received on Friday, 16 December 1994 15:09:34 UTC
