- From: John Franks <john@math.nwu.edu>
- Date: Thu, 29 Jan 1998 15:54:53 -0600 (CST)
- To: Adrien de Croy <adrien@qbik.com>
- Cc: http-wg@cuckoo.hpl.hp.com
On Fri, 30 Jan 1998, Adrien de Croy wrote:

> However, reflecting more on that issue, the chances of a client requiring
> multiple created entities (i.e. those where the server cannot know a priori
> the size) in a single connection is rather low, at least at the moment.
> Multiple normal requests per connection would still be possible, and
> unaffected by this proposal. So, overall, the performance gains by allowing
> for maintained connections in this scenario may be outweighed by the data
> overhead in chunking.

Connections with transfers of multiple entities of unknown size will be very
common with HTTP/1.1. They would be very common today if there were widely
deployed HTTP/1.1 clients. Remember, any document with server-side includes
(e.g. a counter) can be considered of unknown size. It is much easier to chunk
than to calculate the length before any data is sent so it can be put in a
Content-Length header.

Chunking has very low overhead. From the point of view of efficiency there
would be no problem for clients or servers if ALL transactions were required
to be chunked. There is not much performance difference between

Content-Length: 123456

<123456 bytes of data>

and

Transfer-Encoding: chunked

1E240
<123456 bytes of data>
0

John Franks
john@math.nwu.edu
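For concreteness, here is a minimal sketch in Python (illustrative only; the
chunk size and payload below are arbitrary choices, not anything the message
or the spec prescribes) that frames a payload with chunked transfer coding
and measures how little framing it adds:

import sys

def chunked(payload: bytes, chunk_size: int = 8192) -> bytes:
    """Wrap payload in chunked transfer coding: hex size, CRLF, data, CRLF."""
    out = bytearray()
    for i in range(0, len(payload), chunk_size):
        chunk = payload[i:i + chunk_size]
        out += b"%X\r\n" % len(chunk)   # chunk size is given in hexadecimal
        out += chunk + b"\r\n"
    out += b"0\r\n\r\n"                 # last-chunk marker and empty trailer
    return bytes(out)

if __name__ == "__main__":
    body = b"x" * 123456
    framed = chunked(body)
    # Framing adds a few bytes per chunk: about 0.1% for this payload.
    print(len(framed) - len(body), "bytes of chunking overhead for",
          len(body), "bytes of data", file=sys.stderr)

The point of the sketch is simply that the per-chunk framing (a hex length
and two CRLF pairs) is negligible next to the data itself, which is why
requiring chunking would cost clients and servers essentially nothing.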
Received on Thursday, 29 January 1998 13:56:40 UTC