- From: Brian Behlendorf <brian@wired.com>
- Date: Fri, 7 Oct 1994 16:25:17 -0700 (PDT)
- To: Henrik Frystyk <frystyk@bay.lcs.mit.edu>
- Cc: http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
On Fri, 7 Oct 1994, Henrik Frystyk wrote:

> > - keep-alive and segmented transfers
> >
> > This gives us the ability to get an HTML file and then request the
> > inlined images reusing the same connection.
>
> I am currently testing my implementation of the multi-threaded version
> of the HTTP client in the Library of Common Code (the implementation is
> *platform independent* and does not require threads).
>
> When this is working, clients will have a far more powerful tool to keep
> connections alive, not only for inlined images but also for HTTP
> sessions, video, etc.

It should be noted that one company's solution to the load-time problem for HTML pages with lots of inlined images was to:

1) grab the page
2) note the images needed for download
3) open up separate TCP connections for *each* image
4) find out the width and height of each image as it comes down the pipe, laying out a box into which the image gets filled in as it arrives - thus allowing the page to be laid out perfectly before all the images are received.

We here would think that 1 x N is the same as N x 1, so opening 4 connections for 4 different things shouldn't be faster than one connection carrying all the elements, but aesthetically it is *much* more appealing.

Could we get that same effect with one connection? Sure - the browser must be multithreaded (simulated if not OS-supported), so it can be accepting data and rendering simultaneously, and when accessing inlined images a HEAD request should be sent for each image whose response shows how much screen acreage it'll take (which I *think* can be determined in the first couple of bytes of any GIF or JPEG; a sketch of that parsing follows below).

	Brian
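A rough illustration of that last point: a GIF does carry its width and height in its first ten bytes, immediately after the GIF87a/GIF89a signature, while a JPEG requires walking the marker segments until the first start-of-frame (SOF) header, which in practice still tends to appear within the first few hundred bytes of the stream. The Python sketch below is purely illustrative and not part of libwww or any client mentioned here; the function names are made up for the example.

    import struct

    def gif_dimensions(data):
        # The 6-byte signature ("GIF87a" or "GIF89a") is followed by the
        # logical screen descriptor: width and height as 16-bit
        # little-endian integers, so ten bytes are enough.
        if data[:6] not in (b"GIF87a", b"GIF89a"):
            raise ValueError("not a GIF")
        width, height = struct.unpack("<HH", data[6:10])
        return width, height

    def jpeg_dimensions(data):
        # JPEG dimensions are not at a fixed offset: skip the SOI marker
        # (FF D8) and walk the marker segments (APPn, DQT, ...) until a
        # start-of-frame marker (SOF0-SOF15, excluding DHT/JPG/DAC), whose
        # payload holds precision, then height and width as 16-bit
        # big-endian integers.
        if data[:2] != b"\xff\xd8":
            raise ValueError("not a JPEG")
        i = 2
        while i + 9 <= len(data):
            if data[i] != 0xFF:
                raise ValueError("corrupt marker stream")
            marker = data[i + 1]
            if 0xC0 <= marker <= 0xCF and marker not in (0xC4, 0xC8, 0xCC):
                height, width = struct.unpack(">HH", data[i + 5:i + 9])
                return width, height
            (seg_len,) = struct.unpack(">H", data[i + 2:i + 4])
            i += 2 + seg_len
        raise ValueError("no SOF marker found before end of data")

A client reading an image response incrementally could therefore reserve the right amount of screen space after only a handful of bytes for a GIF, and typically within the first kilobyte or so for a JPEG, without needing a separate HEAD round trip per image.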
Received on Saturday, 8 October 1994 12:08:15 UTC