
Re: two ideas...

From: <touch@ISI.EDU>
Date: Wed, 29 Nov 1995 15:34:36 -0800
Message-Id: <199511292334.AA22209@ash.isi.edu>
To: marc@ckm.ucsf.edu
Cc: www-talk@www0.cern.ch, www-speed@tipper.oit.unc.edu, www-html@www0.cern.ch
> From: "Marc Salomon" <marc@matahari.ckm.ucsf.edu>
> Date: Wed, 29 Nov 1995 14:47:28 -0800
> To: touch@ISI.EDU
> Subject: Re: two ideas...
> |We found that server speculation would decrease latency by 2/3, to 0.7
> |RTT (yes, below the speed of light) by increasing the BW by 7x. Note
> |that this RTT is an average per page - it still takes 1 RTT for the
> |first page...
> But do these speculative pre-fetch schemes scale?  To answer that I would ask
> what is the ratio of the number of unused prefetched pages to number of
> pre-fetches total?
> -marc

They scale, but not how you'd like. Adding BW has a logarithmic
effect on latency reduction, but only within the tree of HTML
from a single source. I don't have statistics on how
long users stay at a source - that's client-side logging info,
whereas web servers (by nature) log server-side stuff only.

I would predict that they would scale to around BW = 7^3, or around
350x, and latency would reduce from 2.1 RTTs down to 0.2 RTTs avg
per request.
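
For intuition, here is a toy model of that tradeoff (my own back-of-envelope
sketch, not the measured model from our study - the branching factor, the
prefetch depth, and the 2.1-RTT baseline cost per fetch are all assumptions):

```python
# Toy model of speculative prefetch: the server pushes the full link
# subtree below the current page. Bandwidth grows with the subtree size;
# latency amortizes one real fetch over the page views it covers.
# All parameters here are illustrative assumptions, not measured values.

def prefetch_tradeoff(branching, depth, rtt_per_fetch=2.1):
    """Return (bandwidth_multiplier, avg_rtt_per_page) for prefetching
    the link tree to `depth` levels below the current page."""
    # Bandwidth cost: the page itself plus every page in the subtree.
    bw_mult = 1 + sum(branching ** i for i in range(1, depth + 1))
    # Latency: clicks inside the prefetched subtree are served locally,
    # so one real fetch amortizes over (depth + 1) page views.
    avg_rtt = rtt_per_fetch / (depth + 1)
    return bw_mult, avg_rtt

for d in (1, 2, 3):
    bw, lat = prefetch_tradeoff(branching=7, depth=d)
    print(f"depth {d}: ~{bw}x bandwidth, ~{lat:.2f} RTT/page")
```

The diminishing returns fall out directly: each extra level of depth
multiplies the bandwidth cost by the branching factor but only shaves a
shrinking fraction off the average latency.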

The real question is:

	what is the min BW required to support interactive web access?
		this depends on whether you support HTML/ASCII,
		icons (small gifs), screen images (gifs), or
		higher resolution stuff...

		PS - the Lowlat pages now have the graph that describes this.
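
As a rough sketch of how that minimum might be estimated (the page sizes
and the 1-second interactivity budget below are my assumptions for
illustration, not figures from the Lowlat graph):

```python
# Back-of-envelope minimum bandwidth for "interactive" feel:
# transfer the whole page within an assumed response-time budget.
# Page sizes below are rough guesses for 1995-era content.
INTERACTIVE_BUDGET_S = 1.0  # assumed target response time, in seconds

page_kb = {
    "HTML/ASCII": 5,             # text-only page
    "icons (small gifs)": 20,    # text plus a few inline icons
    "screen images (gifs)": 100, # full-screen inline images
}

def min_bw_kbps(size_kb, budget_s=INTERACTIVE_BUDGET_S):
    """Min bandwidth (kbit/s) to move size_kb kilobytes in budget_s seconds."""
    return size_kb * 8 / budget_s

for kind, kb in page_kb.items():
    print(f"{kind}: ~{min_bw_kbps(kb):.0f} kbit/s minimum")
```

The point of the sketch is only that the answer shifts by an order of
magnitude or more depending on which content class you need to support.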

PPS - this seemed like a general enough question that I thought I'd share
the response with www-speed - I hope that's OK...

Received on Wednesday, 29 November 1995 18:37:53 UTC
