Re: two ideas...

> From: "Marc Salomon" <marc@matahari.ckm.ucsf.edu>
> Date: Wed, 29 Nov 1995 14:47:28 -0800
> To: touch@ISI.EDU
> Subject: Re: two ideas...
> 
> |We found that server speculation would decrease latency by 2/3, to 0.7
> |RTT (yes, below the speed of light) by increasing the BW by 7x. Note
> |that this RTT is an average per page - it still takes 1 RTT for the
> |first page...
> 
> But do these speculative pre-fetch schemes scale?  To answer that I would ask
> what is the ratio of the number of unused prefetched pages to number of
> pre-fetches total?
> 
> -marc

They scale, but not the way you'd like. Adding BW has a logarithmic
effect on latency reduction, and only within the tree of HTML
pages from a single source. I don't have statistics on how
long users stay at a source - that's client-side logging info,
whereas web servers (by nature) log only the server side.

I would predict they scale to around BW = 7^3, or roughly 350x,
with the average latency dropping from 2.1 RTTs to about 0.2 RTTs
per request.
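
For intuition on why the effect is only logarithmic, here's a toy
model - an illustrative sketch, not the simulation behind the numbers
above. Assume each page links to roughly b other pages and the server
pushes the link subtree as deep as the spare BW can pay for; a BW
multiple of about b^d then buys a push depth of d, and one real round
trip covers about (1 + d) pages within that source, so latency falls
like 1/log(BW). The branching factor, the cost model, and the
constants below are assumptions for illustration - they show the
shape of the curve, not the simulated 0.7/0.2 figures.

    import math

    def avg_latency_rtt(base_rtt, branching, bw_multiple):
        # Push depth the extra bandwidth can pay for, assuming the
        # pushed subtree costs ~branching**depth to send (an
        # illustrative assumption, not a measured workload parameter).
        depth = math.log(bw_multiple, branching) if bw_multiple > 1 else 0.0
        # One real round trip then covers ~(1 + depth) pages viewed
        # within the same source's tree.
        return base_rtt / (1.0 + depth)

    for bw in (1, 7, 7**2, 7**3):
        print("BW x%d: %.2f RTTs/page" % (bw, avg_latency_rtt(2.1, 7, bw)))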

The real question is:

	what is the minimum BW required to support interactive web access?
		This depends on whether you support HTML/ASCII only,
		icons (small gifs), screen-sized images (gifs), or
		higher-resolution content... (a back-of-envelope sizing
		sketch follows below)

		PS - the Lowlat pages now have the graph that describes this.
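
As a crude way to put numbers on that question, a minimal sketch: if
an object of S bytes has to arrive within an interaction budget of T
seconds and one RTT of R seconds goes to the request, the BW floor is
roughly 8*S/(T - R) bits/sec, ignoring TCP slow start, headers, and
pipelining. The 0.5 s budget, 0.1 s RTT, and object sizes below are
hypothetical parameters for illustration, not measurements.

    def min_bw_bps(object_bytes, budget_s=0.5, rtt_s=0.1):
        # Back-of-envelope bound: spend one RTT on the request, then
        # deliver the object within what's left of the budget.
        # Ignores TCP slow start, headers, and pipelining.
        transfer_s = budget_s - rtt_s
        return 8.0 * object_bytes / transfer_s

    # Hypothetical object sizes - assumptions for illustration only:
    for name, size in (("HTML/ASCII page", 5000),
                       ("page plus small icons", 20000),
                       ("screen-sized GIF", 80000)):
        print("%-22s ~%d kb/s" % (name, min_bw_bps(size) / 1000))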

PPS - this seemed like a general enough question that I thought I'd
share the response with www-speed - I hope that's OK.

Joe
