
backoff (Re: 1xx Clarification)

From: Gregory J. Woodhouse <gjw@wnetc.com>
Date: Mon, 21 Apr 1997 13:20:45 -0700 (PDT)
To: John Franks <john@math.nwu.edu>
Cc: Jeffrey Mogul <mogul@pa.dec.com>, http-wg@cuckoo.hpl.hp.com
Message-Id: <Pine.BSF.3.96.970421131107.13979C-100000@shell3.ba.best.com>
On Mon, 21 Apr 1997, John Franks wrote:

> 
>    "If the client does retry the request to this HTTP/1.0 server, it
>    should use the following "binary exponential backoff" algorithm to be
>    assured of obtaining a reliable response:"
> 
> Is the "should" here different from SHOULD?  I don't think this level
> of implementation detail exists elsewhere in the spec.  Is there a
> rationale for having it here (as opposed to putting it with other
> implementation notes)?
>

I think this is a protocol issue and not an implementation issue. Think
about it this way: If a server is unable to respond to requests due to
load, then it is not desirable to have all clients retry their requests at
(approximately) the same time. The exponential backoff algorithm will tend
to spread out client retries and reduce the average number of requests to
the server. Certainly, this approach has precedent in lower layer
protocols (from the MAC layer up). Okay, now if this issue were one of the
server managing its own load, then I would agree that it's an
implementation issue, but since this is a matter of how clients should
handle retries so as to minimize server load (and, of course, the network
congestion that also results), it is a protocol issue.
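
To make the idea concrete, here is a minimal sketch of the general
technique (in Python; the function and parameter names are hypothetical,
and the draft's quoted text defines the actual algorithm a client should
follow). After the k-th failed attempt the client waits a random number
of "slots" drawn from [0, 2^k - 1], as in the Ethernet MAC-layer
algorithm, so that clients that failed together do not retry together:

    import random
    import time

    def retry_with_backoff(send_request, slot_seconds=0.5, max_attempts=6):
        # send_request() is assumed to return a response on success
        # and None on failure; both names are illustrative only.
        for attempt in range(1, max_attempts + 1):
            response = send_request()
            if response is not None:
                return response
            # Binary exponential backoff: after the k-th failure, wait
            # a random number of slots in [0, 2**k - 1]. Doubling the
            # window each round spreads retries out and lowers the
            # average retry rate seen by the overloaded server.
            time.sleep(random.randint(0, 2 ** attempt - 1) * slot_seconds)
        return None  # give up after max_attempts failures

Note that the randomization matters as much as the doubling: without it,
clients that backed off together would all retry at the same instant.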
 
> 
> 
> John Franks 	Dept of Math. Northwestern University
> 		john@math.nwu.edu
> 

---
gjw@wnetc.com    /    http://www.wnetc.com/home.html
If you're going to reinvent the wheel, at least try to come
up with a better one.