
Re: Improving If-Modified-Since

From: Lou Montulli <montulli@mozilla.com>
Date: Wed, 16 Aug 95 17:41:01 -0700
Message-Id: <3032901D.4292@mozilla.com>
To: Chuck Shotton <cshotton@biap.com>
Cc: Carlos Horowicz <carlos@patora.mrec.ar>, http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
In article <v02120d09ac5838de1e3c@[198.64.246.22]> cshotton@biap.com (Chuck
Shotton) wrote:
> 
> The only way to solve this problem is to normalize the "size" to be the
> content-length instead of the number of bytes stored on disk on either end
> of the connection. This means that the server will have to read and
> translate the file to recompute the content-length size whenever it is
> requested. Since this is a CPU and disk I/O intensive process, it places a
> burden on the server that we should try to avoid. You seem to be ignoring
> this flaw in the IMS "size" discussion and it is a fatal flaw.

It is not at all a fatal flaw.  The size sent by the client needs
to be specified to be the same as the "Content-Length" returned by
the server during the original request.  If line-feed conversion is
done consistently, the sizes can be compared accurately.

> 
> >> This should be done by the client software, through
> >> whatever means the client has at its disposal. I don't care what the
> >> mechanism is. I just don't want to see thousands of caching clients beating
> >> on servers because they are too lame to keep track of their own cache. If a
> >> cached file is suspicious because of a date, a file size, or a bad
> >> checksum, the client should discard it. Period. Forcing the server to jump
> >> through hoops on every IMS request is contrary to the entire goal of
> >> "server serve, clients do the work."
> 
> >You seem to be forgetting that "jumping through hoops", as you put it,
> >is going to save the server time in the long run.  Remember, bandwidth
> >is not free.
> 
> And neither is CPU time or disk I/O. These are much more limited on a
> server handling lots of parallel requests than the net bandwidth on many
> systems.

The number of parallel requests is reduced by disconnecting 
user agents quickly.  This is best accomplished via 304.
If 304 cannot be relied upon to be accurate, then it can't be used,
and data retransmission will have to occur on every request.
That is far more costly than computing a checksum.  And as
I said before, if you are not as concerned about reliability,
ignore the size...

:lou
-- 
Lou Montulli                 http://www.mcom.com/people/montulli/
       Netscape Communications Corp.
Received on Wednesday, 16 August 1995 17:42:00 EDT
