
Re: improved caching in HTTP: new draft

From: Amos Jeffries <squid3@treenet.co.nz>
Date: Tue, 20 May 2014 02:34:59 +1200
Message-ID: <537A1693.4000207@treenet.co.nz>
To: Chris Drechsler <chris.drechsler@etit.tu-chemnitz.de>, ietf-http-wg@w3.org
On 19/05/2014 7:13 p.m., Chris Drechsler wrote:
> Dear editors of [Part6],
> dear working group members,
> I've written a draft about an improved caching mechanism in HTTP
> especially for shared caches (to improve caching efficiency and reduce
> costly inter-domain traffic). It can deal with personalization (e.g.
> cookies, session IDs in query strings) and varying URLs due to load
> balancing or the use of CDNs. The caching mechanism ensures that all
> headers (request and response messages) are exchanged between origin
> server and client even if the real content is coming from a cache.
> The draft is available under the following URL:
> http://tools.ietf.org/id/draft-drechsler-httpbis-improved-caching-00.txt
> I kindly request your comments - thank you!

* The introduction cites several abuses of, and deliberate non-use of,
HTTP/1.1 features as the motivation for this proposal.

 That would not usually be a bad starting point, but it is actually
simpler for the few problematic systems to start using existing DNS and
HTTP features properly than it is for the entire installed software base
to be re-implemented to support this proposed mechanism.

* Section 2.1 appears to be proposing a new header. But what does "NT" mean?
 The use of this header seems similar to an extension of the RFC 3230
message digest mechanism.
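
 For comparison, this is roughly what RFC 3230 already provides. A
minimal sketch (hypothetical Python, not from the draft; header names
are from RFC 3230, the sha-256 token from RFC 5843, the payload is made
up):

```python
import base64
import hashlib

def rfc3230_digest(body: bytes) -> str:
    """Build an RFC 3230 Digest header value using SHA-256
    (base64 of the raw digest, per the instance-digest syntax)."""
    digest = base64.b64encode(hashlib.sha256(body).digest()).decode("ascii")
    return f"SHA-256={digest}"

body = b"example entity body"
print("Want-Digest: SHA-256")              # client requests a digest
print(f"Digest: {rfc3230_digest(body)}")   # server answers on the response
```

 So a cache that wants to detect duplicate entities by hash can already
ask for one with Want-Digest, without inventing a new header.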

* In Section 2.2 things get really weird:
 - 2.2.1 is requiring mandatory disabling of all conditional request
mechanisms in HTTPbis part4.
 - 2.2.2 is requiring mandatory TCP connection termination after every
duplicate response, effectively removing all the benefits that HTTP/1.1
pipelining and persistent connections bring. It does this in order to
replicate the 304 conditional response using just a 200 status code plus
TCP termination.
 - 2.2.2 also places several impossible requirements on intermediaries:
  1) to re-open a just-closed TCP connection, presumably without delays,
which would violate TCP TIME_WAIT requirements.
  2) to decode the Content-Encoding of responses in order to compare
SHA-256 values - even if the response was a 206 status carrying a byte
range taken from within an encoding such as gzip.
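
 To make concrete what 2.2.1/2.2.2 give up: with HTTP/1.1 conditional
requests an unchanged entity costs a headers-only 304 and the persistent
connection stays open for the next request. A sketch of the two
behaviours (hypothetical Python, my reading of the draft, not its text):

```python
def revalidate(if_none_match, current_etag):
    """HTTP/1.1 conditional revalidation: returns (status, body, keep_alive).
    An unchanged entity costs only a 304 with no body, and the persistent
    connection remains usable for subsequent requests."""
    if if_none_match == current_etag:
        return 304, None, True
    return 200, b"...full entity body...", True

def draft_revalidate(cached_sha256, current_sha256):
    """The draft's replacement as I read 2.2.2: a duplicate entity is
    signalled by a 200 status followed by mandatory TCP termination, so
    the cache must re-open the connection before its next request."""
    if cached_sha256 == current_sha256:
        return 200, None, False   # body suppressed, but connection torn down
    return 200, b"...full entity body...", True
```

 The hit case is the common case for a shared cache, so the connection
teardown cost is paid on exactly the responses caching is meant to make
cheap.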
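
 The mismatch in point (2) is easy to demonstrate, assuming the digest
is computed over the decoded entity (hypothetical Python, example bytes
made up):

```python
import gzip
import hashlib
import zlib

def digests_match(decoded, wire):
    """Compare a SHA-256 over the decoded entity with one computed
    over the encoded wire bytes (Content-Encoding: gzip)."""
    return hashlib.sha256(decoded).hexdigest() == hashlib.sha256(wire).hexdigest()

def can_decode_range(wire, start, end):
    """Can a byte range cut out of a gzip stream be decoded on its own?"""
    try:
        gzip.decompress(wire[start:end])
        return True
    except (OSError, EOFError, zlib.error):   # bad header / truncated stream
        return False

body = b"the same entity body, repeated often enough to compress well. " * 50
wire = gzip.compress(body)

print(digests_match(body, wire))               # False: digest is over the decoded form
print(can_decode_range(wire, 10, len(wire)))   # False: gzip needs the whole stream
```

 So the intermediary must fully decompress every response before it can
compare digests, and a mid-stream 206 range gives it nothing to
decompress at all.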

In summary, the proposed feature completely disables the two most
valuable cache features of HTTP/1.1 and replaces them with an equivalent
process requiring mandatory use of the worst behaviours from HTTP/1.0.

Received on Monday, 19 May 2014 14:35:31 UTC
