
Re: Issues addressed in the -10 and -11 drafts

From: Willy Tarreau <w@1wt.eu>
Date: Wed, 15 Sep 2010 07:33:15 +0200
To: Mark Nottingham <mnot@mnot.net>
Cc: HTTP Working Group <ietf-http-wg@w3.org>, Roy Fielding <fielding@gbiv.com>
Message-ID: <20100915053315.GA20698@1wt.eu>
On Wed, Sep 15, 2010 at 01:44:15PM +1000, Mark Nottingham wrote:
> If lots of implementations are breaking in obvious ways because of duplicate headers, it will encourage those broken intermediaries to fix themselves.

In general it's the last installed component that gets declared faulty and
that must adapt, at least for the time required to fix the first one. I have
had to do that in haproxy several times (relax header name checks, extend
cookie parsing, etc.). But I tend to agree with the goal of cleaning up
existing implementations.

> However, we should discuss how UAs will display such errors to users. 
> Currently, the language implies that the content is displayed to the user by the UA, as well as (SHOULD) an error message. Will that SHOULD get implemented? 
> If not, intermediaries will get bug reports because a site sending two C-L headers will work with browsers, but won't work when an intermediary is interposed -- incenting them not to implement.
> A few possible fixes:
>   0) status quo -- UA vendors are happy to display an error to the user, because this type of error is so rare. (UA vendors?)
>   1) UA required NOT to display / make available content with multiple C-L headers, giving parity with intermediaries (if they implement).
>   2) Change requirement to focus on not using any more messages after this one in the connection, as they're tainted. 

That last point is a bit dangerous IMHO. My fear is that some people who
implement intermediaries will adopt it as an acceptable fallback solution.
If their implementation forwards the message assuming the larger of the two
values as the body length, the next hop may very well accept the shorter
length, ignore the rule, and thus effectively process two messages.
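
The risk described above can be sketched in a few lines of Python. This is a
hypothetical byte stream, and the helper name and exact lengths are
illustrative only, not taken from any real implementation:

```python
# Hedged sketch: two hops disagreeing on which Content-Length to use
# can split one message into two (request smuggling).
raw = (b"POST / HTTP/1.1\r\n"
       b"Host: example.com\r\n"
       b"Content-Length: 42\r\n"
       b"Content-Length: 10\r\n"
       b"\r\n"
       b"AAAAAAAAAAGET /admin HTTP/1.1\r\nHost: x\r\n\r\n")

def body_end(stream, length):
    """Offset where the body ends, given the Content-Length chosen."""
    header_end = stream.index(b"\r\n\r\n") + 4
    return header_end + length

# A hop taking the max forwards the whole stream as one message...
assert body_end(raw, 42) == len(raw)
# ...while the next hop taking the min sees a second "request" left over.
leftover = raw[body_end(raw, 10):]
assert leftover.startswith(b"GET /admin")
```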

I think it is reasonable to distinguish between two cases :

  - duplicate C-L
  - different C-L

The former may result from stupid programming but should be harmless,
because whichever C-L the implementation uses (first, last, max, min),
the result remains the same.

The second may result from attacks or from early bugs. But such a message
already does not pass equally through various components. For instance,
Firefox takes the last one. From memory, Squid takes the max. Apache takes
the first one (just checked). So the end result is that such an anomaly
can't live long. I think that's why every time I encountered two C-L
headers, they were duplicates that had remained unnoticed.

In my opinion, we should make it mandatory to reject a message with multiple
different C-L headers. It's too dangerous and there is no valid reason to
ever meet one. The case of identical duplicates can then either be rejected
as nothing more than a special case of multiple C-L headers, or be
specifically accepted because it's harmless, if the implementation wishes to
perform this specific check.

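The policy proposed above could be sketched roughly as follows in Python.
The function name, signature, and flag are illustrative assumptions, not
part of any existing implementation:

```python
# Hedged sketch of the proposed policy: reject a message whose
# Content-Length headers disagree; optionally tolerate identical
# duplicates, since every choice (first, last, min, max) then yields
# the same framing.
def effective_content_length(values, allow_identical_duplicates=True):
    """values: list of Content-Length field values as received (strings)."""
    lengths = {int(v.strip()) for v in values}
    if len(lengths) > 1:
        raise ValueError("conflicting Content-Length headers: reject message")
    if len(values) > 1 and not allow_identical_duplicates:
        raise ValueError("duplicate Content-Length headers: reject message")
    return lengths.pop()

# Identical duplicates are harmless and can be accepted.
assert effective_content_length(["42", "42"]) == 42
```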
Received on Wednesday, 15 September 2010 05:33:49 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 1 March 2016 11:10:54 UTC