
Re: P1: Content-Length SHOULD be sent

From: Amos Jeffries <squid3@treenet.co.nz>
Date: Wed, 28 Nov 2012 12:59:57 +1300
To: <ietf-http-wg@w3.org>
Message-ID: <f8bffb9ba70a1611aade437e27eebdd0@treenet.co.nz>
On 28.11.2012 04:34, Phillip Hallam-Baker wrote:
> On Tue, Nov 27, 2012 at 4:32 AM, Amos Jeffries wrote:
>>
>> If I'm reading that right, any recipient MUST consider a request with no
>> Content-Length or Transfer-Encoding header as being 0-length.
>>   That opens a request-smuggling loophole when an overly zealous
>> privacy/anonymizer config has been implemented. When proxy-A is known to
>> erase CL headers (but obeys them), it can be sent a POST with a smuggled
>> request and a victim request in the pipeline. Proxy-A duly erases the CL
>> and passes on what server X is now required to interpret as three
>> requests, resulting in proxy-A storing the smuggled request's response
>> as the victim's reply - and some garbage at the end of the pipeline.
>>  A bit rare, but I have seen people erase every header they thought was
>> optional because "some requests don't have it".
>>
>
> Why isn't the answer to the above corner case simply 'you lose' ?

Because the behaviour enabled is a cache-poisoning vulnerability, not 
just a screwed-up user experience.
It does not matter how unlikely the edge case is: if it's creating a 
vulnerability, somebody will take advantage of it eventually.
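
The framing ambiguity described above can be sketched in a few lines. 
This is a hypothetical illustration (not code from any real proxy or 
server): a naive HTTP/1.1 parser that frames message bodies purely by 
Content-Length sees one request while the header is present, but two 
once an intermediary strips it.

```python
# Hypothetical sketch of the smuggling scenario: a naive request counter
# that frames bodies purely by Content-Length (no Transfer-Encoding).

def count_requests(data: bytes) -> int:
    """Count pipelined requests, treating a missing Content-Length as 0."""
    n = 0
    while data:
        head, sep, rest = data.partition(b"\r\n\r\n")
        if not sep:          # incomplete request head; stop
            break
        n += 1
        clen = 0             # per the draft text: no CL/TE means 0-length body
        for line in head.split(b"\r\n")[1:]:
            name, _, value = line.partition(b":")
            if name.strip().lower() == b"content-length":
                clen = int(value.decode())
        data = rest[clen:]   # skip the body; what follows is the next request
    return n

# A POST whose body is itself a complete GET request (45 bytes).
raw = (
    b"POST /submit HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Content-Length: 45\r\n"
    b"\r\n"
    b"GET /smuggled HTTP/1.1\r\nHost: example.com\r\n\r\n"
)

print(count_requests(raw))                              # 1 request
stripped = raw.replace(b"Content-Length: 45\r\n", b"")  # proxy-A erases CL
print(count_requests(stripped))                         # 2 requests
```

With the header stripped, the server reads the POST as having a 0-length 
body and treats the smuggled GET as a second pipelined request; with a 
victim request also in the pipeline, every response shifts by one, which 
is the cache-poisoning outcome above.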

>
> Seems to me that a lot of potential for forward progress in the HTTP 
> world is being blocked by people dredging up the most bizarre corner 
> cases imaginable. Often times corner cases that probably should not 
> be fixed.
>
> What is a privacy proxy anyway? And why would a person be using it?


I guess you have not spotted Privoxy in action yet (~2% of 
Debian/Ubuntu installs run it - only $deity$ knows how many Edubuntu 
end-users that covers). With some big players on the server end pushing 
the user-tracking game to full speed, there is a growing over-reaction 
from those end-users. They seem to finally be getting fed up with 
having to enumerate the long and growing list of badness. This year the 
demand shifted from "how do we disable feature X" (as feature X was 
announced as a tracker) to "how do we disable *everything* and only 
enable headers X and Y" (not even whole features, *individual* headers). 
If they could revert HTTP back to HTTP/0.9 with only a request-line 
sent, they would. Demand for this seems to be on par with demand for 
high-performance optimizations, FWIW.


> And why
> would it be a good idea for the HTTP protocol to provide a way to
> circumvent the control?
>


> If people write proxies that break on very frequent cases such as the 
> POST request then they are going to be broken no matter what we write 
> in the spec.
>
>
> I don't think it is worth any working group spending time on a
> non-conforming implementation that has less than a 5% deployed base. 
> For HTTP that is a LOT of deployed base.

You seem to think privacy/anonymity is the concern of a small 
deployment base. For starters, schools in most western countries have 
mandatory tracking prevention by law on all students' connectivity; the 
restrictions are paranoid and consistently similar. That covers a lot 
of %-points, well into your 5% before we even get near the paranoia in 
the military and corporate sectors, or the edge-case crazy folks.

Amos
Received on Wednesday, 28 November 2012 00:00:22 GMT
