
Re: Rechartering HTTPbis

From: Willy Tarreau <w@1wt.eu>
Date: Thu, 26 Jan 2012 21:54:18 +0100
To: Adrien de Croy <adrien@qbik.com>
Cc: Poul-Henning Kamp <phk@phk.freebsd.dk>, Amos Jeffries <squid3@treenet.co.nz>, ietf-http-wg@w3.org
Message-ID: <20120126205418.GA14233@1wt.eu>
On Fri, Jan 27, 2012 at 09:18:07AM +1300, Adrien de Croy wrote:
> >>Maybe you shouldn't have decided to send it if you weren't ready.
> >That's not what I'm saying. Again, any intermediary has many reasons
> >to close anywhere. Many do not even know what HTTP is nor what a
> >chunk is. Chunks are not atomic. And even intermediaries which talk
> >HTTP cannot all buffer all chunks. When you have a 16kB buffer per
> >connection, a chunk rarely fits there so you have to transfer as you
> >get them.
> sure, and these types of intermediaries can continue to do the same 
> thing.  If they want to be 2.0 compliant they can at least recognise 
> abort signals on 0 chunks, even if they can't generate them.
> But I don't see that as a reason to not add such a feature to the 
> protocol.  Others can benefit.

As long as it's only an abort reason carried on the 0 chunk, I find it very
cheap for everyone, and it's even doable right now, although not exploited.
So I'm clearly in favor of that option.
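To make the idea concrete, here is a minimal sketch of what an abort reason on the terminating 0 chunk could look like on the wire. The `Abort-Reason` trailer field name is purely illustrative, not anything the WG has defined, and the parser below skips all error handling a real intermediary would need:

```python
# Sketch: signalling an abort reason on the terminating zero-length chunk
# of an HTTP/1.1 chunked body. "Abort-Reason" is a hypothetical trailer
# field used purely for illustration.

def encode_chunked(chunks, abort_reason=None):
    """Frame an iterable of byte chunks, optionally ending with an
    abort trailer instead of a plain clean termination."""
    out = b""
    for chunk in chunks:
        out += b"%X\r\n%s\r\n" % (len(chunk), chunk)
    out += b"0\r\n"                      # zero-length final chunk
    if abort_reason is not None:
        out += b"Abort-Reason: " + abort_reason.encode() + b"\r\n"
    out += b"\r\n"                       # end of trailer section
    return out

def decode_chunked(data):
    """Return (body, abort_reason-or-None). Minimal parser: assumes the
    whole message is already in memory, which real proxies cannot."""
    body, pos, abort = b"", 0, None
    while True:
        eol = data.index(b"\r\n", pos)
        size = int(data[pos:eol], 16)
        pos = eol + 2
        if size == 0:
            break
        body += data[pos:pos + size]
        pos += size + 2                  # skip chunk data and its CRLF
    # scan trailer lines for the (hypothetical) abort field
    for line in data[pos:].split(b"\r\n"):
        if line.lower().startswith(b"abort-reason:"):
            abort = line.split(b":", 1)[1].strip().decode()
    return body, abort
```

A receiver that only understands plain chunked encoding still sees a valid 0 chunk followed by a trailer it ignores, which is what makes the scheme cheap for everyone.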

> >>I'd suggest the number of aborted sends due to content would outweigh
> >>network errors.
> >It depends on the environment. In your products since you have valid
> >reasons to abort when matching contents, that's certainly true. But I
> >know many other infrastructures where no such filtering happens, and
> >the primary reason for an abort is a timeout, the second one is the
> >usual process crash in the middle of a processing due to an application
> >bug.
> This comes back to my other point about who is interested (apart from 
> all my customers) in scanning at an intermediary.

There are various degrees of scanning. I see a lot of URL filtering and
little content filtering. But this will surely evolve for legal reasons, and
because the diversity of mobile terminals makes it harder to maintain an
up-to-date panel of anti-malware suites for all these devices.

> >>But anyway, it's a basic principle, if you make a decision that affects
> >>another party, you should communicate it.  If you can't you can't, but
> >>you shouldn't say "Because I can't ALWAYS communicate it, I will choose
> >>instead to NEVER do it".  That's sociopathic :)
> >I agree with this point of view. As I said, what I'm against is making
> >it harder to support the normal case just to favor better error recovery
> >for the fatal cases.
> the abort signal isn't just so you can re-use the connection, although 
> that is an added benefit, in our case the primary benefit is to be able 
> to signal the receiver that the entity should be abandoned for some 
> reason other than a transient network failure.  E.g. don't try again.

Indeed, that was your point; I think it was PHK who advocated the connection
re-use.

> >For instance, re-opening a connection is cheap. OK
> >it's a round-trip, but if it happens less than 1/10000 times it's probably
> >better than padding megabytes of chunks or making parsers more complex.
> I don't think you'd need to pad chunks.  Any intermediary that is 
> scanning will be passing data through some filtering layers.
> It's inconceivable that such data would not be de-chunked prior to being 
> passed through the filters (else all filters would need to handle chunks).
> Therefore it's likely the data would need to be re-chunked at the other 
> end of the filter chain.

For usages like yours, I agree on the benefits you could gain from this
extension. Making it best-effort ensures that those who need it can use it,
and that we don't make the framing silly for those who wouldn't benefit
from it. So that's a +1 from me :-)
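The de-chunk / filter / re-chunk flow described above can be sketched as a small streaming pipeline. The filter callback is a stand-in for a real scanning layer, and the 16 kB buffer size mirrors the per-connection figure mentioned earlier; both are assumptions for illustration:

```python
# Sketch of the de-chunk -> filter -> re-chunk flow an intermediary with
# a small per-connection buffer might use. content_filter is a stand-in
# for a real scanning/filtering layer.

def rechunk(incoming_payloads, content_filter, buffer_size=16384):
    """Pass de-chunked payload through a filter, then re-frame it as
    chunks no larger than buffer_size. Generator-based, so data flows
    through without buffering whole chunks, as large ones rarely fit."""
    pending = b""
    for payload in incoming_payloads:
        pending += content_filter(payload)
        while len(pending) >= buffer_size:
            piece, pending = pending[:buffer_size], pending[buffer_size:]
            yield b"%X\r\n%s\r\n" % (len(piece), piece)
    if pending:
        yield b"%X\r\n%s\r\n" % (len(pending), pending)
    yield b"0\r\n\r\n"   # clean termination (no abort trailer)
```

Because the output framing is independent of the input framing, the chunk boundaries the origin chose do not survive the filter chain, which is exactly why padding schemes tied to original chunk sizes would not work here.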

Received on Thursday, 26 January 2012 20:55:08 UTC
