Re: Dealing with bad server chunking

From: Willy Tarreau <w@1wt.eu>
Date: Fri, 15 Mar 2013 11:39:56 +0100
To: Daniel Stenberg <daniel@haxx.se>
Cc: "Adrien W. de Croy" <adrien@qbik.com>, IETF HTTP Working Group <ietf-http-wg@w3.org>
Message-ID: <20130315103956.GB3305@1wt.eu>
On Fri, Mar 15, 2013 at 11:13:55AM +0100, Daniel Stenberg wrote:
> On Fri, 15 Mar 2013, Adrien W. de Croy wrote:
> 
> > we have recently had issues with a site where the server sends chunked 
> > responses back but closes the TCP connection before sending the 
> > terminating 0 chunk (in fact we never see a packet containing it).
> >
> > WinGate detects this as an abortive close, and if any filters are 
> > processing the stream, they are reset and the data may not reach the 
> > client.
> >
> > However, client browsers typically "forgive" this transgression without 
> > any sort of warning.  Should we be making more forceful suggestions 
> > about this in the specs?
> 
> IMHO, a broken transfer is a broken transfer. How can you know that only 
> the 0 chunk is missing and not further chunks as well?
> 
> If browsers don't warn about broken transfers, then I think that's their 
> choice, but it does not mean the transfer was fine as far as the actual 
> HTTP exchange goes.
> 
> (lib)curl will return an error for this case.

Indeed, and I can confirm that curl's strict checking helped us a lot when
fixing compression in haproxy :-)
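
For the archive, here is a minimal sketch of the kind of strict check being
discussed: parse the chunked body from a byte stream and treat EOF before
the terminating 0-size chunk as an error, instead of silently accepting the
truncated data. This is illustrative Python, not curl's or haproxy's actual
code; the names read_chunked_body and TruncatedChunkedBody are made up for
the example, trailers and chunk extensions are ignored, and the stream is
assumed to behave like io.BytesIO (read(n) only returns short at EOF).

    import io

    class TruncatedChunkedBody(Exception):
        """Raised when the peer closes before sending the terminating 0 chunk."""

    def _read_line(stream):
        # A chunk-size line or CRLF is expected here; bare EOF means truncation.
        line = stream.readline()
        if not line:
            raise TruncatedChunkedBody("connection closed before the 0 chunk")
        return line.rstrip(b"\r\n")

    def read_chunked_body(stream):
        body = b""
        while True:
            size_line = _read_line(stream)              # e.g. b"1a" or b"0"
            size = int(size_line.split(b";")[0], 16)    # drop any chunk extension
            if size == 0:
                _read_line(stream)                      # final CRLF (no trailer support)
                return body                             # proper termination was seen
            chunk = stream.read(size)
            if len(chunk) < size:
                raise TruncatedChunkedBody("connection closed mid-chunk")
            body += chunk
            _read_line(stream)                          # CRLF that ends each chunk

    # A correctly terminated body parses cleanly:
    ok = io.BytesIO(b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n")
    assert read_chunked_body(ok) == b"Wikipedia"

    # The case reported above -- data but no 0 chunk -- raises instead of
    # being silently accepted the way lenient clients do:
    bad = io.BytesIO(b"4\r\nWiki\r\n5\r\npedia\r\n")
    # read_chunked_body(bad)   -> raises TruncatedChunkedBody

A proxy in WinGate's position could apply the same rule and refuse to treat
the body as complete unless the 0 chunk was actually seen.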

Willy