Re: p2: Expect: 100-continue and "final" status codes

On 24/04/2013 4:39 p.m., Adrien W. de Croy wrote:
>
>
> ------ Original Message ------
> From: "Mark Nottingham" <mnot@mnot.net>
>>
>> On 24/04/2013, at 12:41 PM, Amos Jeffries <squid3@treenet.co.nz> wrote:
>>>>>
>>>>>  I think we can give better advice than that. If a server responds 
>>>>> with a final status code instead of 100 (Continue)
>>>>>
>>>>>  1. The response must be the last response on the connection. The 
>>>>> response should contain a "Connection: close" header. After the 
>>>>> response is written, the server must initiate a lingering close of 
>>>>> the connection (p1#6.6).
>>>>  That seems too restrictive; as long as the server reads the rest 
>>>> of the request properly (discarding it), it should be able to 
>>>> recover and reuse the connection.
>>>
>>>  The problem comes with intermediaries. How are they to know 
>>> whether the bytes following were the original advertised payload or 
>>> not? The status from the server has no guarantee of arriving after 
>>> the client payload starts arriving.
>>>  The only way to guarantee safety on the connection is to close it 
>>> or always send payload.
>
>
> I'm really struggling to see what benefit can be derived by a client 
> in knowing whether a server supports 100 continue or not. So to me 
> Expect: 100-continue is a complete waste of space.  I've never seen 
> one, so I guess implementors by and large agree.

I guess you have never tried uploading a video to YouTube through an 
old intermediary which requires authentication. At best (Basic) it 
doubles the upload time and can cause the whole transaction to abort 
with a timeout. At worst (NTLM) it can do the same while consuming up to 
3x the total size of the uncompressed video in bandwidth. This exact 
use-case is why we pushed HTTP/1.1 experiments into Squid-2.7.
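
To make the scenario concrete, a rough Python sketch of the client side 
of the Expect: 100-continue handshake might look like the following 
(host, path and body size are invented for illustration, and error 
handling is omitted):

    import socket

    HOST, PORT = "upload.example.com", 80      # hypothetical origin or proxy
    body = b"\0" * (10 * 1024 * 1024)          # stand-in for a large video

    sock = socket.create_connection((HOST, PORT))
    request = (
        b"PUT /videos/new HTTP/1.1\r\n"
        b"Host: upload.example.com\r\n"
        b"Content-Length: %d\r\n"
        b"Expect: 100-continue\r\n"
        b"\r\n" % len(body)
    )
    sock.sendall(request)

    # Wait briefly for an interim or final response before committing
    # the payload to the wire.
    sock.settimeout(2.0)
    try:
        first = sock.recv(4096).decode("iso-8859-1")
    except socket.timeout:
        first = ""

    if first.startswith("HTTP/1.1 100"):
        sock.sendall(body)      # server wants it: the body goes out once
    elif first:
        # A final status arrived instead (e.g. 401/407 from an
        # authenticating intermediary): the 10 MB body was never sent,
        # so retrying with credentials costs a fresh request, not
        # another full upload.
        print("early final response:", first.splitlines()[0])
    else:
        sock.sendall(body)      # no answer: the spec says not to wait forever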


>
> Regardless of 100 continue being transmitted, the client has to send 
> the payload if it wants to reuse the connection.  The only early-out 
> options involve closing the connection.

Regarding pipelining, sure. The benefits come from avoiding the 
above-mentioned resource waste and timeouts. Adding a whole TCP setup 
overhead and RTT (a few hundred ms) is far faster and cheaper than 
transferring >10 MB of data multiple times (from whole seconds to whole 
minutes).
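
For a rough sense of scale (illustrative numbers, not measurements from 
this thread): on a 2 Mbit/s uplink a 10 MB body takes about 40 seconds 
to transmit, so re-sending it two or three times for an auth handshake 
costs a minute or more, while tearing down and re-opening the 
connection costs one TCP handshake plus an RTT, i.e. a few hundred 
milliseconds.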


>
> There was quite a lot of discussion about this in the past, and my 
> understanding was that 100 continue couldn't be used to negotiate 
> whether or not the payload would be sent.  The outcome of this 
> discussion was not satisfactory IMO, since the "answer" was for the 
> client to send request bodies always chunked, and send a 0 chunk if it 
> needed to abort early.

Agreed. I too am unhappy with that. It does work, however.
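
For anyone who has not run into it, a rough sketch of that 
always-chunked workaround from the client side might look like this 
(host, path and sizes are invented; the zero-length chunk is what lets 
the client stop early without corrupting the message framing):

    import select
    import socket

    HOST, PORT = "upload.example.com", 80      # hypothetical origin or proxy
    CHUNK = 64 * 1024
    body = b"\0" * (10 * 1024 * 1024)          # stand-in for a large upload

    sock = socket.create_connection((HOST, PORT))
    sock.sendall(
        b"PUT /videos/new HTTP/1.1\r\n"
        b"Host: upload.example.com\r\n"
        b"Transfer-Encoding: chunked\r\n"
        b"\r\n"
    )

    for off in range(0, len(body), CHUNK):
        # If a final status has already arrived (e.g. 401/407/413), stop:
        # the zero-length chunk ends the body legally, so the framing
        # survives and the connection can still be reused.
        readable, _, _ = select.select([sock], [], [], 0)
        if readable:
            sock.sendall(b"0\r\n\r\n")         # early abort marker
            break
        piece = body[off:off + CHUNK]
        sock.sendall(b"%x\r\n%s\r\n" % (len(piece), piece))
    else:
        sock.sendall(b"0\r\n\r\n")             # normal end of the body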

Amos

Received on Wednesday, 24 April 2013 06:10:51 UTC