Re: p2: Expect: 100-continue and "final" status codes

------ Original Message ------
From: "Amos Jeffries" <squid3@treenet.co.nz>
>On 24/04/2013 4:39 p.m., Adrien W. de Croy wrote:
>>
>>
>>------ Original Message ------
>>From: "Mark Nottingham" <mnot@mnot.net>
>>>
>>>On 24/04/2013, at 12:41 PM, Amos Jeffries <squid3@treenet.co.nz> 
>>>wrote:
>>>>>>
>>>>>>  I think we can give better advice than that. If a server responds 
>>>>>>with a final status code instead of 100 (Continue)
>>>>>>
>>>>>>  1. The response must be the last response on the connection. The 
>>>>>>response should contain "Connection: close" header. After the 
>>>>>>response is written, the server must initiate a lingering close of 
>>>>>>the connection (p1#6.6).
>>>>>  That seems too restrictive; as long as the server reads the rest 
>>>>>of the request properly (discarding it), it should be able to 
>>>>>recover and reuse the connection.
>>>>
>>>>  The problem comes with intermediaries. How are they to know whether 
>>>>the bytes following were the original advertised payload or not? The 
>>>>status from the server has no guarantee of arriving after the client 
>>>>payload starts arriving.
>>>>  The only way to guarantee safety on the connection is to close it 
>>>>or always send payload.
>>
>>
>>I'm really struggling to see what benefit can be derived by a client 
>>in knowing whether a server supports 100 continue or not. So to me 
>>Expect: 100-continue is a complete waste of space. I've never seen 
>>one so I guess implementors by and large agree.
>
>I guess you have never tried uploading a video to YouTube through 
>an old intermediary which requires authentication. At best (Basic) it 
>doubles the upload time and can cause the whole transaction to abort 
>with a timeout. At worst (NTLM) it can do the same while consuming up 
>to 3x the total size of the uncompressed video in bandwidth. This exact 
>use-case is why we pushed HTTP/1.1 experiments into Squid-2.7.
Similar issue with webmail uploading attachments. That's why I wrote 
http://tools.ietf.org/id/draft-decroy-http-progress-00.txt

I removed the discussion about flow-control after the aforementioned 
discussion about using chunked transfers for requests.

But I don't see how 100 continue makes any difference in this case.  The 
client needs to either

a) close and retry.  This won't work for any connection-oriented auth 
mechanism.
b) send the whole thing, or
c) abort the thing with a chunked transfer terminated early (and risk 
horrible side-effects upstream); a sketch of this follows below.

All the 100 does is give the client some pause to reflect and spin CPU 
cycles.

Or is the point of this just to provide an easy work-around for clients 
that don't notice server transmissions whilst they themselves are 
sending?  If so, I'd suggest the protocol isn't the place to solve that, 
especially not at the costs involved in 100 continue.
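Noticing an early response while still sending is not hard client-side.  A 
minimal sketch (plain socket, hypothetical helper name; same illustrative 
Python as above):

    import select
    import socket

    def send_body(sock: socket.socket, chunks) -> bytes:
        # Sketch only: send body chunks, but between chunks poll the
        # socket and stop uploading as soon as the server has responded.
        for chunk in chunks:
            readable, _, _ = select.select([sock], [], [], 0)
            if readable:
                return sock.recv(65536)  # early response: stop sending
            sock.sendall(chunk)
        return sock.recv(65536)          # normal case: read after sending

This only saves the bytes not yet sent, though; the connection-reuse 
problem above is untouched, so I still don't see what 100 continue buys.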

>
>>
>>Regardless of 100 continue being transmitted, the client has to send 
>>the payload if it wants to reuse the connection. The only early-out 
>>options involve closing the connection.
>
>Regarding pipelining sure. The benefits come from avoiding the above 
>mentioned resource waste and timeouts. Adding a whole TCP setup 
>overhead and RTT (a few hundred ms) is far faster and cheaper than 
>transferring >10 MB of data multiple times (from whole seconds to whole 
>minutes).
Sure.  But only if that's an option.

>
>
>>
>>There was quite a lot of discussion about this in the past, and my 
>>understanding was that 100 continue couldn't be used to negotiate 
>>whether or not the payload would be sent. The outcome of this 
>>discussion was not satisfactory IMO, since the "answer" was for the 
>>client to send request bodies always chunked, and send a 0 chunk if it 
>>needed to abort early.
>
>Agreed. I too am unhappy with that. It does work, however.
I don't think anyone has tried it yet?

Is it tested?  I'd be surprised if it didn't come with a raft of 
side-effects/problems.

Adrien

>
>Amos
>

Received on Wednesday, 24 April 2013 07:47:18 UTC