
Re: [google-gears-eng] Re: Deploying new expectation-extensions

From: Adrien de Croy <adrien@qbik.com>
Date: Tue, 08 Apr 2008 09:40:11 +1200
Message-ID: <47FA94BB.9040306@qbik.com>
To: Henrik Nordstrom <henrik@henriknordstrom.net>
CC: Jamie Lokier <jamie@shareable.org>, Charles Fry <fry@google.com>, Julian Reschke <julian.reschke@gmx.de>, Brian McBarron <bpm@google.com>, google-gears-eng@googlegroups.com, Mark Nottingham <mnot@yahoo-inc.com>, HTTP Working Group <ietf-http-wg@w3.org>


I'm going to have to disagree on that one; I don't think 100 Continue 
does a particularly good job at all, for the following reasons.

1. It relies on an ill-defined heuristic for what a client should do in 
the absence of a 100 Continue. The amount of work clients have to do 
even to calculate timeouts is unreasonable (and often impossible), and 
commonly it's not done well.  Again Thomas More applies: assuming 
silence to mean anything (either assent or dissent) is just plain 
wrong, and by writing such assumptions into the protocol we'll bear the 
consequences, whatever they are.  In this day and age there's no excuse 
for not designing completely deterministic protocols, especially for 
arguably the most important application protocol we have.
2. It doesn't fail gracefully.  If you send an Expect header and don't 
get a 100 Continue, you have to wait around, terminate your connection, 
and retry, setting up auth again, etc. etc.  The suggested smart way 
around this is for clients to "remember" which servers are HTTP/1.1, or 
to use chunked uploads.  That's just really bad design: it doesn't cope 
with change, imposes unreasonable requirements on clients, and denies 
proxies the ability to set efficient policy on upload size.
3. The common heuristics don't work well through proxies at all, 
because the timing is based on the connection to the local proxy rather 
than the origin server.  Chain more proxies and it just gets worse.

And finally, by no stretch of anyone's imagination is it a negotiated 
transfer.  A client can connect and spew enormous amounts of data at a 
server without ever having received a byte of assent from it.  That's 
simply not negotiation in anyone's book.

100 Continue is a mild improvement in some scenarios (a client 
connected directly to a slow server), but does nothing for several 
common problem cases.
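For concreteness, here is a rough sketch (mine, not taken from any spec 
text; the function name and the arbitrary 2-second timeout stand in for 
the heuristic criticised in point 1) of what the client side of the 
Expect: 100-continue dance over a raw socket ends up looking like:

```python
import socket

def post_with_expect(host, path, body, timeout=2.0, port=80):
    """POST `body` using Expect: 100-continue with a heuristic timeout."""
    headers = (
        f"POST {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Length: {len(body)}\r\n"
        f"Expect: 100-continue\r\n"
        f"\r\n"
    ).encode("ascii")
    with socket.create_connection((host, port)) as sock:
        sock.sendall(headers)
        sock.settimeout(timeout)          # the ill-defined heuristic
        try:
            interim = sock.recv(4096)     # hope for "HTTP/1.1 100 Continue"
            if not interim.startswith(b"HTTP/1.1 100"):
                # Got a final response (e.g. 401/417): don't send the body.
                return interim
        except socket.timeout:
            # Silence: the spec's heuristic says send the body anyway,
            # treating no answer as assent.
            pass
        sock.settimeout(None)
        sock.sendall(body)                # commit the whole upload
        return sock.recv(4096)
```

Note that on timeout the client commits the entire body on the strength 
of having heard nothing at all, which is exactly the "silence as 
assent" problem above.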

Adrien

Henrik Nordstrom wrote:
>> Adrien de Croy wrote:
>>     
>>> I think until we adopt proper handling of uploads (i.e. pre-authorised / 
>>> negotiated etc) we'll have problems - esp with large uploads and auth.  
>>> But there I go flogging that poor dead horse again...
>>>       
>
> 100 Continue + chunked encoding accomplishes this quite well, allowing
> for any length of negotiation before the actual upload is sent. It's not
> the spec's fault these features haven't been properly adopted.
>
> Regards
> Henrik
>
>   
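To make the chunked-upload alternative mentioned in Henrik's quote 
concrete: with Transfer-Encoding: chunked a client can end an upload 
cleanly at any point by emitting the zero-length final chunk, rather 
than tearing the connection down. A minimal sketch of the chunked wire 
format (the helper name and default chunk size are mine):

```python
def as_chunks(data, size=1024):
    """Encode `data` in HTTP/1.1 chunked transfer coding.

    Each chunk is "<hex length>\r\n<bytes>\r\n"; the zero-length
    chunk "0\r\n\r\n" terminates the body, which is what lets a
    client stop an upload cleanly mid-stream.
    """
    out = b""
    for i in range(0, len(data), size):
        piece = data[i:i + size]
        out += f"{len(piece):x}\r\n".encode("ascii") + piece + b"\r\n"
    return out + b"0\r\n\r\n"
```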

-- 
Adrien de Croy - WinGate Proxy Server - http://www.wingate.com
Received on Monday, 7 April 2008 21:40:07 GMT
