Re: NEW ISSUE: Monitoring Connections text

Henrik Nordstrom wrote:
> Yes, and they are not alone, and if NTLM-clients would utilize this they
> would fare much better, but no one does even if specified..
>   
For anyone's interest, it looks like MS deprecated in IE7 the (IMO 
bogus) IE6 behaviour of sending a POST with Content-Length: 0 in the 
expectation of an NTLM challenge.  IE7 now sends the whole message body 
in all cases.

Unfortunately IE6 still is the most common browser (hitting our sites 
anyway).

I think chunking uploads, plus another optional header (maybe called 
"Chunked-Content-Length" or similar?), would be a good workaround in 
systems where the length is known and is needed by the receiver for 
policy reasons, yet where chunking is desired for its ability to keep 
a connection alive.  I agree the existing Content-Length header is too 
entangled in the rest of HTTP to use in this case.

Being able to advertise an entity content length on chunked transfers 
would be useful in many cases (not just uploads).
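For illustration, here is what such a chunked upload with an advisory 
length header might look like on the wire.  The "Chunked-Content-Length" 
header name is purely hypothetical (it is not part of any HTTP 
specification), and the URL and chunk size are placeholders:

```python
# Sketch of a chunked POST carrying a hypothetical advisory header that
# states the total entity length up front, while the body itself is still
# delimited by chunk framing and the last-chunk.

def chunked_request(body: bytes, chunk_size: int = 4) -> bytes:
    """Build a raw HTTP/1.1 POST using chunked transfer coding."""
    headers = (
        b"POST /upload HTTP/1.1\r\n"
        b"Host: example.com\r\n"
        b"Transfer-Encoding: chunked\r\n"
        b"Chunked-Content-Length: "  # hypothetical header, advisory only
        + str(len(body)).encode()
        + b"\r\n\r\n"
    )
    chunks = b""
    for i in range(0, len(body), chunk_size):
        piece = body[i:i + chunk_size]
        # Each chunk: hex size, CRLF, data, CRLF.
        chunks += format(len(piece), "x").encode() + b"\r\n" + piece + b"\r\n"
    chunks += b"0\r\n\r\n"  # last-chunk terminates the body
    return headers + chunks
```

An intermediary that needs the length for policy decisions could read it 
from the header while still streaming the chunks through.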

>> I haven't seen a client that uses chunking for POST or PUT though
>>     
>
> Exactly. And not unsuprising. Chunked encoding of requests is a pretty
> unchartered area, and quite heavily discouraged by the specs for legacy
> reasons..
>   
I think there will be many broken implementations that don't work with 
chunked sends from clients.  Just a gut feel.

>> But actually I believe in the end, the issue can't be properly addressed 
>> without a major protocol version change, one that then incorporates the 
>> concept of negotiating (i.e. obtaining mutual consent for) transfer of 
>> message bodies from client to server (as opposed to only having 
>> negotiation in the other direction).
>>     
>
> Sure it can. In fact it's already there in the form of 100 Continue and
> chunked encoding. Just that nearly nobody is using it.
>
> Client sends request header, waits for 100 Continue, gets a denial,
> sends last-chunk to terminate the request and moves on to the next..
>
>
> Problems:
> 1. Client only allowed to use chunked if it knows within reasonable
> doubt that the next-hop is HTTP/1.1.
>
>   
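The negotiation Henrik describes above can be sketched as follows.  This 
is a minimal illustration, not production code: "sock" is anything with 
send()/recv(), the URL and host are placeholders, and real code would 
need a timeout on the wait and proper response parsing:

```python
# Send the request header with "Expect: 100-continue" and only transmit
# the body if the server answers 100. On a denial, send just the
# last-chunk (0\r\n\r\n) so the request is complete and the connection
# stays usable for the next request.

def negotiated_upload(sock, body: bytes) -> bool:
    sock.send(
        b"PUT /big-file HTTP/1.1\r\n"
        b"Host: example.com\r\n"
        b"Expect: 100-continue\r\n"
        b"Transfer-Encoding: chunked\r\n"
        b"\r\n"
    )
    status = sock.recv(4096)  # real code needs a timeout here
    if status.startswith(b"HTTP/1.1 100"):
        # Permission granted: send the body as one chunk plus last-chunk.
        sock.send(format(len(body), "x").encode() + b"\r\n" + body + b"\r\n")
        sock.send(b"0\r\n\r\n")
        return True
    # Denied (e.g. 401/413): terminate the request body immediately with
    # the last-chunk and move on, without transferring the payload.
    sock.send(b"0\r\n\r\n")
    return False
```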
This condition has major issues when the next hop is an HTTP/1.1 proxy, 
because the proxy then needs to either

a) maintain a database of the versions of servers on the internet.
This is a heap of work, and can you guarantee that the next hop will 
always have the same capabilities?  Not with NLB (network load 
balancing) you can't.

b) spool all chunked uploads, then send them to the next hop without 
using chunked encoding.
This has major issues with user responsiveness (i.e. the upload to the 
proxy runs at LAN speed, then there is a long wait with no progress 
while the proxy uploads to the next hop at internet connection speed).  
This is another good use-case for my I-D on progress notifications.

Various methodologies were proposed, such as getting the client to send 
a fake

> 2. Client recommended to time out waiting for 100 Continue, as it's not
> supported by HTTP/1.0 and there is no guarantee the path is fully
> HTTP/1.1.
>   
This is another big problem with proxies.  By the time the client has 
given up waiting for a 100 Continue, the proxy may still be trying to 
connect to the end server.

I don't think the current implementation of 100 Continue with a timeout 
is very good at all, which is why I originally proposed an interim 
response to enable a proxy to explicitly prevent a UA from sending the 
message body until a subsequent 100 Continue was transmitted (it ended 
up being 103 Wait-for-continue).  As was pointed out, this breaks 
message semantics (i.e. a message length is indicated but the body is 
not transferred).  Being able to advertise lengths for chunked 
transfers would solve this issue if people used chunking.
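The timeout behaviour itself can be sketched briefly; this is only an 
illustration of the failure mode with a slow proxy (the timeout value 
and function name are mine, not from any spec):

```python
# HTTP/1.1 recommends that a client not wait indefinitely for 100
# Continue, since the path may not be fully HTTP/1.1: after a short
# timeout it should send the body anyway. If the proxy's upstream
# connection takes longer than the client's patience, the client gives
# up and sends the body "blind", defeating the negotiation.
import socket
from typing import Optional

def wait_for_continue(sock: socket.socket,
                      timeout: float = 2.0) -> Optional[bytes]:
    """Return the status line if one arrives within the timeout, else None."""
    sock.settimeout(timeout)
    try:
        return sock.recv(4096)  # e.g. b"HTTP/1.1 100 Continue\r\n\r\n"
    except socket.timeout:
        # No answer yet -- the proxy may still be connecting upstream.
        # At this point the client proceeds to send the body anyway.
        return None
```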

With NTLM, the only way to recover from a commenced send that must be 
aborted is if the client is sending chunked data.  So the lack of a 
capability to advertise sizes with chunking precludes the possibility 
of a proxy using NTLM and enforcing upload size limits efficiently.  I 
know how some people feel about NTLM, but we're stuck with it.

Currently we are forced to just swallow and dump data in such cases.  
This has a large negative impact on the user experience of people 
uploading large files.  We also have a lot of customers that use a 
proxy across a WAN, where this is even worse.  Chained proxies plus 
servers requiring auth make it pretty much impossible for users.  I've 
seen cases where the message body on POSTs had to be sent 6 times on 
every request; 3 times is very common.

So, the "properly addressed" method I was referring to would be one 
where the client establishes credentials and permission through to the 
end server before sending the message (PUT/POST) that transfers the 
body.  It may require a new verb, analogous to the HEAD method but for 
uploads.  It would be of most benefit for large uploads, so it could be 
an optional thing.  Clients could, for instance, choose to use it only 
when uploading files over a certain size.
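Purely as a sketch of that idea, and nothing more: neither the method 
name nor the headers below exist in HTTP, they are only what such a 
pre-flight exchange might look like on the wire:

```python
# Hypothetical "HEAD for uploads": a pre-flight request carrying the
# intended method, target and size but no body, letting every hop
# authenticate the client before any payload moves. All names here
# (CHECK, Check-Method, Check-Content-Length) are invented for
# illustration.

def preflight(method: str, target: str, size: int) -> bytes:
    return (
        f"CHECK {target} HTTP/1.1\r\n"       # hypothetical verb
        f"Host: example.com\r\n"
        f"Check-Method: {method}\r\n"        # hypothetical header
        f"Check-Content-Length: {size}\r\n"  # hypothetical header
        "\r\n"
    ).encode()

# A 2xx answer (after any 401/407 challenges are settled) would tell the
# client it may now issue the real PUT/POST with the body attached.
```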

Regards

Adrien

> So currently it's not often both conditions to use this negotiation is
> fulfilled.
>
>
> Regards
> Henrik
>   

-- 
Adrien de Croy - WinGate Proxy Server - http://www.wingate.com

Received on Thursday, 22 November 2007 00:53:17 UTC