Re: estimated Content-Length with chunked encoding

This brings up another issue, since chunked requests were the suggested 
solution to the problem of coping with authentication (possibly by 
proxies and/or origin servers) for POST and/or PUT requests with large 
entities.

It's easy for a server to decide whether or not to use chunking, 
because it has the HTTP version of the connected client from the 
request.
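
For illustration, a minimal sketch of that server-side decision (the 
function and parameter names here are mine, not from any particular 
server): the request tells the server the client's HTTP version, so it 
can safely pick between a chunked response and one framed with 
Content-Length.

def choose_response_framing(request_version, body):
    # HTTP/1.1 clients are required to understand chunked responses.
    if request_version >= (1, 1):
        return {"Transfer-Encoding": "chunked"}
    # For an HTTP/1.0 client, buffer the body and send its exact length.
    return {"Content-Length": str(len(body))}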

A client sending a request, however, doesn't know whether the server 
supports chunking unless it has prior knowledge.

An HTTP/1.0 server receiving a chunked request could break in a very bad 
way, since it will most likely

a) ignore the Transfer-Encoding field
b) treat the chunk headers and trailers as part of the content,

corrupting the content stored on the server in a way the client may have 
no means of undoing.
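
To make that failure mode concrete, here is a small Python sketch (the 
wire bytes are a made-up example): decode_chunked() recovers what a 
chunk-aware HTTP/1.1 server would store, while an HTTP/1.0 server that 
ignores Transfer-Encoding would store the raw bytes, chunk-size lines 
and all.

def decode_chunked(raw):
    """Decode a chunked body (RFC 2616 framing) into the original entity."""
    body, pos = b"", 0
    while True:
        eol = raw.index(b"\r\n", pos)
        size = int(raw[pos:eol].split(b";")[0], 16)  # chunk-size in hex
        if size == 0:                                # last-chunk
            break
        body += raw[eol + 2:eol + 2 + size]
        pos = eol + 2 + size + 2                     # skip chunk-data CRLF
    return body

# What the client actually puts on the wire for the entity "hello world":
wire = b"6\r\nhello \r\n5\r\nworld\r\n0\r\n\r\n"

print(decode_chunked(wire))   # b'hello world' -- what an HTTP/1.1 server stores
print(wire)                   # stored verbatim by a naive HTTP/1.0 server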

Therefore, before sending any chunked request, the client would need to 
establish a priori that the request path was capable of successfully 
processing a chunked request.

It's not necessarily trivial to do this reliably.
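
One conceivable (and admittedly imperfect) probe, sketched with Python's 
standard http.client: send a cheap request first and look at the HTTP 
version in the response. This only reveals the version of the nearest 
hop; an HTTP/1.0 proxy or origin further along the request path can 
still mangle a later chunked request, which is exactly why doing this 
reliably is hard.

import http.client

def path_probably_accepts_chunked(host, path):
    conn = http.client.HTTPConnection(host)
    try:
        conn.request("OPTIONS", path)
        resp = conn.getresponse()
        resp.read()
        # http.client reports 10 for HTTP/1.0 and 11 for HTTP/1.1
        return resp.version >= 11
    finally:
        conn.close()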


Daniel Stenberg wrote:
>
> On Tue, 28 Oct 2008, Henrik Nordstrom wrote:
>
>>> Interesting enough, the MacOS X WebDAV Client already seems to use
>>> something like that:
>>>
>>>> C-68-#000000 -> [PUT /content/dam/STROMBERG_2_2.mp4 HTTP/1.1 ]
>>>> C-68-#000280 -> [Transfer-Encoding: Chunked ]
>>
>> Oh, cool. A client daring to use chunked encoding in requests!
>
> Is that really so unusual? The "HTTP Implementations" spreadsheet 
> lists at least 4 clients supporting it, but it seems the info is 
> lacking for most of them:
>
> http://spreadsheets.google.com/pub?key=pAZLaupBqGqXrtsaexTjPYg&gid=0
>

-- 
Adrien de Croy - WinGate Proxy Server - http://www.wingate.com

Received on Friday, 14 November 2008 00:19:33 UTC