RE: Large content size value

The bottom line is that if the content length is too large for the
client to represent, it surely isn't going to do byte-range requests
or anything else to retrieve the pieces, since it has no way to
reassemble them for delivery to the end user. It is a total waste of
time to send a 4G file w/o a Content-Length to a client which can't
use it.

Either the server can correctly represent the content length or it
can't, and likewise it either can or cannot correctly serve the
response. I think it reasonable that any server be expected to detect
when it can't and report an error. I think it reasonable to expect a
client to know when it can't convert the Content-Length ASCII string
to a value it understands, and to reject the response as being in
error. Furthermore, it is reasonable for a client to reject a
response it knows is too large, as soon as it knows that a size
constraint has been exceeded.
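
For concreteness, here is a minimal sketch in C of the client-side
check described above. It assumes the field value has already been
isolated as a NUL-terminated string, and MAX_ACCEPTABLE is a
hypothetical stand-in for whatever size constraint a given client
actually has:

    #include <errno.h>
    #include <stdint.h>
    #include <stdlib.h>

    #define MAX_ACCEPTABLE ((uint64_t)1 << 32)  /* hypothetical client limit */

    /* Parse a Content-Length field value; return 0 on success,
     * -1 if the value is malformed, overflows, or exceeds our limit. */
    int parse_content_length(const char *field, uint64_t *out)
    {
        char *end;
        unsigned long long v;

        /* The HTTP grammar allows only digits; strtoull would quietly
         * accept leading whitespace and a sign, so screen that out. */
        if (*field < '0' || *field > '9')
            return -1;

        errno = 0;
        v = strtoull(field, &end, 10);
        if (errno == ERANGE)        /* value did not fit in 64 bits */
            return -1;
        if (*end != '\0')           /* trailing junk after the digits */
            return -1;
        if (v > MAX_ACCEPTABLE)     /* exceeds the client's own constraint */
            return -1;

        *out = (uint64_t)v;
        return 0;
    }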

I don't see how these expectations can be added to the protocol in a
meaningful way w/o bumping the version.

It would have been useful 10 years ago if we'd had the foresight to
add resource negotiation to the protocol, in a similar fashion to
language negotiation, etc.

Working from memory, it seems like we had some text regarding
precision in what is sent and tolerance in what is accepted. Perhaps
we could add text there to highlight the additional burden that
modern systems and native object sizes place on fields like
Content-Length AND chunked encoding length segments.
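
To illustrate the chunked-encoding half of that, here is a sketch of
a chunk-size parser that accumulates the hex digits with an explicit
overflow check rather than trusting a library conversion. The
function names are mine, not from any particular implementation:

    #include <stddef.h>
    #include <stdint.h>

    static int hexval(int c)
    {
        if (c >= '0' && c <= '9') return c - '0';
        if (c >= 'a' && c <= 'f') return c - 'a' + 10;
        if (c >= 'A' && c <= 'F') return c - 'A' + 10;
        return -1;                  /* not a hex digit */
    }

    /* Parse the hex chunk-size at the start of a chunk header line;
     * return 0 on success, -1 on bad syntax or overflow. */
    int parse_chunk_size(const char *line, size_t *out)
    {
        size_t v = 0;
        int d, seen = 0;

        while ((d = hexval((unsigned char)*line)) >= 0) {
            if (v > (SIZE_MAX - (size_t)d) / 16)
                return -1;          /* v * 16 + d would overflow */
            v = v * 16 + (size_t)d;
            line++;
            seen = 1;
        }
        if (!seen)
            return -1;              /* no digits at all */
        /* A chunk extension (";...") or CRLF may follow; not handled here. */
        *out = v;
        return 0;
    }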

We can happily talk about good programming standards, but
unfortunately the C library includes a number of commonly used
conversion functions which don't behave well in the presence of
invalid input. They set a bad example for the average software
engineer and probably create a need for cautionary language where
none should be required.
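
A small illustration of the point (my example, not text from any
spec): atoi() gives the caller no error signal at all, while strtol()
reports overflow but quietly tolerates input the HTTP grammar
forbids:

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        long v;

        /* atoi() returns 0 for garbage -- indistinguishable from a
         * legitimate "0" -- and its behavior on overflow is undefined,
         * so overflow can't even be detected after the fact. */
        printf("atoi(\"junk\") = %d\n", atoi("junk"));

        /* strtol() at least reports overflow via errno ... */
        errno = 0;
        v = strtol("99999999999999999999", NULL, 10);
        if (errno == ERANGE)
            printf("strtol overflow, clamped to %ld\n", v);  /* LONG_MAX */

        /* ... but it also skips leading whitespace and accepts a sign,
         * neither of which a Content-Length value may contain. */
        printf("strtol(\"  +42\") = %ld\n", strtol("  +42", NULL, 10));

        return 0;
    }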

Dave Morris

On Thu, 4 Jan 2007, Travis Snoozy (Volt) wrote:

>
> Larry Masinter said:
> > > if a client can't download (or a server can't serve) a
> > > file bigger than the FS can handle
> >
> > It is possible for a client to use range retrieval
> > to get parts of a large file, even if the client couldn't
> > store the whole thing because of file size limitations.
> > (This can happen with JPEG2000 image files, for example.)
> >
> > It's quite possible for a server to serve a dynamically
> > generated resource that is bigger than can fit into a
> > single file on the file system.
> >
> > So I don't think the protocol limits and the underlying
> > operating system file size limits should be linked
> > in any way.
> >
>
> Excellent point! I completely ignored dynamic content (though I would think that would almost always be served with a chunked encoding in practice). You've also brought up a great solution.
>
> Since it's possible for the client to detect when a Content-Length or a chunk-length is too long, SHOULD the client then attempt a series of byte-range requests instead? This would solve all the prior problems I've mentioned, assuming the server implements that part of the protocol (anyone know which servers do, in practice?).
>
> Also, in regard to connection handling: as far as I can tell, the client is going to have to close the connection if an oversized Content-Length shows up, since the client won't be able to read through to the next request reliably. If this is the case, is it specified? It might make for a nice suggestion (not a requirement).
>
>
> Thanks,
>
> -- Travis
>
