RE: Large content size value

David Morris said:
> The bottom line is that if the content length is too long for the client
> to represent, it surely isn't going to do byte-ranges or anything else to
> retrieve the pieces, which it then isn't going to be able to deliver to
> the end user anyway. A total waste of time to send a 4G file w/o
> content-length to a client which can't use it.

Well, from that perspective, it's an even bigger waste for a server that
generates a 4+GiB file (say, dynamically) and transmits it with the chunked
encoding (with chunks being <4GiB each). That's another case I'd like to
test, but it would require a slightly-more-than-trivial amount of setup.

> Either the server can correctly represent the content length or it can't
> and, likewise, whether or not it can correctly serve the response.
> I think it reasonable that any server be expected to detect when it can't
> and report an error. I think it reasonable to expect a client to know when
> it can't convert the content length ascii string to a value it understands
> and reject the response as being in error.

It may be splitting hairs, but the response isn't in error; the client's
implementer made a design decision (conscious or not). There is no protocol
error, only a refusal to accept on the client's part.

> Furthermore, it is reasonable for a client to reject a response it knows
> is too large, as soon as it knows that a size constraint has been
> exceeded.

> I don't see how these expectations can be added to the protocol in a
> meaningful way w/o bumping the version.

Well, aside from providing strong recommendations that aren't real
requirements, perhaps, and making explicit things that might have been
heavily implied or intended. It's certainly tricky. :\

> It would have been useful 10 years ago if we'd had the foresight to add
> resource negotiation to the protocol in a similar fashion to language,
> etc.

Hindsight is 20/20, and it's not like the spec doesn't already consider a
slew of obscure-but-important contingencies. The big questions are: how do
we fix it, and how do we avoid making the same mistake twice?

> Working from memory, seems like we had some text regarding precision in
> what is sent and tolerance in what is accepted. Perhaps adding text there
> to highlight the additional burden of modern systems and native object
> sizes in terms of fields like content-length AND chunked encoding length
> segments.

I'm all for a mention in section 15 and/or 19 -- something is better than
nothing.

> We can happily talk about good programming standards, but unfortunately
> the C library includes a number of commonly used conversion functions
> which don't behave well in the presence of invalid input. They set a bad
> example for the average software engineer and probably generate a
> requirement for cautionary language where it shouldn't be required.

<snip>

Well, IIRC (I don't have my man pages with me), strtol et al. are spec'd to
do the Right Thing: saturate, and set an error code. Now, whether or not C
library implementations adhere to the spec, or programmers check for the
error, I can't say. But ANSI C shouldn't steer folks wrong here. Unless you
had other function(s) in mind...?


Thanks,

-- Travis

Received on Friday, 5 January 2007 19:35:22 UTC