Re: Large content size value

On Jan 4, 2007, at 12:59 PM, Travis Snoozy (Volt) wrote:

> All right; to rebut all of the below arguments (sorry Roy, I'm  
> going to pick on you a bit ;):

Are you going to fix the bugs in those implementations?  If so, feel
free to try to "pick on me".  If not, then go to

   http://diveintomark.org/archives/2004/08/16/specs

>> Professional software developers are expected to know better and be
>> able to use their own judgement. They don't need a standard to tell
>> them it is a bug.
>
> Then why is the following clause in Section 14.6 (Age), page 106?
>
>    If a cache receives a value larger than the largest positive
>    integer it can represent, or if any of its age calculations
>    overflows, it MUST transmit an Age header with a value of
>    2147483648 (2^31). [...] Caches SHOULD use an arithmetic type
>    of at least 31 bits of range.

Because: 1) Jeff wanted it in there, not me; 2) Age does have a
reasonable maximum value that is way below that number (somewhere
around one day between refreshes is high); and 3) the entire section
on Caching is written as a tutorial, not a specification.

> This deals explicitly with the overflow condition, and tells
> implementers exactly how to handle it. Why are "professional
> software developers" expected to "know better" in regards to
> Content-Length & co., but need explicit guidance in regards to
> Age? And if the answer is "because caches have the potential to
> not interoperate otherwise," what about the issue Content-Length
> has been shown to have? Is that not considered an "interoperation"
> problem?

It is unnecessary garbage that makes the real specification harder to
find.  When an Age calculation comes out anywhere near that high, the
value is wrong.  The requirement ensures that it is considered "really
old", but it is still a wrong value.  It would be better for real-world
interoperability if the party simply ignored Age at that point and made
the error visible, but doing so would not be as "transparent" as
marking it as stale.
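For what it's worth, the quoted MUST amounts to a one-line clamp.  A
minimal sketch (the function name and the 32-bit-signed assumption are
mine, not the spec's):

```python
AGE_OVERFLOW = 2147483648  # 2^31, the value the clause says to transmit

def age_header_value(calculated_age):
    """Clamp an Age calculation that overflowed a 31-bit positive range.

    Assumes the cache's arithmetic type is 32-bit signed, so overflow
    shows up as either a too-large value or a negative wraparound.
    """
    if calculated_age < 0 or calculated_age >= AGE_OVERFLOW:
        # Overflowed: per the quoted clause, send 2^31 so the response
        # is treated as "really old" (i.e., stale).
        return AGE_OVERFLOW
    return calculated_age
```

Which is exactly the point: the value transmitted is still wrong, it is
merely wrong in a direction that forces revalidation.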

What do you expect the specification to say regarding content-length?
Something like "If the server's storage interface contains numeric
offsets that are really big numbers, maybe using a really big integer
to store that offset would be a good idea."?  "If the indicated
Content-Length is larger than the user agent's capacity to read and
store, then choose any one of a hundred different alternative solutions
to viewing partial representations."  Or maybe just
"If the content-length is wrong, the recipient may not know it?"
Or maybe you think I should have specified a 4GB limit back in 1994?
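To be clear about how little there is to say: a hedged sketch of the
non-buggy behavior, where the cap and the names are illustrative
choices of the implementer, not anything a spec should dictate:

```python
def parse_content_length(value, max_value=2**63 - 1):
    """Parse a Content-Length field value, rejecting junk and overflow.

    max_value is illustrative (a 64-bit signed range); the protocol
    imposes no limit, so picking one is the implementer's job.
    """
    # Accept only ASCII decimal digits -- no sign, no whitespace.
    if not (value.isascii() and value.isdigit()):
        raise ValueError("Content-Length must be a non-negative integer")
    n = int(value)
    if n > max_value:
        # What to do here (reject, stream, range requests...) is one of
        # those "hundred different alternative solutions".
        raise ValueError("Content-Length exceeds recipient capacity")
    return n
```

Python's integers are arbitrary-precision, so the cap check only
simulates the fixed-width arithmetic a C implementation would face.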

Just fix the friggin bugs.  They aren't going to be any more or less
of a bug regardless of what is placed in the specification, and the
majority of implementations are obviously capable of implementing
that feature without being buggy.  The spec does not need any more.

IETF specifications are supposed to specify the interface.  That is
only part of the total system implementation problem -- namely, the
part that we can all agree is a standard.  If you fill the standard
with a bunch of useless cruft, nobody will agree to implementing it
at all.  Which is exactly where we are today with HTTP/1.1 caching.

What you want is a book like APUE by Stevens.  Sure, it's valuable,
but that isn't a standard.  We don't have to get agreement from
hundreds of vendors on an implementation guide.

> Or is the clause redundant/unnecessary, and in need of removal? ;)

Read the archives.

....Roy

Received on Thursday, 4 January 2007 23:28:41 UTC