RE: Large content size value

From: Travis Snoozy (Volt) <a-travis@microsoft.com>
Date: Thu, 4 Jan 2007 12:59:45 -0800
To: "Roy T. Fielding" <fielding@gbiv.com>
CC: "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
Message-ID: <86EDC3963F04D546BED8996F77D290F6049D11812B@NA-EXMSG-C138.redmond.corp.microsoft.com>

All right; to rebut all of the arguments below (sorry, Roy, I'm going to pick on you a bit ;):

Roy T. Fielding said:
> On Dec 21, 2006, at 3:21 PM, Paul Leach wrote:
> > In this case, a 64-bit implementation could handle lengths that a
> > 32-bit version couldn't.
> >
> > I don't see that we need to note every place in the syntax where this
> > problem could arise, just like we don't need to be explicit that
> > implementers shouldn't code buffer overflows.
> Right.  It is actually more dangerous for implementers to have
> required size limitations in the protocol, since then they often
> assume the value is going to remain conformant to the standard
> (and we all know that isn't a constraint on attackers).
> Implementations need to handle large numeric strings no matter how
> large they might be, regardless of what the protocol says they
> should be, and generally do so by returning an error if the
> number is larger than the maximum for the internal representation
> used for the value.  This will change over time (as data get bigger)
> and may be much larger for specialized implementations than it
> would be for general-purpose implementations.  10 years ago almost
> everyone thought that 4GB would be a reasonable limit for an
> implementation of Content-Length -- now that is clearly not the
> case for the video-on-demand folks.
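The defensive behavior Roy describes — accept digit strings of any length off the wire, and return an error rather than truncate or wrap when the value exceeds the internal representation — can be sketched roughly as follows. This is only an illustration, not anyone's actual server code; the function name and the choice of a signed 64-bit limit are assumptions for the example.

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// parseContentLength sketches the defensive parsing described above:
// take an arbitrarily long digit string from untrusted network input,
// and reject (rather than silently wrap) any value that exceeds this
// implementation's internal representation -- here, a signed 64-bit int.
// (Hypothetical helper; the name and limit are this example's choices.)
func parseContentLength(s string) (int64, error) {
	n, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		var numErr *strconv.NumError
		if errors.As(err, &numErr) && errors.Is(numErr.Err, strconv.ErrRange) {
			// Larger than our internal representation: signal an
			// error (a server might answer 400 Bad Request) instead
			// of wrapping around to a small or negative length.
			return 0, fmt.Errorf("Content-Length %q exceeds implementation limit", s)
		}
		return 0, fmt.Errorf("malformed Content-Length %q", s)
	}
	if n < 0 {
		return 0, fmt.Errorf("negative Content-Length %q", s)
	}
	return n, nil
}

func main() {
	// A value past the old 4GB assumption parses fine on a 64-bit type;
	// a value past int64 is reported as an error, not wrapped.
	for _, s := range []string{"4294967296", "99999999999999999999999", "abc"} {
		n, err := parseContentLength(s)
		fmt.Println(n, err)
	}
}
```

Note that the limit here is a property of this implementation, not of the protocol — exactly Roy's point that the maximum "will change over time" and differs between implementations.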

Roy T. Fielding said:
> Implementations can have bugs.  That doesn't change the standard.

> > Not that it's a surprise; these are the _exact_ problems that I
> > predicted would show up, based solely on what the spec said. Go
> > figure. More digging in more products will very likely uncover
> > similar issues (and not just in Content-Length, but anywhere where
> > 1*DIGIT is present).
> That is complete nonsense.  The spec does not say "Fail to use any
> common sense or valid software engineering techniques while reading
> untrusted network input." Nor does it say "Failure to recognize and handle
> integer field values larger than the expected integer size is okay."
> Professional software developers are expected to know better and be
> able to use their own judgement. They don't need a standard to tell
> them it is a bug.

Then why is the following clause in Section 14.6 (Age), page 106?

   If a cache receives a value larger than the largest positive integer it
   can represent, or if any of its age calculations overflows, it MUST
   transmit an Age header with a value of 2147483648 (2^31). [...] Caches
   SHOULD use an arithmetic type of at least 31 bits of range.

This deals explicitly with the overflow condition, and tells implementers exactly how to handle it. Why are "professional software developers" expected to "know better" with regard to Content-Length & co., but in need of explicit guidance with regard to Age? And if the answer is "because caches have the potential to not interoperate otherwise," what about the issue Content-Length has been shown to have? Is that not considered an "interoperation" problem?
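For comparison, the Section 14.6 rule is trivial to implement once spelled out — compute in a wider type and saturate at 2^31 instead of wrapping. A rough sketch (the parameter names loosely follow the age-calculation terms of RFC 2616 Section 13.2.3; they are this example's naming, not quoted from the spec):

```go
package main

import "fmt"

// maxAge is the saturation value the clause mandates: on overflow,
// a cache MUST transmit Age: 2147483648 (2^31).
const maxAge int64 = 2147483648

// ageValue sketches the Section 14.6 behavior: do the arithmetic in a
// type wider than 31 bits, and saturate to 2^31 on any result a 31-bit
// counter could not represent (including wrap-around to negative).
// (Hypothetical helper; parameter names are illustrative.)
func ageValue(correctedAge, responseDelay, residentTime int64) int64 {
	age := correctedAge + responseDelay + residentTime
	if age < 0 || age > maxAge {
		return maxAge
	}
	return age
}

func main() {
	fmt.Println(ageValue(300, 2, 60))          // ordinary sum
	fmt.Println(ageValue(1<<62, 1<<62, 1<<62)) // overflows -> 2147483648
}
```

That the spec bothers to state this two-line rule for Age, while leaving the analogous Content-Length overflow entirely to implementer judgement, is precisely the inconsistency at issue.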

Or is the clause redundant/unnecessary, and in need of removal? ;)


-- Travis
Received on Thursday, 4 January 2007 21:00:18 UTC