RE: Large content size value

Larry Masinter said:
<snip>
(Quoting Henrik Nordstrom)
> "It is not realistic for the HTTP specification to expect that all
> implementations uses bignum for every integer which may be transmitted
> in the protocol."

> is hyperbole. It's realistic to expect implementations to
> use 64-bit integers for quantities that reasonably exceed
> 32-bit representations

1. I agree that "every integer" is an overstatement. I restrict my argument
   to only those fields containing 1*DIGIT (or 1*HEX).

2. The spec says 1*DIGIT. That means "any non-negative integer, with no
   upper bound", not "the biggest number you think is reasonable". Other
   parts of the spec imply that (servers, at least) can impose an arbitrary
   limit and reject messages with (certain) field-values outside those
   limits.

3. Given that implementers DO choose the biggest number they think is
   reasonable (and they do), those differing limits can still cause
   preventable interop problems.

4. Embedded computers and old software are <del>people</del> clients too.
   It's realistic to expect *modern* implementations on *medium-to-high-end
   hardware* to use 64- and 32-bit integers. Yesterday and tomorrow are a
   problem, though.

5. (4) is an interesting problem, but we can't really fix it in HTTP/1.1.

6. (5) means both servers AND clients should have part of the specification
   dedicated to how each should fail. Servers have this; clients don't.
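
To make (2) concrete: since the grammar itself puts no ceiling on the digit
count, a server imposing its own limit can enforce it without any wide
arithmetic at all, just by counting digits. A minimal sketch (the function
name and the digit-count policy are mine, not the spec's; it assumes the
field has already been validated as 1*DIGIT):

```c
#include <string.h>

/* Hypothetical server-side limit check: the ABNF allows any number of
 * digits, so bound the value by its decimal length (after stripping
 * leading zeros) before ever converting it to a machine integer. */
int within_digit_limit(const char *field, size_t limit_digits)
{
    size_t i = 0, len;

    /* Skip leading zeros, but keep at least one digit. */
    while (field[i] == '0' && field[i + 1] != '\0')
        i++;
    len = strlen(field + i);
    return len > 0 && len <= limit_digits;
}
```

With limit_digits = 9, anything accepted is guaranteed to fit in an
unsigned 32-bit integer (4,294,967,295 is ten digits), so even a 16-bit
implementation can bound its work before parsing; anything over the limit
gets rejected with whatever error status the server deems appropriate.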


More on 4 (in refutation of the assertion of reasonableness, even though we
can't fix it):

On a sub-32-bit processor, I certainly wouldn't want to be stuck doing 32-
or 64-bit math unless -absolutely- necessary. I'll grant that it's much less
likely for a sub-32-bit processor to have to deal with a >4GiB file.
However, having to deal with all 32-bit numbers just because the average
user will hit some (substantial, but minority) number of files >64KiB (on a
16-bit proc) is just cruel. Workaround logic to do a 16/32 switch isn't
exactly a cakewalk, either (though if performance is a big concern, and
32-bit numbers are slow enough, it might happen). Likewise, an 8-bit proc
would be stuck emulating 16-bit math often enough already -- requiring
32-bit math everywhere (especially when it's not _really_ necessary) is
wasteful.
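
For what it's worth, the 16/32 switch I'm grumbling about above might look
something like this (purely illustrative -- the function name, return
convention, and fallback policy are all mine):

```c
#include <stdint.h>

/* Hypothetical sketch: accumulate 1*DIGIT in 16-bit math first, and fall
 * back to 32-bit math only when the value won't fit.
 * Returns 0 if the value fit in *out16, 1 if the 32-bit fallback was
 * taken (*out32 set), -1 on no digits or 32-bit overflow. */
int parse_len(const char *s, uint16_t *out16, uint32_t *out32)
{
    uint16_t v16 = 0;
    const char *p = s;

    for (; *p >= '0' && *p <= '9'; p++) {
        uint16_t d = (uint16_t)(*p - '0');
        if (v16 > (UINT16_MAX - d) / 10)   /* next digit overflows 16 bits */
            goto wide;
        v16 = (uint16_t)(v16 * 10 + d);
    }
    if (p == s)
        return -1;                         /* no digits at all */
    *out16 = v16;
    return 0;

wide:
    {
        uint32_t v32 = v16;                /* resume where 16-bit math gave up */
        for (; *p >= '0' && *p <= '9'; p++) {
            uint32_t d = (uint32_t)(*p - '0');
            if (v32 > (UINT32_MAX - d) / 10)
                return -1;                 /* exceeds 32 bits too: reject */
            v32 = v32 * 10 + d;
        }
        *out32 = v32;
        return 1;
    }
}
```

The trade-off is visible: short values stay entirely in cheap 16-bit
arithmetic, and only a value that actually overflows pays for the 32-bit
path -- at the cost of exactly the extra branching and duplicated loop that
makes this "not a cakewalk".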

I'd be interested to hear any comments from folks who run HTTP on (sub-32-
bit) embedded processors.


Regarding 5:

> and it's realistic to expect implementations to check (and fail
> gracefully) when any received protocol value exceeds its representation
> capacity.

Yes, but I think we have failed to define "graceful failure". I don't think
we'll be able to solve the transfer problem in an ideal way, but we also
haven't defined any client behavior for when a client receives a message it
can't (or won't) deal with.
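
To illustrate what "check and fail gracefully" might even mean on the
client side -- a hypothetical sketch, not anything the spec currently says
-- the key step is distinguishing "malformed" from "too big for me", so the
client can choose a sane behavior for the latter (the enum and names are
mine):

```c
#include <errno.h>
#include <stdlib.h>

/* Hypothetical client-side check: separate a malformed length field from
 * one that is simply too large for this build's integer representation. */
typedef enum { LEN_OK, LEN_MALFORMED, LEN_TOO_BIG } len_status;

len_status check_content_length(const char *field, unsigned long *out)
{
    char *end;
    unsigned long v;

    if (field[0] < '0' || field[0] > '9')
        return LEN_MALFORMED;        /* reject sign/whitespace: not 1*DIGIT */

    errno = 0;
    v = strtoul(field, &end, 10);
    if (*end != '\0')
        return LEN_MALFORMED;        /* trailing junk after the digits */
    if (errno == ERANGE)
        return LEN_TOO_BIG;          /* exceeds this build's unsigned long */

    *out = v;
    return LEN_OK;
}
```

On LEN_TOO_BIG, the one generically safe move is probably to close the
connection, since a client can't reliably find the end of a body it can't
count -- which is exactly the behavior the spec currently leaves undefined.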


Thanks,

-- Travis

Received on Friday, 5 January 2007 20:31:44 UTC