- From: Roy T. Fielding <fielding@gbiv.com>
- Date: Thu, 21 Dec 2006 16:10:01 -0800
- To: Paul Leach <paulle@windows.microsoft.com>
- Cc: "Travis Snoozy (Volt)" <a-travis@microsoft.com>, <ietf-http-wg@w3.org>
On Dec 21, 2006, at 3:21 PM, Paul Leach wrote:

> In this case, a 64 bit implementation could handle lengths that a 32
> bit version couldn't.
>
> I don't see that we need to note every place in the syntax where this
> problem could arise, just like we don't need to be explicit that
> implementers shouldn't code buffer overflows.

Right. It is actually more dangerous for implementers to have required
size limitations in the protocol, since then they often assume the value
is going to remain conformant to the standard (and we all know that
isn't a constraint on attackers). Implementations need to handle large
numeric strings no matter how large they might be, regardless of what
the protocol says they should be, and generally do so by returning an
error if the number is larger than the maximum for the internal
representation used for the value. That maximum will change over time
(as data get bigger) and may be much larger for specialized
implementations than it would be for general-purpose implementations.
Ten years ago almost everyone thought that 4GB would be a reasonable
limit for an implementation of Content-Length -- now that is clearly
not the case for the video-on-demand folks.

....Roy
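As a concrete illustration of the defensive parsing Roy describes, here
is a minimal sketch in C, assuming an implementation that stores
Content-Length internally as a uint64_t; the function name
parse_content_length and its return convention are hypothetical, not
taken from any particular server:

    #include <stdint.h>
    #include <ctype.h>

    /*
     * Parse a numeric header value (e.g. Content-Length) without
     * trusting the sender to stay within any protocol-suggested size.
     * Returns 0 on success, -1 if the string is not a valid decimal
     * number or exceeds the internal representation (here uint64_t) --
     * the value is rejected with an error rather than wrapped.
     */
    static int parse_content_length(const char *s, uint64_t *out)
    {
        uint64_t value = 0;

        if (*s == '\0')
            return -1;              /* empty string is not a number */

        for (; *s != '\0'; s++) {
            if (!isdigit((unsigned char)*s))
                return -1;          /* reject signs, spaces, hex, etc. */
            unsigned digit = (unsigned)(*s - '0');
            /* Check before multiply-and-add: would value*10+digit wrap? */
            if (value > (UINT64_MAX - digit) / 10)
                return -1;          /* larger than we can represent */
            value = value * 10 + digit;
        }
        *out = value;
        return 0;
    }

The essential choice here is checking for overflow before each
multiply-and-add, so an attacker-supplied digit string of any length is
rejected cleanly instead of silently wrapping into a small, bogus
length.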
Received on Friday, 22 December 2006 00:10:13 UTC