Re: New Version Notification for draft-nottingham-structured-headers-00.txt

On 11/04/2017 07:09 PM, Matthew Kerwin wrote:
> On 4 November 2017 at 20:00, Andy Green <andy@warmcat.com> wrote:
> 
> 
>     Mmm, Lua is not that popular, and clearly it's the wrong tool to
>     write an http server with
>     [...]
> 
> <snip>
> 
>     Okay... so that specific implementation is broken and no good for
>     dealing with the >2GB reality we have already lived in for many
>     years... there are many things that are no good for that task...
>     these things can be forced into view by interoperability test
>     suites...
> 
> <snip>
> 
>     I am afraid a 32-bit-only limit has been basically useless for
>     general web use for many years already.  Languages that don't even
>     have a way to deal with >32-bit quantities are fundamentally
>     broken and useless for a web with >2GB and >4GB objects.  That
>     can't be the guide for standards; by this logic we would have
>     inherited a web that coddles WIN16 / 8088 limitations.
> 
> 
> One last post from me, because I know I'm talking to the wind, but: I 
> would suggest that saying "this thing is bad or broken or wrong" doesn't 
> change the fact that someone will do it, on the open web, in a way that 
> causes Interesting™ breakages.

The question is: given that some people on a 64-bit-capable platform 
decided to use 32 bits internally, is that a problem that needs 
addressing in the generic standards?  Or is it a problem for the people 
who made those implementation decisions themselves?

Nothing stops those servers from processing the integers in question as 
strings, checking whether they exceed some implementation limit, and 
rejecting the request cleanly.  I think that would be fine for everyone.
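
To be concrete, here's a minimal sketch in C of what I mean 
(parse_int_field() and IMPL_INT_MAX are names I made up for 
illustration; a real server would use whatever limit it actually 
supports): walk the field as a string of decimal digits and refuse it 
cleanly the moment it would exceed the implementation limit, instead of 
letting it silently wrap.

#include <stdint.h>

#define IMPL_INT_MAX 9223372036854775807ULL	/* this build's limit */

/* returns 0 and fills *out on success, -1 if the value must be rejected */
static int
parse_int_field(const char *s, uint64_t *out)
{
	uint64_t v = 0;

	if (!*s)
		return -1;	/* empty value: reject */

	while (*s) {
		if (*s < '0' || *s > '9')
			return -1;	/* non-digit: reject */

		/* would (v * 10) + digit pass our limit?  reject cleanly
		 * before it can wrap, rather than truncating */
		if (v > (IMPL_INT_MAX - (uint64_t)(*s - '0')) / 10)
			return -1;

		v = (v * 10) + (uint64_t)(*s++ - '0');
	}

	*out = v;

	return 0;
}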

If they don't do that, they don't even serve large files properly 
today; that's their problem.

> We aren't here to tell people what (not) to do; rather we're here to 
> describe how to get something done in a way that is useful and reliable, 
> hard to get wrong, and easy to sort out if/when it does eventually go 
> wrong.  That includes predicting and addressing ways we can foresee that 
> it could go wrong, or that similar things have gone wrong in the past.

If you step back a bit, a simple, clear standard is the best way to get 
something "useful, reliable, hard to get wrong, easy to sort out". 
Piling on weird stuff - what was it, four "profiles" for integer 
sizes? - does not make a better standard in those regards than simply 
saying compliant implementations must handle 64-bit ints.  If it 
becomes that complex just to eat an int, you will create more of the 
"interesting" bugs you say you want to avoid.  As shown, it is not an 
onerous requirement to say implementations should handle 64-bit even on 
weak platforms like the ESP8266 / ESP32 (see the sketch below).
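
For example, a usage sketch (this assumes the hypothetical 
parse_int_field() from earlier in this mail is in scope; the ESP8266 / 
ESP32 toolchains target 32-bit cores, but uint64_t there is plain C99):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t v;

	/* 2^32, past the 4GB boundary: parses fine even where the
	 * native word size is 32 bits */
	assert(!parse_int_field("4294967296", &v) && v == 4294967296ULL);

	/* 21 digits: bigger than any 64-bit value, rejected cleanly */
	assert(parse_int_field("999999999999999999999", &v) == -1);

	printf("64-bit handling OK\n");

	return 0;
}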

> And nobody (for a given definition of "nobody") cares about 
> interoperability test suites, apart from the really big one (i.e. 
> reality.)  Some of the errors that come up in that one can be rather... 
> Interesting™.

Ehhh, are you sure? :-)  I think you'll find many implementors really 
like having test suites.  Of course they don't "guarantee" there are no 
problems, but they will provoke the expected problems and confirm 
adherence to the bits of the standard they check.  h2spec has certainly 
shown me a lot of bugs, and things in the standard I hadn't realized I 
should take care of.  Test suites are also very useful as automated 
regression checks.

-Andy

> Cheers
> -- 
>    Matthew Kerwin
> http://matthew.kerwin.net.au/

Received on Saturday, 4 November 2017 12:14:35 UTC