Re: JFV and Common Structure specifications

--------
In message <E5207840-A825-43B5-B42F-6C56314EA703@mnot.net>, Mark Nottingham writes:

>Adding features like recursion has a real cost in complexity, cognitive 
>load, bugs and security. All of these things add friction to a standard, 
>and can make or break it. That's not to say that we shouldn't do it, but 
>dismissing these concerns with a catchphrase isn't going to be 
>sufficient (at least for me).

I would hope not!

But I still think it is a very important distinction to make
when we are talking about a general-purpose data structure.

The inspiration for my CS analysis was your own complaint about
needing a dozen bespoke parsers.  I am just trying to replace
one more parser than you were thinking about when you said that.

Some people want to pass a MYPRIV: header with a dictionary of
dictionaries of lists from one side of their web-app to the other,
and that is their choice.  Our opinions and policies do not,
should not and will not count.

Today we see people resort to base64(JSON) or even base64(gzip(JSON))
to scratch that itch, both of which rob us of the chance to
improve serialization efficiency.
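
For concreteness, that workaround looks roughly like this in Python
(the MYPRIV payload is invented purely for illustration):

    import base64, gzip, json

    # A made-up "dictionary of dictionaries of lists" payload:
    payload = {"user": {"roles": ["admin", "dev"], "flags": [1, 3]},
               "geo":  {"path": ["dc1", "edge7"]}}

    j = json.dumps(payload, separators=(",", ":")).encode()

    # The two patterns seen in the wild:
    mypriv_plain = base64.b64encode(j).decode()                 # base64(JSON)
    mypriv_gzip  = base64.b64encode(gzip.compress(j)).decode()  # base64(gzip(JSON))

    # Either value then rides in a header, opaque to everything in between:
    #   MYPRIV: <mypriv_gzip>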

If we make CS, *as a data structure*, support general depth, they
will have the choice of using that instead, and if we do our job
right, HPACKng/H3 will do a better job moving their data.

If we are really lucky, somebody will write JS and PHP packages,
give them a fancy name, and people will start using those instead.

But giving CS that generality does not mean we have to use it
ourselves.  As I have said repeatedly, I am totally down with a
severe restriction, or preferably an outright ban, on recursion in
the HTTP headers we put into IETF documents.

But I see absolutely no advantage, for us or anybody else, in
trying to impose that policy on everybody by delivering CS as a
less capable tool[1].

> E.g., given that headers themselves are constrained (effectively,
>a list of tuples with semantics around combination, naming, etc.),
>should we remove the "policy" from them and make it a bucket of
>bytes? Or is that too focused on a policy of byte-based computing?

I sort of tried to push for something like that during the rush to
H2, but back then there seemed to be no appetite for it.

I would far prefer that an HTTP message consisted of four onion
layers instead of just header/body as it does today:

1. Routing information (Host/authority, URL up to '?' and session-id)
	Every load-balancer needs to look at these.

2. Transport headers (If-Modified-Since, Vary, Range, Content-Encoding, Transfer-Encoding)
	Endpoints and proxies also need these.

3. End-to-end metadata (Content-Type, Cookies, MYPRIV: from above)
	Applications only.

4. End-to-end data (the message body)
	Applications only.

(In that model: no CS-recursion in layer 2, free-for-all in layer 3.
A rough sketch follows.)
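
If one were to sketch that split in code, it might look something
like this in Python (all names are mine and purely illustrative,
not a proposal for a wire format or API):

    from dataclasses import dataclass, field

    @dataclass
    class LayeredMessage:
        # 1: what every load-balancer must see
        routing: dict = field(default_factory=dict)    # authority, path, session-id
        # 2: what endpoints and proxies need; no CS-recursion here
        transport: dict = field(default_factory=dict)  # If-Modified-Since, Vary, Range, ...
        # 3: end-to-end metadata, applications only; recursion allowed
        metadata: dict = field(default_factory=dict)   # Content-Type, Cookie, MYPRIV, ...
        # 4: end-to-end data, applications only
        body: bytes = b""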

>Adding features like recursion has a real cost in complexity,
>cognitive load, bugs and security.

Absolutely true.  But if being more general and usable means there
is only one standard to pay attention to instead of two, the net
benefit is very large.

Poul-Henning

[1] In my very extensive study of computing history, I have _never_
found an instance where that strategy worked for anybody, and
computing today is littered with examples of the opposite: C++ vs
C, PHP vs Perl, AMD64 architecture vs Itanic, GPL/GCC vs CLANG,
and so on.

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.
