Re: structured headers "why not JSON" FAQ

> On 13 Jun 2018, at 7:01 pm, Julian Reschke <julian.reschke@gmx.de> wrote:
> 
>> A.1. Why Not JSON?
>> Earlier proposals for structured headers were based upon JSON [RFC8259]. However, constraining its use to make it suitable for HTTP header fields requires senders and recipients to implement specific additional handling.
> 
> The constraints are on the senders. *If* they follow them, recipients using off-the-shelf JSON parsers will be fine. (And yes, the *if* is the potential issue).

We need constraints on recipients too; otherwise their laxness will encourage some senders to become non-conformant. Good interop requires well-defined behaviour on both sides.
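
To make that concrete, here's a rough Python sketch (my own toy grammar, not the SH algorithms) of how a lax recipient trains senders into bad habits where a strict one pushes back:

    import re

    def lax_parse(value):
        # Tolerant: split on commas, silently drop empty items.
        # "a, b," parses "fine", so the sender never learns.
        return [i.strip() for i in value.split(",") if i.strip()]

    def strict_parse(value):
        # Strict: every item must be a non-empty token (the regex is
        # only an approximation of SH's token grammar); anything else
        # fails the whole field, so the sender has to fix it.
        items = [i.strip() for i in value.split(",")]
        if not all(re.fullmatch(r"[A-Za-z*][A-Za-z0-9*_.:/-]*", i)
                   for i in items):
            raise ValueError("invalid list member")
        return items

    lax_parse("a, b,")     # ['a', 'b'] -- the error disappears
    strict_parse("a, b,")  # ValueError -- the error is visible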

>> Because of JSON’s broad adoption and implementation, it is difficult to impose such additional constraints across all implementations; some deployments would fail to enforce them, thereby harming interoperability.
> 
> ...but interop would only be harmed for non-conformant field values.

True, but that's small comfort if non-conformant values become common in use; in effect, that defines a new interoperability profile (see: HTML).

>> For example, JSON has specification issues around large numbers and objects with duplicate members. Although advice for avoiding these issues is available (e.g., [RFC7493]), it cannot be relied upon.
> 
> (see above)
> 
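
Both of those are easy to demonstrate with an off-the-shelf parser (Python's json module here; other parsers differ, which is the point):

    import json

    # Duplicate members: RFC 8259 leaves the behaviour undefined.
    # This parser silently keeps the last value; others keep the
    # first, or error out.
    json.loads('{"a": 1, "a": 2}')        # {'a': 2}, no warning

    # Large/precise numbers round silently once they exceed IEEE 754
    # double precision -- and behave differently across parsers.
    json.loads('0.10000000000000000001')  # 0.1
    json.loads('1e400')                   # inf here; not everywhere
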
>> Likewise, JSON strings are by default Unicode strings, which have a number of potential interoperability issues (e.g., in comparison). Although implementers can be advised to avoid non-ASCII content where unnecessary, this is difficult to enforce.
> 
> Not sure what interop problems you are referring to. SH currently only has ASCII, forcing non-ASCII content (when needed) into binary values. That doesn't seem to be an advantage for those who actually need non-ASCII.

The easy availability of non-ASCII content in JSON invites its use when it isn't necessary.
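
And the comparison issues are concrete: the "same" string can arrive in more than one Unicode normalization form, so recipients that compare member names or values code-point-for-code-point disagree with ones that normalize first. A quick Python illustration:

    import unicodedata

    # U+00E9 (precomposed) vs. "e" + U+0301 (combining acute accent):
    # they render identically but are different code point sequences.
    a = "\u00e9"
    b = "e\u0301"

    a == b    # False -- naive comparison fails
    unicodedata.normalize("NFC", a) == unicodedata.normalize("NFC", b)  # True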

>> Another example is JSON’s ability to nest content to arbitrary depths. Since the resulting memory commitment might be unsuitable (e.g., in embedded and other limited server deployments), it’s necessary to limit it in some fashion; however, existing JSON implementations have no such limits, and even if a limit is specified, it’s likely that some header field definition will find a need to violate it.
> 
> a) A limit could be enforced *before* feeding the field values into the JSON parser.
> 
> b) "and even if a limit is specified, it’s likely that some header field definition will find a need to violate it" - how is that different in SH?

In SH, if you violate the limits, you won't interoperate with most SH implementations (provided we can get good conformance; that's why a test suite and detailed algorithms are important) -- or at least with enough of them to dissuade violating the limits. With JSON, there are strong incentives to allow "all of JSON" in.
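
To be fair to (a), a pre-parse check is easy to sketch -- something like this, with the obvious caveat that it ignores brackets inside strings:

    def exceeds_depth(text, limit=32):
        # Count bracket nesting before handing the value to a JSON
        # parser, per Julian's suggestion. Illustrative only: a real
        # check would have to skip over string contents.
        depth = 0
        for ch in text:
            if ch in "[{":
                depth += 1
                if depth > limit:
                    return True
            elif ch in "]}":
                depth -= 1
        return False

    exceeds_depth("[" * 100 + "]" * 100)  # True

But the question isn't whether one implementation *can* enforce a limit; it's whether all the off-the-shelf deployments out there actually will.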


--
Mark Nottingham   https://www.mnot.net/
