Re: New Version Notification for draft-nottingham-structured-headers-00.txt

On 2 November 2017 at 10:33, Mark Nottingham <mnot@mnot.net> wrote:

> Hey Matthew,
>
> > On 2 Nov 2017, at 11:22 am, Matthew Kerwin <matthew@kerwin.net.au>
> wrote:
> >
> >
> >
> > On 2 November 2017 at 09:53, Poul-Henning Kamp <phk@phk.freebsd.dk>
> wrote:
> > --------
> > In message <ABC96E9C-7426-4DEB-9E06-6CF0EA4FC46E@mnot.net>, Mark
> > Nottingham writes:
> > >Just a thought; maybe we shouldn't be defining "numbers" here, but
> > >instead "i32" or similar.
> >
> > The reason we have non-integers is the q=%f ordering parameter, and
> > that the pre-cursor draft tried to be a general purpose serialization.
> >
> > Neither of those may be a good enough reason to keep number
> > under this new scope/goal.
> >
> > So what's the goal with this header format?  It feels like it's being
> moved towards defining a rich set of types, where I thought it was aimed at
> providing a simple set that provides good coverage (at the absolute
> minimum: scalar, list, dictionary -- beyond that, scalar tokens, binary
> blobs, quoted strings, and numbers).
> >
> > If it's going toward richness, is there going to be an eventual need
> for, for example, a "q-value" type?  Or a "timestamp"?  Those can look like
> numbers, but that's an implementation detail and conceptually they are
> different.
>
> Personally, I don't think so.
>
> > If the general definition of "number" were changed from "up to 15
> digits" to "guaranteed up to 15 digits, maybe more in other circumstances"
> and each header field in question specified the minimum/maximum/etc. for
> its particular case, what would that break?  I'm imagining the general
> definition to include a standard reaction to numbers with too many digits
> (beyond 15) for a particular implementation, and the individual header
> fields could build on that general exception.  Am I missing something?  Is
> this different from tokens/blobs/strings that are too long?
>
> I'd like to see generic parsers for the types that enforce the syntax and
> raise errors consistently, so that we don't get into states where
> implementations behave differently or ambiguously. That would remove a
> major part of the burden for defining and implementing new header fields.
>
> So while being *more* strict than the defined syntax is fine (i.e.,
> placing different constraints on it), I don't think relaxing the
> constraints works.
>
> Cheers,
>
I think this last point is where I trip up.  We define a generic structure
that covers most use-cases, with the most permissive version of each rule
(i.e. the generic parser can potentially accept more digits than a
header-specific parser would).  Looking at it from one perspective, each
individual header field spec defines a restricted profile, with its own
rules for how to deal with values that are ok generically, but not ok
specifically.
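
To make that layering concrete, here's a rough sketch (mine, not the draft's actual parsing algorithm; the function names and the ten-digit profile limit are invented for illustration) of a generic number parser enforcing a 15-digit rule, with a header-specific profile restricting it further:

```python
import re

# Assumed generic limit, per the draft's 15-digit discussion.
GENERIC_MAX_DIGITS = 15

def parse_generic_number(text):
    """Parse an integer field value, enforcing the generic digit limit."""
    m = re.fullmatch(r"-?\d{1,%d}" % GENERIC_MAX_DIGITS, text.strip())
    if m is None:
        raise ValueError("not a valid generic number: %r" % text)
    return int(m.group(0))

def parse_header_specific(text, max_digits=10):
    """A stricter per-header profile layered on the generic parser."""
    value = parse_generic_number(text)  # generic syntax check first
    if len(str(abs(value))) > max_digits:
        raise ValueError("exceeds this header field's profile")
    return value
```

The point being: the profile only ever narrows what the generic parser accepts, never widens it.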

It seems weird, then, to pick a relatively conservative upper limit for a
generic value, since it's meant to be the most permissive. I don't know of
concrete use-cases for large numbers, since the one people keep talking
about is Content-Length, which already exists and isn't defined according
to this generic structure; but even without a use-case it still feels a
bit off to me.

I agree that calling it out specifically as a fifteen-decimal-digit type
(your '"i32" or similar') leaves it open in a better way.  One day we could
see an "i64", or even an arbitrary-length "number", if and when they're
needed, and having individual header field specifications describe their
uses will be a clear signal that an implementation does/doesn't need to
support them.
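
For what it's worth, one plausible rationale for fifteen digits specifically (my assumption; it isn't stated anywhere in this thread) is that any such integer survives a round-trip through an IEEE-754 double, which matters for JSON/JavaScript consumers:

```python
# Assumption: the 15-digit cap is about exact representability in an
# IEEE-754 double.  Doubles represent every integer up to 2**53
# (9007199254740992) exactly, which covers all 15-digit integers.
big_15 = 999_999_999_999_999          # 15 digits: exact in a double
assert int(float(big_15)) == big_15

# Beyond 2**53, not every 16-digit integer round-trips:
assert int(float(9_007_199_254_740_993)) != 9_007_199_254_740_993
```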

I'm trying to keep out of this my personal feelings about exactly how many
digits are good enough, and about what sort of languages we care about
optimising towards.

Cheers
-- 
  Matthew Kerwin
  http://matthew.kerwin.net.au/

Received on Thursday, 2 November 2017 03:43:17 UTC