Re: ACTION-686: Sniffing

On Thu, Apr 26, 2012 at 8:44 AM, Robin Berjon <robin@berjon.com> wrote:
> On Apr 26, 2012, at 13:53 , Larry Masinter wrote:
>> Robin, we had an open issue about stable references, under the banner of the question of whether updates to normative references automatically apply.
>> I tend to agree that this is best resolved as a process issue, and want to officially hand it off to the W3C group responsible for the W3C process.
>
> Right, but my understanding was that the AB was already on this as part of its investigation of how standards are done in W3C (this is just a branch of the "living standards" discussion). They're planning to discuss it extensively during the AC meeting.
>
>> I'm sorry it wasn't clear that this was not just about this one incident... there are persistent questions about references to specs, not only from W3C Recs but also from registries... for example, does the text/html media type registration require an update at all?
>
> Absent the ability to upgrade the whole Web at once, I would expect that the only forward compatible strategy for text/html is to point to "whatever HTML is today". Ideally that would be maintained by W3C as an undated URI, e.g. http://www.w3.org/TR/html/ (which unfortunately points to XHTML 1.0e2). Pragmatically, it's the only option that doesn't make the text/html registry lie.

Introducing this level of indirection is very similar to creating a
new registry with a single entry. The essential difference between a
nondescript undated URI and a registry is that a registry has a
"constitution", a documented process for making changes that is
relatively more stable than the registry entries themselves. (Of
course it's turtles all the way down.) Translated to "live" specs,
the idea is that a reference to any "live" spec ought to be
accompanied by a link to a relatively more stable process document
that explains how updates are made.
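To make that concrete, here is a rough sketch of such a reference
record; every name and URI below is hypothetical, purely for
illustration:

    # Hypothetical sketch only: a citation of a "live" spec carries both
    # the undated URI and the (more stable) process document that governs
    # how the thing behind that URI may change.
    reference = {
        "spec": "http://www.w3.org/TR/html/",                 # "whatever HTML is today"
        "process": "http://example.org/html-update-process",  # placeholder URI
        "retrieved": "2012-04-26",                            # when the citing spec last looked
    }

    def is_well_formed(ref):
        """A reference is acceptable only if it also names its update process."""
        return bool(ref.get("spec")) and bool(ref.get("process"))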

>> If we use undated URIs instead of dated, named references in a technical specification, what are the persistence requirements, how do we account for meaning in such a world?
>
> The persistence requirements are that you trust whoever operates that URI not to put something stupid there. That will fail sometimes, but I don't know of a way of preventing people from being stupid.
>
> Since you bring up accounting for meaning, I suspect that you're thinking at least in part of the long-term archival issue. If it's just for today, we don't need meaning so much as we need sufficiently consistent interpretation.
>
> For long-term archival, i.e. to ensure that an HTML 2.0 document is understood in 2319, I don't think that fixed, stable, dated, cast-in-stone documents and links between them are the way to go. The way to go is fostering a culture of backwards compatibility and progressive enhancement that can live on and adapt. Imagine von Neumann's theory of self-replicating automata, but for the standards ecosystem.

This is not reliable if there is any threat of nonmonotonicity. What
you need is an archive of past versions of the spec, or equivalently
an archived change log. In 2319, if you find an HTML document from
2211, what you will care about is what HTML meant in 2211, and the
best record for that would be the archived specs from just before and
just after 2211.
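As a rough sketch of that lookup (the archive format here is made up
for illustration, not any real archive's interface):

    # Illustrative only: given dated, archived snapshots of a spec, find
    # the versions published just before and just after a document's date,
    # since those best record what the spec meant at that time.
    def bracketing_versions(snapshots, doc_year):
        """snapshots: list of (year, uri) pairs, sorted by year."""
        before = [s for s in snapshots if s[0] <= doc_year]
        after = [s for s in snapshots if s[0] > doc_year]
        return (before[-1] if before else None,
                after[0] if after else None)

    # e.g. bracketing_versions([(2208, "a"), (2215, "b")], 2211)
    #      -> ((2208, "a"), (2215, "b"))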

So the archiving process needs to be clear too. The IETF gets this
for free since the RFC series is write-once (so you know RFC NNNN
will never "change") and sequential (so that if at any time you have
RFC NNNN for every NNNN, then you know you have all the RFCs). It's a
great system. To support so-called "live" standards, something
equivalent would be needed, ideally something that doesn't place too
heavy a burden on archives and future scholars.
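A minimal sketch of why write-once plus sequential numbering makes
this easy (the names are mine, not any real archive's API):

    # Minimal sketch: in a write-once, sequentially numbered series,
    # a local archive is complete up to N exactly when every number
    # 1..N is present, and nothing ever needs to be re-checked.
    def missing_numbers(present_numbers, highest_known):
        """Return the sequence numbers absent from a local copy."""
        present = set(present_numbers)
        return [n for n in range(1, highest_known + 1)
                if n not in present]

    # e.g. missing_numbers([1, 2, 4], 5) -> [3, 5]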

Jonathan

> --
> Robin Berjon - http://berjon.com/ - @robinberjon
>

Received on Thursday, 26 April 2012 13:26:46 UTC