Re: Revisiting Authoritative Metadata (was: The failure of Appendix C as a transition technique)

Robin Berjon wrote:
> 
> > Without falsification of REST, claims that it's wrong have about as
> > much credibility with me as the intelligent design folks' denial of
> > evolution.
> 
> That's an interesting claim. I happen to be particularly interested
> in how architectural/constitutional rules will orient an ecosystem
> towards certain stable situations (and how we can fix such rules
> rather than treat the symptoms). So allow me to turn it this way: if the
> architectural principle you are defining is indeed conducive to
> robust protocols, how do you explain the persistence of sniffing as
> an evolutionarily stable strategy throughout the ecosystem and as
> amply evidenced in the (not so) fossil record?
> 

Well, for starters, I wouldn't call sniffing evolutionarily stable.
Anne points out <img> tag sniffing, which I say destabilizes the
evolution of the Web.  Authors should be faced with a situation that
either works or doesn't; instead, they get one that looks like it
works when really it's borked.  That proliferates the problem
(permanent breakage) instead of proliferating the solution
(temporary breakage).
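
To make that concrete, here's a minimal sketch in Python -- purely
illustrative, and not any browser's actual sniffing algorithm -- of
why a mislabeled image "just works" under sniffing: the declared
Content-Type is overridden whenever the payload's magic bytes are
recognized, so the author never sees the misconfiguration.

  # Illustrative sketch only -- not any browser's actual algorithm.
  SIGNATURES = {
      b"\x89PNG\r\n\x1a\n": "image/png",
      b"\xff\xd8\xff":      "image/jpeg",
      b"GIF87a":            "image/gif",
      b"GIF89a":            "image/gif",
  }

  def effective_type(declared, body, sniff):
      """Return the type a client acts on for an <img> response."""
      if sniff:
          for magic, sniffed in SIGNATURES.items():
              if body.startswith(magic):
                  return sniffed   # authoritative metadata overridden
      return declared              # authoritative metadata honored

  png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16   # PNG served as text/plain
  print(effective_type("text/plain", png, sniff=True))    # image/png
  print(effective_type("text/plain", png, sniff=False))   # text/plain

With sniff=False the misconfiguration surfaces immediately and gets
fixed; with sniff=True it ships, and keeps shipping.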

Further, I don't believe idiot-proofing the Web is an attainable goal
to begin with.  You can fix the rules until they're dumbed down to the
point where you're removing utility from whole swathes of publishers,
and there will still be cadres of brothers-in-law with two-sided
business cards offering their services as wedding photographers / Web
developers who will foul up the simplest of things -- which I don't
think will leave us any better off than we are now, and probably worse.

>
> > If you want to convince me, you'll need to resort to
> > the methods and language of science.
> 

By which I meant publication, and the attendant peer-review process.
This isn't "Eric's theorem"; it's how science is done.  REST has been
out there long enough now that if it were a trivial matter to debunk,
someone certainly would have ripped it to shreds scientifically by now.

> 
> In the absence of a Web Police we have no choice but to build rules
> that contain within themselves the incentives to be followed.
> 

That requires counting on folks to behave as expected, which is the
whole problem with Authoritative Metadata.  If browsers followed the
protocol, the incentive for authors to get it right would be their
sites breaking.  Instead, a proportionally very small number of
developers -- the ones who code browsers -- disregard the protocol for
business reasons despite knowing better technically, which
de-incentivizes the whole notion of developers actually learning their
profession.

These business reasons are externalities as far as the technology goes;
how can any technological solution guarantee not to be thwarted by
similar unforeseen externalities?  I mean, if we're going to scrap
anything that 10% of authors can't handle, I don't see how we'll arrive
at architectural stability, or if we do, how the result will be in any
way desirable.

> 
> I'm not sure that I can prove Ruby's Postulate, but would you
> disagree that it's borne out by ample experimentation?
> 

No, but I won't agree that it's relevant.  Who, besides developers,
saves HTML to disk from the Web, anyway?  Bookmarks seem to be quite
the popular method for referring back to a site, so I don't see this as
an issue.  If someone does try saving Web pages sans all context,
they'll quickly learn that this doesn't work for far more reasons than
Content-Type.  Metadata that only works when the site is used as
intended seems like a non-problem to me.  Unless you're saying Link:
is also harmful because it isn't saved to disk?  Expires?  Is this an
argument against the very concept of protocol headers?
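
For what it's worth, here's a minimal sketch (Python standard
library, hypothetical URL) of that point: Content-Type, Link:, and
Expires all travel in the HTTP exchange, not in the entity body, so
saving the body to disk sheds that context by design.

  # Minimal sketch; the URL is hypothetical.  Protocol metadata lives
  # in the exchange, not in the bytes that end up in the file.
  from urllib.request import urlopen

  with urlopen("https://example.com/") as resp:
      body = resp.read()
      # Context available only while the protocol is in play:
      print(resp.headers.get("Content-Type"))
      print(resp.headers.get("Link"))
      print(resp.headers.get("Expires"))

  # Saving only the entity body discards all of the above, along with
  # the base URI and any cache directives.
  with open("saved.html", "wb") as f:
      f.write(body)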

> 
> I can return to your other points later, but since the replies all
> stem from the above let's look at this first.
> 

Ditto.

-Eric
