
Re: Revisiting Authoritative Metadata (was: The failure of Appendix C as a transition technique)

From: Eric J. Bowman <eric@bisonsystems.net>
Date: Mon, 25 Feb 2013 11:13:26 -0700
To: Robin Berjon <robin@w3.org>
Cc: Larry Masinter <masinter@adobe.com>, Henri Sivonen <hsivonen@iki.fi>, "www-tag@w3.org List" <www-tag@w3.org>
Message-Id: <20130225111326.bc0119db0f606d5c39ee3cea@bisonsystems.net>
Robin Berjon wrote:
>
> > Antipattern?  This *is* the architecture.
> 
> I beg to differ. That's hardly an argument is it? :)
> 

I'm referring to REST, a peer-reviewed thesis which forms the accepted
science on this matter.  You're correct that stating "that's wrong" is
hardly an argument; better if you could point to a falsification of
said thesis?  I'm open-minded, but...

Without falsification of REST, claims that it's wrong have about as
much credibility with me as the intelligent design folks' denial of
evolution.  If you want to convince me, you'll need to resort to
the methods and language of science.

> 
> Seriously, I have looked, yet I can't find a single piece of 
> justification for the Sender Intent Dogma that doesn't involve hand 
> waving and the same example practically pasted in over and over again.
> 

I've used the existence of sender intent for years, as a means of
collaboration with other developers.  Like so:

http://charger.bisonsystems.net/conneg/;type=txt

Much easier than "view source" where browser-resident XSLT is used.  So
I have a hard time believing this to be a bad example, given that's how
I actually work.  I will now be dismissed as an anomaly, I'm sure...

This can also be implemented using context-sensitive links, i.e. the
Referer header can trigger a different media type.  Or, URI aliases
which allow source-code viewing.  The point is, I can link you to my
source in a way which blocks it from rendering in any protocol-compliant
browser.
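A minimal sketch of the idea (my own names and page body, not my actual
server setup): the same bytes are reachable under two URIs, and the
Content-Type header alone decides whether a compliant browser renders
the document or displays it as source.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"<p>Hello, world.</p>\n"  # hypothetical document body

def content_type_for(path):
    # A ";type=txt" URI alias serves the identical bytes as text/plain,
    # which a protocol-compliant browser displays as source rather
    # than rendering it.
    return "text/plain" if path.endswith(";type=txt") else "text/html"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        ctype = content_type_for(self.path)
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(PAGE)))
        self.end_headers()
        self.wfile.write(PAGE)  # identical bytes under either URI

# To try it: HTTPServer(("", 8000), Handler).serve_forever()
```

A Referer-triggered variant would make the same decision by inspecting
self.headers.get("Referer") instead of the request path.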

If this wasn't a useful feature, it wouldn't form my basis of
collaboration with other developers.  Nor is using curl a substitute for
being able to debug a website entirely within a browser.  Nor is the
<plaintext> element a solution -- creating a whole 'nother document just
to display the source of a document which is, in fact, just a text file,
introduces a whole lot of unneeded complexity vs. contextually altering
Content-Type.

Not to mention, it changes the line numbers.  If I want you to look at
line 20 of my code, I want to link you to my code in a way which allows
you to easily discern what I mean by line 20.  Not line 20 of some
subsection of another document, not the results of line 20 being
rendered, just line 20 of the code you see when you follow the link I
provide, by clicking on it, not pasting it into curl.

>
> I would presume that sender intent, if valuable, would be informative
> at best. User intent trumps all.
> 

Only when user intent is known, which it isn't before the payload is
received; I fail to see how browser sniffing respects user intent
either.  In the face of unknown user intent, I as a publisher expect my
intent to be honored -- "browser knows best" introduces fragility from
my perspective, and leaves me with no mechanism to express my intent.

> 
> Again, we give such prominence to sender intent, despite obvious 
> wide-scale deployment issues, because [blank].
> 

Again, the peer-reviewed falsification of REST is where...?  I'll humor
you, though:  Because only *I* know what's best for my users, not you.
Someone visiting my website is primarily my user, not the user of any
particular browser, and likely changes browsers over time.  So I expect
any and all browsers to follow the same protocol I am, not wing it in
unexpected ways, because then my code doesn't interoperate and there's
nothing *I* can do about it.  All *I* can do is follow the protocol.

Punishing the lot of us because a small minority are clueless isn't
what got the Web where it is today.  Which I see as a success that
shouldn't be messed with by making ill-informed choices not backed up
by any relevant science.

>
> > Given any architecture which supports a variety of data formats, if
> > intermediaries are to be allowed to participate, they can't be
> > required to decode (sniff) the payload to determine the format
> > anyway, without requiring them to have high-end CPUs.  As it is,
> > the Web scales nicely, right down to my decade-old desktop
> > embedded-Linux squid router.
> 
> But intermediaries that do that will be making the wrong decision on
> a very regular basis.
> 

No clue what you're talking about.  My squid router caches all text/
html, application/xhtml+xml, CSS, JS, and images.  Works like a peach.
Doubt it could keep up if it had to sniff payloads to make this
determination.
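
The metadata-only decision an intermediary makes can be sketched in a
few lines; the function and type list below are my illustration of the
idea, not squid's actual logic, and the type names mirror what my
router caches:

```python
# Types the intermediary treats as cacheable, per its configuration.
CACHEABLE_TYPES = {
    "text/html", "application/xhtml+xml",
    "text/css", "application/javascript",
}

def cacheable(headers):
    # Decide from the Content-Type header alone -- no payload sniffing,
    # so the check is constant-time regardless of body size.
    # Strip any parameters (e.g. "; charset=utf-8") before matching.
    ctype = headers.get("Content-Type", "").split(";")[0].strip().lower()
    return ctype in CACHEABLE_TYPES or ctype.startswith("image/")
```

Sniffing, by contrast, would mean buffering and inspecting every
response body before making the same call -- exactly the CPU cost an
old router can't afford.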

>
> > The alternative to Authoritative Metadata, is for the TAG to
> > deprecate sender intent altogether; iow, redefine what Web
> > architecture *is*.
> 
> If the TAG can't redefine Web architecture (in this instance I would
> say "properly define"), what is it for?
> 

Exactly what the charter says it's for.  Documenting and building
consensus isn't defining.  Web architecture evolved organically before
there ever was a TAG.  Charters can be changed; in the meantime, I
don't see where redefining the architecture is covered.

>
> When you note that a system of rules is broken (and I think that the 
> fact that we need a sniffing document demonstrates that very clearly) 
> you can do one of two things:
> 

I disagree with your premise that sniffing is needed.  I've heard the
arguments, but they fail to convince me -- particularly when sniffing
goes against both the architecture and the protocol.  Following the
protocol is the only way for publishers to discover their mistakes;
silent error correction hides misconfiguration at the server, to the
detriment of the Web, by allowing misconfigured servers to proliferate
instead of being phased out over time.

>
> • Ask that people "behave better" and go against their best interest 
> (this works, why bother fixing it?).
>

Asking that HTTP developers follow the protocol is in the best interests
of interoperability over the long term.  While I understand there exists
a *perception* of going against their own interests, I disagree that not
following the protocol favors anyone's interests.  I'm a small fish in a
big sea -- I certainly can't get away with going against the protocol;
all I can do is follow it and hope nobody the size of Google manages to
pull that rug out from under me, to their own competitive advantage and
at my expense.

The insignificant cost to Google of following the protocol pales in
comparison to screwing over an entire industry that's evolved around
the architecture we have.

-Eric
Received on Monday, 25 February 2013 18:13:49 GMT
