Re: Use cases

On Sat, Jan 1, 2011 at 5:14 PM, Julian Reschke <julian.reschke@gmx.de> wrote:
> On 01.01.2011 16:57, Benjamin Hawkes-Lewis wrote:
>>
>> On Sat, Jan 1, 2011 at 2:40 PM, Julian Reschke <julian.reschke@gmx.de>
>>  wrote:
>>>>
>>>> Sprinkling some namespaces around does not magically produce a document
>>>> that software can turn into a human-friendly hypermedia interface.
>>
>>> It may or may not.
>>
>> Please detail when it would work, and why.
>
> It would work when producers and consumers agree on how to handle the
> namespace. We've seen this happening for things like SVG and MathML, so
> there's evidence that it can happen.

That's not magic.

The hypermedia interface is enabled by vendors agreeing on a common
interpretation of the semantics of a vocabulary and building interfaces that
reflect that interpretation, not by any particular mechanism for enforcing the
uniqueness of vocabulary terms.

> It can also happen in controlled environments, where you may be able to rely
> on certain browser extensions to be there.

At that point, by definition, the interface is no longer uniform and instead
requires specialized client knowledge, breaking REST.

Also, in controlled environments you can just use another media type, such as
application/xhtml+xml, that includes all the text/html vocabularies alongside
arbitrary XML vocabularies, so this *cannot* be a use case for adding such
functionality to text/html.
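
For instance (a sketch; the "ex" vocabulary and its namespace URI are
invented for illustration), a controlled environment can already serve
namespaced vocabularies simply by switching media types:

    HTTP/1.1 200 OK
    Content-Type: application/xhtml+xml

    <html xmlns="http://www.w3.org/1999/xhtml">
      <body>
        <ex:widget xmlns:ex="http://example.com/ns"/>
      </body>
    </html>

No change to text/html is needed for that.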

>> That is, how might it happen without the web's client software being updated
>> to build interfaces on top of the semantics expressed by those namespaced
>> vocabularies?
>
> For instance, by sending Javascript code along with the page that consumes
> the XML.

Consumers request text/html and expect the recognized semantics on which they
can construct the uniform interface.

When the server includes elements from unrecognized vocabularies in the
response, the initial state of the document is nonsensical. This is the
opposite of progressive enhancement, where the initial state of the document
makes sense and depends only on widely implemented features.
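
To make the contrast concrete (a hypothetical sketch; the vocabulary and
namespace are invented):

    <!-- Nonsensical initial state for a text/html consumer: -->
    <ex:rating xmlns:ex="http://example.com/ns" value="4" max="5"/>

    <!-- Progressive enhancement: initial state makes sense on its own: -->
    <p class="rating">Rated 4 out of 5.</p>
    <script>
      // Optionally upgrade the paragraph into a richer widget here.
    </script>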

If the consumer applies the untrusted JS, then at some later state the
document might be made sensical by converting the nonsense into recognized
semantics. This is a big "if", because of network unreliability and varying
implementations of the language and DOM APIs. Worse, this is
"forcing users to put themselves at risk by executing untrusted code just to
gain access to basic content and functionality", as I mentioned before.

At web scale, not all consumers will apply untrusted JS or even implement JS.
For these consumers, the document will remain nonsensical. In this way,
unrecognized vocabularies break the uniform interface.

Or as Fielding puts it:

"Distributed hypermedia provides a uniform means of accessing services
through the embedding of action controls within the presentation of
information retrieved from remote sites. An architecture for the Web
must therefore be designed with the context of communicating large-grain
data objects across high-latency networks and multiple trust boundaries."

http://www.ics.uci.edu/~fielding/pubs/dissertation/introduction.htm

Breaking RESTful expectations and endangering end-users in this way is
the exact opposite of what the W3C should be encouraging.

> But sometimes, annotating the HTML clearly is not the best approach, in which
> case alternate embeddable vocabularies may be the better choice.

Prove it.

Please give a real example of a resource that you imagine *cannot* be
represented in terms of the uniform interface provided by text marked up
with generic HTML/MathML/SVG semantics.

Please further prove that this resource is nonetheless best
represented using the text/html media type rather than any other media
type that currently exists or could be created.

Please further prove that it is better to break the uniform interface
rather than extend the uniform interface (by adding to the common text/html
vocabulary) in order to represent that resource.

> Your previous email sounded like you're saying that sticking arbitrary XML
> into data-* attributes or script tags is somehow better than using the
> XHTML/namespaces extension point. I just wanted to state that *if* you want
> to stick arbitrary XML into the document, "hiding" it in existing elements or
> attributes just for the sake of document conformance is a bad idea.

On the contrary, roundtripping nonsense with the representation is absolutely
fine so long as the combination of text and the most applicable text/html
semantics is used to convey the state of the resource to consumers of the
uniform interface.
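
For example (a sketch; the payload and namespace are invented), the XML can
ride along opaquely in a script data block while recognized semantics convey
the state:

    <script type="application/xml" id="source-data">
      <inventory xmlns="http://example.com/ns">
        <item sku="42" count="1"/>
      </inventory>
    </script>
    <p>1 item in stock: SKU <code>42</code>.</p>

A consumer that only understands text/html still gets the state of the
resource; the embedded XML is merely roundtripped.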

--
Benjamin Hawkes-Lewis

Received on Saturday, 1 January 2011 23:33:05 UTC