Re: Use cases

On 02.01.2011 00:32, Benjamin Hawkes-Lewis wrote:
> ...
>>>>> Sprinkling some namespaces around does not magically produce a document
>>>>> that software can turn into a human-friendly hypermedia interface.
>>>
>>>> It may or may not.
>>>
>>> Please detail when it would work and why?
>>
>> It would work when producers and consumers agree on how to handle the
>> namespace. We've seen this happening for things like SVG and MathML, so
>> there's evidence that it can happen.
>
> That's not magic.

I didn't claim it is.

> The hypermedia interface is enabled by vendors agreeing on the common
> interpretation of the semantics of a vocabulary and building interfaces that
> reflect that interpretation, not by any particular mechanism of enforcing the
> uniqueness of vocabulary terms.

So?

>> It can also happen in controlled environments, where you may be able to rely
>> on certain browser extensions to be there.
>
> At that point, by definition, the interface is no longer uniform and instead
> requires specialized client knowledge, breaking REST.

Um, no. "Uniform" doesn't necessarily mean that everybody needs to 
understand it right now. It can depend on the intended audience and on 
a point in time. What's not widely understood today may be tomorrow. If 
this weren't the case, we couldn't evolve the language and add 
vocabularies the way we just did (with SVG and MathML).

> Also, in controlled environments you can just use other media types including
> all the text/html vocabularies if you want arbitrary XML vocabularies, so this
> *cannot* be a use case for adding such functionality to text/html.

Just because there's more than one way to do it doesn't mean that this 
particular way "can't" be used.

>>> That is, how might it happen without the web's client software being updated
>>> to build interfaces on top of the semantics expressed by those namespaced
>>> vocabularies?
>>
>> For instance, by sending Javascript code along with the page that consumes
>> the XML.
>
> Consumers request text/html and expect the recognized semantics on which they
> can construct the uniform interface.
>
> When the server includes elements from unrecognized vocabularies in the
> response, the initial state of the document is nonsensical. This is the
> opposite of progressive enhancement, where the initial state of the document
> makes sense and only depends on widely implemented features.

I like progressive enhancement. It would be nice if it were always 
possible to use. It works best when you start with data that's close 
enough to what HTML already allows.
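
A minimal sketch of what I mean (the class name and the script are made 
up): the base markup is plain HTML that is meaningful on its own, and 
the script, if it runs at all, only enhances it:

   <table class="chem-data">
     <tr><th>Formula</th><td>H2O</td></tr>
   </table>
   <script>
     // Optional enhancement: when this runs we can build a richer
     // view; when it doesn't, the table above still makes sense.
     var table = document.querySelector("table.chem-data");
     if (table) {
       table.createCaption().textContent = "Water (H2O)";
     }
   </script>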

> If the consumer does apply untrusted JS, then at some later state the
> document might be made sensical by converting the nonsense into recognized
> semantics. This is a big "if", because of network unreliability and varying
> implementations of the language and DOM APIs. But worse is that this is
> "forcing users to put themselves at risk by executing untrusted code just to
> gain access to basic content and functionality", as I mentioned before.
>
> At web scale, not all consumers will apply untrusted JS or even implement JS.
> For these consumers, the document will remain nonsensical. In this way,
> unrecognized vocabularies break the uniform interface.

Yes, that's a drawback.

> Or as Fielding puts it:
>
> "Distributed hypermedia provides a uniform means of accessing services
> through the embedding of action controls within the presentation of
> information retrieved from remote sites. An architecture for the Web
> must therefore be designed with the context of communicating large-grain
> data objects across high-latency networks and multiple trust boundaries."
>
> http://www.ics.uci.edu/~fielding/pubs/dissertation/introduction.htm
>
> Breaking RESTful expectations and endangering end-users in this way is
> the exact opposite of what W3C should be encouraging.

Benjamin, with all due respect, could you please stop lecturing? What 
does the second sentence have to do with what we're discussing?

The Web architecture includes multiple extension points. New media 
types are one. XML namespaces are another. Compound formats are 
another. Sending JS along with the page is yet another.

All of these have their pros and cons.
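
To illustrate the last one (the vocabulary and all names here are made 
up, and this assumes a parser that actually preserves the namespace, as 
the XML serialization does today): a script shipped with the page could 
translate the unknown elements into plain HTML:

   // Replace elements from a hypothetical "chem" vocabulary with
   // HTML that any browser can display.
   var CHEM_NS = "http://example.org/chem";
   var nodes = document.getElementsByTagNameNS(CHEM_NS, "formula");
   // The node list is live, so iterate backwards while replacing.
   for (var i = nodes.length - 1; i >= 0; i--) {
     var el = nodes[i];
     var span = document.createElement("span");
     span.className = "chem-formula";
     span.textContent = el.textContent;
     el.parentNode.replaceChild(span, el);
   }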

>> But sometimes, annotating the HTML clearly is not the best approach, in which
>> case alternate embeddable vocabularies may be the better choice.
>
> Prove it.

We just added MathML and SVG, right?

> Please give a real example of a resource that you imagine *cannot* be
> represented in terms of the uniform interface provided by text marked up
> with generic HTML/MathML/SVG semantics.

Oh, so you're saying that after adding these, no new use cases will 
ever surface?

There are many more vocabularies that might qualify; 3D graphics is 
one, and music notation and chemistry markup might be others.

> Please further prove that this resource is nonetheless best
> represented using the text/html media type rather than any other media
> type that currently exists or could be created.

I didn't claim that, so I'm not going to prove it.

What I'm saying is that there are cases where you want to *embed* this 
data in an HTML document.
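
For SVG that now works directly in text/html; for instance:

   <!DOCTYPE html>
   <title>Inline SVG</title>
   <p>The circle below is part of the document, not an external
      image:</p>
   <svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
     <circle cx="50" cy="50" r="40"/>
   </svg>

The open question is how authors are supposed to do the same for 
vocabularies that didn't get this kind of special-casing in the parser.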

> Please further prove that it is better to break the uniform interface
> rather than extend the uniform interface (by adding to the common text/html
> vocabulary) in order to represent that resource.

The only difference here is that you want central control. That's a 
process question.

> ...

Best regards, Julian

Received on Sunday, 2 January 2011 10:57:34 UTC