
Re: Hypermedia & web architecture

From: Liam R E Quin <liam@w3.org>
Date: Fri, 27 Jul 2012 20:03:07 -0400
To: "Rushforth, Peter" <Peter.Rushforth@NRCan-RNCan.gc.ca>
Cc: "public-xmlhypermedia@w3.org" <public-xmlhypermedia@w3.org>
Message-ID: <1343433787.14238.27.camel@localhost.localdomain>
On Fri, 2012-07-27 at 23:25 +0000, Rushforth, Peter wrote:
> >XML is almost always converted to another format before display by a Web
> >browser, though, and Web browsers today are not going to interpret any
> >attribute at all on a non-HTML XML element as a link, regardless of how
> >you spell it, 
> Well, here's my current version of reality. JavaScript is the
> prevalent standard way to extend the functionality of browsers
> nowadays, and if RESTfully applied, this is called code-on-demand.  It
> allows clients (i.e. browsers) which do not understand a particular
> media type to be extended with code that does.

Erm... be careful here. JavaScript is only available for specific
predefined media types - generally text/html and (via the Flash plugin)
application/x-shockwave-flash, and their respective derivatives. It is
not available to media types a browser does not recognise.

It _is_ possible to use JavaScript to build new data formats that can be
used by JavaScript from HTML pages, of course.
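Peter's code-on-demand point can be sketched as a small handler registry
that page-supplied script fills in. Everything below (the registry
functions and the vendor media type) is illustrative, not a real browser
interface:

```javascript
// Sketch of "code on demand": the page ships handler code for a media
// type the browser itself knows nothing about. register/handle and the
// vendor media type are invented for illustration.
const handlers = {};

function register(mediaType, handlerFn) {
  handlers[mediaType] = handlerFn;
}

function handle(mediaType, responseBody) {
  const handler = handlers[mediaType];
  if (!handler) {
    throw new Error('no handler registered for ' + mediaType);
  }
  return handler(responseBody);
}

// Script delivered with the page teaches the client a new format:
register('application/vnd.example+json', function (body) {
  return JSON.parse(body).value;
});

console.log(handle('application/vnd.example+json', '{"value": 42}')); // 42
```

The browser still only runs the script because it arrived inside an HTML
page; the extension happens above the media-type layer, not within it.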

>   So, the understanding of a particular non-HTML media type (most
> often JSON nowadays) does not need to be pre-programmed into
> browsers.

JSON is actually (very nearly) a subset of JavaScript: a JSON text is
valid JavaScript literal syntax, the only exceptions being the unescaped
U+2028 and U+2029 line separators, which JSON permits inside strings but
JavaScript does not.
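A quick demonstration (the sample object is invented; eval appears only
to make the subset point):

```javascript
// The same text is both a JSON document and, wrapped in parentheses so
// the braces parse as an object literal rather than a block, a
// JavaScript expression.
const text = '{"rel": "self", "href": "/orders/1"}';

const viaParse = JSON.parse(text);       // the safe way
const viaEval  = eval('(' + text + ')'); // works, but never do this with untrusted input

console.log(viaParse.href === viaEval.href); // true
```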

>  I agree browsers are unlikely to look through any _json_ for links,

The JSON spec gives them no basis for it: it assigns no semantics to any
name, so nothing in a JSON document is identifiably a link.

>  and the need is hardly there since the <script> tag is used to
> side-step the same-origin policy that is applied to XHR, and browsers
> know what to expect from a declarative script tag, or at least they
> think they do.  Having recognizable (i.e. standard) hypermedia
> affordance vowels in XML would at least let JavaScript libraries be
> built to support them.
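The <script>-tag trick Peter describes is what came to be called JSONP.
A minimal sketch, with the network fetch simulated so the shape of the
exchange is visible (the callback name and the response body are
invented):

```javascript
// JSONP: the server wraps its JSON in a call to a callback the client
// names. Because <script src> is not subject to the same-origin policy,
// this smuggles cross-origin data into the page.
let received = null;

function handleOrders(data) {
  received = data;
}

// In a browser the fetch would be:
//   var s = document.createElement('script');
//   s.src = 'https://other.example/orders?callback=handleOrders';
//   document.head.appendChild(s);
// The server would answer with a script body like the string below,
// whose execution we simulate here:
const simulatedResponse = 'handleOrders({"orders": [{"href": "/orders/1"}]})';
eval(simulatedResponse);

console.log(received.orders[0].href); // "/orders/1"
```

Note the client must trust the remote server completely: whatever script
comes back runs with the page's full authority.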
> >and few people are likely to serve up non-HTML XML on the
> >Web and risk the resulting search-invisibility. 
> I think this is a big-picture topic in its own right.  I will make a
> wiki page for it.
> A search client can't RESTfully provide search functionality over
> media types it does not understand.  For example, Google does not
> provide Dublin Core metadata search (AFAIK), perhaps because the DC
> stuff is buried in text/html and is not explicitly called out by a
> media type, perhaps for other reasons (not everyone uses DC, so why
> bother).

It's nothing to do with REST or content types. Dublin Core is *designed*
to be embedded in HTML, and Google does in fact use some of it. See e.g.
the rich snippets testing tool in the Google webmaster console.
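For reference, the usual meta-element embedding convention that such
tools read looks like this (the content values here are invented):

```html
<head>
  <link rel="schema.DC" href="http://purl.org/dc/elements/1.1/">
  <meta name="DC.title" content="Hypermedia and web architecture">
  <meta name="DC.creator" content="Example Author">
  <meta name="DC.date" content="2012-07-27">
</head>
```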

> >And explicitly putting MIME
> >content types on link elements is definitely a huge, huge step backwards
> As an XML web developer and user I want XSLT, or XQuery, [...], to be
> usable on XML-based media types directly in the browser, in a way
> that is simple.
> This might be accomplished if browsers used the @type to distinguish
> css from xslt and thereby a) negotiate for the appropriate type and b)
> delegate processing to the appropriate handler:
> <link rel="stylesheet" type="application/xslt+xml" href="mystyles.xslt">
> I don't think this pattern is against web architecture, is it?

It's actually pushing at the fuzzy edges (and always has been, same as
with text/css). The meaning of, for example,
  <script language="JavaScript1.1" src="argylesocks.jpg"></script>
is that a user agent should only fetch argylesocks.jpg and apply it to
the current document if the user agent understands JavaScript version
1.1 or later. But of course what the remote Web server returns will
carry a content-type label, and if that label says the result is
something other than JavaScript, it has to be ignored in this case.

But this is the compound document use case, and is actually somewhat
different from the single-document href use case.
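Worth noting: for XML documents served as XML, browsers do already
dispatch on a type pseudo-attribute, via the xml-stylesheet processing
instruction. In practice the browsers of the day match text/xsl rather
than the registered application/xslt+xml (the document below is
invented):

```xml
<?xml-stylesheet type="text/xsl" href="mystyles.xslt"?>
<orders xmlns="http://example.org/orders">
  ...
</orders>
```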


> Today's browsers seem to just assume that what comes from a particular
> link is going to be of a certain media type.  In other words, they
> just negotiate mostly for */* and hope for the best.

No. Not at all. They may indeed offer */*, but they handle the result
appropriately. For example, try fetching a PDF file or a JPEG image with
a Web browser, and see how it's not in fact interpreted as HTML.

When the browser gets an HTTP response it looks at the Content-Type
header and uses that to handle the data stream.
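In outline, that dispatch looks something like this (the handler names
are illustrative; real browsers map types to internal renderers,
plugins, or a download prompt):

```javascript
// Sketch of the dispatch a browser performs on a response's
// Content-Type header. Return values stand in for real handlers.
function dispatch(contentTypeHeader) {
  // Strip parameters such as "; charset=utf-8" before matching.
  const mime = contentTypeHeader.split(';')[0].trim().toLowerCase();
  switch (mime) {
    case 'text/html':       return 'HTML renderer';
    case 'image/jpeg':      return 'image viewer';
    case 'application/pdf': return 'PDF viewer';
    default:                return 'download or external handler';
  }
}

console.log(dispatch('text/html; charset=utf-8')); // "HTML renderer"
console.log(dispatch('application/pdf'));          // "PDF viewer"
```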

>   They probably sniff the content, too, which _is_ a big step
> backwards in web architecture.

IE used to do that years ago.

> >I *do* think it's worth thinking about ways to represent and document
> >hypermedia, and declarative link discovery and presentation techniques.
> That's great.  There's hope yet!


The conversations are interesting. I'm pushing back *hard* partly
because my job (as I interpret it) involves trying to keep XML stable as
much as possible, and partly to get a clearly-stated position...


Liam Quin - XML Activity Lead, W3C, http://www.w3.org/People/Quin/
Pictures from old books: http://fromoldbooks.org/
Co-author, "Beginning XML", Wrox, July 2012.
Co-author, "Recovering from Writing XML Books", Squashed Flat Press.
Received on Saturday, 28 July 2012 00:03:49 UTC
