RE: working around the identity crisis

From: Jason Cupp <jcupp@esri.com>
Date: Fri, 19 Nov 2004 23:47:30 -0800
Message-ID: <491DC5F3D279CD4EB4B157DDD62237F4055FBF06@zipwire.esri.com>
To: "'public-esw-thes@w3.org'" <public-esw-thes@w3.org>

With or without a frag id, the server is getting the entire URL line, and
can do whatever it wants. Frag IDs in practice have been used by HTML
clients, but it's not a rule that frag IDs can only be acted on by the
client.
RFC2396 says:
> The semantics of a fragment identifier is a property of the data
> resulting from a retrieval action... Therefore, the format and
> interpretation of fragment identifiers is dependent on the media type
> [RFC2046] of the retrieval result.

It's nailed down for HTML clients, but for RDF user-agents we don't have to
stick to the HTML model.

RFC2396 continues:
> Individual media types may define additional restrictions or structure
> within the fragment for specifying different types of "partial views"
> that can be identified within that media type.

Who is to say that GETing http://foo/#term1 for the RDF media type has to
return an RDF/XML file pre-existing on the server? If I have an RDF service
running, I'm not going to base it operationally on files, HTML style, but on
a store of triples the service can serialize sub-graphs from.

Requesting http://foo/#term1 might return a "partial view" in RDF/XML
consisting of the resource http://foo/#term1 and its outward-bound
properties: a sub-graph.
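As a minimal sketch of that extraction (using plain Python tuples rather than a real RDF library, and a made-up example graph), the "partial view" is just the triples whose subject is the requested URI:

```python
# Toy model: a graph is a set of (subject, predicate, object) triples.
# The "partial view" for a term keeps only the triples with that term
# in the subject position -- its outward-bound properties.

def outward_subgraph(graph, uri):
    """Return the sub-graph of triples whose subject is `uri`."""
    return {(s, p, o) for (s, p, o) in graph if s == uri}

graph = {
    ("http://foo/#term1", "skos:prefLabel", "Term One"),
    ("http://foo/#term1", "skos:broader", "http://foo/#term0"),
    ("http://foo/#term2", "skos:prefLabel", "Term Two"),
}

view = outward_subgraph(graph, "http://foo/#term1")
# `view` holds the two triples about #term1 and nothing else.
```

A real service would serialize `view` as RDF/XML before returning it; the predicates and URIs here are illustrative only.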

Either way, whether getting an actual RDF/XML file or RDF/XML serialized
by a service, the client is going to interpret the result, which in RDF's
case could mean checking to see if http://foo/#term1 actually occurs in the
returned graph, in a subject position at least.
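That client-side check could be sketched as follows (assuming the response has already been parsed into (subject, predicate, object) tuples; a toy illustration, not a real RDF parser):

```python
# After fetching and parsing the result, confirm the requested URI
# actually occurs in a subject position in the returned triples.

def appears_as_subject(triples, uri):
    """True if `uri` is the subject of at least one returned triple."""
    return any(s == uri for (s, p, o) in triples)

result = [
    ("http://foo/#term1", "skos:prefLabel", "Term One"),
    ("http://foo/#term1", "skos:broader", "http://foo/#term0"),
]

ok = appears_as_subject(result, "http://foo/#term1")
# `ok` is True: the server's answer does describe the requested resource.
```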

Even if GETing RDF does follow the HTML model, we shouldn't be afraid of
parsing large documents. Suppose I have an RDF application that aggregates
from other RDF sources locatable via HTTP GET. It doesn't make sense to dump
from your RDF catalog all the assertions from a source you wish to get
updates from and then re-assert them again. You compute a digest hash from
the new GET, compare it with the hash of what you fetched before, and only
if the hashes differ do you go on with the expensive operation of consuming
the updated RDF/XML.

- Jason
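That digest check might be sketched like this, using Python's hashlib (the cached hash and the example document bytes are assumptions for illustration):

```python
import hashlib

def digest(document_bytes):
    """Hash the raw bytes of a fetched RDF/XML document."""
    return hashlib.sha256(document_bytes).hexdigest()

def needs_reparse(new_bytes, cached_hash):
    """Only re-consume the source when its content has actually changed."""
    return digest(new_bytes) != cached_hash

old = b"<rdf:RDF>...</rdf:RDF>"
cached = digest(old)            # stored after the previous fetch

needs_reparse(old, cached)      # False: unchanged, skip the expensive parse
needs_reparse(b"<rdf:RDF>updated</rdf:RDF>", cached)  # True: re-consume
```

Hashing the raw bytes is the cheap filter; it can give a false "changed" for a semantically identical graph serialized differently, but it never misses a real change.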
Received on Saturday, 20 November 2004 07:48:03 UTC
