
RE: Proposed issue: site metadata hook

From: <Patrick.Stickler@nokia.com>
Date: Wed, 12 Feb 2003 11:44:16 +0200
Message-ID: <A03E60B17132A84F9B4BB5EEDE57957B5FBB1A@trebe006.europe.nokia.com>
To: <joshuaa@microsoft.com>, <timbl@w3.org>, <JeffreyWinter@crd.com>
Cc: <www-tag@w3.org>

> -----Original Message-----
> From: ext Joshua Allen [mailto:joshuaa@microsoft.com]
> Sent: 11 February, 2003 23:17
> To: Tim Berners-Lee; Jeffrey Winter
> Cc: www-tag@w3.org
> Subject: RE: Proposed issue: site metadata hook
> I like the idea of having a convention for URIs that provide 
> information
> about other URIs.  Too bad we can't use ' (the symbol "prime" used to
> denote "meta" in Calculus).  So you would have
> <http://www.microsoft.com'>, <http://www.microsoft.com''>, etc.
> Anyway, please share opinions on the following two questions:
> A.  Should the "meta" URL for a URL return *all* metadata for the URL?
> What if I want to get just metadata of a certain type?  If I 
> am querying
> for robots data, I don't want to get everything else.

It should be possible to submit a query which selects particular
statements about the resource, akin to the functionality in PROPFIND.

And all descriptions should be constrained to be resource-specific,
including only statements having the resource as subject and,
recursively for all blank node objects, all statements with those
blank nodes as subjects. I.e., you shouldn't get a description of
the whole DC vocabulary when asking for a description of dc:title.
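That constraint can be sketched in a few lines of Python over a toy
triple representation (the data, and the "_:" prefix convention for
blank nodes, are illustrative assumptions, not part of the proposal):

```python
# Sketch (not part of the proposal): the bounded, resource-specific
# description described above, over a toy triple store. Triples are
# (subject, predicate, object) tuples; blank nodes are marked with a
# "_:" prefix -- a representational convention assumed here.

def is_blank(node):
    return isinstance(node, str) and node.startswith("_:")

def describe(resource, triples):
    """All statements with `resource` as subject, plus, recursively,
    all statements whose subject is a blank node reached as an object."""
    result, seen, queue = [], set(), [resource]
    while queue:
        subject = queue.pop()
        if subject in seen:
            continue
        seen.add(subject)
        for s, p, o in triples:
            if s == subject:
                result.append((s, p, o))
                if is_blank(o):
                    queue.append(o)  # follow blank-node objects recursively
    return result

graph = [
    ("http://example.org/doc", "dc:title", "Example"),
    ("http://example.org/doc", "dc:creator", "_:b1"),
    ("_:b1", "foaf:name", "Alice"),                         # reached via _:b1
    ("http://example.org/other", "dc:title", "Unrelated"),  # never included
]

print(describe("http://example.org/doc", graph))
```

Note that the description of http://example.org/doc pulls in the
blank node's statements but never touches the unrelated resource.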

Yet solutions which simply point to documents will likely point
to large, monolithic schemas describing many different resources,
and as such offer no definition of constrained, resource-specific
descriptions. You get whatever is in the document, like it or not.
Open wide and swallow...  ;-)

For MGET, the following RDF input selects only those statements
having the predicates dc:title and dc:creator (if defined):

<?xml version="1.0"?>
<!DOCTYPE rdf:RDF [
  <!ENTITY rdf  "http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <!ENTITY dc   "http://purl.org/dc/elements/1.1/">
  <!ENTITY rdfq "http://sw.nokia.com/RDFQ-1/">
]>
<rdf:RDF
   xmlns:rdf    ="&rdf;"
   xmlns:dc     ="&dc;"
   xmlns:rdfq   ="&rdfq;">
   <rdfq:Select>
      <rdfq:predicate rdf:resource="&dc;title"/>
      <rdfq:predicate rdf:resource="&dc;creator"/>
   </rdfq:Select>
</rdf:RDF>

And if you didn't know what rdfq:Select means, you can (eventually)
just do an MGET on http://sw.nokia.com/RDFQ-1/Select
to get its RDF description ;-)

> B.  Why not let sites return metadata about URLs which they 
> do not own?
> In other words, <http://www.microsoft.com'> gives me the *Microsoft*
> report of metadata for that URL, but what if I want IBM's report?

Then you just need a standardized interface through which to ask
extra-authority sources about resources.

I'm working on an agency model I call URIQA (the URI Query Agent)
which is a collaborative solution to extra-authority statements
(and authoritative statements) about resources.

And just as NNTP servers are configured collaboratively, with
particular servers sharing particular content feeds with one another,
so URIQA servers will share particular knowledge feeds with one
another. They will also query the authoritative servers with MGET,
providing a consolidated syndication of knowledge about a particular
resource from the authoritative server and all collaborating servers.
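Very roughly, the consolidation step amounts to a union of the
per-server descriptions (an illustrative Python sketch; real URIQA
servers would exchange RDF, and provenance tracking is omitted, as is
the made-up ex:rating property):

```python
# Sketch (illustrative only): consolidating descriptions of one resource
# from the authoritative server and collaborating servers. Each server's
# answer is modeled as a set of (s, p, o) triples, so duplicate
# statements collapse in the union.

def consolidate(descriptions):
    merged = set()
    for triples in descriptions:
        merged |= set(triples)
    return merged

authoritative = {("http://www.microsoft.com", "dc:title", "Microsoft")}
collaborator  = {("http://www.microsoft.com", "dc:title", "Microsoft"),
                 ("http://www.microsoft.com", "ex:rating", "4")}

print(sorted(consolidate([authoritative, collaborator])))
```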

So, to ask Microsoft about its own site, you'd ask

   MGET http://www.microsoft.com

which would return a concise description of the site in RDF
as Microsoft chooses to describe it.

And if IBM had an implementation of URIQA, you could then ask

   GET http://sw.ibm.com/URIQA?http://www.microsoft.com

which would return a concise description of the site in RDF
as IBM chooses to describe it.
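The two access patterns above can be sketched as follows (Python, with
hypothetical helper names; MGET is a proposed HTTP method, so the raw
request is framed like any other HTTP/1.1 request, and the URIQA form
simply appends the target URI as the query string, as in the examples):

```python
# Sketch (hypothetical helpers, not part of the proposal): building the
# two kinds of request shown above.

def mget_request(uri):
    """Raw HTTP/1.1 request text for an authoritative MGET on `uri`."""
    parts = uri.split("/")           # ["http:", "", host, path segments...]
    host = parts[2]
    path = "/" + "/".join(parts[3:])
    return f"MGET {path} HTTP/1.1\r\nHost: {host}\r\n\r\n"

def uriqa_query_url(service, uri):
    """URL asking a third-party URIQA service about someone else's URI."""
    return f"{service}?{uri}"

print(mget_request("http://www.microsoft.com"))
print(uriqa_query_url("http://sw.ibm.com/URIQA", "http://www.microsoft.com"))
```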

The entity returned by URIQA is constrained in precisely the
same fashion as a description returned by MGET, so a SW agent
interacting with a URIQA server can expect to get concise
and bounded descriptions. And like MGET, one can include a
query to ask specific questions about the resource, rather
than getting the entire body of knowledge known by the 
server (which may be a lot).

And consider Google syndicating knowledge obtained by an MGET
Semantic Web crawler, and being able to ask

   GET http://sw.google.com/URIQA?...

about any resource. And consider if the knowledge base employed
by that URIQA server included RDFS support and inference...



Patrick Stickler, Nokia/Finland, (+358 40) 801 9690, patrick.stickler@nokia.com
Received on Wednesday, 12 February 2003 04:44:31 UTC
