
RE: Proposed issue: site metadata hook

From: <Patrick.Stickler@nokia.com>
Date: Tue, 11 Feb 2003 19:33:39 +0200
Message-ID: <A03E60B17132A84F9B4BB5EEDE57957B5FBB17@trebe006.europe.nokia.com>
To: <seairth@seairth.com>, <www-tag@w3.org>



> -----Original Message-----
> From: ext Seairth Jacobs [mailto:seairth@seairth.com]
> Sent: 11 February, 2003 18:37
> To: www-tag
> Subject: Re: Proposed issue: site metadata hook
> 
> 
> 
> From: <Patrick.Stickler@nokia.com>
> > >
> > > From: ext Seairth Jacobs [mailto:seairth@seairth.com]
> > >
> > > I agree.  Repeating a post I made [1], you could use OPTIONS
> > > to accomplish
> > > such a thing.  No need for additional verbs.
> >
> > Well, what about MPUT and MDELETE (and likely MUPDATE)?
> >
> > So even if you could make it work the same as MGET, you'd
> > only have part of the puzzle...
> 
> Not if you return a URI to a conventional resource that you can then
> use GET, DELETE, etc. on just like any other resource. Then there is
> no need for additional verbs that perform the same function.

Quite so. I stand (or rather sit) corrected.

> At the same time, doing an OPTIONS on the returned URI could return
> yet another URI, etc. This takes care of the meta-metadata bit.

MGET has no meta-metadata problem, as my response to TimBL
points out.

> > > However, these approaches require at least two hits on the
> > > server. While this may be fine for favicon or P3P (from the
> > > client perspective), I wonder if you will be able to convince
> > > crawlers, bots, etc. to give up the robots.txt file. From their
> > > perspective, any of these solutions would double the amount of
> > > time it would take to do their job.
> >
> > MGET wouldn't. One single call to the server based on the site URI
> > (<scheme>://<authority> portion).
> 
> I don't see how you can perform only one access with this method
> unless all possible metadata is returned within the single response
> or you are using some form of conneg (content negotiation).

The response from MGET would be a precise and bounded set of
statements: every statement whose subject is the resource identified
in the request, plus, recursively for each blank node appearing as
an object, every statement with that bnode as subject.

The graph returned from MGET would thus terminate in either
URIrefs, literals, or blank nodes with no statements (unusual
but possible).
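Since the MGET spec is still unpublished, here is only a hedged sketch of the bounded-graph idea described above, using plain tuples for triples and the N-Triples "_:" convention for blank nodes (all names and data are illustrative):

```python
# Sketch: the bounded set of statements an MGET response would carry.
# Triples are (subject, predicate, object) tuples; blank nodes are
# strings with a leading "_:". Illustrative only, not a published API.

def is_bnode(node):
    return isinstance(node, str) and node.startswith("_:")

def bounded_description(graph, resource):
    """Collect every statement about `resource`, recursing through
    blank-node objects, so the result terminates only in URIrefs,
    literals, or bnodes with no further statements."""
    result = set()
    frontier = [resource]
    seen = set()
    while frontier:
        subject = frontier.pop()
        if subject in seen:
            continue
        seen.add(subject)
        for s, p, o in graph:
            if s == subject:
                result.add((s, p, o))
                if is_bnode(o):
                    frontier.append(o)
    return result

# Example: a site description with one nested blank node.
graph = {
    ("http://example.org/", "dc:title", '"Example Site"'),
    ("http://example.org/", "ex:policy", "_:b1"),
    ("_:b1", "ex:robots", '"disallow: /private"'),
    ("http://example.org/other", "dc:title", '"Unrelated"'),
}

desc = bounded_description(graph, "http://example.org/")
```

Note how the statement about the unrelated resource never enters the result: the graph is bounded exactly as described, ending at URIrefs and literals.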

> In the first case, this means you would potentially have a large
> entity when you only wanted a fraction of it (e.g. get the
> equivalent of the favicon out of a collection of robot rules, p3p
> documents, etc.)

I am presently defining MGET to take input in the form of a
simple query which allows for selection (filtering) of
particular properties (similar to PROPFIND). So one can
obtain selective knowledge about a resource.
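As a rough illustration of that selective behavior (the actual query syntax is an assumption, since the spec is not yet published), the server-side filtering step could look like:

```python
# Sketch: selective MGET, restricting the returned statements to a
# requested set of properties, analogous to WebDAV PROPFIND's <prop>
# selection. The query shape and names are assumptions.

def select_properties(statements, wanted):
    """Keep only statements whose predicate is in `wanted`."""
    return {(s, p, o) for s, p, o in statements if p in wanted}

statements = {
    ("http://example.org/", "dc:title", '"Example Site"'),
    ("http://example.org/", "dc:creator", '"Webmaster"'),
    ("http://example.org/", "ex:robots", '"disallow: /private"'),
}

# A crawler interested only in robot rules asks for that one property:
robot_rules = select_properties(statements, {"ex:robots"})
```

A crawler would thus get its rules in one round trip without downloading the rest of the site description.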

> Otherwise, the returned entity would likely contain a pointer to the
> resource of interest (favicon, robots.txt, etc.). As a result, the
> client would have to process the returned entity to find the
> appropriate pointer, then turn around and make a second request to
> the server for that URI.

It would, of course, be up to the site owner to decide whether
to syndicate all knowledge from all describing documents into
its knowledge base and return all knowledge about a resource;
or optionally to simply note which documents described a resource
and provide rdfs:isDefinedBy statements.

The former would be more efficient, since it requires only one
request and provides knowledge specific to the resource in
question. The latter would require N+1 requests, where N is the
number of rdfs:isDefinedBy statements, and would likely leave the
agent with lots of knowledge not immediately relevant to the
resource in question, which it would have to filter/process itself.
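The N+1 arithmetic can be made concrete with a small sketch (no real HTTP is performed; the counting is illustrative):

```python
# Sketch: cost of the two deployment choices above. A syndicated
# knowledge base answers in one request; rdfs:isDefinedBy pointers
# cost N+1 requests (the MGET itself plus one GET per document).

def requests_needed(response_statements):
    """One request for the MGET itself, plus one GET per
    rdfs:isDefinedBy pointer the agent must chase."""
    pointers = [o for s, p, o in response_statements
                if p == "rdfs:isDefinedBy"]
    return 1 + len(pointers)

# Syndicated knowledge base: the answer arrives directly.
syndicated = [("http://example.org/", "dc:title", '"Example Site"')]

# Pointer-only response: two describing documents must be fetched.
pointers_only = [
    ("http://example.org/", "rdfs:isDefinedBy",
     "http://example.org/meta.rdf"),
    ("http://example.org/", "rdfs:isDefinedBy",
     "http://example.org/robots.rdf"),
]
```

Here the syndicated response costs one request while the pointer-only response costs three, matching the N+1 count for N=2.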

> Or maybe I'm just not understanding how MGET would work.

Not surprising, since I've yet to finish and publish the 
specifics ;-)

I'm trying to have a short writeup and a demo done for the 
tech plenary.

> > > Would it be possible to use OPTIONS along with a new series of
> > > content-types?  For instance, suppose there was a
> > > "metadata/favico" and
> > > "metadata/robots".
> >
> > Preferably not.
> 
> I didn't say it would be pretty.  :)  But such an approach does have
> some advantages...

If you say so.... ;-)

Patrick
Received on Tuesday, 11 February 2003 12:33:42 GMT
