
Re: N3 rule for proposed Resource-Description header

From: Jonathan Rees <jar@creativecommons.org>
Date: Mon, 7 Apr 2008 07:37:59 -0400
Message-Id: <2EDE9058-727E-45C3-8094-9E442E30C41A@creativecommons.org>
Cc: "Booth, David (HP Software - Boston)" <dbooth@hp.com>, "public-awwsw@w3.org" <public-awwsw@w3.org>, "Williams, Stuart (HP Labs, Bristol)" <skw@hp.com>
To: noah_mendelsohn@us.ibm.com

OK, I will add crawling-and-search to my use case list. I look forward
to it actually happening for the semantic web (or maybe it already
exists? I don't know). It will be very interesting to see what the
interface looks like and how the various protocol elements (200, 303,
Link:, etc.) get used. Right now it's kind of difficult for me to
imagine.
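
Just to make my own mental model concrete, here is a rough sketch (my
own illustration, with hypothetical function names; not any agreed or
proposed behavior) of how a client might combine those protocol
elements when dereferencing a URI:

```python
# Hypothetical sketch: classifying a dereference result from the status
# code and headers. Names and return values here are illustrative only.

def interpret_response(status, headers):
    """Return (kind, description_uri_or_None) for a dereference result."""
    if status == 200:
        # 200: the URI identifies an information resource; a description
        # may still be linked via a Link: rel="describedby" header.
        return ("information-resource", parse_describedby(headers))
    if status == 303:
        # 303 See Other: the URI may denote something that is not an
        # information resource; Location points to a description of it.
        return ("see-other", headers.get("Location"))
    return ("unknown", None)

def parse_describedby(headers):
    """Very rough scan of a Link: header for rel="describedby"."""
    link = headers.get("Link", "")
    for part in link.split(","):
        if 'rel="describedby"' in part:
            start = part.find("<") + 1
            end = part.find(">")
            if start > 0 and end > start:
                return part[start:end]
    return None
```

So, e.g., interpret_response(303, {"Location": "http://example.org/doc"})
would come back as ("see-other", "http://example.org/doc").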

By "web closure" I'm referring to the notion discussed in the ESW  
wiki (http://esw.w3.org/topic/WebClosure and related pages).

I really do find the ability to dereference a URI to learn more about
what it denotes quite useful, but it feels like a debugging tool for
developers, not yet what I'd call an application or use case. Of
course, maybe I'm just not using the best RDF browsers available. I'm
not trying to say anything negative, just lamenting my own limitations
and/or wishing that the use cases (especially those involving
computational agents, as opposed to human-guided browsing and search)
were better developed.


On Apr 7, 2008, at 12:40 AM, noah_mendelsohn@us.ibm.com wrote:
> Jonathan Rees writes:
>> I actually have a very hard time coming up with use cases for follow-
>> your-nose in general - the only ones I know of are web closure,
>> semantic browsing (e.g. Tabulator), and the self-describing web, and
>> none of these seems very compelling to me - they don't have
>> constituencies saying "I'm having a hard time getting my work done,
>> and if only follow-your-nose worked better, I'd be much happier".
> I would argue that for the traditional (non-semantic Web), Google and
> similar crawler-based search engines are a "killer app" for the
> self-describing Web.  These engines depend completely on the  
> ability to
> dereference a URI without any prior arrangement with the resource  
> owner.
> It's absolutely crucial that the spider either understand what it's
> been given, or else be able to reliably determine that it does not
> understand it (e.g. Content-type is a media type that the crawler
> doesn't know).
> While it's currently the case that Google doesn't, as far as I  
> know, take
> much advantage of RDF, GRDDL, or similar technologies, it still seems
> fairly evident to me that search should be included in your list of  
> use
> cases above.  Did you mean that to be covered by "Web Closure"?  That
> comes close, in a way, but I don't find it really evocative of the
> importance of Web-scale search.
> Noah
> --------------------------------------
> Noah Mendelsohn
> IBM Corporation
> One Rogers Street
> Cambridge, MA 02142
> 1-617-693-4036
> --------------------------------------
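
Noah's point about reliably detecting "I don't understand this" could
be sketched roughly like this (my own illustration; the type table is
hypothetical, certainly not what any real search engine uses):

```python
# Illustrative crawler dispatch: process only media types we know,
# and signal "not understood" for everything else rather than guessing.

KNOWN_TYPES = {
    "text/html": "parse_html",
    "application/rdf+xml": "parse_rdf",
}

def handle_document(content_type, body):
    """Dispatch on Content-Type; return None when the media type is
    unknown, so the crawler knows reliably that it did not understand
    the body (instead of misinterpreting it)."""
    # Strip parameters such as "; charset=utf-8" before lookup.
    media_type = content_type.split(";")[0].strip().lower()
    handler = KNOWN_TYPES.get(media_type)
    if handler is None:
        return None  # honestly "not understood"
    return (handler, body)
```

E.g. handle_document("text/html; charset=utf-8", "...") dispatches to
the HTML handler, while an unregistered type yields None.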
Received on Monday, 7 April 2008 11:38:53 UTC
