- From: Chris Bizer <chris@bizer.de>
- Date: Wed, 22 Sep 2004 09:51:28 +0200
- To: <www-rdf-interest@w3.org>, "Matt Halstead" <matt.halstead@auckland.ac.nz>
Hi Matt,

Simple question, many possible answers. My current take is to have the
crawler capture provenance information using Named Graphs, and then to use
different trust policies, translated into TriQL.P queries, to determine
which information is trustworthy. See:

Using Context- and Content-Based Trust Policies on the Semantic Web:
http://www.wiwiss.fu-berlin.de/suhl/bizer/SWTSGuide/p747-bizer.pdf

Named Graphs Homepage:
http://www.w3.org/2004/03/trix/

TriQL.P:
http://www.wiwiss.fu-berlin.de/suhl/bizer/TriQLP/index.htm

Many other possible answers can be found in the Semantic Web Trust and
Security Resource Guide:
http://www.wiwiss.fu-berlin.de/suhl/bizer/SWTSGuide/index.htm

Chris

> I realize there is 'trust' in the semantic web cake [1], but I am
> intrigued to understand how this is envisaged to work at even a simple
> RDF level. If we have something as simple and useful as a semantic web
> crawler, e.g. Swoogle [2], then how do we ignore the work of spammers
> who inappropriately attribute properties and values to, or reference
> in any way, a particular resource URI?
>
> [1] http://www.w3.org/2004/Talks/0412-RDF-functions/slide4-0.html
> [2] http://pear.cs.umbc.edu/swoogle/index.php
>
> cheers
> Matt
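For concreteness, a minimal sketch of the approach Chris describes, in
Python with rdflib. A SPARQL GRAPH query stands in for TriQL.P here
(TriQL.P predates SPARQL; its actual syntax is defined in the TriQL.P
spec linked above), and all graph names, URIs, and the ex:assertedBy
property are hypothetical. The real provenance vocabulary from the Named
Graphs work (e.g. swp:assertedBy) is described on the Named Graphs
homepage.

    from rdflib import ConjunctiveGraph, Namespace, URIRef

    EX = Namespace("http://example.org/")  # hypothetical namespace

    store = ConjunctiveGraph()

    # 1. The crawler puts each fetched page's triples into its own
    #    named graph.
    page = store.get_context(URIRef("http://example.org/graphs/page-1"))
    page.add((EX.someResource, EX.someProperty, EX.someValue))

    # 2. It records provenance about that graph in a metadata graph.
    #    (ex:assertedBy is a stand-in for a real provenance property
    #    such as swp:assertedBy.)
    meta = store.get_context(URIRef("http://example.org/graphs/provenance"))
    meta.add((URIRef("http://example.org/graphs/page-1"),
              EX.assertedBy, EX.trustedCrawler))

    # 3. A trust policy such as "only accept statements from graphs
    #    asserted by a source I trust" then becomes a query over the
    #    graphs plus their provenance.
    trusted = store.query("""
        PREFIX ex: <http://example.org/>
        SELECT ?s ?p ?o
        WHERE {
            GRAPH ?g    { ?s ?p ?o }
            GRAPH ?meta { ?g ex:assertedBy ex:trustedCrawler }
        }
    """)
    for s, p, o in trusted:
        print(s, p, o)

Spammed graphs with no acceptable provenance simply never match, and
swapping in a stricter or looser policy changes only the query, not the
data. That separation of stored assertions from trust decisions is what
translating policies into TriQL.P queries relies on.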
Received on Wednesday, 22 September 2004 07:51:18 UTC