
Re: [freenet-tech] RE: Freenet, distributed search and simple RDF queries

From: Johan Hjelm <johan.hjelm@era-t.ericsson.se>
Date: Mon, 05 Feb 2001 09:27:39 +0900
Message-ID: <3A7DF37B.F3C813DC@era-t.ericsson.se>
To: Dan Brickley <danbri@w3.org>
CC: tech <tech@freenetproject.org>, Dan Brickley <Daniel.Brickley@bristol.ac.uk>, www-rdf-interest <www-rdf-interest@w3.org>
Just to note a few things: an assertion is a statement being published, which means
that publishing rules apply to it (including whichever laws apply in the country of
publication, etc.). This does not necessarily mean American laws, although that is
often assumed. In fact, the countries with the laxest libel laws could make this a
good business, in the same way that some Caribbean countries (Aruba, I think) have
made a good business out of being tax havens for casino sites and the like.

There will be some interesting court cases concerning someone else's use of metadata
(picture, for instance, a large organization marking up thousands of pages - which may
be external to it - and suing small companies trying to build a service from that).
There ought to be some things impacting this from the W3C workshop on digital rights
management. Actually, it might be a good idea for the W3C to be proactive here and
initiate some kind of activity (e.g. a workshop on electronic publishing).

Depending on the construction of the service, the service provider ought to be able to
claim innocence of the content (like ISPs do for transporting sexually explicit
content). Of course, the actual construction then becomes a technical matter....


Dan Brickley wrote:

> On Sun, 4 Feb 2001, Bill de hOra wrote:
> >
> > Dan,
> >
> > : Consider a second example. Instead of two Freenet keys that identify
> > : documents, one of which is a critique of another, we have a document in
> > : Freenet that is a critique of another resource with an http: URI. Regardless,
> > : Freenet can still be used to look up metadata pointing one towards
> > : critiques. This seems to fit with some of the free speech agenda often
> > : associated with Freenet, and suggests a particularly robust form of Web
> > : annotation. Nothing in the scenario below particularly spins on party
> > : A's document living in Freenet; it might just be a traditional HTTP
> > : website.
> >
> > Well, any time I look at Freenet, I see a place to find the RDF description
> > services you wrote about once upon a time. Putting metadata directly into Freenet
> > would be silly, but putting pointers to the places that do hold it would be useful.
> > That might help in the short term with dereferencing metadata (anything Freenet
> > points to is, by convention, not just a namespace URI; it holds downloadable
> > information).
> If Freenet matures according to plan, I expect folk will be writing rather
> a lot of metadata into it. Think of it as machine-readable free speech.
> There are plenty of precedents for critical materials having been chased
> off the Web through legal threats etc. If Freenet provides an environment
> in which anyone can say anything about anything, we shouldn't be surprised
> if such claims are made in machine- as well as human-readable form.
> > : Hmm... I've almost convinced myself this'll just work. But it all sounds
> > : far too easy. Someone please point out the fatal flaw...
> >
> > Here's a strawman flaw: predicating that clients and servers are a fundamental
> > part of web architecture. Client/server isn't scaling so well with respect to
> > what people want from a global information network. It's also a politically and
> > socially dicey proposition if you believe (like Lawrence Lessig, say) that
> > distribution (and not information) is power.
> I don't think I was claiming that the Web is fundamentally about
> client/server. After all, we have HTTP <-> HTTP for caching etc, not to
> mention Web identifiers for Usenet content.
> I expect metadata distribution issues, and liability w.r.t. being a
> carrier, to keep a lot of lawyers busy for some years to come. If I
> aggregate RDF from multiple sites into one database, which can be queried,
> I fear I'll be held responsible for the contents of that database.
> Actually I've been running an RDF robot since last June, on a very modest
> scale, and it has been crawling and collecting RDF 3-tuples about whatever
> folk care to express. That includes personal data, claims made by one
> person about another, image metadata and plenty else. Should the
> aggregation points for such a bot be considered akin to search engines /
> Usenet servers? What if they filter, process etc. the data rather than just
> let it wash through? (but I'm straying from topic here...)
> >
> > Using freenet as you describe in your use case sounds like a good go at a
> > join/discovery system for metadata. But only until such a point as collective
> > smartness or serendipity (I'll take either, the sooner the better) figures out a
> > better way to evolve web architecture out of hardcoded clients and servers.
> > Maybe freenet and gnutella *are* the first serious crack at this issue, I don't
> > know.
> Perhaps; or the first to threaten to go mainstream on a massive scale
> anyway. There was certainly work on distributed search, query routing etc
> before we learned to call it P2P though.
> >
> > [Aside: how far away are we from someone being sued for libel over their
> > metadata?]
> There've been a bunch of court cases about HTML META tags, mostly about
> use of trademarks. And I've a vague recollection of something to do with
> PICS, though don't have the reference handy.
> Since I have trouble seeing 'metadata' as a separate category from data,
> it's kind of hard to answer this. BTW, search for 'dan brickley' on Amazon
> and see my comments on the book they're flogging with my name on the front,
> for another of my currently favourite RDF/SW annotations-meet-ecommerce use
> cases...
> Anyway the legal/liability side of things is currently a topic I'm
> worrying about, since one of the projects that pays my wages is
> building PICS-like infrastructure for quality labelling of online
> health-related resources. It's an EU-funded effort to ensure that
> citizens/consumers (whatever they're called ;-) can call up contextual
> metadata from trusted 3rd parties when looking at health information
> online.
> While I think we've got the tech side covered, I'm concerned that everyone
> will be so afraid of running such a server that deployment will be
> difficult. Hence investigation of Freenet, NNTP and other distributed
> mechanisms. So I'm quite serious about writing metadata into Freenet (or
> less glamorously, Usenet) as a workaround for the problem of annotation
> services being a legal risk...
> So, if we want to try this, we just need to agree a way of generating
> Freenet keys from a pair of arbitrary URIs? Didn't Sergey have some such
> algorithm in the Stanford RDF API package?
> Dan
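As a minimal sketch of one answer to the key-generation question at the end of the quote: hash the pair of URIs in a fixed order so that anyone who knows the pair can recompute the same key and look up annotations under it. This is an assumed convention for illustration, not Freenet's actual key-derivation algorithm, and the key prefix is hypothetical.

```python
import hashlib

def annotation_key(uri_a: str, uri_b: str) -> str:
    """Derive a deterministic lookup key from two arbitrary URIs.

    Hypothetical convention (NOT Freenet's real key derivation):
    hash the URIs in a fixed order, separated by a NUL byte so that
    pairs like ("ab", "c") and ("a", "bc") cannot collide.
    """
    digest = hashlib.sha1(
        uri_a.encode("utf-8") + b"\x00" + uri_b.encode("utf-8")
    ).hexdigest()
    return "KSK@annotation/" + digest

# Both parties derive the same key from the same pair of URIs:
key = annotation_key("http://example.org/doc", "http://example.org/critique")
```

Because the key depends only on the two URIs, the annotator and the reader never need to coordinate beyond agreeing on the convention itself.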
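The aggregation Dan describes above (collecting RDF 3-tuples from many sources into one queryable database) can be sketched as a toy pattern-matching store; the class name and example triples are hypothetical, and a real deployment would use an RDF toolkit rather than this sketch.

```python
class TripleStore:
    """Toy aggregation point for RDF 3-tuples gathered from multiple sources."""

    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Match a (subject, predicate, object) pattern; None is a wildcard."""
        return [
            (s, p, o) for (s, p, o) in self.triples
            if subject in (None, s)
            and predicate in (None, p)
            and obj in (None, o)
        ]

store = TripleStore()
store.add("http://example.org/doc", "dc:creator", "A. Author")
store.add("http://example.org/review", "annotates", "http://example.org/doc")

# Find every claim anyone has made annotating anything:
hits = store.query(predicate="annotates")
```

The liability question in the quote is exactly about who answers for the contents of `store.triples` once claims from arbitrary parties wash through it.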

          Johan Hjelm
Nippon Ericsson KK, Ericsson Research
    Applications Research Group

  Read more about my recent book at
Received on Sunday, 4 February 2001 19:27:46 UTC
