Re: [freenet-tech] RE: Freenet, distributed search and simple RDF queries

On Sun, 4 Feb 2001, Bill de hOra wrote:

>
> Dan,
>
> : Consider a second example. Instead of two Freenet keys that identify
> : documents, one of which is a critique of another, we have a document in
> : Freenet that is a critique of another resource with an http: URI. Regardless,
> : Freenet can still be used to look up metadata pointing one towards
> : critiques. This seems to fit with some of the free speech agenda often
> : associated with Freenet, and suggests a particularly robust form of Web
> : annotation. Nothing in the scenario below particularly spins on party
> : A's document living in Freenet; it might just be a traditional HTTP
> : website.
>
> Well, any time I look at Freenet, I see a place to find the RDF description
> services you wrote about once upon a time. Putting direct metadata into Freenet
> would be silly, but putting pointers to the places that do would be useful. That
> might help in the short term with dereferencing metadata (anything Freenet
> points to is by convention not just a namespace URI; it holds downloadable
> information).

If Freenet matures according to plan, I expect folk will be writing rather
a lot of metadata into it. Think of it as machine-readable free speech.

There are plenty of precedents for critical materials having been chased
off the Web through legal threats and the like. If Freenet provides an
environment in which anyone can say anything about anything, we shouldn't
be surprised if such claims are made in machine- as well as human-readable
form.
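
To be concrete, such a claim needn't be anything fancier than a single RDF
statement. A quick Python sketch of what one might look like (the property
URI and both identifiers are invented for the example, not part of any
agreed vocabulary):

    # One machine-readable claim, "party A's document is a critique of
    # party B's page", written out as a single N-Triples line.
    # The annotation property and both identifiers are placeholders.
    critique  = "freenet:KSK@party-a-critique"
    target    = "http://example.org/party-b-page"
    predicate = "http://example.org/annotation#critiqueOf"

    print("<%s> <%s> <%s> ." % (critique, predicate, target))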

> : Hmm... I've almost convinced myself this'll just work. But it all sounds
> : far too easy. Someone please point out the fatal flaw...
>
> Here's a strawman flaw: predicating that clients and servers are a fundamental
> part of web architecture. Client/server isn't scaling so well w.r.t. what
> people want from a global information network. It's also a politically and
> socially dicey proposition if you believe (like Lawrence Lessig, say) that
> distribution (and not information) is power.

I don't think I was claiming that the Web is fundamentally about
client/server. After all, we have HTTP <-> HTTP intermediaries for caching
and so on, not to mention Web identifiers for Usenet content.

I expect metadata distribution issues, and liability w.r.t. being a
carrier, to keep a lot of lawyers busy for some years to come. If I
aggregate RDF from multiple sites into one queryable database, I fear I'll
be held responsible for the contents of that database.

Actually I've been running an RDF robot since last June, on a very modest
scale, and it has been crawling and collecting RDF 3-tuples about whatever
folk care to express. That includes personal data, claims made by one
person about another, image metadata and plenty else. Should the
aggregation points for such a bot be considered akin to search engines /
Usenet servers? What if they filter, process, etc., the data rather than
just letting it wash through? (But I'm straying from topic here...)
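
For what it's worth, the collection side is nothing exotic; it's roughly
this shape, sketched here with the rdflib library purely for illustration
(the seed URLs are placeholders, not the robot's actual crawl list):

    # Toy version of the aggregation step: triples from several sites
    # end up in one queryable store, which is exactly where the
    # liability question bites.
    from rdflib import Graph

    seeds = [
        "http://example.org/people/alice.rdf",
        "http://example.org/photos/metadata.rdf",
    ]

    store = Graph()
    for url in seeds:
        store.parse(url)   # fetch each document and add its triples

    print(len(store), "triples aggregated from", len(seeds), "sources")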

>
> Using Freenet as you describe in your use case sounds like a good go at a
> join/discovery system for metadata. But only until such a point as collective
> smartness or serendipity (I'll take either, the sooner the better) figures out
> a better way to evolve web architecture out of hardcoded clients and servers.
> Maybe Freenet and Gnutella *are* the first serious crack at this issue, I don't
> know.

Perhaps; or the first to threaten to go mainstream on a massive scale,
anyway. There was certainly work on distributed search, query routing and
the like before we learned to call it P2P, though.

>
> [Aside: how far away are we from someone being sued for libel over their
> metadata?]

There've been a bunch of court cases about HTML META tags, mostly about
the use of trademarks. And I've a vague recollection of something to do
with PICS, though I don't have the reference handy.

Since I have trouble seeing 'metadata' as a separate category from data,
it's kind of hard to answer this. BTW, search for 'dan brickley' on Amazon
and see my comments on the book they're flogging with my name on the
front, for another of my currently favourite RDF/SW
annotations-meet-ecommerce use cases...

Anyway, the legal/liability side of things is currently a topic I'm
worrying about, since one of the projects that pays my wages is building
PICS-like infrastructure for quality labelling of online health-related
resources. It's an EU-funded effort to ensure that citizens/consumers
(whatever they're called ;-) can call up contextual metadata from trusted
3rd parties when looking at health information online.

While I think we've got the tech side covered, I'm concerned that everyone
will be so afraid of running such a server that deployment will be
difficult. Hence my investigation of Freenet, NNTP and other distributed
mechanisms. So I'm quite serious about writing metadata into Freenet (or,
less glamorously, Usenet) as a workaround for the problem of annotation
services being a legal risk...
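
To show what I mean by the less glamorous option: posting an annotation to
Usenet is about this much work. A sketch using Python's nntplib (the
server, newsgroup, addresses and property URI below are all made up; in
practice we'd want an agreed group and message format, not these
placeholders):

    # Post one RDF annotation as a plain Usenet article.  Every name
    # below (host, group, addresses, property URI) is a placeholder.
    from io import BytesIO
    from nntplib import NNTP   # stdlib in older Pythons; removed in 3.13

    article = (b"From: annotator@example.org\r\n"
               b"Newsgroups: alt.metadata.annotations\r\n"
               b"Subject: annotation for http://example.org/party-b-page\r\n"
               b"\r\n"
               b"<freenet:KSK@party-a-critique> "
               b"<http://example.org/annotation#critiqueOf> "
               b"<http://example.org/party-b-page> .\r\n")

    server = NNTP("news.example.org")
    server.post(BytesIO(article))
    server.quit()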

So, if we want to try this, we just need to agree a way of generating
Freenet keys from a pair of arbitrary URIs? Didn't Sergey have some such
algorithm in the Stanford RDF API package?
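
I don't have Sergey's code to hand, but one obvious convention (and it is
only one possible convention, not his algorithm) would be to hash the URI
pair into a keyword-signed key that anyone holding the same two URIs can
recompute:

    import hashlib

    def annotation_key(subject_uri, object_uri):
        # Deterministic Freenet-style KSK derived from the two URIs.
        # The space separator is safe because unescaped spaces can't
        # appear in URIs; the 'annotation/' prefix is arbitrary.
        digest = hashlib.sha1(
            (subject_uri + " " + object_uri).encode("utf-8")).hexdigest()
        return "KSK@annotation/" + digest

    print(annotation_key("freenet:KSK@party-a-critique",
                         "http://example.org/party-b-page"))

Anyone who wants annotations about that pair just recomputes the key and
asks Freenet for whatever lives there.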

Dan

Received on Sunday, 4 February 2001 16:19:29 UTC