RE: Cursors face defining moments on the Web

Aaron, thank you for your comments.  Some responses below:

> The benefit of Nupedia is that it is free (libre). Thus people are allowed
> to modify, translate, copy and sell Nupedia articles without having to
> worry about copyrights.

I also think that another motive was to eliminate the bias that
corporations might selfishly impose.  Even Britannica, I suppose, was an
instrument of British cultural imperialism.  By the same token, Nupedia
will probably be biased towards the interests of people who are opposed
to copyrights, right?

> (swinging back around to RDF) Interestingly, the project has been
> ways of cataloging articles without forcing a cataloging system upon the

Yeah, that's what I meant by them trying to build the semantic web from
scratch.  First, Nupedia allows anyone to contribute content, so it's no
different from the WWW.  Next, Nupedia gives you the ability to look at
only those articles you "trust", by artificially creating a community of
people with similar interests (anti-copyright).  But the semantic web
should be *inclusive*, IMO.  That is, anyone should be able to publish
metadata, and you should be able to choose to "trust" content based on
any definition of community that you wish, just as you should be able to
categorize content in different ways.

Here is a scenario to describe what I mean:

1. You are browsing the web, and you see a page that you think is good.
You have thumbs-up and thumbs-down buttons in your browser toolbar.  You
click "thumbs-up", and a small XML packet containing your e-mail
address, the URL in question, and your metadata is silently sent
somewhere (a rough sketch of such a packet follows this list).

2. Somewhere is a server holding your information and the set of groups
to which you belong.  Membership in a group could be decided in a manner
similar to Advogato's trust metric, could be open to all, or could work
however the person creating the group wished.  Anyone could create
groups.  You could use some UI at this server to rank the groups you
were interested in and weight how much you trust or distrust each
group's metadata.

3. Periodically, maybe every hour or so, each metadata collection server
(where you sent the XML packet) aggregates the metadata it has received,
counting how many of each metadata value were sent by members of each
group.  The aggregates are sent to the various services that have
subscribed to the metadata (Google and HotBot, for example).

4. You do a Google search on certain keywords, and the results are
automatically re-ranked to favor the ones you would find most
trustworthy and to filter out the ones you don't trust, all without
anyone ever having to modify the original pages (see the second sketch
below).
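
To make steps 1 through 3 a little more concrete, here is a rough sketch
in Python of what the toolbar's packet and the server-side, group-based
aggregation might look like.  The element names, the make_rating_packet
and aggregate functions, and the counting scheme are all made up for
illustration; nothing here is meant as an actual format or protocol.

    # Sketch only: element names and data shapes are invented for
    # illustration, not a proposed format.
    from xml.etree.ElementTree import Element, SubElement, tostring

    def make_rating_packet(email, url, value):
        """Build the small XML packet the toolbar button would send."""
        rating = Element("rating")
        SubElement(rating, "who").text = email
        SubElement(rating, "url").text = url
        SubElement(rating, "value").text = value  # "thumbs-up"/"thumbs-down"
        return tostring(rating)

    # What clicking "thumbs-up" might produce:
    packet = make_rating_packet("someone@example.org",
                                "http://example.org/good-page",
                                "thumbs-up")

    # --- on a metadata collection server, run periodically (hourly, say) ---
    def aggregate(ratings, groups):
        """Count each metadata value per (group, url).

        ratings: list of (email, url, value) tuples received since last run
        groups:  dict mapping group name -> set of member e-mail addresses
        Returns {(group, url): {value: count}}, the aggregates that would
        be pushed to subscribers such as search engines.
        """
        counts = {}
        for email, url, value in ratings:
            for group, members in groups.items():
                if email in members:
                    per_value = counts.setdefault((group, url), {})
                    per_value[value] = per_value.get(value, 0) + 1
        return counts

One nice property of aggregating by group, at least in this sketch, is
that subscribers only ever see how each community rated a page, not the
individual e-mail addresses behind the ratings.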

[Note that nothing about this example claims that the "metadata
collection servers" have to collect _only_ "sucks/rules" info about
pages, or that subscribers have to subscribe to _all_ of the metadata or
have it aggregated by groups, or that the "metadata servers" have to be
centralized and controlled by one organization, etc.]
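
And, under the same made-up assumptions, here is roughly what step 4
could look like on a subscribing search engine's side: combine the
per-group aggregates with the trust/distrust weights you set at the
group server, then re-order (and threshold) the result list.  The
scoring formula below is just one arbitrary possibility.

    def trust_score(url, aggregates, my_weights):
        """Combine group aggregates into one trust score for a URL.

        aggregates: {(group, url): {"thumbs-up": n, "thumbs-down": m}}
        my_weights: {group: weight}, positive for groups you trust,
                    negative for groups you distrust
        """
        score = 0.0
        for (group, u), values in aggregates.items():
            if u != url:
                continue
            ups = values.get("thumbs-up", 0)
            downs = values.get("thumbs-down", 0)
            score += my_weights.get(group, 0.0) * (ups - downs)
        return score

    def rerank(result_urls, aggregates, my_weights, threshold=0.0):
        """Re-order search results by trust; drop anything below threshold."""
        scored = [(trust_score(u, aggregates, my_weights), u)
                  for u in result_urls]
        kept = [pair for pair in scored if pair[0] >= threshold]
        return [u for score, u in sorted(kept, reverse=True)]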

IMO, this is one of the major underlying visions of the SW.

Received on Wednesday, 31 January 2001 23:23:54 UTC