A WWW-annotation protocol (good ideas and bad ideas)

Since this has very little to do with the implementation of a particular
variant of annotation software, and more to do with my opinions on
what is the right (or wrong) way to implement WWW annotations, I'm
moving this discussion back to the list.

I hope you don't mind, Jon. :-)

Also, since I'm saying all sorts of (nice) things about Crit, I feel
Ping should have a chance to correct me if I make any mistakes.

On 1999-08-28, 15:55:17 (-0400), Jon Garfunkel wrote:
>
> All we've got is the 3V protocol.
[...]
> I'd like to have an OpenSource clone to work with the 3V protocols.

Standardizing on the 3V protocol would be a very bad move, IMHO.  In
fact, I think use of 3V should be discouraged...

I say this because, from what I've read and seen (screenshots), 3V
seems to have some major problems.  Most of them stem from what looks
like a genuine wish to emulate "Post-Its".

One of the values of WWW-annotation would be facilitating critical
discussions - but you can't have a real discussion using "Post-Its".
They're just too small to write well-structured arguments on, and they
have that stupid glue strip on the back which makes handling them
awkward.  :-)

Time for a disclaimer.  I'm basing my judgements of 3V on something I
read:

    The 3V protocol involves sending the annotation itself 
    (not just an URL) as a reply when the client requests 
    annotations for a given web page.

If this is incorrect, then please substitute "3V" with "Theoretical
Annotation System" for the rest of this message. :)


Anyway, this implies that 3V servers either store or directly
manipulate the content of all annotations.  If a 3V server admin
doesn't like the word "potato" then you [the client] might /never/
see that word in an annotation coming from a 3V server.  They can
remove it from all annotations you are sent, no matter what the
annotation's author meant to say.

This demonstrates the problem with having the annotation server send
you the annotation itself: the annotation server can filter what you
see, and neither the client nor the author has any control over what
gets filtered.  This is one reason why all mediators should be
local.

3V appears to be even worse than that, though.  The 3V server doesn't
just send you the annotation content; it stores the annotations
itself.  Censorship on 3V is therefore not limited to irritating
"potato-filters" - the admins can simply delete or edit anything they
don't like!

That, in short, is why I don't like 3V.  That, and the fact that it
is a closed-source, proprietary, Windows-only technology... :-)

I think reverse-engineering the 3V protocol is only useful as a
method to provide a painless "upgrade path" from 3V to a "real" WWW
annotation system.


OTOH, I do like the Crit model, because it doesn't really have these
problems.

Crit currently consists of 3 parts:

	1) A database of annotations (which are just web pages).
	2) A database of links between web pages, including but not
	   limited to 1).
	3) A mediator which inserts information from 2) into the
	   web page which is requested by the client.

2) can link together any two web pages - not just annotations stored
by the Crit server.  Thus different Crit server link databases,
instances of 2), can contain links to annotations stored anywhere
online - including annotations stored by other Crit servers.

Currently Crit implements all these features in one integrated
system, but that isn't required by the architecture - each item on
the list could be implemented & run separately.

Also, the client is responsible for fetching each annotation itself.
The mediator, 3), only provides information about where the
annotations are, not what they actually say.  "Potato-filtering" is
impossible, because the client fetches the annotations' content
directly from the source.
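
To make that concrete, here's a rough sketch (in Python) of what a
client's fetch cycle might look like.  The link server's address, its
query parameter and its reply format are all invented for the sake of
illustration - the reply format is exactly the thing we'd need to
standardize:

    import urllib.request, urllib.parse

    LINK_SERVER = "http://crit.example.org/links"  # hypothetical link server, 2)

    def fetch_annotations(page_url):
        # Ask the link server which annotations point at this page.
        # The reply is imagined here as one annotation URL per line.
        query = urllib.parse.urlencode({"target": page_url})
        with urllib.request.urlopen(LINK_SERVER + "?" + query) as reply:
            annotation_urls = reply.read().decode().split()

        # Fetch each annotation directly from wherever its author put
        # it, so no server in the middle can "potato-filter" it.
        annotations = {}
        for url in annotation_urls:
            with urllib.request.urlopen(url) as reply:
                annotations[url] = reply.read().decode()
        return annotations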

The mediator, 3), can trivially be migrated to a local, user-run
process, such as the browser itself or the "local proxy" I proposed
in an earlier message.  It depends only on communication with 2).

So a Crit-based system doesn't have 3V's flaws: the server admin can
only censor pages stored locally, in 1), or limit which annotations
the server points to, in 2).  Both problems can be avoided by:

	- hosting your annotations on your own personal web site
	- telling multiple Crit servers about them
	- asking multiple Crit servers for links when browsing

Also, this Crit-based model for annotations leverages the current WWW
infrastructure as much as possible.  The only new standards we really
need would describe how the client communicates with the link server.
This involves defining a file format (XML-based?), selecting a
transport/request method (HTTP/CGI?) and a fine-grained link format
(XPointers?).
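
Just to give a feel for it, a link server's reply in such a format
might look something like the following.  Every element name here is
invented on the spot, and the XPointer is only shown schematically -
this is a sketch of the kind of thing we'd specify, not a proposal
for the actual syntax:

    <annotation-links target="http://example.com/some/page.html">
      <link>
        <annotation href="http://www.example.org/~alice/notes/note42.html"/>
        <anchor xpointer="[some fine-grained pointer into the target page]"/>
      </link>
      <link>
        <annotation href="http://crit.example.org/annotations/17.html"/>
        <anchor xpointer="[...]"/>
      </link>
    </annotation-links>

Note that the annotations themselves can live anywhere; the reply
only carries pointers to them.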


As Jon pointed out in an earlier message, Crit as software isn't
something we should focus on.  But if we want to aim for
standardization on a useful method to annotate the WWW, I think we
should definitely make use of the excellent ideas it demonstrates.
Especially since it appears to only be a matter of writing down a
spec for a file format...


Finally, in spite of all the flaws I've mentioned, a standard should
allow (not require) the server to send the annotation text along with
the link information.  This would be a valuable optimization for
glossary- and dictionary-style services (*).  But I do think the
standard should require a link to the "source" of the annotation in
all cases where it isn't actually stored by the link server itself,
so the client can independently verify the integrity of the data it
receives.
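
A paranoid client could then treat any inline text purely as a hint,
along these lines (again just a sketch, with invented names):

    import urllib.request

    def verify_inline_text(inline_text, source_url):
        # The link server sent the annotation text itself as an
        # optimization, but it also had to send the source URL, so we
        # can fetch the original and check nothing was altered.
        with urllib.request.urlopen(source_url) as reply:
            return reply.read().decode() == inline_text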


(*) Imagine just hitting a button in your browser to transform each
    and every word on the current page into a specially selected link
    to the corresponding entry in one of many online dictionaries...

-- 
Bjarni R. Einarsson                           PGP: 02764305, B7A3AB89
 bre@netverjar.is           -><-           http://www.mmedia.is/~bre/

Netverjar gegn ruslpósti: http://www.netverjar.is/baratta/ruslpostur/
