Re: Prelim. DAV spec.

>That's what I'm saying as well. The bottom line is: if both
>the client and the server need to refer to the same thing,
>give it a name (i.e. a URL) and consider it a resource.

This is one reason why early attempts to "solve" the versioning
problem ended up tied in knots. If you decide that everything
has a URI and that the URI is all that is needed to retrieve a
resource, then it is not possible to provide versioning through
lexical properties of the URIs, i.e. one cannot create a convention
that http://foobar.com/fred;3 is version 3 of fred.

I agree with Dan that we have to name everything and keep to the
principle of only using one type of URI, rather than attempting
to create ad hoc accessors for "versions" of a URI.

What we have is not a resource with many versions but a collection of
resources and a collection of assertions stating which resources 
are earlier variants of others.

For example, if we take the "fred" resource we may have the following URIs :-

Fred		The resource itself, 
		the operation GET("Fred",t) returns the current value 
		of Fred, the Fred of time t.
		It is the "essence" of Fredishness.

Fred.v1		Fred version 1,
		A Fred that is known to be immutable in time. 
		GET("Fred.v1", t) returns the same value for all
		values of t.
		This value can be cached since it cannot change 
		by definition.

Fred.v2		Fred version 2, see above.
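The scheme above can be sketched in code. This is a minimal sketch in
Python, not part of any spec: the `Store` class and its method names
are hypothetical, chosen only to show mutable resources, immutable
version resources, and the assertions kept separately from both.

```python
class Store:
    """Hypothetical repository: mutable resources, immutable
    version resources, and a separate set of assertions."""

    def __init__(self):
        self.resources = {}      # URI -> current value (mutable)
        self.versions = {}       # URI -> value, immutable once written
        self.assertions = set()  # (earlier_uri, later_uri) pairs

    def put(self, uri, value):
        """Update the mutable resource and mint an immutable version."""
        n = sum(1 for v in self.versions if v.startswith(uri + ".v")) + 1
        vuri = f"{uri}.v{n}"
        self.versions[vuri] = value
        if n > 1:
            # The assertion relating the versions lives outside both
            # resources; nothing lexical about the URIs is relied on.
            self.assertions.add((f"{uri}.v{n-1}", vuri))
        self.resources[uri] = value
        return vuri

    def get(self, uri):
        """GET on 'Fred' returns the current value; GET on 'Fred.v1'
        returns the same value forever, so it is cacheable."""
        return self.versions.get(uri, self.resources.get(uri))


store = Store()
store.put("Fred", "hello")    # mints Fred.v1
store.put("Fred", "goodbye")  # mints Fred.v2, asserts v1 precedes v2
```

Note that `get("Fred.v1")` still answers "hello" after the second
`put`: the version resources never change, only the assertion set and
the mutable "essence of Fredishness" do.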

The advantage of separating the assertions from the resources is that
we then have a scheme in which we can move the assertions about in
lieu of objects. 

If we want to synchronise two repositories we can do so by first 
exchanging the assertions as to which fred versions have changed.

The other key advantage of this approach is that because we have not 
relied on particular lexical properties of the labels (i.e. the
convention that fred;3 and fred;4 are versions 3 and 4 of fred) we can
go to offline distributed authoring and make it work.
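The assertions-first synchronisation step might be sketched as below.
The `sync` function and the tuple shapes are my own illustration, not
anything from the spec; the point is that version resources are
immutable, so copying a body the other side lacks can never conflict
with a local one.

```python
def sync(a_versions, a_assertions, b_versions, b_assertions):
    """Hypothetical sketch: each repository is a dict of
    version-URI -> value plus a set of (earlier, later) assertions."""
    # 1. Exchange assertions first: each side learns cheaply which
    #    version resources exist on the other side.
    merged_assertions = a_assertions | b_assertions
    # 2. Then move only the missing bodies.  Since versions are
    #    immutable, setdefault can never overwrite a changed value.
    for uri, value in b_versions.items():
        a_versions.setdefault(uri, value)
    for uri, value in a_versions.items():
        b_versions.setdefault(uri, value)
    return merged_assertions


a = ({"Fred.v1": "hello"}, set())
b = ({"Fred.v1": "hello", "Fred.v2": "goodbye"},
     {("Fred.v1", "Fred.v2")})
merged = sync(a[0], a[1], b[0], b[1])
```

After the exchange both sides hold Fred.v2 and the assertion that it
follows Fred.v1, without any object having been moved speculatively.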

Consider the following scenario. I have two machines, a laptop and 
a desk machine. I work on projects on both machines; the two machines
are only briefly in communication with each other, so we have a
continuing problem of synchronising two databases.

I'm using this example because I think it gets to the core of what is 
hard in the distributed authoring problem. If we can solve this problem
in a principled manner I think we can do the collaborative, multi-author
one as well. I also think that it is the key to making the Network 
computer something that would be an advance on our current systems
rather than an Oracle/IBM big-systems, MIS-centric disaster. There is
a reason why the world has gone to Microsoft and some people just
"don't get it", but that's a digression.

The rules for the laptop/desktop scenario are as follows:-

1) If compatible changes in the two databases are made the system
synchronises transparently.

2) If there are collisions the system responds with a semantic level 
merger of the two systems.

By "semantic" level I mean that the merger cannot be the CVS-style
lexical kludge. The merger process must exploit structure or else 
it is all hopeless. 
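Rules 1 and 2 above amount to a three-way decision over the last
common version of a resource and the two divergent heads. A minimal
sketch, with a hypothetical `reconcile` helper, might look like this:

```python
def reconcile(ancestor, laptop, desktop, semantic_merge):
    """Rule 1: compatible changes synchronise transparently.
    Rule 2: real collisions go to a structure-aware merge,
    not a line-by-line lexical one."""
    if laptop == desktop:
        return laptop                 # both made the same change
    if laptop == ancestor:
        return desktop                # only the desk machine changed
    if desktop == ancestor:
        return laptop                 # only the laptop changed
    return semantic_merge(ancestor, laptop, desktop)  # collision
```

The first three branches are rule 1; only the last branch invokes
rule 2, and what `semantic_merge` must do there is the hard part
discussed next.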

To make this work I think we need to consider carefully the nature of 
hypertext structure. I believe that hypertext has structure in the
broad scale that is as "cliched" as ordinary text but is beyond
merely headings, paragraphs etc. For example where there is a list of
objects there is probably a need for a cliche which allows that
list to be extended. Anyone who has done requirements engineering
will be familiar with a structure of text in which requirements and
architecture have to be mutually cross-linked. The generation of 
tables of authorities, cross-indexing and cross-referencing is all
part of this structure.
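To make the extendable-list cliche concrete: if both machines appended
items to a list independently, a merge that knows the structure simply
keeps both sets of additions, where a lexical diff of the serialised
page would report a conflict. A sketch, with a hypothetical helper:

```python
def merge_extendable_list(ancestor, laptop, desktop):
    """Structure-aware merge for a list marked 'extendable':
    keep the common base, then both machines' additions."""
    added_on_laptop = [x for x in laptop if x not in ancestor]
    added_on_desktop = [x for x in desktop if x not in ancestor]
    return ancestor + added_on_laptop + added_on_desktop


base = ["apples", "pears"]
merged = merge_extendable_list(base,
                               base + ["plums"],
                               base + ["cherries"])
```

Here `merged` contains all four entries and neither machine's addition
is lost; deletions and reorderings would need richer cliches, which is
exactly why the structure has to be declared rather than guessed.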