RE: More History comments

> > This could prove to be a
> > scalability issue; if every minor change in the archive 
> > results in a lot of things having a different situation, then 
> > the history system will rapidly become an enormous corpus of 
> > data.  Maybe that's OK.  My point is that as far as I'm aware 
> > this vital issue hasn't been looked at even vaguely in the 
> > original or current work of the History system.
> 
> The original History System work did indeed consider this, as 
> well as some of the issues raised here regarding hash-based 
> naming and differences among situations. For example see [1]. 
>  I am encouraged, though, by the vigor of review that we are 
> seeing in this discussion, which complements that prior work.
> 
> [1] History Requirements
>     
> http://lists.w3.org/Archives/Public/www-rdf-dspace/2003May/0099.html

The 'vital issue' I'm referring to isn't just the scalability issue, but the whole modelling of distinct states of objects, and exactly what the state change of one object means for another object.  The original requirements document only briefly brushes the surface:

'The history data component receives the object via either method calls
or Java event mechanisms. (Note that this does not preclude other
interested parties from acting on object as well). Upon reception
of the object, it serializes the state of all archive objects referred
to by it, and creates Harmony-style objects and associations to
describe the relationships between the objects. (A simple example is
given below). Note that each archive object must have a unique
identifier to allow linkage between discrete events; this is discussed
under "Unique Ids" below.'

But it doesn't go further; the Unique Id section only talks about how to name objects, not different states of objects.  The example doesn't consider any objects related to the example Item being edited.  The implementation certainly takes no account of the issue.

The fundamental missing piece here is a set of complete example models of what happens when something changes.  That has never been done.  Before and currently we've just been talking about little parts of the model, and hoping that if we drop all these triples into this big sea, a complete, queryable model with all the information we need will plop out.  I don't think that's the case, since the exercise of modelling a simple change to the simplest possible archive turns out to be really quite complex.
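
To make that concrete, here is a rough, purely illustrative sketch (not the actual Harmony/History vocabulary; all identifiers, predicate names and the naming scheme for states are made up) of the kind of decisions even the simplest worked example forces on us: a single metadata edit to one Item that sits in one Collection.

# Purely illustrative toy model, not DSpace/Harmony code.
# All URIs and predicate names below are invented for the example.

from dataclasses import dataclass
from typing import List, Tuple

Triple = Tuple[str, str, str]

@dataclass
class ObjectState:
    """A distinct state of an archive object: (object id, state number)."""
    object_id: str
    state: int

    @property
    def uri(self) -> str:
        # Hypothetical naming scheme: object id plus a state suffix.
        return f"{self.object_id}#state-{self.state}"

def record_item_edit(item: ObjectState,
                     collection: ObjectState) -> Tuple[ObjectState, List[Triple]]:
    """Model a single metadata edit to an Item that lives in a Collection.

    Even this trivial change forces modelling decisions:
      - the Item gets a new state;
      - does the Collection get a new state because a member changed,
        or does it keep its old state and point at the new Item state?
    Here we (arbitrarily) choose NOT to create a new Collection state.
    """
    new_item = ObjectState(item.object_id, item.state + 1)
    triples: List[Triple] = [
        # The event itself and what it did (made-up predicates).
        ("event:42", "rdf:type", "ex:Modification"),
        ("event:42", "ex:inputState", item.uri),
        ("event:42", "ex:outputState", new_item.uri),
        # State-to-state succession for the Item.
        (new_item.uri, "ex:previousState", item.uri),
        # Which Collection *state* does the new Item state belong to?
        # This is exactly the kind of question that is never pinned down.
        (new_item.uri, "ex:isPartOf", collection.uri),
    ]
    return new_item, triples

if __name__ == "__main__":
    item = ObjectState("hdl:123.456/789", 1)
    collection = ObjectState("hdl:123.456/1", 1)
    _, triples = record_item_edit(item, collection)
    for s, p, o in triples:
        print(s, p, o)

The point of the sketch is only that each of those choices (how states are named, which related objects also get new states, which state a relationship points at) has to be written down as part of a complete worked example before we can claim the model is queryable in the way we want.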

 Robert Tansley / Hewlett-Packard Laboratories / (+1) 617 551 7624

Received on Thursday, 22 May 2003 16:53:29 UTC