On the web, what constitutes a document is no longer a fixed object.
It is only when you activate the links that the entities
should be considered as included in the top-level node.
I understand how this follows from documents having multiple possible
layers of link semantics that are "late bound", i.e. resolved at parse time.
What is the default set, or is there one? Specifically, I am wondering what
Web crawlers will need to do in order to harvest descriptive markup for
On an unrelated note:
In Tim Bray's Lark implementation notes he says that the putative
one-week implementation of XML turned out to take a bit longer! I am
not surprised. (Frankly, if James Clark can do it in time X, developers of
normal ability are looking at 5X (if they are SGML cognoscenti) and
10X if not. IMO.)
If mapping ALink to the FigRef element type and/or activating different
link sets requires non-validating XML browsers to parse DTDs as well, I think
we can push back the beta dates that much further...
I think that if DTD parsing is going to be required for non-validating XML
tools, we will need to provide a reference DTD parser (along with the
reference XML parsers they will have available) to give developers a jump start.
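To illustrate why the PI route is so much lighter than DTD parsing: if the link-set activation lived in a processing instruction, a non-validating tool could recover the mapping with a simple lexical scan and never touch the DTD. A minimal sketch (the PI syntax is the putative one under discussion, and the function name and regex are my assumptions, not any spec):

```python
import re

def read_xml_link_pi(document):
    """Scan a document for putative <?XML-LINK linktype elem1,elem2...>
    processing instructions and return a dict mapping each link type to
    the list of element type names it is bound to.

    Accepts both the SGML-style close (>) and the XML-style close (?>),
    since the exact PI syntax was still a matter of debate.
    """
    mapping = {}
    for match in re.finditer(r'<\?XML-LINK\s+(\w+)\s+([\w,]+)\s*\??>', document):
        link_type, elements = match.groups()
        mapping[link_type] = elements.split(',')
    return mapping

doc = '<?XML-LINK ALink FigRef,XRef?><report><FigRef/></report>'
print(read_xml_link_pi(doc))   # {'ALink': ['FigRef', 'XRef']}
```

Even as a toy, this makes the point: a browser that only needs the link-type-to-element mapping can get it from one regular expression over the prolog, whereas getting the same information out of a DTD means writing (or shipping) a real DTD parser.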
Microsoft recently announced OFC - Open Financial Connectivity - an interchange
format for financial transactions. It uses SGML with an MS-defined DTD.
Developers who sign up as OFC developers get a reference implementation of an
OFC parser to start them off.
The moral of the story: if you are introducing something that developers might
find intimidating, give them sample code to chew on.
Finally, at the risk of being branded a blasphemer, I think that
<?XML-LINK ALink FigRef,XRef>
needs to be comprehensively shown to be silly before I can get it out of
Sean Mc Grath