- From: Derek Denny-Brown <ddb@criinc.com>
- Date: Mon, 23 Dec 1996 11:59:24 -0800
- To: "W. Eliot Kimber" <eliot@isogen.com>
- Cc: w3c-sgml-wg@www10.w3.org
At 09:41 AM 12/23/96 -0900, W. Eliot Kimber wrote:
>At 06:50 PM 12/22/96 -0800, Tim Bray wrote:
>>Probe 1:
>>
>>1. Web links contain lots of ends that don't know they are anchors, thus
>>2. if we require all anchors to be self-aware, we won't be able to subsume
>>   the current Web mechanisms.
>
>I suppose this is true, but I'm not sure it's useful. In other words, I
>can't imagine a workable system that would *always* require anchors to be
>aware of their anchor status (or said another way, that all servers must
>know which objects they *could* address are anchors at any given moment).
>
>Therefore I would say that XML cannot *require* that all anchors be "self
>aware" at all times, but neither should it assume that no anchors are ever
>self aware. In other words, XML should suggest a system where, from the
>point of view of any given client examining a body of XML documents, as
>many anchors as practical are known *in advance of traversal to them*.

I think it might be useful to define a hyperlink resolution capability
requirement declaration for documents:

  level 0: an anchor is never self-aware (no real hyperlinking capability)
  level 1: an anchor is self-aware only if it is itself also a declaration
           of a hyperlink (the HTML model)
  level 2: an anchor is self-aware relative to its own document only (it
           knows about links declared in its document, but nowhere else)
  level 3: an anchor is self-aware for a specific bounded document set
           (the HyTime model)

A browser would conform to a specific resolution level, and would give a
warning any time a document requesting a higher resolution level was loaded
(see the sketch after this message). This gives vendors a quick way to get
into the game (by starting off with implementations that support only the
lower levels), while providing a clear migration path to more powerful
applications.

[snip]

>I know there are some hard implementation problems in doing link management
>in a distributed networked environment (probably the same problems faced by
>any distributed system that requires concurrency in rapidly-changing data,
>i.e., transaction-support systems and parallel operating systems). But I
>don't think the basic data model or resolution algorithms are very
>complicated. And we can certainly simplify from the general HyTime model
>(which I have always assumed we'd do).

Part of the issue with link management is that different users have
different needs. If there were an easy way to provide tools for the simple
cases and a clear migration path to the more sophisticated models, a user
could start with the cheaper, simpler tools and move up the migration path
only as their needs demanded.

Part of the whole equation is also how we intend to support locating...

-derek

"that which is not slightly distorted lacks sensible appeal: from which it
follows that irregularity - that is to say, the unexpected, surprise, and
astonishment, are an essential part and characteristic of beauty"
            - Charles Baudelaire
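A minimal sketch of the level check proposed above, in Python. The message
defines no concrete syntax or API, so all names here (AwarenessLevel,
Document, Browser) are hypothetical; the sketch only illustrates the
compare-and-warn behavior a conforming browser might use when a document
requests a higher resolution level than the browser supports.

# Hedged, illustrative sketch of the proposed "hyperlink resolution
# capability" check; names and structure are assumptions, not a spec.
from dataclasses import dataclass
from enum import IntEnum


class AwarenessLevel(IntEnum):
    """Anchor-awareness levels as proposed in the message above."""
    NEVER_SELF_AWARE = 0       # no real hyperlinking capability
    SELF_DECLARED_ONLY = 1     # aware only if it also declares the link (HTML model)
    OWN_DOCUMENT = 2           # aware of links declared in its own document only
    BOUNDED_DOCUMENT_SET = 3   # aware across a bounded document set (HyTime model)


@dataclass
class Document:
    uri: str
    required_level: AwarenessLevel  # level the document requests of its browser


class Browser:
    def __init__(self, supported_level: AwarenessLevel):
        self.supported_level = supported_level

    def load(self, doc: Document) -> None:
        # Warn (rather than refuse) when the document asks for more link
        # resolution than this browser implements, as the message suggests.
        if doc.required_level > self.supported_level:
            print(f"warning: {doc.uri} requests level {int(doc.required_level)}, "
                  f"but this browser only supports level {int(self.supported_level)}")
        # ... proceed to parse and render the document ...


# Example: a level-1 (HTML-model) browser loading a HyTime-style document.
Browser(AwarenessLevel.SELF_DECLARED_ONLY).load(
    Document("doc.xml", AwarenessLevel.BOUNDED_DOCUMENT_SET))

Because the levels are strictly ordered, a single integer comparison is
enough to decide whether to warn, which is what makes the scheme cheap for
vendors entering at the lower levels.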
Received on Monday, 23 December 1996 15:05:03 UTC