Re: Permitting non-indirect links (3 CCs removed)

At 11:07 PM 1/8/97, Gavin Nicol wrote:
>Len's quote regarding fragment specifiers:
>
>>Traversal of such a reference should not result in an additional
>>retrieval action." [draft-ietf-URL-syntax-00] 29 Dec 1996.
>
>Looks like a good reason for saying that fragment specifiers cannot be
>mapped to specialised URLs...

Well, since the stuff before the # is an opaque string, we can't have
_any_ solution that works transparently for both client and server...

   If this is correct, then the URL is supposed to be hands-off for the
client, and the fragment specifier hands-off for the server (even if the
client _wants_ to hand it back).
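
   To make that division of labor concrete, here is a minimal sketch
(Python, purely for illustration; the URL and the fragment syntax are
invented):

    from urllib.parse import urldefrag

    url = "http://example.com/doc.xml#id(sec3)"   # hypothetical XML link
    resource, fragment = urldefrag(url)

    # resource -> "http://example.com/doc.xml"  (opaque to the client,
    #             handed to the server as-is)
    # fragment -> "id(sec3)"  (never transmitted; the client resolves it)
    print(resource, fragment)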

   Terry's post indicated that behavior was defined only for HTTP fragment
specifiers, but the above contradicts that. I guess I'll have to read the
latest version, but if anyone can save me the misery, I'd appreciate it.

    I do have some real questions, though, about the server-side
management of TEI pointers:

    If I'm addressing a character string within an element, what does the
server return? It's not returning anything that is well-formed XML,
because we have queries precisely to handle cases where we are pointing
at things that don't correspond to the divisions already in the text. It
might be a subrange of the document with fragmentary tree-parts, or any
number of other hairy things. The example I came up with a moment ago is
the end of one element and the beginning of another, where the two are
unrelated except by sharing some common ancestor somewhere else in the
document.
   If I'm addressing a point in a document, what does the server return?
For instance, I could have a link to a zero-length substring in a document.
Or I might have a link to an empty element...
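
   Here's a tiny sketch of the well-formedness problem (Python; the
document and the character offsets are invented):

    import xml.etree.ElementTree as ET

    doc = "<doc><p>end of one</p><q>start of another</q></doc>"
    selection = doc[13:30]   # -> "f one</p><q>start"

    try:
        ET.fromstring(selection)
    except ET.ParseError as err:
        # No single root element, a dangling end-tag: not well-formed.
        print("not well-formed:", err)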

   So we will really need a client-side solution for some of these things
anyway... or a robust XML-fragment protocol. XML clients will already have
most of the machinery required to implement XML document-fragment
addressing, so we might as well put it to some use.
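
   For concreteness, a sketch of what that client-side machinery amounts
to (Python; the CHILD-step pointer here is a hypothetical simplification
of TEI extended pointers, not the real syntax):

    import xml.etree.ElementTree as ET

    def resolve_child_pointer(xml_text, steps):
        """Follow a chain of 1-based CHILD steps, e.g. [2, 1] means
        'second child of the root, then its first child'."""
        node = ET.fromstring(xml_text)
        for n in steps:
            node = list(node)[n - 1]   # any XML client has this machinery
        return node

    doc = "<doc><div>one</div><div><p>two</p></div></doc>"
    target = resolve_child_pointer(doc, [2, 1])
    print(target.tag, target.text)   # -> p two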

    I still think that what we _must have_ is a way for client-side
processing to happen. It would also be nice to have smart servers, but
that's not _absolutely essential_. I also think that clever document
partitioning and linking can make it possible to control how much client
computation is required, even with a client-side solution. I agree that
the need for clever coding is a sign of an underpowered or undergeneralized
system, but the WWW as a whole shows you how much mileage you can get out
of an underpowered system.

    I think required server-side solutions get us into a lot of hot water:

    1. Links can designate non-well-formed bits. (This is the worst, as
properly handling it requires protocol changes and solving some nasty
problems about general document fragments).

    2. Server administration and document authoring are frequently under
control of different parties, increasing the number of personnel at a given
site who must be convinced in order to get XML buy-in.

    3. The opaqueness of URLs makes it hard to share client- and
server-side responsibility for resource and fragment resolution. (This
_may_ be a strategic mistake by the W3C, but it is unlikely that we can
convince them of that. I've already tried, and I think the alternatives
are bearable anyway).

    4. We're already assuming that (at least some) browsers will have to
change. Why take on the task of trying to change the servers as well?

I am not a number. I am an undefined character.
_________________________________________
David Durand              dgd@cs.bu.edu  \  david@dynamicDiagrams.com
Boston University Computer Science        \  Sr. Analyst
http://www.cs.bu.edu/students/grads/dgd/   \  Dynamic Diagrams
--------------------------------------------\  http://dynamicDiagrams.com/
MAPA: mapping for the WWW                    \__________________________

Received on Thursday, 9 January 1997 17:06:02 UTC