
Re: Permitting non-indirect links (3 CCs removed)

From: Gavin Nicol <gtn@ebt.com>
Date: Fri, 10 Jan 1997 12:29:30 -0500
Message-Id: <199701101729.MAA13948@nathaniel.ebt>
To: dgd@cs.bu.edu
CC: w3c-sgml-wg@www10.w3.org
Again, I'd like to state that I am not arguing against fragment
specifiers, but against fragment specifiers being the only *standard*
addressing mechanism. I would rather have both recommended, or neither
standardised, than have one standardised and the other not.

>If I'm addressing a character string within an element, what does the
>server return? 

It could return a number of things:
   1) The text data
   2) A TR 9601 fragment specifier
   3) A well-formed document that contains only that text
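A minimal sketch of option (3), for concreteness. The function and the
wrapper element name below are hypothetical illustrations, not part of any
draft:

```python
def wrap_fragment(text, wrapper="fragment"):
    """Return a minimal well-formed XML document containing only the
    addressed text data. Markup characters are escaped so the result
    always parses, whatever the text contains."""
    escaped = (text.replace("&", "&amp;")
                   .replace("<", "&lt;")
                   .replace(">", "&gt;"))
    return '<?xml version="1.0"?><%s>%s</%s>' % (wrapper, escaped, wrapper)

print(wrap_fragment("text data pulled from some element"))
```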

>If I'm addressing a point in a document, what does the server return?
>For instance I could have a link to a zero-length substring in a document.
>Or I might have a link to an empty element....

What would you expect it to return? This depends on whether you want
the point in context, or the point itself.

>   So we will really need a client side solution for some of these things
>anyway... or a robust XML-fragment protocol. XML clients will already have
>most of the machinery required to implement XML document-fragment
>addressing, so we might as well put it to some use.

Yes. I'm not arguing for *only* server-side addressing. I am arguing
against fragment specifiers being the only mechanism. Both have their
place, and both have pros and cons.

>I still think that what we _must have_ is a way for the client-side
>processing to happen. It would also be nice to have smart servers, but it's
>not _absolutely essential_. 

It is absolutely essential if you want any form of scalability. 
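The scalability point can be made concrete with back-of-the-envelope
arithmetic (the sizes here are invented purely for illustration): if the
client must resolve fragments itself, it has to fetch the whole document
to display any piece of it.

```python
# Hypothetical sizes, purely for illustration.
doc_size_kb = 10_000   # one large XML document
fragment_kb = 2        # the portion a reader actually wants
readers = 1_000

# Client-side resolution: every reader downloads the whole document.
client_side_kb = doc_size_kb * readers
# Server-side resolution: the server sends only the addressed fragment.
server_side_kb = fragment_kb * readers

print(client_side_kb // server_side_kb)  # how many times more data moves
```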

>I also think that clever document partitioning and linking can make
>it possible to control how much client computation is required, even
>with a client-side solution. 

This puts the onus on the publisher, though. I would prefer to
relieve the publisher of that burden. You give the WWW as an example,
but we all know how poorly the WWW publishing role scales too...

>1. Links can designate non-well-formed bits. (This is the worst, as
>   properly handling it requires protocol changes, and solving some nasty
>   problems about general document fragments).

This is a red herring. If you want to retrieve some non-well-formed
bits, you can. If you don't want to, you don't have to, *and* you can
always wrap such things to make them well-formed anyway.
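For instance, even a span that is not itself well-formed (stray end tags,
unbalanced markup) can be shipped inside a well-formed wrapper by hiding
it in a CDATA section. This is only a sketch, and the wrapper element
name is invented:

```python
def wrap_raw_bits(bits, wrapper="raw"):
    """Wrap arbitrary, possibly non-well-formed markup in a well-formed
    document via a CDATA section. CDATA cannot contain the literal
    sequence "]]>", so any occurrence is split across two adjacent
    CDATA sections."""
    safe = bits.replace("]]>", "]]]]><![CDATA[>")
    return '<?xml version="1.0"?><%s><![CDATA[%s]]></%s>' % (wrapper, safe, wrapper)

print(wrap_raw_bits("</p>some unbalanced <em>markup"))
```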

>2. Server administration and document authoring are frequently under
>   control of different parties, increasing the number of personnel at a given
>   site who must be convinced in order to get XML buy-in.

This is a valid concern, but again, anyone who cares about scalability
at all *will* be forced to upgrade the publishing process/system.

>3. The opaqueness of URLs makes it hard to share client- and
>   server-side responsibility for resource and fragment resolution. (This
>   _may_ be a strategic mistake by the W3C but it is unlikely that we can
>   convince them of that. I've already tried, and I think the
>   alternatives are bearable anyway).

URLs are not opaque, even though people pretend they are. Also, this
has nothing to do with server- vs. client-side element addressing.

>4. We're already assuming that (at least some) browsers will have to
>   change. Why take on the task of trying to change the servers as well?

Because servers *will* have to change anyway, for three reasons:

  1) Scalability
  2) To add correct MIME type labelling
  3) Changes to protocols (we are going to be transitioning from HTTP
     1.0 to HTTP 1.1 this year)

Reason (2) will require that every server's MIME type mapping file be
changed, which means that the server will have to be shut down once
anyway. That seems like a good time to install additions....
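For example, on an Apache-style server the change for reason (2) is a
one-line addition to the MIME mapping file. The `text/xml` media type
shown here is an assumption, since the exact registration for XML was
still being settled:

```
# one added line in the server's mime.types file
text/xml    xml
```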
Received on Friday, 10 January 1997 12:31:11 UTC
