Re: Addressing....

>Writing it is easy. Getting it installed on the Web servers of the world
>is a bigger hassle. Right now we must encourage browser writers to embed
>XML and authors to use it. We can perhaps cut out the browser vendors if we 
>ship XML viewers as applets. Throwing system administrators into the mix 
>turns it into a big chicken and egg problem. Netcom, Compuserve or MindSpring
>won't install a CGI unless many users ask for it. Many users won't ask for 
>it unless they have seen it on the web before and think it is neat.

Sure, but you are also relying on a revolution in browser technology,
*and* changes to server installations anyway. 

This reminds me *so* much of the I18N argument about labelling of
text types, where people said that requiring it would lead to huge
incompatibilities, and require massive changes to servers etc. In
actual fact, *not* requiring it has given us a situation far worse
than the original.

>It also takes us out of the language design business into the
>protocol design business. I think that "SGML people" should be in the
>protocol design business, but not necessarily *this* group of SGML
>people right now. 

OK then, let's stop discussing hyperlinks, because hyperlinks *are* a
form of address, which you say crosses over into the protocol level.

>>   1) You are talking about special code in the client, which would be
>>      easily comparable to the complexity of the code in a server.
>Sure, but it isn't the code complexity that is the problem: it is the politics
>of getting it installed.

How many servers? How many clients? How many companies? Do you have 
*proof* that upgrading all XML-serving servers will be much harder
than getting all possible clients updated? 

>>   2) Any instance/entity that is small enough to be transmitted across
>>      the internet, will not incur a great parse/traversal overhead on
>>      the server: certainly no greater than that required on the client
>>      side. 
>But most clients are Pentium 100s spinning their wheels.

Not in Europe, Japan, Australia, Thailand....

>>   4) With the scheme I proposed, each URL is unique, and so can be
>>      cached effectively by caching proxies.
>It could work the other way: retrieving an entire entity (and caching it) may
>often be faster if the user is often going to want many elements from that
>entity. (for instance footnotes)

We are talking about caching at a single shared location vs relying on
client-side caches. Client-side caches work equally well with both
proposals; server-side caches do not (another scalability issue).
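To make the scalability point concrete, here is a toy model (the URLs, sizes, and cache layout are purely illustrative, not from either proposal). The key asymmetry: a client strips the "#fragment" part before sending a request, so a shared proxy can only ever cache the whole entity; with one unique URL per element, the proxy caches small objects instead.

```python
ENTITY_SIZE = 10_000_000   # a 10 MB source document (illustrative)
ELEMENT_SIZE = 2_000       # a single footnote-sized element

class CachingProxy:
    """Shared proxy keyed by the URL it actually sees on the wire."""
    def __init__(self):
        self.cache = {}
        self.bytes_from_origin = 0

    def get(self, wire_url, size):
        if wire_url not in self.cache:
            self.bytes_from_origin += size   # miss: pull from origin server
            self.cache[wire_url] = size
        return self.cache[wire_url]

def strip_fragment(url):
    # clients strip "#..." before sending, so the proxy never sees it
    return url.split("#", 1)[0]

# Scheme A: one unique URL per element -- proxy caches small objects.
a = CachingProxy()
for url in ["http://x/doc?id=fn1", "http://x/doc?id=fn2", "http://x/doc?id=fn1"]:
    a.get(url, ELEMENT_SIZE)

# Scheme B: fragment addressing -- proxy can only cache the whole entity.
b = CachingProxy()
for url in ["http://x/doc#fn1", "http://x/doc#fn2", "http://x/doc#fn1"]:
    b.get(strip_fragment(url), ENTITY_SIZE)

# a.bytes_from_origin is 4,000; b.bytes_from_origin is 10,000,000
```

Both schemes serve three requests, but scheme B forces the first request to drag the whole entity through the proxy, which is exactly the case a shared cache is meant to avoid.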

>Anyhow, there are other reasons for making the special server *optional*.
>The #-translated system that was proposed scales from a simple smart-client
>system to a smart-client, smart-server system.

Both proposals do.
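By "both proposals do", I mean both degrade gracefully. A rough sketch of the fallback logic (the `?id=` query form, `fetch`, and the document format are stand-ins I made up for illustration): a smart server resolves the element query itself; against a dumb server, the smart client fetches the whole entity and extracts the element locally.

```python
# Toy document: element id -> content.
DOC = {"fn1": "first footnote", "fn2": "second footnote"}

def fetch(url):
    """Pretend origin server: answers an element query if it is smart
    enough to understand '?id=', otherwise returns the whole document."""
    if "?id=" in url:
        eid = url.split("?id=")[1]
        return {eid: DOC[eid]}
    return dict(DOC)

def extract_element(entity, element_id):
    """Client-side resolution: pull one element out of a whole entity."""
    return {element_id: entity[element_id]}

def resolve(url, element_id, server_is_smart):
    if server_is_smart:
        return fetch(url + "?id=" + element_id)   # server does the work
    whole = fetch(url)                            # dumb server: ship it all
    return extract_element(whole, element_id)     # smart client extracts

# Either path yields the same element; only the transfer cost differs.
```

The user-visible result is identical either way; what the smart server buys you is not sending the whole entity over the wire.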

>> structure left for you to address. In this case, we would be forced to
>> send however many MB the source is, in its entirety, or *fake* an entity
>> structure (easily done for DynaWeb, *much* harder for other types of
>> databases). 
>Could you please elaborate on why this is so difficult? If the server can
>serve elements separately, then couldn't it make "entity wrappers" for 
>every element?

You need an entitisation algorithm for the database, and for some
databases it is non-trivial, as they have no inherent structure to key
from, so generating fake entity boundaries could be prohibitively
expensive.
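A crude sketch of the difference (all names and formats here are illustrative, not DynaWeb's actual API): a hierarchical store already has element boundaries to hang entity wrappers on, so wrapping is a lookup; a flat store has none, so you must touch every record just to invent boundaries.

```python
def wrap_hierarchical(element_ids):
    """Easy case (a structured store like DynaWeb): boundaries already
    exist, so an entity wrapper per element is a cheap lookup."""
    return {eid: "<!ENTITY %s ...>" % eid for eid in element_ids}

def wrap_flat(records, chunk_size):
    """Hard case: no inherent structure to key from.  We fabricate
    entity boundaries by scanning the whole collection and cutting it
    into arbitrary chunks -- O(n) work before a single request is served."""
    wrappers = {}
    for i in range(0, len(records), chunk_size):
        eid = "chunk%d" % (i // chunk_size)
        wrappers[eid] = records[i:i + chunk_size]   # a fake "entity"
    return wrappers
```

And the fabricated boundaries are arbitrary: nothing guarantees a chunk corresponds to anything a client would actually want to address.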

>it is likely in the short term. Tieing XML to it could be very dangerous,
>in my opinion. Even if XML survived the chicken and egg problem, many
>users would not have access to its functionality because they could not 
>get their ISP to install the CGIs or special server.

I find that very debatable. The people who would care enough about XML
in the initial stages are precisely the kind of people who control
their own publishing environment. Early XML publishers will be a small
group of dedicated people running their own servers, and if XML takes
off, popular demand will force other sites to support it.