Re: Ephemeral XML?

At 7:36 AM 1/12/97, Digitome Ltd. wrote:
>The more I read about BOS, ilink storage strategies, multi-pass XML
>parsing etc.
>the more I see a tension between client-side and server-side implementation
>of XML processing.
>
>The disadvantages of server side processing have already been discussed on this
>list. The advantages, in terms of simplicity, for XML browser tools, however,
>would surely be significant.

Simple browsers will be useless without any data to feed them. Only Gavin
has emerged to defend the notion that the server infrastructure of the WWW
is likely to change significantly to accommodate XML. I don't believe that
enough specialized servers (or custom CGI scripts, for that matter) will be
deployed to make XML a going concern. For someone who wants to deploy XML
in a restricted environment (like Sun's documentation viewers, for
instance) it will be easy to create the custom server, and simple client to
talk to it. But if we want to affect the WWW as a whole, we need to slot
XML into as much of the current infrastructure as we can, as a data format
like all the others out there, so it can win on its own merits.

   I see nothing that prevents a server from providing custom normalized
data streams or fancy address resolution as desired, even if we define
things in terms of nominal client processing. But I see real problems with
wide XML acceptance if we make definitions that require a specialized server
just to put up XML documents.

    I'm also not convinced that the problems we are asking clients to
solve are very difficult. To get XML established we need just
one of the two big browser vendors to devote their (quite massive)
resources to a client implementation. The other will then probably have to
follow suit, and we will be in good shape. Even the most ambitious
proposals we've made are not beyond Netscape and Microsoft's reach, I
think.

    And if we require new server code across the 3-10 significant server
platforms as well, we have a real opportunity to slow XML acceptance.

>What I am trying to get at is that XML docs as seen by the client can either
>be "raw" XML or "cooked" XML that simplifies (and speeds up!) client-side
>implementation. Cooked in terms of AFs. Cooked in terms of BOS. Cooked in
>terms of ...
>The transition from raw to cooked - if done at the server side - does not
>complicate the client.  Simple client - more complex servers. Having said
>that, BOS (I like DJD's term "document working set") or multiple
>parsing of XML docs will be, IMO, kinda tough on the client end.

Why is it hard to keep more than one abstract syntax tree around? It sounds
like a hash table from docnames to tree pointers to me.
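
    In code, the whole mechanism amounts to something like this (a minimal
Python sketch, using the standard-library ElementTree parser purely for
illustration; the class and its names are hypothetical, not any proposed
API):

    import xml.etree.ElementTree as ET

    class TreeCache:
        # Keeps several parsed documents around at once: nothing more
        # than a hash table from docnames to parse trees.
        def __init__(self):
            self._trees = {}                  # docname -> parsed tree

        def get(self, docname):
            # Parse each document at most once; every later lookup is
            # a single hash-table probe.
            if docname not in self._trees:
                self._trees[docname] = ET.parse(docname)
            return self._trees[docname]

Five documents in the working set means five entries in the table, which is
hardly a burden on the client.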

>Analogies abound. The first that springs to mind is the difference between
>a networked database application that downloads records looking for matches
>and one that sends SQL to the server and retrieves matching records only.

This is a misleading analogy, if only because of the difference in scale
between the database example (frequently thousands to hundreds of thousands
of records) and the typical document case (I imagine 1-5 documents parsed
in parallel in almost any reasonable scenario). We're not talking about
search, but about an essentially author-guided form of pre-caching.
Equivalents to ilink are a standard staple of proposals to extend the web
-- I've seen at least 3-5 in the last year and a half, and I don't spend
any time looking for them, either.
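
    To make the scale point concrete, pre-caching a working set of that
size is just a loop over the TreeCache sketched above (preload_bos and the
docname list are hypothetical names, not anything from the proposals):

    def preload_bos(cache, bos_docnames):
        # Author-guided pre-caching: the BOS names the handful of
        # documents (typically 1-5) to parse up front, so that link
        # traversal never has to wait on a parse.
        for name in bos_docnames:
            cache.get(name)

No query planner, no per-record server round-trips -- just a few documents
parsed ahead of need.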

>Whether or not cooked XML is an option seems to me to depend on what
>clients are advised to do with XML docs over and above browsing. Are they
>intended to be thrown away NC-style or harvested once for local re-use? For
>throwaway docs, cooked is fine. For harvesting, cooked is, perhaps, far from
>fine.

   Seems to me that the current client-cache model is already "harvesting"
and that the XML BOS stuff fits into that pretty nicely. But note that
nothing prevents a server from cooking documents now (and people do) or in
the future, regardless of how we define things.
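
    In cache terms the point is small enough to state as code (again a
hypothetical sketch; fetch stands in for whatever HTTP layer the client
has):

    def resolve(local_store, docname, fetch):
        # The ordinary client cache already "harvests": a document is
        # retrieved once and re-used locally afterwards. Whether the
        # server sent it raw or cooked makes no difference to this logic.
        if docname not in local_store:
            local_store[docname] = fetch(docname)
        return local_store[docname]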

    I think the extra work will be worth it because of the power that
separately stored links can give -- the other things strike me as easy once
you do the work to handle separate links at all.

    For XML to fly at all, the code for all this processing has to be
written by someone anyway; whether it runs at the server or the client
doesn't seem to make that much difference. But I don't see any technical
reason that we _have_ to move things to the servers (or that failing to do
so now prevents us from doing it later, once XML is regarded as essential
by the whole world). And I see real practical drawbacks to requiring new
code to be deployed on servers _as well as_ clients.

    KISS is good, but if we can't offer benefits over HTML, we may look
stupid, since HTML has already got simple all wrapped up (and deployed).

    We already have significant benefits to offer for document
representation, but I think we should offer linking benefits to match
them...

  -- David

I am not a number. I am an undefined character.
_________________________________________
David Durand              dgd@cs.bu.edu  \  david@dynamicDiagrams.com
Boston University Computer Science        \  Sr. Analyst
http://www.cs.bu.edu/students/grads/dgd/   \  Dynamic Diagrams
--------------------------------------------\  http://dynamicDiagrams.com/
MAPA: mapping for the WWW                    \__________________________

Received on Sunday, 12 January 1997 14:56:35 UTC