Re: XML, SGML & the Web (was: Shorthand for default attributes)

Bert Bos wrote:
> No, the fact is that XML is designed around variable length elements,
> arranged into a tree with a variable number of children at each
> node. If large documents are to be processed in random-access mode,
> with deterministic response time, they have to be indexed, or stored
> in a different format than XML. I'm not making it any worse by
> attaching extra information to each node.

If you are talking about allowing arbitrary data at an arbitrary earlier
point in the document to affect a subsequent element's processing, then
I don't know of a data structure that would let me write a database that
handles that efficiently. If I've misunderstood your proposal, I
apologize. But if I haven't, then yes, you ARE making the processing of
that node substantially slower.
 
>  > That is not true. Point 2 has been met fully. Point 1 was half-met.
>  > Existing SGML tools *can* read XML documents. They just cannot
>  > (typically) write them without some small tweaks.
> 
> What about the "/>" delimiter? 

Many tools have no problem with it.
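
For anyone who hasn't followed the drafts, the "/>" in question is XML's
empty-element tag, which folds the start and end tags of an empty element
into one. A made-up example (element and attribute names are just
illustrative):

    <img src="logo.gif" alt="logo"/>

which is equivalent to writing <img src="logo.gif" alt="logo"></img>.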

> What about the "encoding="? 

How can the fact that XML has an extra inline syntax for declaring
character encodings break applications that already have their own
mechanisms for determining the encoding?
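
For reference, the encoding declaration sits in the XML declaration at the
top of the entity; in the current drafts it looks roughly like this:

    <?xml version="1.0" encoding="UTF-8"?>

To an SGML parser that is simply a processing instruction (with a stray
"?" at the end of its data), so a tool that already determines the
encoding by other means is free to ignore it.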

> what about the keep-all-whitespace rule? 

This is indeed an annoying difference between validating XML parsers and
non-validating ones, but NOT between validating XML parsers and SGML
parsers.
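
The issue is who can tell which whitespace is significant. Take a fragment
like this (element names invented):

    <list>
      <item>one</item>
      <item>two</item>
    </list>

The line breaks and indentation sit in element content. A validating
parser, XML or SGML, has the DTD and can recognize them as ignorable; a
non-validating XML parser has no DTD to consult, so it must hand every
whitespace character to the application as if it were data.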

> What about the absence of !doctype?

Valid XML documents must have a doctype.
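For example, a valid document would still begin with something like this
(document type name and system identifier invented):

    <!DOCTYPE memo SYSTEM "memo.dtd">
    <memo>...</memo>

Only merely well-formed documents may omit the doctype.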

> "Almost met" still means it failed.

Yes, meeting point 1 fully would have required XML to be identical to
what existing SGML tools already read and write. We achieved partial
success. Partial success is still partial success; it is not a reason to
throw the baby out with the bathwater.

 Paul Prescod

Received on Thursday, 15 May 1997 20:47:49 UTC