- From: Elliotte Harold <elharo@metalab.unc.edu>
- Date: Sun, 01 Apr 2007 08:44:30 -0400
- To: Dave Pawson <dave.pawson@gmail.com>
- CC: www-tag@w3.org
Dave Pawson wrote:

> Downstream processing of xml content requires validation and hence
> versioning to assure the processor that the content being worked
> is as expected.

Actually, no, it doesn't. Far more downstream processing of XML never
bothers to validate the XML at all than does. Consider that no mainstream
web browser ever validates anything, nor does any XSLT processor. Clearly
validation is not a prerequisite for getting useful work done.

> When archived XML is pulled from storage, how will it be processed
> without guesswork if its lineage is unknown? By guessing from the root
> element?

There's always guesswork in such a situation. Choosing the schema to apply
is just one more guess. Of course, most of the time you do know something,
and you use that to inform your choices. However, XML is designed such
that it is possible to reverse engineer an XML document's meaning even if
the schema, the documentation, and indeed even the XML specification
itself have been completely lost in the depths of time.

-- 
Elliotte Rusty Harold  elharo@metalab.unc.edu
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
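[Editorial note: the point about non-validating processing can be sketched with Python's standard-library ElementTree parser, which, like the browsers and XSLT processors mentioned above, checks only well-formedness and never consults a DTD or schema. The sample document and element names here are invented for illustration.]

```python
import xml.etree.ElementTree as ET

# An archived document with no DOCTYPE and no schema reference:
# there is nothing to validate against.
doc = """<order id="42">
  <item sku="A1" qty="2"/>
  <item sku="B7" qty="1"/>
</order>"""

# ElementTree parses any well-formed XML without validation.
root = ET.fromstring(doc)

# Useful work proceeds anyway: guess the vocabulary from the root
# element's name, then pull data straight out of the tree.
print(root.tag)  # the "guess from the root element" in practice
total_qty = sum(int(item.get("qty")) for item in root.findall("item"))
print(total_qty)
```

No schema was available, yet the processor both identified the document type (from the root element name) and extracted data from it, which is exactly the kind of work browsers and XSLT engines do every day.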
Received on Sunday, 1 April 2007 12:44:44 UTC