Re: Error handling: yes, I did mean it
Tim Bray wrote:
> 5. Clearly, authoring systems and the like must be able to deal,
> transiently, with non-well-formed documents. Is the input subsystem
> of an editor required to be an "XML processor"? Why should it?
Is the input subsystem of a browser required to be an "XML processor"?
If not, then your requirement will not have its desired effect.
> That's exactly what I'm trying. Do you expect malformed PostScript to
> produce a valid page, or a birth-date of September 48th to be accepted
> by a payroll system?
No. But I don't expect the PostScript standard to say that malformed
PostScript must be ignored, nor any of the date representation standards
to say that invalid dates must be suppressed by the parser.
> Another example. It is very difficult indeed to devise an input
> stream that will cause [gtn]roff to complain. It is very difficult to
> hand-author any significant amount of PostScript and not have the
> first few drafts thrown out with syntax errors. Both are successful.
If XML is only going to be hand-authored as often as PostScript is, then
there will be no problem: the editors will handle it.
> Sean McGrath again:
> >I was simply making the point that the likes of
> > nsgmls foo.sgm | grep -c "^(BAR$"
> >can be a useful thing to do even if foo.sgm markup contains errors.
> Sure; but if you replace "grep -c" with some fancy java applet that
> does a business-critical application, this is no longer useful but
> highly dangerous.
Which is exactly why people must have the option to decide for
themselves. They have different applications and different
needs. Hopefully the business-critical application people know how to
capture stderr and know how to pipe to /dev/null if that's what they
decide is best. Let's please leave the whole class of
business-mission-life-critical applications out of this discussion
because those people can take care of themselves. If they can't, we
have much bigger problems than well-formedness.
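To make the stderr point concrete, here is a minimal sketch. The
"parser" function below is a hypothetical stand-in (not any real XML
processor) that writes data to stdout and error reports to stderr; the
two pipelines show the choice each application gets to make for itself:

```shell
#!/bin/sh
# Hypothetical stand-in for a parser: data on stdout, errors on stderr.
parser() {
  echo "(BAR"
  echo "well-formedness error: line 3" >&2
}

# Careful application: capture the error stream for later inspection.
parser 2>errors.log | grep -c "^(BAR$"   # prints 1; errors.log keeps the report

# Quick-and-dirty count: deliberately discard the errors.
parser 2>/dev/null | grep -c "^(BAR$"    # prints 1; errors are gone
```

Both invocations get the same count out of stdout; the only difference
is what each chose to do with the diagnostics.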