
Re: Error handling in XML

From: Derek Denny-Brown <ddb@criinc.com>
Date: Sat, 19 Apr 1997 13:30:19 -0700
Message-Id: <3.0.32.19970419133018.00945100@mailhost.criinc.com>
To: Peter Flynn <pflynn@curia.ucc.ie>
Cc: w3c-sgml-wg@w3.org
At 12:28 PM 4/19/97 +0100, Peter Flynn wrote:
>Before we expend a large amount of effort on this, can someone confirm
>that it stands some chance of being listened to. Forgive my scepticism
>(and as I wasn't at WWW6 I didn't have the chance to hear it from the
>horses' mouths): I know both N and M contain numbers of people who are
>seriously committed to making a better shot at it this time round, but 
>both organizations also contain rather more people who want to ship 
>something slick that will do what the masses want: gobble bytes and 
>never gag.

The big M is in the process of going around to a number of large businesses
and investigating what those businesses see as requirements for the "next
generation" of browsers in order to better support intranets.  I was not
part of any of these meetings, but I did have an interesting talk with someone
who did attend one.  XML appears to be a hot topic for Microsoft.  I am
guessing that they see it as a way to move in on some of Netscape's
earlier groundwork.  The question of DSSSL vs. CSS was asked, and they are
planning on working with CSS as much as possible (which makes sense from
their point of view; CSS is *much* easier to implement than DSSSL).  But
they are dedicated to supporting XML, at least as a way of standardizing how
people extend HTML.

>>The subject is violations of well-formedness.  Well-formedness should be easy
>>for a document to attain.  In XML, documents will carry a heavy load of
>>semantics and formatting, attached to elements and attributes, probably with
>>significant amounts of indirection.  Can any application hope to
>>accomplish meaningful work in this mode if the document does not even manage
>>to be well-formed!?!?
>
>Joe and Jill Homepage are not likely to give the proverbial tinker's cuss
>whether their documents are well-formed or not, if the browsers are as
>forgiving and tolerant as N or M. The browsers are going to have to offer
>significant new features to compensate for the penalty of having the 
>parser gag on invalid syntax. They already know this, and already have
>their own agenda for dealing with it. Are we singing from the same score?

This may be why Microsoft is looking at XML.  It helps define what is
"correct" HTML, and thus Microsoft can warn you about incorrect markup and
tell you that it may not display properly (but it is not their fault...).
Joe and Jill Homepage are not the focus of the more extended plans for
Internet Explorer.  Intranets are.  The money is in big business, and that is
where Microsoft is aiming.  (Not that they will completely ignore Joe and
Jill.  But Microsoft software has never been known as the most reliable...)
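For concreteness, here is a minimal sketch of the draconian behavior under
discussion: a well-formed document parses, while a mismatched tag is a fatal
error rather than something to silently repair.  (This is an illustration
only; Python's xml.etree.ElementTree is just a stand-in for any strict
parser, and the sample documents are hypothetical.)

```python
# Sketch: a strict parser rejects a well-formedness violation outright
# instead of guessing at a repair the way forgiving HTML browsers do.
import xml.etree.ElementTree as ET

well_formed = "<note><to>Jill</to></note>"
not_well_formed = "<note><to>Jill</note>"  # <to> is never closed

ET.fromstring(well_formed)  # accepted without complaint

try:
    ET.fromstring(not_well_formed)
except ET.ParseError as err:
    # A draconian parser stops here rather than carrying on.
    print("rejected:", err)
```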


>[It's really a weird phenomenon: no-one expects a C compiler to gracefully
>accept syntax errors, put them right as it sees fit, and carry on compiling.
>But everyone expects a Web browser to handle HTML like N and M. Anyone
>investigating the psychology of this?]

It is (usually) trivial to notice whether the HTML browser recovered from your
errors and displayed your page properly.  It is much more difficult to tell
whether the compiler recovered properly.  (I speak with the experience of
debugging programs where the compiler did not work properly.  You don't want
to have to deal with that...)


-derek

--------------------------------------------------------------
ddb@criinc.com || software-engineer || www/sgml/java/perl/etc.
  "Just go that way, really fast. When something gets 
      in your way, turn."  --  _Better_Off_Dead_
Received on Saturday, 19 April 1997 16:30:57 EDT
