W3C home > Mailing lists > Public > www-dom@w3.org > October to December 2005


From: Ray Whitmer <ray@personallegal.net>
Date: Sat, 3 Dec 2005 02:55:41 -0700
Message-Id: <44CE602E-B7C7-4EDB-BA78-1FC37FD83472@personallegal.net>
Cc: Maciej Stachowiak <mjs@apple.com>, DOM mailing list <www-dom@w3.org>, vicki Murley <vicki@apple.com>, andersca@mac.com
To: "L. David Baron" <dbaron@dbaron.org>

On Dec 2, 2005, at 11:34 PM, L. David Baron wrote:

> I don't think an erratum to make something *optional* is the right
> solution.  (Likewise for the getAttribute discussion.)  What's happened
> here is that there are two classes of DOM implementations, Web browser
> implementations and server-side implementations (or is this latter
> class more than one class?).  The authors of both of these classes
> value interoperability within the classes but not between them.  (I've
> never seen browser bug reports reporting that we don't interoperate
> with a server-side DOM implementation.  Do server-side DOM
> implementations get bug reports that they don't interoperate with
> browsers?)

With load and save, there could easily be code expected to be shared  
between client and server.

Try convincing Sun to support separate Java implementation behaviors
for browsers and servers. The Java implementation can get bound into
browser JavaScript, not unlike a possible XPConnect binding for
languages that might otherwise be used on the server, or server-side
JavaScript. It is anyone's guess what applets or standalone
applications would do for DOM.

Perhaps this is not considered mainstream stuff at present, but those
who were following the existing standard may get rather upset anyway,
and it gets quite messy to define.

> So I think the solution here is not to make things optional.

Unless it could be interpreted to be already optional; see my prior
message.
> I think
> it's to make the spec tell the truth:  that there are two distinct
> classes of implementations that follow slightly different rules.  The
> spec could define two separate conformance classes, and each class
> would be required to follow its respective rules.  What would we lose
> by doing this?  Who cares that the two classes of implementations
> interoperate?

Does this make WRONG_DOCUMENT_ERR optionally thrown in the browser, or
are most operations just completely undefined with respect to what is
likely to happen in the browser, because no one will fix the
implementations to throw the appropriate exception? There are cases in
browsers (including Mozilla, as I last knew it, and I believe IE as
well) where nodes cannot cross document boundaries, because an HTML
DOM is not compatible with an XML DOM, etc., and they probably don't
throw the correct error now.
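To make the cross-document problem concrete, here is a minimal sketch of the defensive pattern content authors end up writing. The helper name `adoptInto` is my own invention; `importNode` is the real DOM Level 2 Core method for cloning a node into another document, though nothing guarantees every implementation of the era supported it, which is part of the point.

```javascript
// Hypothetical helper (the name "adoptInto" is mine, not the spec's):
// before inserting a node obtained from one document into another,
// clone it into the target document with DOM Level 2's importNode,
// rather than relying on the implementation to throw (or fail to
// throw) WRONG_DOCUMENT_ERR on a direct insertion.
function adoptInto(targetDoc, node) {
  if (node.ownerDocument === targetDoc) {
    return node; // already owned by the target document
  }
  return targetDoc.importNode(node, true); // deep copy into targetDoc
}
```

In a real page this would be called just before appendChild, so the insertion never sees a foreign node regardless of which error behavior the browser implements.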

> Perhaps having to do this is unfortunate, although I'm not sure what
> the original rationale for using the same interfaces for both browser
> and server-side DOM implementations was, and other than losing that
> the main cost is some extra spec writing and testing work.

It requires a new standard, as occurred with HTML DOM Level 2, which
is incompatible with HTML DOM Level 1. Level 1 had significant
mistakes, and I asked the W3C (at the request of others) to delay
Level 2 and ultimately make it incompatible in order to fix the
problems recognized up to that time; that work proceeded and made the
standard quite late.

> What would have
> prevented it is more serious testing:  thorough enough testing to
> demonstrate real interoperability, which I think should be a
> requirement for entering Proposed Recommendation, and which the CSS
> working group now makes a requirement.  I think very few W3C groups
> have learned this lesson, though.

DOM clearly didn't test until Level 3, which now has a test suite
containing Level 1 tests that cover these particular cases, and I
believe there are probably still many missing tests that would expose
more common differences between the implementations and the standard,
because such testing of finite cases is inherently incomplete over an
infinite set of inputs and outputs. Testing would have significantly
delayed DOM Level 1, but saved time later.

But the question seems to be as much about intentional drift as about
incorrect initial implementations. The big question is the commitment
of dominant players to set the expectation. I believe that the Mozilla
code base that defined and initially implemented DOM Level 1 had a JS
binding mechanism that couldn't return null for a string return type,
so getAttribute was initially implemented completely correctly, as
defined. It certainly wasn't the users of the Java binding who asked
for that non-null string return, yet they are the ones expected to
keep it while Mozilla apparently chases later IE improvisations rather
than keeping to the original, correct standard.
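The divergence described here (the written DOM standard's getAttribute returning "" for a missing attribute, versus the IE-style null that browsers converged on) is exactly what portable code has to paper over. A minimal sketch of one defensive convention; the helper name is mine, not anything from the spec:

```javascript
// attrPresent: treat both conventions for a missing attribute alike.
// Under the written DOM standard, getAttribute returns "" when the
// attribute is absent; under the IE-style behavior, it returns null.
// Note that under the ""-convention a present-but-empty attribute is
// indistinguishable from an absent one by getAttribute alone, which
// is part of why the choice of convention matters.
function attrPresent(getAttributeResult) {
  return getAttributeResult !== null && getAttributeResult !== "";
}
```

Callers would write `attrPresent(el.getAttribute("foo"))` and get the same answer on either class of implementation, at the cost of conflating empty and absent attributes.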

I say turn the finger around, because dominant browser vendors, as the
test platform for web content, need to take more responsibility for
enabling web authors to easily distinguish good content from broken.
If it were possible to clearly communicate to a web author testing
against a browser the problem with their content, fixing the content
would be feasible and preferable. Being compatible and tolerant for
end users shouldn't preclude this; otherwise it encourages very broken
content. The worst known flaws in common implementations could also be
identified, allowing authors to avoid them: standards evangelism
backed by appropriate tools. A strict-checking mode in browsers for
web authors would seem to be called for. It could even help deal with
inevitable incompatibilities between browser versions. Yes, it is more
difficult than this explanation I have given, but the resulting better
web content is worth it.

As has been pointed out, this is hardly the first case in which this
has occurred, and there is no reason to believe these will be the last
just because they are the only ones currently highlighted by tests. I
believe that reconciliation must become possible by changing the
browser implementations and by wielding more influence with authors
through a less-treacherous test platform. The particular cases before
us today hardly occur in most content, and once a maintainer has
identified the problem, it is not hard to devise alternative
mechanisms that avoid known browser bugs.
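One such alternative mechanism, sketched under my own naming: probe the implementation's observed behavior once, rather than branching on browser identity, so content keeps working as implementations converge or diverge. The probe attribute name below is arbitrary.

```javascript
// detectMissingAttrConvention: ask an element for an attribute that is
// certainly not set, and record which convention this DOM follows for
// missing attributes ("null" for IE-style, "empty-string" for the
// behavior the written standard specified). Code can then branch on
// the detected convention instead of on a browser version string.
function detectMissingAttrConvention(el) {
  var result = el.getAttribute("data-surely-never-set");
  return result === null ? "null" : "empty-string";
}
```

Feature-probing like this is exactly the kind of workaround a maintainer can deploy once the underlying browser bug or divergence has been identified.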

Ray Whitmer
Received on Saturday, 3 December 2005 09:55:53 UTC
