- From: Ray Whitmer <ray@personallegal.net>
- Date: Wed, 7 Dec 2005 14:58:43 -0700
- To: Maciej Stachowiak <mjs@apple.com>
- Cc: Brendan Eich <brendan@meer.net>, www-dom@w3.org
On Dec 7, 2005, at 1:34 PM, Maciej Stachowiak wrote:
>
>> Mode 3 is for compliance testing with standards that also work in IE.
>>
>> So we wish to discourage use of getAttribute to check whether the
>> attribute exists (fail if they do it).
>>
>> Therefore, calling getAttribute on a non-existent attribute in
>> mode 3 would ideally throw an exception, so that any script that
>> called getAttribute without first knowing whether the attribute
>> existed would get an exception in this mode; because of the
>> ambiguity / tension between the specification and the status quo,
>> there is otherwise no predicting what happens.
>
> A mode that did this would not be compliant with the standard and
> would fail the test suite. So really this is inventing a new kind
> of nonstandard behavior. This mode would also reject some obvious
> idioms that work with both the written spec and the de facto standard:
>
> if (!e.getAttribute("foo")) {
>     // code to run if the attribute is either empty or null
> }
I think it was clear from the initial description that the purpose of
mode 2 was to pass the de jure test suite as an implementation;
neither mode 1 nor mode 3 would pass that suite, just as neither mode
2 nor mode 3 passes the de facto tests. That is what subsetting the
functionality means: a subset of the functionality cannot pass the
superset of tests. If every mode could pass every test, there would
be little need for modes. Again, the purpose of mode 3 is for web
authors to test against, so that they produce code that runs without
problems under either mode 1 or mode 2. It is not meant for other
end users to run under, although a user who has problems with a web
site's content can recommend mode 3 as a test platform to whoever is
responsible for that site, which helps verify that the content works
beyond a single platform.
> As well as breaking both compliant code and de facto standard code.
That is the design, the purpose, and the reason for its existence: to
break all code that is not simultaneously compliant with both the de
jure standard and the de facto standard. Anywhere a behavior fails to
satisfy either the de jure or the de facto standard, it is disabled
in mode 3. The return value of getAttribute for an unspecified
attribute is such a behavior, so when the browser reaches that point
it gives up, because that behavior is deliberately not implemented,
and code that cannot satisfy both standards fails. Mode 3 is created
as a better test, one that allows web authors to write code compliant
with both the de jure and de facto standards more easily. It is
fairly trivial to write around the definitional problems once they
are flagged for you, as sketched below.
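For example, a minimal sketch of such a write-around (illustrative
only, not part of any proposal): test for the attribute explicitly
with hasAttribute, which DOM Level 2 defines unambiguously, before
ever reading its value (an older browser lacking hasAttribute would
need a fallback):

    // Never read an attribute without first establishing that it was
    // specified, so the ambiguous getAttribute return value for a
    // missing attribute is never relied upon.
    var value = null;                  // explicit default, no ambiguity
    if (e.hasAttribute("foo")) {       // unambiguous existence test
        value = e.getAttribute("foo"); // safe: attribute is specified
    }

Code in this style behaves identically whether getAttribute returns
null or the empty string for a missing attribute, so it would run
unchanged in all three modes.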
> Furthermore, since the error only occurs at runtime, you might
> never go through the code path where it comes up and so could never
> tell your code was right.
That is true of all testing of web apps built on JavaScript in a
browser. Or do you use something better? But at least you do not
wind up believing your web app works in the cases you did test, when
it is actually relying on behavior that the de jure and de facto
standards specify differently. So mode 3 is less treacherous for
testing than a normal web browser that happily satisfies requests
which are not in line with any particular standard and in many cases
cannot be expected to work the same between browsers. When you write
behavior into the browser that has compatibility issues with the de
jure or de facto standard, you just add a mode check and prevent that
behavior from occurring when the browser is being used to verify the
functionality of a web app. Fairly straightforward, fairly useful.
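A minimal sketch of such a mode check, with hypothetical names
(domMode, strictGetAttribute) used purely for illustration:

    // In mode 3, reading an attribute that was never specified throws
    // rather than returning "" (de jure) or null (de facto), because
    // the two standards disagree on that return value.
    function strictGetAttribute(element, name, domMode) {
        if (domMode === 3 && !element.hasAttribute(name)) {
            throw new Error("mode 3: getAttribute called on " +
                            "unspecified attribute '" + name + "'");
        }
        return element.getAttribute(name);
    }

In modes 1 and 2 the check is skipped and the implementation answers
with whatever the de facto or de jure behavior dictates.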
> I don't think adding a new mode that is both noncompliant *and*
> incompatible with existing content would be very helpful.
Content that runs successfully in mode 3 without hitting the special
exceptions is both compliant and compatible (the special exceptions
could even be made non-catchable, just to make sure). To accomplish
that, mode 3 cannot itself be a compatible or compliant end-user DOM
implementation, because it must rule out anything that is not both
compliant and compatible across modes 1 and 2, even behaviors that
are defined to work in mode 1 or mode 2, or that are defined
differently in the two.
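To illustrate why non-catchable exceptions might matter (a sketch
only, not from any specification): ordinary page code often wraps DOM
access in try/catch, which would otherwise silently swallow exactly
the diagnostic mode 3 exists to surface:

    var v;
    try {
        v = e.getAttribute("foo"); // throws in mode 3 if unspecified
    } catch (ex) {
        v = null;                  // diagnostic lost; the author never
    }                              //   learns the code was ambiguous

Making the mode-3 exception uncatchable guarantees the problem is
reported instead of masked.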
By way of comparison, a C lint verifier also fails to accept many
forms of C without generating errors, because that is its express
purpose: to find problems the lax standard lets through and thereby
improve an author's code. But the C code that passes the verifier is
standard, of higher quality, and more portable (assuming those are
the sorts of things the lint tool looks for). Mode 3 is such a tool.
Mode 3 is restricted to runtime checks, just as lint is restricted to
compile-time checks (there are other runtime tools). In each case, a
stricter mode of the tool helps weed out problematic cases. End
users do not use lint to execute their code, any more than end users
would use mode 3. Slowly, to really reach developers, lint-like
checking has been integrated into C compilers as special checking
modes (gcc's -Wall and -pedantic options, for example), and the de
jure standard's bar has even been raised hand in hand with the de
facto standard. Browser vendors could learn from this.
The whole point is to refuse to execute existing code that relies on
behaviors that are not well defined across the de jure and de facto
standards. As a web author, I would find that extremely useful,
which is the whole point, because it does not encourage blind use of
behaviors with broken definitions. Browsers written only with
supposed compatibility for end users in mind do encourage blind use
of such behaviors, which is why we are where we are after so many
years of effort in the standards arena. In reality, it is fairly
easy to avoid such behaviors if they are flagged for you.
Normal users would run in mode 1, or possibly mode 2 in unusual
circumstances, and never mode 3 unless gathering information to
report a broken web site.
There is no one to hold responsible for the current messes other than
the browser vendors, and if they want better-standardized content,
they need to provide tools with more integrity: tools that hold web
authors' content to a higher standard than the one they generally
allow normal users to execute against. Standardization efforts rise
or fall on the efforts of browser vendors. It is quite clear how we
got here.
Ray Whitmer