RE: Precision and error handling (was URL work in HTML 5)


First, I think you have the right framework for analysis -- what is the positive benefit to be gained for the whole ecosystem? And I include in "benefit" not only convergence to consistent behavior, but also extensibility, reliability, ease of security analysis, etc.


> I don't think you can get away with calling it a false trichotomy
> without showing that these cases are, in fact, equivalent.

The cases aren't equivalent; they are just three of the nearly infinitely many possibilities on a continuum.

> As an editor, if confronted with the possibility that an author may
> place a <p> inside another <p>, I can say:
> 
> A) Nothing, or explicitly that I have no clue what happens.
> B) That nothing is produced, that the DOM stops there or that there is
> no DOM.
> C) That the currently opened <p> is automatically closed (or other well-defined handling).

There's a continuum of specificity, with no clear dividing lines between the options.

You can say, with <p> inside <p>, that:

(a) it is undefined (or say nothing at all)
(b) nothing is produced
(c) any currently open <p> is automatically closed (sketched in the code below)
(d) conforming processors will indicate to the user in a clear (but undefined) manner that there is a markup error
   (kind of like a + c)
(e) it is undefined, but a conforming processor SHOULD make a best effort to render the markup in some way,
  with implementation advice ("If this is a site the user has visited before, for example, you might consider adding an
  'ignore bad markup for this site' button")
(f) ... (need I generate more?) ...
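
To be concrete, here is a rough sketch of option (c) as a toy parser -- my own illustration in Python, not the HTML5 tree-construction algorithm, with made-up class and event names:

    # Toy illustration of option (c): auto-close an open <p> when a new <p> starts.
    # This is a sketch, not the HTML5 tree-construction algorithm.
    from html.parser import HTMLParser

    class AutoClosingParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.open_elements = []   # stack of currently open tag names
            self.events = []          # record of what the toy "tree builder" did

        def handle_starttag(self, tag, attrs):
            # Option (c): a <p> start tag implicitly closes any open <p>.
            if tag == "p" and "p" in self.open_elements:
                while self.open_elements[-1] != "p":
                    self.events.append("implicit </%s>" % self.open_elements.pop())
                self.open_elements.pop()
                self.events.append("implicit </p>")
            self.open_elements.append(tag)
            self.events.append("<%s>" % tag)

        def handle_endtag(self, tag):
            if tag in self.open_elements:
                while self.open_elements[-1] != tag:
                    self.events.append("implicit </%s>" % self.open_elements.pop())
                self.open_elements.pop()
                self.events.append("</%s>" % tag)

    parser = AutoClosingParser()
    parser.feed("<body><p>first<p>second</p></body>")
    print(parser.events)
    # ['<body>', '<p>', 'implicit </p>', '<p>', '</p>', '</body>']

Any of (a) through (f) could be written down as precisely or as loosely as the spec chooses; the point is that (c) is one point on the continuum, not the only well-defined option.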


> Each of these choices will have different impacts on the ecosystem, and
> their influence will be felt more the longer and the more broadly the
> concerned technology is deployed. Implementers will not behave in the
> same manner for each. This will cause authors to behave differently.
> Users won't see the same results.

Users seeing the "same" results is a matter of judgment. Surely users on smart phones don't see the "same" results as users on desktops, so "same" depends on the intent of the author. If at least some browsers give intelligible, if less pleasant, results on malformed data, that seems acceptable to me. (And <p> inside <p> is a stalking horse, since I don't think it was even invalid in HTML4; take some other egregiously bad markup which HTML5 insists on specifying completely.)

I'm saying that unnecessarily locking down the exact nature of the behavior, as HTML5 often does, is harmful, and that "error recovery" behavior could profitably be specified less precisely, with positive benefits for the ecosystem.


> I am unsure about what you mean by the impossibility of reproducible
> behaviour due to dynamic, asynchronous, security, or privacy
> constraints. Can you cite how any such constraints may for instance
> render the HTML parsing algorithm impossible?
>

(I admit this is conjectural, and I'm not sure whether it applies exactly to HTML5 parsing or only to some of the APIs.) Implementations often find themselves filtering content to avoid triggering security problems in downstream processors. For example, the automatic treatment of ISO-8859-1 as Windows-1252 might be a security problem if not uniformly implemented, because different recipients of HTML5 might render the results differently, to the point where an implementation, for security reasons, might choose behavior which does not conform to the HTML5 spec.
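
To illustrate just the encoding divergence (the security consequences are the conjectural part): the bytes 0x80-0x9F are C1 control characters in ISO-8859-1 but printable punctuation in Windows-1252, so two recipients that disagree about the label-to-encoding mapping literally see different text:

    # Same bytes, two interpretations of an "ISO-8859-1"-labeled document.
    payload = b"quote: \x93hi\x94"

    as_latin1 = payload.decode("iso-8859-1")    # 0x93/0x94 are C1 control characters
    as_cp1252 = payload.decode("windows-1252")  # 0x93/0x94 are curly quotes

    print(repr(as_latin1))   # 'quote: \x93hi\x94'
    print(repr(as_cp1252))   # 'quote: “hi”'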

The behavior of the page rendering module depends on the timing of availability of multiple resources. Sniffing, for example, leaves the processor the option of sniffing based on however many bytes happen to be available. Images can be available or not, and image.width and image.height can be accurate or 0, depending on timing. So page rendering itself is not fully specified. Why, then, is it more important to specify exactly and deterministically that <p> within <p> must behave as if the first <p> were closed? The web is dynamic. Pages look different on different devices. Improving consistency among browsers is valuable, but it is not completely dominant over other requirements: security, reliability, and consistency independent of latency.
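
As a toy illustration of the timing point (my own sketch, not the MIME sniffing algorithm): a sniffer that can only inspect the bytes that have arrived so far may classify the same resource differently depending on how much of it the network has delivered:

    # A sniffer that inspects only the bytes available so far.
    def sniff(available: bytes) -> str:
        if available.startswith(b"\x89PNG\r\n\x1a\n"):
            return "image/png"
        if available.lstrip().lower().startswith(b"<!doctype html"):
            return "text/html"
        return "application/octet-stream"  # not enough evidence yet

    resource = b"\x89PNG\r\n\x1a\n" + b"\x00" * 100

    print(sniff(resource[:4]))   # application/octet-stream -- only 4 bytes have arrived
    print(sniff(resource[:16]))  # image/png -- enough bytes to match the signature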


> > The standards process involves a classic "prisoner's dilemma": if
> > everyone cooperates, good things can happen, but even just one rogue
> > participant, acting for individual gain, can grab more for
> > themselves, by attempting to be "friendlier". To gather more
> > consistency and robustness of the web requires ALL of the
> > implementors to agree to do something which might seem "not
> > friendly". Do not sniff, do not track, do not over-compensate for
> > user spelling mistakes by quietly DWIM-ing misspelled <htmlll>
> > <bodddy> <hh1><pp>  as if the user had typed <html> <body><h1><p>. To
> > do so would introduce chaos. It might be "friendly", and if you were
> > the "dominant browser", might even seem like a way of cementing your
> > dominance.
> >
> > Avoiding escalation of DWIM-ish features involves convincing ALL of
> > the major players to reject (ignore, not process, treat as error,
> > fail to retrieve, fail to treat as equivalent) things that would
> > otherwise be friendly to accept. That would then allow the otherwise
> > unruly content community to learn to create more conservative
> > content.
> 
> I think that you are conflating many things here. Most importantly,
> having a well-defined output for any given input is not DWIM. It's
> simply reducing variability in standards, which is a good practice.
> Undefined behaviour on error introduces discretionary items in
> implementation behaviour.


"well-defined" is a value judgment which varies by the nature of the processor and the application for which it is used.

Different processors and different purposes have different requirements. A well-written spec makes no unnecessary conformance requirements, "unnecessary" relative to the roles which it emphasizes. I can believe there may well be a general requirement for defining <p> within <p>, but perhaps not so strong a requirement for many other conditions.

> See for instance http://www.w3.org/TR/spec-variability/#optionality (and
> many other parts of the QA framework).

I'm familiar with the material there, but I'm not sure how it supports your case or detracts from what I am saying.

> In general, DWIM is orthogonal to clear and concise specification.

I think you may be using the term "DWIM" differently than I do.
The Wikipedia entry http://en.wikipedia.org/wiki/DWIM
cites http://larry.masinter.net/interlisp-ieee.pdf.

I donated all of my Interlisp manuals to the Computer History Museum, unfortunately, or I'd cite them as earlier references (the IEEE article was an after-the-fact retrospective).
I'll respond to your comments about DWIM once you fill in my 
Wikipedia page.... 

In the meanwhile:

> > Getting all the players to agree requires leadership, and a clear
> > vision of robustness objectives.
> 
> I am always highly suspicious of social mechanisms that require
> leadership and clear vision to function. It seems like an inherently
> broken design to boot — why introduce reliance on something known to be
> so brittle, especially over the long term?

Ensuring, before new capabilities are introduced, that popular receivers do not accept misconfigured variations of the new content adds robustness and reliability.

If there is a *NEW* <video> tag, then we would be much better off requiring receivers of video to signal an error if the content-type of the video stream is incorrect than we would be if we allowed vendors to "sniff" the video type. This is especially true if there are video formats which do not sniff reliably. So here is a case where being "friendly" causes downstream harm.
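
The strict alternative is easy to state (my own sketch, with a hypothetical list of supported types): check the declared Content-Type against the formats the receiver actually supports, and signal an error otherwise, rather than guessing from the bytes:

    # Strict Content-Type checking for a <video> resource (sketch; hypothetical type list).
    SUPPORTED_VIDEO_TYPES = {"video/mp4", "video/webm", "video/ogg"}

    def accept_video(content_type_header: str) -> bool:
        # Strip parameters such as "; codecs=..." before comparing.
        media_type = content_type_header.split(";", 1)[0].strip().lower()
        return media_type in SUPPORTED_VIDEO_TYPES

    print(accept_video("video/webm; codecs=vp8"))  # True
    print(accept_video("text/plain"))              # False -- signal an error, don't sniff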

And there is (or was) no legacy content labeled <video>. Yet now we have tons of *new* content with mislabeled video types, which adds to unreliability, because the video sniffing algorithm is underspecified and doesn't match what implementations do, and vendors who are happy that THEIR browser works on some content won't readily change to match less popular browsers. It's a mess. Fixing it requires leadership. And the lack of leadership is not sustainable; without it we'll be thrown into browser wars 3.

> http://en.wikipedia.org/wiki/DWIM 

gives another example:

In wider application, [DWIM] has drawbacks for security. For example, suppose a server tries to filter HTML in user inputs to prevent cross-site scripting by rejecting all inputs that contain "<script". However, an attacker could input "<sccript", which will pass through the filter unchanged, but the browser will helpfully correct this back to "<script" and thus open itself to attack.
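
A sketch of that scenario (my own toy code, not any real filter or browser):

    # Naive server-side filter: reject anything containing "<script".
    def naive_filter(user_input: str) -> bool:
        """Return True if the input is allowed through."""
        return "<script" not in user_input.lower()

    # Stand-in for a DWIM-ing browser that quietly "corrects" misspelled tag names.
    def dwim_correction(markup: str) -> str:
        return markup.replace("<sccript", "<script").replace("</sccript", "</script")

    attack = '<sccript>alert("xss")</sccript>'

    print(naive_filter(attack))     # True -- the filter lets it through unchanged
    print(dwim_correction(attack))  # <script>alert("xss")</script> -- the hole re-opens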

Larry
 

Received on Tuesday, 2 October 2012 23:58:04 UTC