RE: ISSUE-41/ACTION-97 decentralized-extensibility

On Thursday, October 15, 2009 at 3:49 PM <jonas@sicking.cc> wrote: 
> On Tue, Oct 13, 2009 at 9:57 PM, Tony Ross <tross@microsoft.com> wrote:
> > Using Namespaces:
> >
> > Beyond allowing extended markup to be valid within HTML documents, a
> > couple of other motivations contribute to the desire to utilize
> > namespaces as a solution.
> >
> > The first of these is greater consistency with XML-based documents.
> > Ideally the experience here would be as close to that experienced in
> > XHTML as possible. This is particularly relevant with the introduction
> > of SVG and MathML into HTML since we can fully expect content to be
> > directly pasted in from these document types. Without namespaces,
> > pieces of content that aren't native to SVG or MathML won't behave as
> > expected when accessed from script.
> 
> I will note that HTML has been wildly more successful than XML when it
> comes to web pages, so following XML isn't obviously the right thing
> to do.

The success of HTML is not necessarily tied to its lack of namespace support. I agree that some aspects of XML, such as draconian error handling, are undesirable to port to HTML, but I'm not convinced namespaces fall into this bucket. Furthermore, DOM Consistency with XHTML remains one of the HTML Design Principles.

I fully anticipate that authors will construct HTML 5 documents with namespaces and expect them to work, typically when content has been taken from XML-based formats such as SVG. From an author's point of view, I don't find that expectation to be much of a stretch given that HTML 5 already supports namespaces in the DOM. The HTML 5 parser currently assigns a namespace to every element it creates depending on the parsing context, and certain attributes with auto-mapped namespaces (xlink:title, xml:lang, etc.) can already be created on SVG or MathML elements. Heck, I can dynamically create any element or attribute in any namespace from script, even in an HTML document.
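To make the current state concrete: the DOM calls in question are createElementNS and setAttributeNS. A standalone sketch of the same namespace model, using Python's xml.etree in place of a browser DOM (the example.com URI is a placeholder):

```python
import xml.etree.ElementTree as ET

# A hypothetical extension namespace (placeholder URI).
MY_NS = "http://example.com/my"
XML_NS = "http://www.w3.org/XML/1998/namespace"

# Create an element in a namespace -- the analogue of
# document.createElementNS(MY_NS, "element1").
root = ET.Element("{%s}element1" % MY_NS)

# Set a namespaced attribute -- the analogue of
# setAttributeNS(XML_NS, "xml:lang", "en").
root.set("{%s}lang" % XML_NS, "en")

print(root.tag)  # {http://example.com/my}element1
```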

If we can work around the compatibility risks, why not take the next step and make these two worlds even more consistent with each other?

> > The second motivation is to allow developers to quickly target groups
> > of related extensions without introducing a host of new APIs. Thus a
> > developer can now use getElementsByTagNameNS or CSS namespace selectors
> > to target large swaths of extended content. This ties in even further
> > with the first motivation since this matches the experience a developer
> > would have in XHTML.
> 
> I would actually say that this is even more true for a solution
> like prefixed-based naming, like <example_com_myelement>. I.e. using
> getElementsByTagName and namespace-less selectors is even more
> familiar to developers than their namespaced counterparts.

You're correct that with prefix-based naming a developer could easily access all elements of a single type, but that's not exactly what I meant. My point was to illustrate the ease of accessing a collection of related, but different, element types. For example, in a prefixed world, if I have my_element1, my_element2, etc., I cannot select all of them without resorting to either N queries (one for each name) or a blanket query for all elements filtered manually (once again with a condition for each name). If those elements belonged to the same namespace, I could write a single query using existing APIs (such as getElementsByTagNameNS) to select all of them at once. Furthermore, I could do this from CSS in addition to script.
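A sketch of this namespace-based grouping, again using Python's xml.etree rather than a browser DOM (the my: elements and the example.com URI are hypothetical):

```python
import xml.etree.ElementTree as ET

# A hypothetical extension vocabulary mixed into a host document.
doc = ET.fromstring(
    '<html xmlns:my="http://example.com/my">'
    '<my:element1/>'
    '<body><my:element2/><p/></body>'
    '</html>'
)

# One query for every element in the extension namespace, regardless
# of local name -- the analogue of
# getElementsByTagNameNS("http://example.com/my", "*").
# (The {namespace}* wildcard requires Python 3.8+.)
mine = doc.findall(".//{http://example.com/my}*")
print([el.tag for el in mine])
```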

Of course, this scenario could work with prefixes if query mechanisms let you match only part of a name, e.g. "my_*" to match all elements whose names begin with "my_". To my knowledge this is not possible with existing APIs.
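For contrast, the prefix-based fallback ends up as exactly the blanket-query-plus-manual-filter approach described above; a sketch in Python's xml.etree, with hypothetical my_* names:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<html>"
    "<my_element1/>"
    "<body><my_element2/><p/></body>"
    "</html>"
)

# With prefixed names there is no single-query namespace match:
# walk every element and filter by string prefix by hand.
mine = [el for el in doc.iter() if el.tag.startswith("my_")]
print([el.tag for el in mine])  # ['my_element1', 'my_element2']
```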

> > Compatibility:
> >
> > Many have expressed the opinion that the proposal as stated may or
> > will break the web. I agree that this outcome is a possibility. Rather
> > than rejecting the proposal outright, however, I would prefer to
> > discuss how it can be tweaked to reduce such risk. One possible
> > approach I can see is to scale back the base proposal to be even more
> > like what IE does today.
> >
> > I would also like to come to some consensus on what the tolerance for
> > breakage is. One page? 100 pages? 10,000 pages? Of the billions of
> > pages on the web, certainly any change will break some of them.
> 
> I think it's very hard to put an exact number on this. Or indeed to
> measure an exact number. I do definitely agree that some breakage is
> acceptable though. I'm personally often in the camp that thinks that
> breakage is more acceptable than others. My strategy is generally to
> try to deploy a desired change in alpha and beta releases, and see if
> people complain.
> 
> My experience has actually been that Microsoft has been more
> conservative here, though I'm very happy if that is not the case (or
> no longer is the case). Is microsoft ok with this breakage for the
> default "compatibility mode"? Or only for example if the document uses
> the doctype specified by HTML5, i.e. <!DOCTYPE html>?

I agree that the real challenge is measuring the impact. It's often easier to measure the set of sites that might be affected than the set that actually breaks. A rough threshold could help determine when that list has been narrowed far enough. Of course, as you mentioned, the real measure is whether people complain.

Received on Friday, 16 October 2009 01:37:13 UTC