Re: [widgets] How to divorce widgets-digsig from Elliptic Curve PAG?

As fun as this is, all this mudslinging is really not getting us anywhere useful.

Let's go back and look at the options we have to divorce Widgets/XML Dig Sig from Elliptic Curve:

  1. Remove ECC from XML Dig Sig (in my opinion, "the right thing to do"™):
    
   Pros:
      - frees both XML Dig Sig and Widgets Dig Sig to progress to REC at full speed.
      - begins a pattern of divorcing signature algorithms from processing (a good thing that avoids this kind of mess; see the sketch after this list).

   Cons:
      - a new, small spec is needed.
      - XML Dig Sig would be missing an important algorithm.
  
 2. Pointing to /latest/
   Pros:  
      - Always points to the latest REC.
      - Conformance is always bound to the latest REC.

   Cons:
      - As XML Dig Sig does not include SHA-256, this does not currently help Widgets Dig Sig progress.
      - Conformance is always bound to the latest REC.
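
For what it's worth, here is a minimal sketch of what option 1 buys us. It is purely illustrative (Python, not from any spec; the registry contents and function names are mine, though the algorithm URIs are the standard XML Dig Sig identifiers): the signature processor only locates the SignatureMethod, and the question of which algorithm URIs are acceptable lives in a separately maintained registry. Under that split, adding or removing ECC is a one-line registry change that never blocks the processing spec:

    import xml.etree.ElementTree as ET

    DSIG_NS = "{http://www.w3.org/2000/09/xmldsig#}"

    # Hypothetical algorithm registry: in the proposed split, a small
    # separate spec would own this list, so algorithms (e.g., ECC) can come
    # and go without touching the processing rules implemented below.
    SUPPORTED_ALGORITHMS = {
        "http://www.w3.org/2000/09/xmldsig#rsa-sha1",
        "http://www.w3.org/2001/04/xmldsig-more#rsa-sha256",
    }

    def signature_method(signature_xml):
        """Return the Algorithm URI of the first SignatureMethod element."""
        root = ET.fromstring(signature_xml)
        method = root.find(".//" + DSIG_NS + "SignatureMethod")
        if method is None:
            raise ValueError("no SignatureMethod found")
        return method.get("Algorithm", "")

    def is_supported(signature_xml):
        """Processing stays algorithm-agnostic: just consult the registry."""
        return signature_method(signature_xml) in SUPPORTED_ALGORITHMS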



Kind regards,
Marcos

And because I was told by some that this is very entertaining… though I personally don't want to spend time arguing in circles (and around the main point, which is to progress Widgets Dig Sig), here are some parting words :)

On Tuesday, December 20, 2011 at 5:49 PM, Charles McCathieNevile wrote:

> TL;DR: JC and Leonard are right.

I'm sorry, but they are not.   
> Pointing to a moving target makes any statement about conformance pretty
> much unusable in the real world.

This is also bogus. I manage the conformance suites for WAC and for W3C Widgets, and the fact that we fix tests and patch specs every week is evidence otherwise. Almost all the standards I manage tests for are "done" (i.e., REC or close to it, or the WAC equivalent), yet we continuously fix issues in both the tests and the specs.
> Which is significantly worse than having
> a statement of conformance to something known to contain errors and bugs.

Leaving things broken is not in the interests of users or developers, and it's bad business. One would not claim conformance against a broken test, and one would need to revise a conformance claim if a test is found to be broken after the claim was made (hence, any claim to conformance is temporally bound, not good forever). Furthermore, software updates can introduce regressions, so you constantly need to re-check that your software still conforms: any update you make may break conformance. This is common:

http://my.opera.com/core/blog/2009/10/13/automated-testing-of-the-browser-core  
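
To make that concrete, here is the shape of the check, as an illustrative Python sketch only (run_test, load_baseline, and TESTS are hypothetical stand-ins for a real harness): re-run the suite on every build and diff against the last known-good results, because a "conforms" claim made against last week's build says nothing about this week's:

    def find_regressions(tests, run_test, known_good):
        """Return the tests that passed at the last claim but fail now."""
        regressions = []
        for test_id in tests:
            if known_good.get(test_id) and not run_test(test_id):
                regressions.append(test_id)
        return regressions

    # Usage sketch: block the release (and revise any conformance claim)
    # if this comes back non-empty.
    #   broken = find_regressions(TESTS, run_test, load_baseline("v1.2"))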
> Browsers don't implement "living standards". They implement something  
> undefined for an open environment where people are continually innovating,  
> and they make considerable and expensive efforts to tell the community  
> what it is they implement.

If they implement the HTML5 spec, then they are implementing a living standard. You can dress mutton up as lamb all you want, but the fact remains that HTML5 is a living standard.
> Browsers like Opera who have commercial enterprise customers work with  
> those customers to establish sensible conformance requirements that take  
> into account the evolutionary nature of the web (rather than arbitrary  
> stacks of requirements that happened to be easy to test), but neither side  
> wants a Statement of Work that doesn't actually clarify what is expected  
> to be delivered under a legally binding contract.

I understand this business requirement, but it's flawed (or disingenuous):   

It makes no sense, for instance, to say to a customer "we are going to implement the June 10th version of HTML5", because that version might be missing feature X (or feature X was badly specified, or has since changed, whatever). Also, any conformance tests will have been updated after June 10th, so you are screwed anyway (unless you took a snapshot of the test suite too). Then, when you ship your "June 10" snapshot, developers cry foul because the feature does not match the behavior of the latest version of HTML5. So your customer is pissed off: you delivered what was in the contract, but no one wants to use the software because it's outdated and buggy. You tell your developers, "oh! it's based on an old, outdated spec… sorry guys, we were under contract to deliver something outdated to you… have fun working around it!", and you tell your client, "oh, that's what you paid for. Thanks! Enjoy the mass of angry devs and users, suckers!".

Meanwhile, your competitor did the right thing and kept up with the latest spec: they take all the developers and leave your ecosystem for dead.  

I.e., snapshotting is a nice form of commercial suicide: you will just have to catch up later. Sure, you will make a quick buck, but it will just lead to market failure.

Better to have a maintenance agreement with your customer: "ok! we will try to keep up with HTML5 for the next year… that will be X million dollars, please".
> A lack of stability and precision about which version of a specification  
> is meant also makes it harder to get a decent royalty-free license.


Harder, maybe, but not impossible.  
  
> Details below...
>  
> On Mon, 19 Dec 2011 14:03:58 +0100, Marcos Caceres <w3c@marcosc.com> wrote:
>  
> > I'm sorry, but Leonard is not correct: this is the W3C, not ISO.
>  
> Yes, so far so good...
>  
> > ISO is a real "standards body" (i.e., can be legally binding for  
> > governments). W3C is a business/community "consortium" (i.e., not a  
> > legal standards body and specs are not legally binding): W3C makes  
> > "recommendations", which are not (and should not be) legally binding.
>  
>  
> So what? In practice, people make contracts (legally binding documents)
> that include such terms as "conformance to...".

At the W3C, the conformance suite is always changing out from under your feet (a good thing!), so "conformance to…" will always be a moving target.
  
> > > In ISO specs, undated references are forbidden. There is a team of
> > > people (called ITTF) whose job includes checking these things and
> > > bugging spec editors to fix them.
> >  
> >  
> >  
> > Yes, but this is not ISO. And just because they operate in that manner,  
> > it also doesn't mean that ISO is right.
>  
> True, but it also doesn't mean they are wrong.

It's pretty easy to show, statistically and historically, that tests and specs are constantly updated, removed, added, and fixed (except for dead specs and test suites, which no one cares about).
> > > There is such a thing as certification. It is impossible to do if the
> > > spec is not fixed, including references.
> >  
> >  
> > What if there is a bug in the spec? or a test is wrong and it's fixed  
> > after someone has claimed compliance?
>  
>  
>  
> This happens all the time, and has done for decades.
>  
> What happens is that by dating explicit versions, you can deal with a known
> state.

If a section is missing or in error, all you can do is innovate (which defeats the point of a standard) or go and ask the spec editor to finish that section… and either way you are screwed, because you already said you were going to implement the dated draft.
> Something that conforms to the earlier, buggy version, or to the
> later better (but almost certainly not bug-free, as you seem to freely
> acknowledge) version, have a set of known properties. This is helpful when
> you are trying to include them as *parts* of an ecosystem.

If you are under contract to deliver the broken version, then you are kinda screwed (I won't reiterate what I said above).
> > > What you are advocating is entirely counterproductive given the source
> > > of the discussion (= a PAG): if the spec has undated references, you
> > > cannot make sure it is royalty-free.
> >  
> >  
> >  
> > Yes you can: the /latest/ always points to the latest REC. REC is  
> > royalty free.
>  
>  
>  
> No, that is not guaranteed. Under the current process any substantive
> change between last call and Recommendation means, as far as I can tell
> from carefully reading what gets licensed, that the change cannot be
> assumed to be covered by the patent license unless it happened to be in
> the first public working draft. Even in that situation I can see a legal
> argument that the technology is not covered by any license. And of course
> *until* Recommendation, there is no license.


Again, that's why the /latest/ always points to the latest REC.  

> > > If the scope of one reference
> > > changes, there is a new risk. It is not only a problem of conformance
> > > testing.
> >  
> >  
> >  
> > Not if the /latest/ always points to a REC (or a periodical snapshot  
> > where IPR commitments to RF have been made).
>  
>  
>  
> If you always ensure you are pointing to a particular version of
> reference, over which you guarantee that a commitment has been made, then
> you are correct. I haven't heard that condition even suggested (let alone
> specified in a practical manner) for any "living standard" before.


It was alluded to as part of:
http://lists.w3.org/Archives/Public/public-w3process/2011Nov/0030.html
  
> > > Your vision of "fluid" standards is completely unmanageable in practice.
> >  
> >  
> > Yet, somehow, every browser vendor manages? Seems like an enigma.
>  
> Actually, I dispute that *any* browser vendor has achieved this. After
> years of effort and refinement, most browsers have pretty good coverage of
> CSS 2.1 - precisely because it stopped being a "living standard" and
> became a stable target.  

I think you are misunderstanding what a living standard is (or maybe I am?). A living standard, for me, is one that has defined boundaries/scope (e.g., DOM4, CORS, HTML5, XHR, Widgets). HTML5 does not define what is in DOM4. DOM4 does not define mouse events (those are in DOM3 Events), etc. They all build on each other. Where new functionality is needed, a new spec is created.

So, by living standard I mean one that can be fixed if there is a bug in it (and, in extreme cases, have new features added to it over time, but in a backwards-compatible way).

HTML5 is an exception because all previous versions of HTML were not fully specified (i.e., there was an absolute need for a monster spec to tame the beast that is the Web platform). But look at how concise DOM4 is vs DOM3 Core. DOM4 lives to give us a basis on which to build other specs in a living way, without overstepping its boundaries.
  
> On the contrary, anyone who claims they conform to
> HTML5 is either stupid, since that is a meaningless claim while the
> definition of HTML5 is known to be in flux, or thinks we are stupid enough
> to believe it. I don't believe any browser makes, nor could make without
> being laughed at, such a claim.

Agreed. And they have been laughed at, for instance:  
http://forums.appleinsider.com/showthread.php?t=114401


> What browser vendors do is implement pieces of what they think HTML5 will
> be, changing them as the spec changes in an effort to find emergent
> patterns from what other browser developers, content producers with big
> enough market share to insist on people doing what they want, and hope
> (forlornly in many cases) that the developers who relied on what happened
> before will actually change their code, so the browser can minimise the
> multiple compatibility paths it has to maintain in order to work on the
> Web.

Yes…
>  
> By comparison, conformance to XML in its various editions is a
> well-defined state. It is true that for conformance to a given version of
> XML, some things are uncertain. The uncertainty is sufficiently annoying
> that people spend the money to fix the spec and make a new version, but it
> is also generally sufficiently bounded that people can build
> zillion-dollar businesses on top of the existing stack.

Yes, but this only happened because XML was done well from the start (unlike HTML before 5)… when XML was going through the REC track, businesses would have been developing in parallel (as they do with HTML5). If I were going to develop a competing product based on XML 6th Edition, then I would be tracking XML changes and updating my software daily, so that when the 6th Edition goes to REC, I'd have something on the market that very day!

I would not sit around waiting for XML 6th Edition to magically become a REC before implementing it (hence, there is no well-defined state for XML… which is evident from the fact that there are 5 (!!!!) editions): there is only XML, which continues to live and get fixed.

Ok, so clearly we have some form of agreement here: both you and I want what XML does; the only difference is that I don't want "Editions". I just want XML (with a change log) that gets updated regularly to fix bugs.

> I think it makes a lot of sense to recognise that many pieces of the
> platform need to be upgraded periodically until we decide to jettison
> them, or they are so deeply encrusted into the infrastructure that
> changing them is no longer feasible. But a model that provides fluid
> changeover of dependencies actually implies a pretty high cost of
> management, and I don't see that it is justified.


I think we all agree on that.  
  
> A model that does it in
> a more step-wise fashion is what we already have, and naturally
> re-examining the particular conditions and mechanisms for changing the
> steps is a sensible thing to do, but while I conclude from your premises
> that we should be doing that, it seems to me that you conclude we should
> simply abandon the idea of this crabwise march and let it all flow.
>  
> cheers
>  
> Chaals
>  
> --  
> Charles 'chaals' McCathieNevile Opera Software, Standards Group
> je parle français -- hablo español -- jeg kan litt norsk
> http://my.opera.com/chaals Try Opera: http://www.opera.com
