Re: D-AG0007- reliable, stable, predictably evolvable - v0x1

From: Mark Baker <distobj@acm.org>
Date: Mon, 25 Mar 2002 21:14:38 -0500 (EST)
Message-Id: <200203260214.VAA02514@markbaker.ca>
To: Suresh_Damodaran@stercomm.com (Damodaran, Suresh)
Cc: www-ws-arch@w3.org
Hi Suresh,

> <sd>
> Couldn't this problem be solved by 
> (1) defining a "backward compatible" standard
> as one that will interoperate with components of earlier versions
> of C-sets (ok, need to refine this), and 
> [in the example you cite, if HTTP 1.1 is backward compatible with HTTP 1.0,
> then conformity will not suffer? Some relevant notes on this are in [1]] 

Right.  But this relates to a single entity, HTTP, not a set of
entities such as a C-Set.

> (2) requiring all standards to define their backward compatibility status?
> Thus, in the example, the HTTP 1.1 standard could explicitly state its
> "backward compatible" status. In cases where backwards compatibility is
> sacrificed for better functionality etc., that can be stated too, to
> avoid confusion.

Excellent.  So what is the value of *us* defining a C-set if this is
already required of each standard?

By "us", I mean that somebody else might want to do this for their own
purposes.  But I wonder what value it adds to a reference architecture.
More on this below ...

> Practically, it also places a large burden on small ISVs or open source
> developers to develop code that implements these specifications in
> parallel so that they can be deployed at the same time.  IMO, this
> unnecessarily favours large corporations.
> <sd>
> I am not sure I understand the argument - does this mean it prevents open
> source developers etc. from implementing a "standard" that is not yet
> defined?
> </sd>

No, just that if some company or organization goes out and promotes
"C-Set #149" such that customers think they need it, then small or
independent developers will be at a disadvantage, because they will be
forced to implement the whole thing rather than just focusing on one or
two pieces of that C-set.

> <sd>
> Point well taken. 
> We cannot undo the harm - only prevent harm in the future!
> One solution - we may define mappings from W3C versions to RFCs or
> other standards that are deployed.
> </sd>

Sounds good.

> [snip]
> I think "backwards compatible" is a good testable requirement, so we
> could add that.  Also, re above, I think it would be useful to actually
> *exclude* any mention of C-sets in our work.
> <sd>
> We are used to thinking about individual standards
> that somehow work together in some products.  To quantify and promote
> interoperable frameworks, and thus complex products, I think it is
> beneficial to version "a set of standards" (with necessary caveats on
> backwards compatibility, as you pointed out). 
> Besides, interoperability/conformance tests can be carried out on
> multiple standards, and on products that implement multiple standards
> (and multiple versions of each standard too).  I am not yet convinced
> that we should avoid thinking about a "set of standards."

Well, I'm pretty convinced of the opposite.  So we'll have to leave it
at that until somebody else chimes in. 8-)

> I would argue that the very notion of a "reference architecture" is based
> on standards that are REQUIRED to interoperate.  Without a means to define
> what it means to interoperate among multiple standards, how are we going
> to "conform" to a reference architecture? 

I agree that a reference architecture *starts* with what could be
considered a C-set.  What I disagree with is the claim that it should
*evolve* in that same manner.

> Sorry, it took this long to respond!

Not a problem.  At least you did! 8-)

Mark Baker, Chief Science Officer, Planetfred, Inc.
Ottawa, Ontario, CANADA.      mbaker@planetfred.com
http://www.markbaker.ca   http://www.planetfred.com
Received on Monday, 25 March 2002 21:09:28 UTC