
RE: D-AG0007- reliable, stable, predictably evolvable - v0x1

From: Damodaran, Suresh <Suresh_Damodaran@stercomm.com>
Date: Mon, 25 Mar 2002 17:52:10 -0600
Message-ID: <40AC2C8FB855D411AE0200D0B7458B2B07C593E0@scidalmsg01.csg.stercomm.com>
To: "'Mark Baker'" <distobj@acm.org>
Cc: www-ws-arch@w3.org

-----Original Message-----
From: Mark Baker [mailto:distobj@acm.org]
Sent: Wednesday, March 13, 2002 10:25 PM

My concern was that the granularity of interoperability would be a set
of specifications, rather than individual ones.

As an example, the Web evolved, and continues to evolve, with various
versions of protocols (HTTP 1.0, HTTP 1.1) and data formats (HTML 2.0,
HTML 3.2, HTML 4.0, CSS 1, CSS 2, etc.).  If special treatment were
given to demonstrating the conformance of, say, HTTP 1.0 + HTML 3.2 +
CSS 1 (i.e. if that were a C-set), without considering the needs of
those who want to use HTTP 1.1 + HTML 3.2 + CSS 1 (not a C-set), then to
me, that introduces an evolutionary problem.

Couldn't this problem be solved by
(1) defining a "backward compatible" standard as one that interoperates
with the components of earlier versions of C-sets (this definition still
needs refinement), and
[in the example you cite, if HTTP 1.1 is backward compatible with HTTP 1.0,
then conformance need not suffer? Some relevant notes on this are in [1]]
(2) requiring that all standards declare their backward-compatibility status?
Thus, in your example, the HTTP 1.1 standard could explicitly state its
"backward compatible" status. In cases where backwards compatibility is
traded off for better functionality etc., that can be stated too, to avoid
confusion.

[ soap: I think we kind of do the above informally already, but the pains
that standards, and implementations of standards, have to go through with
this informal process are just too much. We need to provide some means to
reduce this pain.]
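
To make (2) a bit more concrete, here is a rough sketch of what a
machine-readable backward-compatibility declaration might look like, and how
it could be used to check a substitution within a C-set. All of the names,
the format, and the compatibility claims below are purely illustrative
assumptions, not declarations taken from any specification:

# Hypothetical table: each standard/version declares which earlier
# versions it claims to interoperate with (illustrative claims only).
COMPAT_DECLARATIONS = {
    ("HTTP", "1.1"): {"compatible_with": [("HTTP", "1.0")],
                      "notes": "new features optional; 1.0 clients still work"},
    ("HTML", "4.0"): {"compatible_with": [("HTML", "3.2"), ("HTML", "2.0")],
                      "notes": ""},
    ("CSS", "2"):    {"compatible_with": [("CSS", "1")],
                      "notes": ""},
}

def interoperates(newer, older):
    """True if `newer` is the same version as `older`, or explicitly
    declares backward compatibility with it."""
    decl = COMPAT_DECLARATIONS.get(newer, {})
    return newer == older or older in decl.get("compatible_with", [])

# Replacing HTTP 1.0 with HTTP 1.1 in an existing deployment:
assert interoperates(("HTTP", "1.1"), ("HTTP", "1.0"))
assert not interoperates(("HTTP", "1.0"), ("HTTP", "1.1"))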

Practically, it also places a large burden on small ISVs or open source
developers to develop code that implements these specifications in
parallel so that they can be deployed at the same time.  IMO, this
unnecessarily favours large corporations.

I am not sure I understand the argument - does this mean it prevents open
source developers etc. from implementing a "standard" that is not yet part
of a C-set?
Also, re versioning, consider that URIs aren't versioned, despite their
evolution over time with different RFCs (1738, 1808, 2396).  Should the
Web services architecture include such a key architectural component as
a URI, it should not necessarily be versioned in the traditional sense
of the word - it should be required to remain backwards compatible with
previous specifications, but more importantly, with deployed software.

Point well taken.
We cannot undo the harm - only prevent harm in the future!
One solution: we could define mappings from the W3C versions to the RFCs or
other specifications that are already deployed.
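
For illustration only, such a mapping could be as simple as a table keyed by
architectural component; the URI entry below just lists the RFCs cited above,
and the structure itself is only an assumption about how a mapping might be
recorded:

# Hypothetical mapping from an architectural component to the
# already-deployed specifications it must stay compatible with.
DEPLOYED_SPECS = {
    "URI": ["RFC 1738", "RFC 1808", "RFC 2396"],  # evolved without version numbers
}

def deployed_specs_for(component):
    """Return the deployed specifications a component must remain compatible with."""
    return DEPLOYED_SPECS.get(component, [])

print(deployed_specs_for("URI"))  # ['RFC 1738', 'RFC 1808', 'RFC 2396']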


I think "backwards compatible" is a good testable requirement, so we
could add that.  Also, re above, I think it would be useful to actually
*exclude* any mention of C-sets in our work.

We are used to thinking about individual standards
that somehow work together in some products. To quantify and promote
frameworks, and thus complex products, I think it is beneficial to version
"a set of standards" (with the necessary caveats on backwards compatibility,
as you pointed out).
Besides, interoperability/conformance tests can be carried out on multiple
standards, and on products that implement multiple standards (and multiple
versions of each standard too) - a rough sketch follows below. I am not yet
convinced that we should avoid thinking about a "set of standards."
I would argue that the very notion of a "reference architecture" is based on
standards that are REQUIRED to interoperate. Without a means to define what
it means to interoperate among multiple standards, how are we going to
"conform" to a reference architecture?
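
Here is the rough sketch mentioned above: a toy conformance check against a
versioned set of standards, in which a newer component that declares backward
compatibility may stand in for an older one. The C-set contents and the
compatibility table are illustrative assumptions, not defined terms:

# Hypothetical versioned C-set and compatibility declarations.
CSET_1 = {("HTTP", "1.0"), ("HTML", "3.2"), ("CSS", "1")}
BACKWARD_COMPATIBLE = {("HTTP", "1.1"): {("HTTP", "1.0")}}

def satisfies(implemented, required):
    """A spec satisfies a requirement if it matches exactly or declares
    backward compatibility with the required version."""
    return (implemented == required
            or required in BACKWARD_COMPATIBLE.get(implemented, set()))

def conforms(product_specs, cset):
    """Every spec in the C-set is covered, exactly or via a
    backward-compatible newer version."""
    return all(any(satisfies(impl, req) for impl in product_specs)
               for req in cset)

# A product shipping HTTP 1.1 + HTML 3.2 + CSS 1 still conforms to CSET_1,
# because HTTP 1.1 declares compatibility with HTTP 1.0.
product = {("HTTP", "1.1"), ("HTML", "3.2"), ("CSS", "1")}
assert conforms(product, CSET_1)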

Sorry, it took this long to respond!

