RE: D-AG0007- reliable, stable, predictably evolvable - v0x1

Mark,

-----Original Message-----
From: Mark Baker [mailto:distobj@acm.org]
Sent: Wednesday, March 13, 2002 10:25 PM
[snip]

My concern was that the granularity of interoperability would be a set
of specifications, rather than individual ones.

As an example, the Web evolved, and continues to evolve, with various
versions of protocols (HTTP 1.0, HTTP 1.1) and data formats (HTML 2.0,
HTML 3.2, HTML 4.0, CSS 1, CSS 2, etc.).  If special treatment were
given to demonstrating the conformance of, say, HTTP 1.0 + HTML 3.2 +
CSS 1.0 (i.e. if that were a C-set), without considering the needs of
those who want to use HTTP 1.1 + HTML 3.2 + CSS 1 (not a C-set), then to
me, that introduces an evolutionary problem.

<sd>
Couldn't this problem be solved by
(1) defining a "backward compatible" standard as one that will interoperate
with components of earlier versions of C-sets (ok, this still needs refining), and
[in the example you cite, if HTTP 1.1 is backward compatible with HTTP 1.0,
then conformance would not suffer, would it? Some relevant notes on this are in [1]]
(2) requiring that all standards define their backward compatibility status?
Thus, in the example, the HTTP 1.1 standard could explicitly state its
"backward compatible" status. In cases where backwards compatibility is
sacrificed for better functionality, that can be stated too, to avoid
confusion. (A rough sketch of what such a declaration might look like is
below.)
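
As a purely illustrative sketch (none of the names or fields below come from
any WG document; they are invented for the example), such a backward
compatibility declaration could even be made machine-readable and checked
mechanically:

# Hypothetical, illustrative only: a machine-readable "backward
# compatibility status" declaration for a specification, as proposed
# in (1) and (2) above.
from dataclasses import dataclass, field

@dataclass
class SpecVersion:
    name: str                                # e.g. "HTTP"
    version: str                             # e.g. "1.1"
    # earlier versions this one claims to interoperate with
    backward_compatible_with: list = field(default_factory=list)
    # compatibility deliberately given up for better functionality
    known_incompatibilities: list = field(default_factory=list)

HTTP_1_1 = SpecVersion(
    name="HTTP",
    version="1.1",
    backward_compatible_with=["1.0"],
)

def is_backward_compatible(spec, older_version):
    """Check a spec's declared compatibility with an earlier version."""
    return older_version in spec.backward_compatible_with

print(is_backward_compatible(HTTP_1_1, "1.0"))  # True, per the declaration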

[soap: I think we do the above informally today, but the pain that customers
of standards and implementers of standards have to go through with this
informal process is just too much. We need to provide some means to reduce
this pain.]
</sd>

Practically, it also places a large burden on small ISVs or open source
developers to develop code that implements these specifications in
parallel so that they can be deployed at the same time.  IMO, this
unnecessarily favours large corporations.

<sd>
I am not sure I understand the argument - does this mean it prevents open
source developers and others from implementing a "standard" that is not yet
defined?
</sd>

Also, re versioning, consider that URIs aren't versioned, despite their
evolution over time with different RFCs (1738, 1808, 2396).  Should the
Web services architecture include such a key architectural component as
a URI, it should not necessarily be versioned in the traditional sense
of the word - it should be required to remain backwards compatible with
previous specifications, but more importantly, deployed software.

<sd>
Point well taken. 
We cannot undo the harm - only prevent harm in the future!
One solution: we may define mappings from W3C versions to the RFCs or other
standards that are already deployed.
</sd>

[snip]

I think "backwards compatible" is a good testable requirement, so we
could add that.  Also, re above, I think it would be useful to actually
*exclude* any mention of C-sets in our work.

<sd>
We are used to thinking about individual standards
that somehow work together in some products. To quantify and promote
interoperable
frameworks, and thus complex products, I think it is beneficial to version "
a set of standards" (with necessary caveats on backwards compatibility, as
you pointed out). 
Besides, interoperability/conformance tests can be carried on multiple
standards,
and on products that implement multiple standards (and multiple versions of
each standard too). I am not yet convinced that we should avoid thinking
about "set of standards."
I would argue that the very notion of a "reference architecture" is based on
standards that are REQUIRED to interoperate. Without a means to define what
it means
to interoperate among multiple standards, how are we going to "conform" to a
reference
architecture? 
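
Purely as an illustrative sketch (the data model and the declared
interoperability facts below are invented for the example, not taken from
any WG document), a conformance check against a reference architecture could
verify that every pair of specs in a deployed set has a declared
interoperability claim:

# Illustrative sketch only: checking whether a deployed set of
# standards (a candidate "C-set") satisfies the pairwise
# interoperability relations a reference architecture requires.
from itertools import combinations

# Hypothetical declared interoperability: (spec, version) pairs known to
# work together, e.g. from published conformance test results.
DECLARED_INTEROP = {
    frozenset([("HTTP", "1.0"), ("HTML", "3.2")]),
    frozenset([("HTTP", "1.1"), ("HTML", "3.2")]),
    frozenset([("HTTP", "1.0"), ("CSS", "1")]),
    frozenset([("HTTP", "1.1"), ("CSS", "1")]),
    frozenset([("HTML", "3.2"), ("CSS", "1")]),
}

def missing_interop(candidate_set):
    """Return the pairs in the candidate set with no declared interop."""
    return [pair for pair in combinations(candidate_set, 2)
            if frozenset(pair) not in DECLARED_INTEROP]

# The combination you mention that is *not* a blessed C-set still checks
# out, because each pairwise claim has been declared:
print(missing_interop([("HTTP", "1.1"), ("HTML", "3.2"), ("CSS", "1")]))  # []

The point is only that once compatibility and interoperability claims are
explicit, a combination that is not a blessed C-set (such as HTTP 1.1 +
HTML 3.2 + CSS 1) can still be checked mechanically against a reference
architecture.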

Sorry it took this long to respond!

Regards,
-Suresh
</sd>

[1]
http://mosquitonet.stanford.edu/~laik/projects/wireless_http/publications/tech_report/html/node4.html
