- From: L.G. Meredith <lgreg.meredith@gmail.com>
- Date: Thu, 3 Aug 2006 11:05:39 -0700
- To: "Gary Brown" <gary@pi4tech.com>
- Cc: "Kohei Honda" <kohei@dcs.qmul.ac.uk>, "Marco Carbone" <carbonem@dcs.qmul.ac.uk>, "Steve Ross-Talbot" <steve@pi4tech.com>, "WS-Choreography List" <public-ws-chor@w3.org>
- Message-ID: <5de3f5ca0608031105v1b9bb28fjc424641e9799340c@mail.gmail.com>
Gary,

Many thanks for your response; it is exactly the targeted, focused statement of purpose i sought. Below i have taken the liberty to amplify and elaborate on what i hear you saying, to test that i have understood, and to launch yet another probe of Choreography's role in the market.

Best wishes,

--greg

The Other Gap

Imagine a world where we only have 'endpoint' descriptions, as you dub them. In this world there is also a gap. Current implementation technologies are not easily mapped directly onto something against which you could check those endpoint descriptions. In the model-checking world this is akin to the notion of model extraction: you need to extract the model (forgetting some of the application detail) against which you will do analysis, e.g. check for conformance against endpoint descriptions. Choreography technology aims at bridging this gap. It provides a way both to integrate legacy apps and to sketch out an app framework via a model that maps upwards toward static analysis tools and downwards toward extant application code. More specifically, with a choreography in hand i can do analysis on the choreography, like conformance checking, or i can use the choreography to generate code, e.g. a bunch of Java or .net classes, that forms a working application skeleton.

The Next Probe: Keeping Model and Application in Sync

If we are on the same page, then let me launch the next probe. In model extraction (the dual of which is application skeleton generation) there is the issue of keeping the model in sync with the application as the application evolves. If the extraction is automatic, this is relatively straightforward. If, however, as is the case most of the time, it is done by hand or by a mix of automated and manual effort, then drift is a serious issue. Obviously, a similar issue is faced in generating code from a choreography.
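To make the extraction-and-conformance idea above concrete, here is a deliberately toy sketch: an endpoint description modeled as a finite automaton over message labels, model extraction modeled as forgetting payload detail from an application trace, and conformance as trace acceptance. All names and structures here are illustrative assumptions, not drawn from WS-CDL or any other standard.

```python
# Toy sketch of model extraction and conformance checking.
# Assumptions (not from any spec): an endpoint description is a finite
# automaton over message labels; an application trace is a list of
# (label, payload) pairs; extraction forgets the payloads.

from typing import Dict, List, Set, Tuple


class EndpointDescription:
    """A finite automaton over message labels: the published behavior."""

    def __init__(self, transitions: Dict[Tuple[str, str], str],
                 start: str, accepting: Set[str]):
        self.transitions = transitions
        self.start = start
        self.accepting = accepting

    def accepts(self, trace: List[str]) -> bool:
        state = self.start
        for msg in trace:
            key = (state, msg)
            if key not in self.transitions:
                return False  # no transition: behavior not permitted
            state = self.transitions[key]
        return state in self.accepting


def extract_model(app_trace: List[Tuple[str, dict]]) -> List[str]:
    """Model extraction: keep only message labels, forget payload detail."""
    return [label for label, _payload in app_trace]


def conforms(app_traces: List[List[Tuple[str, dict]]],
             endpoint: EndpointDescription) -> bool:
    """Conformance: every extracted trace is accepted by the description."""
    return all(endpoint.accepts(extract_model(t)) for t in app_traces)


# A toy buyer endpoint: send an order, then receive an invoice or a reject.
buyer = EndpointDescription(
    transitions={("s0", "order!"): "s1",
                 ("s1", "invoice?"): "s2",
                 ("s1", "reject?"): "s2"},
    start="s0",
    accepting={"s2"},
)

# Detailed application traces, with payload detail the extraction forgets.
ok_run = [("order!", {"sku": 42}), ("invoice?", {"amount": 99.0})]
bad_run = [("invoice?", {"amount": 99.0})]  # receives before ordering

print(conforms([ok_run], buyer))           # True
print(conforms([ok_run, bad_run], buyer))  # False
```

Note that nothing in this check exposes the extracted model itself: only the endpoint description (`buyer`) need be public, which is the point pressed further below.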
Further, if the target is a modern application framework like Java or .net, then this could become very serious very quickly (can you say inheritance/synchronization anomaly?). A natural way to avoid this issue is to tighten the relationship between model and application. i see two ways to do this. As mentioned above, one such tightening is to have model extraction be automated; but there is another intriguing possibility that i discuss below.

The Role of Choreography in Model Extraction

In the model extraction scenario an application developer writes in something like a C#++ or C## (= D?), and from this application we extract a model and check it against the endpoint descriptions. Now, in this scenario, what is the value of a public description of the specific model? As long as there is some algorithm derived from

- a generic description of the modeling language,
- the generic descriptions of the endpoint description language, and
- optionally, a public description of the model extraction algorithm,

then there can be public confidence in a conformance check of an application against an endpoint description without ever exposing the model extracted from the application. The application logic remains behind the firewall, the endpoint description is posted to a service discovery repository, but the model is never published anywhere. The upshot of this analysis is that in this scenario i believe there is a question about the value of Choreography except as a public specification of the generic modeling language. No one ever need publish a specific choreography (= extracted model).

The Role of Choreography in Executable Specifications

A more radical approach to keeping model and application in sync is to have them be one and the same thing. That is, we devise a language that is at once

- semantically derived from our modeling language,
- semantically rich enough to write real applications, and
- equipped with an execution model performant enough to run them in commercial settings (e.g.
enterprise, desktop, gaming, etc.).

With apologies to the community that originally coined the term, i commandeer the phrase executable specifications for applications written in such a language, and call the language itself an executable specification language. Terminology aside, with this approach there is no model extraction phase as such (unless you see it as part of the conformance analysis -- which is throwing away tons of application detail). The model is the application. In some sense we are back in the world developers know and love -- we don't need no stinking spec -- the code is the spec. Further, there are tremendous benefits from this approach, many of which have to do with using static analysis to directly aid development as the code is being written, a la the way intellisense works. This line of thinking is what has led many to try to develop a \pi-like language rich enough to do application development. But the question is, what is the role of Choreography in this scenario? i submit to you that it must ultimately give rise to such a \pi-like language, either by becoming one or by helping to open the way for such a language to develop and gain adoption in the marketplace. Note, however, that even in this scenario, application code (= model) will certainly not be published. A public description of the executable specification language certainly needs to be published; this would be something like the ECMA spec for C#/.net or the R^5RS spec for Scheme. But for specific service offerings only the endpoint descriptions will be published.

On 8/3/06, Gary Brown <gary@pi4tech.com> wrote:
>
> Hi Greg
>
> Just to respond to your last point, I think there is definitely a place
> for both 'full blown choreographies' representing a global view of a set
> of interacting services, as well as a description of the specific
> behavior associated with a service. They have different roles to play.
>
> The global model provides a design approach to enable the
> responsibilities of each service to be understood with respect to a
> particular business process. This global model can then be used to
> derive the behavior of a new service, or to ensure that existing legacy
> services conform to the required behavior (using the service's
> observable behavior description).
>
> Similarly, as you have described, if a service has its own endpoint
> behavior description, then it would be possible to do a behavioral
> equivalence lookup within a service registry. However, the required
> service behavior could be derived from the choreography.
>
> At the moment Marco and Kohei's type system addresses both of these
> aspects; it is only WS-CDL that is lacking an endpoint representation,
> which, as you have pointed out, would ideally be a simple extension to
> WSDL. But that was not within the charter of the working group.
>
> However, this is one area where abstract BPEL may potentially provide a
> solution, but that does not preclude other more compact notations being
> associated with the WSDL in the future.
>
> Regards
> Gary
>
> L.G. Meredith wrote:
> > Kohei,
> >
> > Many thanks for the edifying remarks. i hope you will allow me to
> > probe on a related front.
> >
> > Suppose we took a different approach to the architecture of WS-CDL, in
> > which what is published is something like (collections of) Kobayashi's
> > usages, (an appropriately XML-ified version of) which could be seen as
> > a simple and direct extension of WSDL, leaving more detailed
> > descriptions of service logic (in which resides potentially
> > proprietary business value) behind the firewall, so to speak.
> >
> > i emphasize publish in this context because i want to call attention
> > to what's of general public interest in a service description. i argue
> > that the primary function of a public service description is search.
> > i submit that in a world where SOA is the norm there will be billions
> > of services, and a service consumer will need to find services that
> >
> > * do what they need them to do (a semantic question)
> > * do it in a way that is compatible with their own needs and practices
> >
> > Further, i argue that the former is far more important than the
> > latter. Consider the case where there is only one provider. The
> > consumer will adapt to any incompatibility. (Note that having only one
> > such provider does not alleviate the search burden. The consumer may
> > not know of the provider, even when there is only one.)
> >
> > i want to argue that a great deal of semantic information can be
> > gleaned from information about behavior. i can provide some
> > interesting and illustrative examples, if you want them. One question,
> > however, is how much information needs to be public to help the
> > consumer address their needs, and what needs to remain behind the
> > firewall to protect hard-won business value?
> >
> > Now, to bring this line of reasoning back to my question above: does
> > the market need a standard to capture full-blown choreographies? Or is
> > there another niche that is likely to see better adoption? For
> > example, consider a small extension to WSDL, like Kobayashi's usages,
> > which can -- in Marco's words, more powerfully and more generally --
> > address compatibility issues (like eliminating deadlock) and increase
> > the information available by which to conduct semantic-based search.
> >
> > Best wishes,
> >
> > --greg
> >
> > On 8/2/06, Kohei Honda <kohei@dcs.qmul.ac.uk> wrote:
> >
> > Hi Greg,
> >
> > It is great to get reactions so quickly.
> >
> > L.G. Meredith wrote:
> > > Marco, Kohei,
> > >
> > > Thanks for clarifying. i was referring to the EPP theorem on page
> > > 85, which i took as the central result of the work. Sorry if i was
> > > a bit elliptical.
> > > My thought was that the end point calculus is actually a type
> > > system, and end point projection was like the calculation of a kind
> > > of minimal type. Then you can see the EPP theorem in the light of
> > > subject reduction.
> >
> > I would rather say that the global calculus and the end point calculus
> > are two different description languages, each with its own typing
> > system, with respective notions of minimum typing.
> >
> > The type structures are common, but the ways descriptions are typed
> > are quite different.
> >
> > The idea is to project each "well-formed" global description to a
> > collection of endpoint processes, which are the code for the
> > participants involved.
> >
> > What the EPP theorem asks is:
> >
> > Do these projected participants interact following the scenario
> > (choreography) which the original global description has laid out?
> >
> > It turns out that, if a global description satisfies certain
> > well-structuredness, or "healthiness", conditions, then
> >
> > (1) there is a very simple endpoint projection, and
> > (2) the correspondence in behaviour is as exact as can be.
> >
> > There are three well-structuredness conditions we have identified. We
> > believe they offer a natural way to do a well-structured global
> > description.
> >
> > I will post a brief discussion on the engineering meaning of this
> > result, but for the time being let us say the EPP theorem offers a way
> > to relate the global description languages and the process calculi.
> > The latter give rigorous theories of behaviours, their properties and
> > composition, while the former offer a useful engineering medium.
> >
> > > i know that you have a separate notion of typing laid out in the
> > > paper, but i tend to think -- much in the way Kohei laid out -- of
> > > towers of typing of increasing strength.
> > > Abramsky gives a good example of such in his Marktoberdorf lecture
> > > with Simon Gay and Raja Nagarajan on types for concurrency.
> >
> > On this point I firmly agree: types and various analyses, including
> > process/program logics, are great tools, especially when we know how
> > to integrate them consistently.
> >
> > > That said, i was trying to draw an analogy between the EPP and
> > > Kobayashi's usages. More specifically, in my mind there is a
> > > connection between the fact that EPP is a function and Kobayashi has
> > > a type inference algorithm -- apart from the practical implication
> > > that the programmer doesn't have to write the type.
> >
> > So the EPP theorem is not so much about type discipline or program
> > analysis, but rather about a basic way to relate two distinct ways of
> > describing interactions. It is like the results on encodings of some
> > calculi into the pi-calculus, saying the encoding fully respects the
> > original dynamics, under certain conditions.
> >
> > These "under certain conditions" are the well-structuredness
> > (healthiness) conditions.
> >
> > I will discuss the general picture further in my coming post.
> >
> > Best wishes,
> >
> > kohei
> >
> > > Best wishes,
> > >
> > > --greg
> > >
> > > On 8/2/06, Marco Carbone <carbonem@dcs.qmul.ac.uk> wrote:
> > >
> > >> This looks like a lot of work. i may be misreading, because i have
> > >> only skimmed, but it looks as though the main theorem is a subject
> > >> reduction-like theorem in which the subject reduction is
> > >> simulation-style (a la Kobayashi's type systems) as opposed to a
> > >> static subject reduction (a la Honda et al.'s type systems).
> > >> Have you seen, therefore, Kobayashi's 2006 types-for-concurrency
> > >> paper
> > >> (http://www.kb.ecei.tohoku.ac.jp/~koba/papers/concur2006-full.pdf)?
> > >
> > > Just to clarify, the paper has three main theorems:
> > >
> > > 1) Subject Reduction for the global calculus type system (session
> > > types), i.e. when the system evolves it is still well typed (e.g. if
> > > I evolves to I' and I is well typed, then I' is also well typed).
> > >
> > > 2) Subject Reduction for the end-point calculus (similar to point 1).
> > >
> > > 3) EPP Theorem, i.e.
> > > a) Type Preservation: a well typed global description is projected
> > > to a well typed global interaction.
> > > b) Completeness: if a global interaction I evolves to I', then its
> > > projection evolves to the "projection" of I' (EPP(I')).
> > > c) Soundness: if a projection EPP(I) evolves to N, then I can
> > > evolve to I' and the projection of I' is "similar" to N.
> > >
> > > 1) and 2) guarantee that working with typed programs is safe (you
> > > never evolve to untyped, i.e. unwanted, things).
> > >
> > > 3) shows that, given a global description (e.g. a WS-CDL
> > > choreography), its end-point projection is good and respects what
> > > the programmer wanted to specify in the choreography.
> > >
> > > Hope this clarifies the key points of the paper.
> > >
> > > Best,
> > > Marco
> > >
> > > P.S.
> > > I didn't read the paper you linked, but I believe it is related to
> > > another of his works, i.e. having CCS processes as types. Session
> > > types are related to this, but Kobayashi's are much more powerful
> > > and loose.
> > >>
> > >> Best wishes,
> > >>
> > >> --greg
> > >>
> > >> On 8/1/06, Steve Ross-Talbot <steve@pi4tech.com> wrote:
> > >>
> > >> Is at:
> > >>
> > >> http://lists.w3.org/Archives/Public/www-archive/2006Aug/att-0000/workingNote.pdf
> > >>
> > >> Please read and comment as soon as possible.
> > >>
> > >> Cheers
> > >>
> > >> Steve T
> > >>
> > >> --
> > >> L.G. Meredith
> > >> Partner
> > >> Biosimilarity LLC
> > >> 505 N 72nd St
> > >> Seattle, WA 98103
> > >>
> > >> +1 206.650.3740
> > >
> > > ---------------------------------------------------------
> > > Marco Carbone
> > >
> > > Dept. of Computer Science
> > > Queen Mary University of London
> > > Mile End Road
> > > E1 4NS London
> > > United Kingdom
> > >
> > > Phone: +44 (0) 207 882 3659
> > > Fax: +44 (0) 208 980 6533
> > > email: carbonem@dcs.qmul.ac.uk
> > > home: http://www.dcs.qmul.ac.uk/~carbonem
> > > ---------------------------------------------------------
> > >
> > > --
> > > L.G. Meredith
> > > Partner
> > > Biosimilarity LLC
> > > 505 N 72nd St
> > > Seattle, WA 98103
> > >
> > > +1 206.650.3740
> >
> > --
> > L.G. Meredith
> > Partner
> > Biosimilarity LLC
> > 505 N 72nd St
> > Seattle, WA 98103
> >
> > +1 206.650.3740

--
L.G. Meredith
Partner
Biosimilarity LLC
505 N 72nd St
Seattle, WA 98103

+1 206.650.3740
Received on Thursday, 3 August 2006 18:05:53 UTC