From: James Cheney <jcheney@inf.ed.ac.uk>

Date: Thu, 21 Jul 2011 23:57:41 +0100

Message-ID: <4E28AEE5.1040807@inf.ed.ac.uk>

To: Graham Klyne <GK@ninebynine.org>

CC: W3C provenance WG <public-prov-wg@w3.org>


On 21/07/11 19:06, Graham Klyne wrote:
> James,
>
> I took a look at the semantics paper you mentioned
> (http://ecommons.library.cornell.edu/bitstream/1813/5828/1/2001-1841.ps).
>
> You mention the relationship to programming language semantics ... I
> see a correspondence to *denotational* programming language semantics,
> but less sure about axiomatic and/or operational approaches there.

I think that's right. As I said during the meeting, it's not clear that
the reasons for wanting a semantics for a provenance language are the
same as the usual ones for a programming language.

> The notion of "Interpretation" you use seems very similar to that used
> in model theory (e.g. for FOL and DLs), but then I think you use it in
> quite a different way. But I'm not sure if that's driven specifically
> by the preservation scenario you address there. I need to think on that.

A little more background: we were specifically trying to untangle the
subjective or domain-specific questions about preservation from some
objective or domain-independent ones. For example, different communities
or users have different expectations about what needs to be preserved
(e.g. to preserve a book, do you need to preserve its atoms, or
accurately scanned images of the pages, or just the Unicode text?). We
don't claim to prescribe answers to these questions; instead, our model
abstracts over the different possible answers, which we model using an
interpretation function.

Incidentally, this seems related to the IVPof issue (as I understand
it), though we did not consider the problem of whether some information
is enough to "uniquely identify" an object.

> The big uncertainty, for me, is what it is that populates the
> "Information content space".
> For denotational programming language semantics (as I understand them)
> you have a space of lambda expressions, and I think there is a
> reasonable notion there of reduction and equivalence-determination -
> at least for those that correspond to computable functions (cf. Dana
> Scott's work from the 1970s?). But when your domain of discourse is
> expanded to things that exist in the wider world, or descriptions of
> them, I'm not sure what would populate this space.

That's precisely why we didn't try to say what is in either the object
or information spaces in the paper - the point of the model is to try to
capture what we mean by preservation independent of the (possibly
distracting) details of a particular domain.

I think the question of what mathematical structure to use is
particularly unsettled in the setting of provenance, and that doesn't
seem like a question we can solve within the context of a WG. Instead,
what I have in mind is to identify the components, and the relationships
and properties such a structure should have (including, possibly,
properties/statements not easily expressible in OWL, if we feel they are
important), and relate these to the concepts and relations in the
conceptual model/ontology.

> Model theory takes an approach of using the interpretation to map
> terms in some language to concepts in an unspecified domain of
> discourse, and identifying those interpretations (i.e. "models") that
> satisfy relations necessary for intended meanings of the language to
> hold. But this assumes as a starting point a notion of a language
> with wffs and variables, which is not the same as the bag-of-bits
> "object state space" suggested by your paper. I don't know if such an
> approach would help with the preservation scenario you address.

It's true that the term "interpretation" used in the paper is not
necessarily the same as that used in model theory - there is no explicit
notion of formulas and variables.
One could instantiate the model in a contrived way so that the notions
align, e.g. taking the "object state space" to be (closed) formulas and
the "information content space" to be Boolean values. But this was not
our aim.

> Ultimately, I think we need to have a (clearer?) sense of what kinds
> of questions we want any formal semantics to help us address.

Agree entirely - one entirely plausible outcome is that, after some more
discussion, we decide that an additional formal semantics/mathematical
model is *not* needed to complement the OWL and conceptual models. My
feeling is that in some cases the conceptual-model definitions and
discussion are already driving us towards more precise definitions. So
another plausible outcome could be that we consciously play the game of
"can we make this more formal" - for just exactly as long as it seems
helpful in clarifying the model. If the result turns out to be a useful
complement to the ontology/schema and conceptual model description,
great! If not, then I certainly don't want to put a lot of effort into
something that isn't actually useful.

--James

> Just some fodder for discussion... :)
>
> #g
> --
>
> James Cheney wrote:
>> Hi,
>>
>> Here:
>>
>> http://homepages.inf.ed.ac.uk/jcheney/pilformalsemantics.pdf
>>
>> are some slides I plan to use to structure today's brief discussion
>> about the "formal semantics" (optional) deliverable during today's
>> meeting.
>>
>> --James

--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

Received on Thursday, 21 July 2011 22:58:28 UTC
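[Editor's note: for readers who find a concrete rendering helpful, the interpretation-function model discussed in the message above can be sketched in code. All names below are illustrative assumptions, not definitions from the cited paper; the sketch only shows how "preservation" can be made relative to a community's chosen interpretation of an object's bag-of-bits state.]

```python
# A minimal sketch of the preservation model discussed above.
# All names are illustrative assumptions, not taken from the paper.
from typing import Any, Callable

State = bytes  # the bag-of-bits "object state space"
# The "information content space" is deliberately left abstract (Any):
# each interpretation function chooses what it maps states into.

def preserves(interp: Callable[[State], Any],
              before: State, after: State) -> bool:
    """An action preserves an object, relative to an interpretation,
    exactly when the information content it assigns is unchanged."""
    return interp(before) == interp(after)

# Two communities, two interpretations of the same migration:
as_bits = lambda s: s                    # preserve the exact bytes
as_text = lambda s: s.decode().strip()   # preserve only the visible text

original = b"Call me Ishmael.\n"
migrated = b"Call me Ishmael."   # migration dropped the trailing newline

print(preserves(as_bits, original, migrated))  # False: the bytes differ
print(preserves(as_text, original, migrated))  # True: the text is intact
```

Nothing here prescribes what the information content space should be; as in the message, the point is only that the model abstracts over the choice of interpretation.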

This archive was generated by hypermail 2.4.0: Friday, 17 January 2020 16:50:57 UTC