Re: Working without being ambushed by Ambiguity

I don't really want to get involved in this utterly pointless debate, but just to clarify a couple of issues:

On Jan 30, 2013, at 10:01 AM, Larry Masinter wrote:

>> For me, there are several intertwined issues here, in no particular order:
>> - context
>> - ambiguity
>> - vagueness
>> - sound inference
>> - modalities (? - I mean conflicting or differing interpretations in a common
>> discourse)
>> 
>> What we *have* in the present model theoretic approach is sound inference.
>> In particular, with RDF, the idea that the RDF merge of two (or more) graphs is
>> true under exactly the interpretations that make the original graphs true.  I
>> think this is a key necessity (but not sufficiency) for combining and remixing
>> data on the web through automated processing, and of itself represents an
>> important step forwards from what we had before.  I'm reluctant to let that go.
> 
> I think you can only keep "sound inference" after you've done some kind of
> trust transformation, where the semantics of responses to requests are
> initially posited to not be available for combining and remixing before they
> have been explicitly accepted as trustworthy.

Wrong. Inference and trust are orthogonal. In fact, one way to *determine* whether or not some input is trustworthy is to try drawing sound inferences from it (or perhaps from it plus something that is trustworthy) and see whether they make sense. Inference is just a tool for manipulating and squeezing data to see what it has in it, and "sound" in this context just means keeping your hands clean while you do this, to avoid contaminating it with something else. 
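
To make that concrete, here is a toy sketch (assuming the rdflib and owlrl Python packages; the vocabulary and data are invented for illustration, not anything prescribed in this thread): merge two small graphs, compute the RDFS closure, and see what sound inference squeezes out of the combination.

from rdflib import Graph, Namespace, RDF
import owlrl   # computes RDFS/OWL-RL closures over rdflib graphs

EX = Namespace("http://example.org/")

TTL1 = """
@prefix ex:   <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:Penguin rdfs:subClassOf ex:Bird .
"""

TTL2 = """
@prefix ex:  <http://example.org/> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
ex:opus rdf:type ex:Penguin .
"""

g1 = Graph().parse(data=TTL1, format="turtle")
g2 = Graph().parse(data=TTL2, format="turtle")

# rdflib's "+" takes the union of the triples; a strict RDF merge would also
# standardize blank nodes apart, which doesn't matter here (no bnodes).
merged = g1 + g2

# RDFS closure: every triple it adds is true in every interpretation that
# satisfies both source graphs -- the "sound inference" guarantee.
owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(merged)

print((EX.opus, RDF.type, EX.Bird) in merged)   # True

The added ex:opus rdf:type ex:Bird holds in every interpretation that satisfies both inputs, which is the guarantee Graham is pointing at; whether you then trust either input is a separate question.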

> I see no point in distinguishing between ambiguous assertions and untrustworthy
> ones, and I like having a model where trusting is an explicit part of the interface.
> 
> 
>> Along with this, I think vagueness is somewhat covered by a Quine-like appeal
>> to consideration of statements that people broadly accept as true, if one doesn't
>> get too hung up on exactly *what* is denoted by individual terms, just
>> accepting that they have denotations that satisfy certain properties.
>> 
>> I think that ambiguity of the kind that permits Herbrand style models is
>> something that we should just ignore - it seems to me that trying to exclude
>> this kind of ambiguity in the formal structures leads to the kind of tar-pit
>> we've been wading in.
>> 
>> I *think*, BICBW, the last two points somewhat reflect what Tim was trying to
>> say in his original "without being ambushed by Ambiguity" - so to that extent
>> we may agree.
>> 
>> But what we don't have is a satisfactory, easy to follow story that covers
>> context and modality (if "modality" is the right word to use here).  Which would
>> (should) extend to topics like "slander".
>> 
>> Here, I fear we're being let down by the RDF working group.  They have agreed
>> a structure, RDF Datasets, that is capable of encoding such ideas, but seem
>> unable to come to a consensus on how to provide semantic underpinning for using this
>> structure.

To defend the RDF WG, it is hamstrung by its charter, which was written to be very restrictive. The problem is that any really useful semantics for datasets will change the semantics of RDF graphs. 
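
To see both the structure and the gap concretely, here is a toy sketch (assuming rdflib; the graph names, properties and data are invented): an RDF Dataset in TriG whose default graph makes claims *about* the named graphs. The syntax happily encodes the context-ish structure; what nothing normative tells you is what the triples inside a named graph are taken to assert.

from rdflib import Dataset, URIRef

TRIG = """
@prefix ex:  <http://example.org/> .
@prefix dct: <http://purl.org/dc/terms/> .

# Default graph: statements about the named graphs themselves.
ex:g1 dct:source ex:alice ; ex:status ex:Asserted .
ex:g2 dct:source ex:bob   ; ex:status ex:Disputed .

ex:g1 { ex:moon ex:madeOf ex:rock . }
ex:g2 { ex:moon ex:madeOf ex:cheese . }
"""

ds = Dataset()
ds.parse(data=TRIG, format="trig")

# Nothing in RDF itself says whether ex:g2's contents are asserted, quoted,
# believed-by-Bob, or merely mentioned; that is the missing underpinning.
for s, p, o in ds.graph(URIRef("http://example.org/g2")):
    print(s, p, o)

Two reasonable consumers can disagree about whether the cheese triple has been claimed at all, and any semantics strong enough to settle that tends to say something about what plain graphs mean too - which is where the charter bites.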

>>  IMO, *any* semantic underpinning would be better than none -
>> without it, we're back in the mess we had figuring out reification last time round.  (What I
>> was hoping for is *not* a definitive "this is what datasets mean", but a
>> framework within which one could construct semantics for datasets without
>> fear that the ground would later shift.)

Right, a kind of semantics erector set. I would like that too, but it ain't easy to build. 

>>  There have been several proposals, and at
>> least two that I'm aware of in the life of the current RDF group - including
>> Pat's RDF as context logic - any (or most) of which could serve.
>> 
>> (Personally, I liked the proposal that was made, and apparently rejected, a
>> month or so ago
>> (http://www.w3.org/2011/rdf-wg/wiki/TF-Graphs/Minimal-dataset-semantics).  I
>> have the impression, maybe wrong, that Pat's context logic approach was a bit
>> more constrained, but still flexible enough to support a useful range of
>> modalities.)
>> 
>> Given this much, we would have some basis for actually talking about (or
>> representing) some of the tricky issues that are so hard to discuss in the
>> current "one interpretation to rule them all" view of RDF (and URIs).  We could
>> propose structures that capture belief, provenance (which I come to see can
>> itself be highly contextual), disagreement, debate, conditionality, and so much
>> more.  Maybe then we also have a framework for encoding the theory of
>> speech acts, etc?

Yes, we would have a more flexible tool that would have a lot of utility. No, I don't think it would be able to handle speech acts, except in a very simplistic way. 

>> If we have a way to represent and talk about contextualization, then I think the
>> whole issue of a URI having different interpretations in different contexts (or
>> applications) is something we can accommodate.  That is, it allows us to set out
>> without a presumption of global meaning, yet still exploit the commonalities
>> we can observe.   Within RDF as we currently have it, we're forced to go "out of
>> band", and that makes it hard to really understand each other's difficulties.
>> 
>> ...
>> 
>> As for "attrition", I don't think we're dealing with a belligerent enemy here.
>> 
>> But I do feel like I'm on the rough edge of the grindstone here.  For the most
>> part, I can ignore this stuff in my daily work with RDF:  99% of the time it
>> seems it just doesn't matter.  But I fear if we don't build on sound foundations
>> then sooner or later things will start to crumble.  I care if that's the case,
>> but a lot less than I care about a lot of other things, so my forays into this
>> arena will be of limited energy.  Maybe that's for the best.
>> 
>> #g
> 
> The problem with "for the most part, for 99% of the time, I can ignore
> trust" is that you don't know which 1% of cases you can't. And if you
> can't distinguish in advance between situations where you can trust
> the results and situations where you can't, then you basically have to 
> distrust everything. 

That sounds rather like life. Paranoia is one strategy, indeed. An alternative strategy (look up "tit for tat" and the iterated prisoner's dilemma) is to trust as a default until you discover that you have been screwed, then distrust back. 
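
(For the curious, a throwaway sketch of that strategy - the payoff matrix and the flaky opponent are invented, and this is just the textbook iterated prisoner's dilemma, nothing RDF-specific:)

import random

# Standard prisoner's dilemma payoffs: (my score, their score).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    # Trust (cooperate) on the first move, then mirror the opponent's last move.
    return "C" if not history else history[-1][1]

def flaky(history):
    # An opponent who defects 20% of the time, at random.
    return "D" if random.random() < 0.2 else "C"

history, my_total, their_total = [], 0, 0
for _ in range(100):
    mine, theirs = tit_for_tat(history), flaky(history)
    a, b = PAYOFF[(mine, theirs)]
    my_total, their_total = my_total + a, their_total + b
    history.append((mine, theirs))

print("tit for tat:", my_total, " flaky:", their_total)

Tit for tat cooperates until it is defected against, punishes once, and then goes back to cooperating: trust as a default, with memory.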

Pat


------------------------------------------------------------
IHMC                                     (850)434 8903 or (650)494 3973
40 South Alcaniz St.                     (850)202 4416   office
Pensacola                                (850)202 4440   fax
FL 32502                                 (850)291 0667   mobile
phayes@ihmc.us                           http://www.ihmc.us/users/phayes

Received on Saturday, 22 June 2013 17:36:06 UTC