
Re: Yorick Wilks on Semantic Web & httpRange-14

From: Henry Story <henry.story@bblfish.net>
Date: Thu, 17 May 2012 00:50:56 +0200
Cc: Larry Masinter <masinter@adobe.com>, "ywilks@ihmc.us" <ywilks@ihmc.us>, "www-tag@w3.org" <www-tag@w3.org>, "Harry Halpin (hhalpin@ibiblio.org)" <hhalpin@ibiblio.org>
Message-Id: <9F3F5FEE-B0FC-4B57-AC8D-FA0B78651488@bblfish.net>
To: David Booth <david@dbooth.org>

On 16 May 2012, at 23:39, David Booth wrote:

> Hi Henry,
> The distinction I was making was not between syntax and semantics
> (though ultimately one person's syntax is another person's semantics),
> but between meaning and sets of assertions -- particularly the meaning
> of a URI.  It's awfully hard to nail down the *meaning* of something,
> but it's easy to talk about sets of assertions -- including sets of
> assertions that constitute URI definitions -- and assertions are the
> currency of semantic web applications.
> Fortunately, in engineering the architecture of the semantic web, there
> is no need to delve into the meaning of a URI.  That can and should be
> completely out of scope.  There *is* a need to talk about URI
> definitions -- how to provide them, where and how to find one for a
> given URI, etc.  But the question of what those definitions mean can be
> left to the discretion of the people and applications that use them,
> thus greatly simplifying our engineering task.
> That's what I was trying to get at.

I think we agree. URIs just refer to resources. Indeed that is what they
are said to do. (Uniform/Universal Resource Identifiers)

Sometimes those resources are documents, and those things can have
meaning (in David Lewis' semantics: the set of possible worlds in which
they are true). Sometimes those resources are 4-dimensional spacetime
things like people, or email boxes, or books (ISBN URNs), or concepts
such as http://www.w3.org/1999/02/22-rdf-syntax-ns#type

URIs just refer. The things they refer to may or may not have interpretation functions.
On the web we give a hint as to the interpretation function required to parse
the bytes returned, with a mime-type header.

The URI specification is very clear about #tag URIs (RFC 3986, section 3.5):

The fragment identifier component of a URI allows indirect identification of a secondary resource by reference to a primary resource and additional identifying information.  The identified secondary resource may be some portion or subset of the primary resource, some view on representations of the primary resource, or some other resource defined or described by those representations.  A fragment identifier component is indicated by the presence of a number sign ("#") character and terminated by the end of the URI.

So if the mime type allows it, as RDF serialisations do, then a #uri can refer to resources that are not document parts but other things defined or described in the document.

That is exactly how WebID works. If I have a document <http://bblfish.net/> in which the following appears:

@prefix cert: <http://www.w3.org/ns/auth/cert#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

<#me> foaf:name "Henry Story";
      cert:key [ cert:modulus "CCA5830A..."^^xsd:hexBinary;
                 cert:exponent 65537 ] .

Then that is saying, in effect, that the referent of <http://bblfish.net/#me> is the person
who controls the private key for that given public key. If I connect to some service and prove
cryptographically that I am the owner of the corresponding private key, then the service I connect
to knows that the person at the other end of the connection is <http://bblfish.net/#me>, and if
it knows that

  <http://bblfish.net/#me> owl:sameAs <http://bblfish.net/people/henry/card#me> .

Then it can merge all the properties of the one onto the other (Leibniz's law: identicals
are indiscernible).
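
As a toy illustration of that merge step (not any real RDF library's API; the graph representation and the second person's data are made up for the example), one can sketch it like this, modelling a graph as a map from subject to predicate to a set of objects:

```python
# Toy sketch of the owl:sameAs merge: once two URIs are known to name the
# same resource, the properties asserted under either name can be pooled.
# Graphs are modelled as {subject: {predicate: set(objects)}} - an
# illustration only, not a real RDF store.

def same_as_merge(graph, a, b):
    """Fold the properties of node b into node a (Leibniz's law)."""
    target = graph.setdefault(a, {})
    for pred, objs in graph.pop(b, {}).items():
        target.setdefault(pred, set()).update(objs)
    return graph

g = {
    "http://bblfish.net/#me": {"foaf:name": {"Henry Story"}},
    # hypothetical data at the second URI, for illustration:
    "http://bblfish.net/people/henry/card#me": {"foaf:knows": {"http://example.org/#alice"}},
}
same_as_merge(g, "http://bblfish.net/#me", "http://bblfish.net/people/henry/card#me")
# the #me node now carries both foaf:name and foaf:knows
```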

I think all this is pretty clear. It builds nicely on URIs, reference, and interpretation
functions for documents.
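
The whole WebID flow above can be sketched as follows, with the TLS handshake and the fetching/parsing of the profile document stubbed out as plain data; all the names here are mine and purely illustrative, not from any specification:

```python
# Minimal sketch of the WebID verification flow. In a real deployment the
# client key comes from the TLS handshake and the profile's cert:key
# triples are fetched and parsed as RDF; here both are stubbed.

def webid_verified(claimed_webid, proved_key, fetch_keys):
    """True iff the profile at claimed_webid lists the public key whose
    private half the client proved possession of during the TLS handshake."""
    return proved_key in fetch_keys(claimed_webid)

# Stub: what parsing <http://bblfish.net/> might yield as (modulus, exponent) pairs
def fake_fetch(webid):
    return {("CCA5830A...", 65537)}

client_key = ("CCA5830A...", 65537)  # proved over TLS
print(webid_verified("http://bblfish.net/#me", client_key, fake_fetch))  # prints True
```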

Now I am not sure if there is a disagreement here with anyone. :-)


> David
> On Wed, 2012-05-16 at 22:35 +0200, Henry Story wrote:
>> On 15 May 2012, at 18:14, Henry Story wrote:
>>> On 15 May 2012, at 17:42, David Booth wrote:
>>>> The biggest mistake ever made in the semantic web community was to call
>>>> it the *semantic* web, because that term misleads people into thinking
>>>> that it is about semantics or meaning, when in fact it is simply about
>>>> facilitating machine processing.  
>>>> As interesting as Yorick Wilks's talk sounds, I worry that these
>>>> discussions of "meaning" will again mislead people into thinking that we
>>>> need to solve such philosophical inquiries in order to properly engineer
>>>> the semantic web.
>>> I agree. LinkedData sells better, and it essentially captures what the logicians had missed about the importance of the semantic web. BUT, the pendulum having swung over to linked data, we can now go the other direction and explain why the semantic in "semantic web" is not a misnomer either.
>>> So it is not a misnomer to call it the semantic web. It is just that most developers have been trained to think syntactically (I was one of them, in the java camp),  leading them into these syntactic religious wars. The semantic web is clearly not the syntactic web, since I can express the same information in  rdf/xml as in turtle, or json-ld, or a GRDDLable XML. 
>>> [ If you think of this in terms of translation like that I think you move into the Donald Davidsonian manner of thinking (roughly, at least it's  an interesting thing to explore). If instead you then feel that one may as well just reify meaning and say all those translations have something in  common (meaning) then David Lewis has some great mathematics for you. ]
>>> In the presentation "Philosophy of the Social Web" (slide 48) I try to show how the network effect allows such systems to become metastable. Merging information allows many more relationships to be formed than keeping it isolated, and so allows one to make much more use of the information at one's disposal. The simplest of these inferences is the one on owl:sameAs, i.e. identity statements. On slide 48 I point to P. F. Strawson, who wrote in 1975:
>>> "an identity sentence A=B is informative by merging the contents of two information folders, one with the word A, the other with the word B"
>> Google's just-released video of its Knowledge Graph is an example of the power of this:
>>  http://www.youtube.com/watch?v=mmQl6VGvX-c&feature=youtu.be
>> this is the beginning of the pendulum moving back towards semantics - though with Google this is
>> done by applying statistical knowledge to linked data information: just as it made
>> its name and beat AltaVista - the search engine I used to work at - because it used the links
>> between pages where none of the other search engines deemed it possible, so here it can start using
>> structured information to dramatically improve its search results.
>> Statistically driven semantics does not of course mean that all semantics works statistically.
>> When I go into the bank and ask for 200 I don't ask for it 20 times - I ask for it once. Search engines
>> work statistically because they must work at that level, given the amount of information they have to
>> process.
>>> This information merging is very valuable. It is best understood by thinking of the two names as referring to the same thing. But you can also think about it operationally, as shifting strings around. It is easier, and I think correct, to think of it in terms of semantics, as that helps bring the world into the picture and allows us to make judgements of truth and falsity (operationally: do we want to merge the information with our beliefs, moving us then to act on those beliefs in ways that will have consequences we do not control?)
>>>> David Booth
>>>> P.S. "Linked data" seems like a good alternate term these days.
>>>> On Mon, 2012-05-14 at 20:11 +0200, Henry Story wrote:
>>>>> On 8 May 2012, at 19:43, Larry Masinter wrote:
>>>>>> I saw the notice of a talk (abstract below) on the philoweb list.
>>>>> The issues raised seem quite related to the difficulties I have had
>>>>> with the use of URIs as the means by which assertions expressed in the
>>>>> semantic web are grounded in the world so that they become assertions
>>>>> about the real world; the difficulty is with "agreed meanings for
>>>>> terms".  These difficulties (IMO) underlie the controversies around
>>>>> previous W3C TAG "findings" on "the range of HTTP".
>>>>>> Lately, I've been trying to argue that we will make more progress on
>>>>> issues of pressing concern around web security, provenance, trust,
>>>>> certificates, and other issues, if we move away from talking about
>>>>> "meaning" and instead focus a model in which trust, belief, identity,
>>>>> persistence are explicit.
>>>>> I think those two are not at all incompatible.
>>>>> It is true that meaning is one of those concepts that many philosophers have had trouble with, not
>>>>> least Willard Van Orman Quine, who thought talk of meaning was talk of ghostly entities and who rejected
>>>>> such talk outright. A number of answers to his scepticism were presented, not least by his very well
>>>>> known students Donald Davidson and David Lewis. Donald Davidson argued that sentences about meaning
>>>>> could be replaced by theories of truth conditions a la Tarski, and the building of theories of interpretation
>>>>> for a language.
>>>>> David Lewis made meanings much more real by remapping them in terms of possible worlds (or, if you feel
>>>>> those to be too weird, sets of coherent sentences). Possibilities are never far behind talk of meaning.
>>>>> I go into those in a bit more detail in my "Philosophy of the Social Web" 
>>>>> http://bblfish.net/tmp/2010/10/26/
>>>>> One can also just accept that we have some concept of meaning, and move on, as you suggest, to other
>>>>> themes such as provenance, trust etc. Those require one to take the
>>>>> speaker (or the publisher) into account more carefully, and so bring in speech acts, about which Searle
>>>>> has recently produced a book (mentioned in the presentation above) in which he argues that speech acts
>>>>> are the cornerstone of human civilisation.
>>>>> Provenance and trust are indeed very important, and would be extremely useful for the Web. I recently
>>>>> gave a presentation at the European Identity Conference on how linked data can
>>>>> provide the tools to build this, "WebID and eCommerce", which had some very positive
>>>>> reactions on the IETF TLS mailing list:
>>>>> http://bblfish.net/blog/2012/04/30/
>>>>> Here trust is built by seeing:
>>>>> 1) that institutions form social networks (as explained by Searle)
>>>>> 2) that one can build such distributed nation/commerce/legal/institutional
>>>>> social networks with linked data
>>>>> 3) that one can anchor one's trust in such a social network in very
>>>>> flexible ways, without requiring a central trust agency.
>>>>> Henry 
>>>>>> Thanks,
>>>>>> Larry
>>>>>> --
>>>>>> http://larry.masinter.net
>>>>>> ====================
>>>>>> from https://lists-sop.inria.fr/sympa/arc/philoweb/2012-05/msg00000.html
>>>>>> ==================
>>>>>> The Semantic Web: meaning and annotation
>>>>>> Yorick Wilks
>>>>>> Florida Institute of Human and Machine Cognition.
>>>>>> The lecture discusses what kind of entity the Semantic Web (SW) is, in terms of the relationship of natural language structure to knowledge representation (KR). It argues that there are three distinct views on the issue: first, that the SW is basically a renaming of the traditional AI knowledge representation task, with all the problems and challenges of that task. If that is the case, as many believe, then there is no particular reason to expect progress in this new form of presentation, as all the traditional problems of logic and representation reappear and it will be no more successful outside the narrow scientific domains where KR seems to work even though the formal ontology movement has brought some benefits. The paper contains some discussion of the relationship of current SW doctrine to representation issues covered by traditional AI, and also discusses issues of how far SW proposals are able to deal with difficult relationships in parts of concrete science.
>>>>>> Secondly, there is a view that the SW will be the WorldWideWeb with its constituent documents annotated so as to yield their content or meaning structure more directly. This view of the SW makes natural language processing central as the procedural bridge from texts to KR, usually via a form of automated Information Extraction. This view is discussed in some detail and it is argued that this is in fact the only way of justifying the structures used as KR for the SW.
>>>>>> There is a third view, possibly Berners-Lee's own, that the SW is about trusted databases as the foundation of a system of web processes and services, but it is argued that this ignores the whole history of the web as a textual system, and gives no better guarantee of agreed meanings for terms than the other two approaches. The lecture also touches on how the above viewpoints relate to the basic issue of how elements of the SW gain meaning, and the views of Halpin and others are discussed. There are also some reflections on the origins of the SW in Berners-Lee's own thinking and whether the SW was what he intended all along when the WWW was first set up.
>>>>> Social Web Architect
>>>>> http://bblfish.net/
>>>> -- 
>>>> David Booth, Ph.D.
>>>> http://dbooth.org/
>>>> Opinions expressed herein are those of the author and do not necessarily
>>>> reflect those of his employer.
>>> Social Web Architect
>>> http://bblfish.net/
>> Social Web Architect
>> http://bblfish.net/
> -- 
> David Booth, Ph.D.
> http://dbooth.org/
> Opinions expressed herein are those of the author and do not necessarily
> reflect those of his employer.

Social Web Architect
Received on Wednesday, 16 May 2012 22:51:33 GMT
