Re: Yorick Wilks's paper "The Semantic Web as the apotheosis of annotation, but what are its semantics?"

On 17 May 2012, at 18:43, Yorick Wilks wrote:

> Henry
> Thanks for the critique, though that may be hard to follow for anyone who doesn't know what I said.

That's ok. They should read the paper first :-) I added the link below. I don't think I could summarise
it well myself.

> I can't quite see the link between what I said and any difference between Google and AltaVista over links --- nothing I said really connected to the exact nature of hyperlinks or different approaches to them --- I don't see the specific qualities of the SW (as opposed to the WWW) as having to do with a different approach to hyperlinks. Again, and as I reminded listeners, of course HTML/XML etc. differ precisely in the grain of semantic information they annotate, but I'd need a lot of persuading that that difference links to any particular philosopher's approach to meaning --- although I agree such disputes as there are around the SW can indeed be seen as weak rehashings of classic positions in the 20C philosophy of meaning, and how!

My point was more that your paper doesn't really take hyperlinking into account, which I think is fundamental to using the semantic web as an interesting and novel tool. For example, I don't think there is any way to build a distributed social web other than by using the semantic web as a linked data web. The semantic web is hyperdata. (All my talks on my home page are about this.)
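To make "hyperdata" concrete, here is a minimal sketch of the idea: each URI dereferences to a small document of triples, and following the links lets a client crawl a distributed social graph. The URIs and data are invented for illustration, and a plain dict stands in for HTTP GET:

```python
# Stand-in for the web: a dict mapping each URI to the triples that an
# HTTP GET on that URI would return. All URIs here are hypothetical.
WEB = {
    "http://alice.example/card#me": [
        ("http://alice.example/card#me", "foaf:knows", "http://bob.example/card#me"),
    ],
    "http://bob.example/card#me": [
        ("http://bob.example/card#me", "foaf:knows", "http://carol.example/card#me"),
    ],
    "http://carol.example/card#me": [],
}

def dereference(uri):
    """Simulate GETting the document published at a URI."""
    return WEB.get(uri, [])

def crawl(start):
    """Follow foaf:knows links outward from a starting URI."""
    seen, frontier = set(), [start]
    while frontier:
        uri = frontier.pop()
        if uri in seen:
            continue
        seen.add(uri)
        for s, p, o in dereference(uri):
            if p == "foaf:knows":
                frontier.append(o)
    return seen

people = crawl("http://alice.example/card#me")
# the crawl discovers Carol even though Alice's document never mentions her
```

Each document lives on a different server, yet the graph spanning them is traversable: that is what distinguishes linked data from isolated annotated documents.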

My point is that this is an easy omission to make.
    AltaVista and all the pre-Google search engines made this mistake when they failed to take account of the graph of relations between pages. It is also something one should not expect Google to make a lot of noise about - until they have something to show, of course, as they are now doing with their "Knowledge Graph" [1] and its motto "things not strings". AltaVista had language specialists, and did work on markup, on n-dimensional analysis of words, etc., in order to improve the ranking of results. But without the aid of the links between pages, thought of as votes for pages, they could not squeeze that extra performance out of the web.
    A lot of the logicians made that mistake when they thought of the semantic web more as a space for working on inference, as the DAML+OIL and later OWL people did. Inference is going to be very useful, but it hides the hyperdata structure of the web. For them, I think, the idea of dereferencing a URI to GET its meaning seemed nearly counter-intuitive.
   
So it is more that I think the interesting part of the semantic web happens beyond markup, in this linked data space. When I say beyond markup, I don't mean to say that markup is not part of it. 

> You write "For example I don't see there to be such a big issue between machine readable, and human readable text. They are complementary: the semantic web shows how these can be linked." --- this is either exactly what I was saying or there is something I am missing here. Yet "machine readable", said of text, only has sense when we know exactly what a computer/NLP algorithm can do with a text.

Well, I am bringing the programmer of the software into the picture. He can read the definition of foaf:knows, for example, by reading the http://xmlns.com/foaf/0.1/knows page. The OWL semantics associated with it is very lightweight; it can help guide the developer, and can even support some simple inferences. But I don't think a machine by itself could read that ontology and know what to do with it. It is the software developer who uses it to build useful tools that can then interact with others.
So here I am perhaps just saying that your point 4 is right: software developers are just pressing on using this stuff in novel ways :-)
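The "simple inferences" the lightweight semantics licenses can be sketched in a few lines. The foaf vocabulary does declare a domain and range of foaf:Person for foaf:knows; the data triples and blank-node names below are invented:

```python
# A toy RDFS reasoner applying only the domain/range entailment rules.
ontology = [
    ("foaf:knows", "rdfs:domain", "foaf:Person"),
    ("foaf:knows", "rdfs:range", "foaf:Person"),
]

data = [("_:alice", "foaf:knows", "_:bob")]

def infer_types(ontology, data):
    """Derive rdf:type triples from rdfs:domain and rdfs:range declarations."""
    domains = {s: o for s, p, o in ontology if p == "rdfs:domain"}
    ranges = {s: o for s, p, o in ontology if p == "rdfs:range"}
    inferred = set()
    for s, p, o in data:
        if p in domains:
            inferred.add((s, "rdf:type", domains[p]))
        if p in ranges:
            inferred.add((o, "rdf:type", ranges[p]))
    return inferred

types = infer_types(ontology, data)
# from one foaf:knows triple we learn that both parties are foaf:Person
```

That is about the scale of reasoning a generic tool gets for free; everything more interesting comes from the developer reading the vocabulary's documentation and deciding what to build with it.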


> The whole issue -- if there is one!! -- is what it means to say that the SW, or any other non-human device, has or understands a meaning! We have some experience of what it is to say that of humans -- though there is no single clear view, of course. My own interest in AI and NLP is that asking the same question of machines is both good in itself, as leading to technology we can use, and also because it throws light, if we're lucky, on the human question too.

Yes. I think the semantic web gives very minimal hooks into meaning, which can make it easier to develop useful programs such as the semantic address book I cobbled together 5 years ago:

   http://www.youtube.com/watch?v=a8UVxFp9SN4

The question is how meaning stabilises in a semantic vocabulary so that these applications end up working together. Here
I think two things help:

1. The URI definition is easy to find, so there is at least this aspect that will tend to align people's understanding
   (though mailing lists such as foaf-dev are very important too).
2. Applications lead users to have expectations of what they should be able to do, so the vocabulary, indirectly
   through applications, has an effect on users, who can then work with the software to develop language games such as: if I add you to my foaf file, then my friends will give you more access to their resources and parties, or what not...

So I just think that if we don't remove the human from the loop, the meaning in the semantic web becomes less spooky. Or, another way of putting it: we are extending the human - à la Andy Clark, who also talked at PhiloWeb - with software that can read machine-readable speech acts, essentially increasing the power of the human agent. But this human-software symbiosis creates a Wittgensteinian form of life, which, to his astonishment perhaps, would be working on something pretty close to his early Tractatus (a Douglas Adams-esque turn of events).

	Henry


> Best wishes
> Yorick


[1] http://www.youtube.com/watch?v=mmQl6VGvX-c&feature=youtu.be

> 
> 
> On 17 May 2012, at 17:16, Henry Story wrote:
> 
>> Hi,
>> 
>>  Last week at PhiloWeb in Paris [1] Yorick Wilks presented "The Semantic Web as the apotheosis of annotation, but what are its semantics?" http://staffwww.dcs.shef.ac.uk/people/Y.Wilks/papers/IEEE.SW.untrak.pdf
>> 
>> We had a great conversation. My view, in short, is that perhaps Yorick does not sufficiently take into account the hypertextuality of the semantic web. It sounds very much like the error AltaVista made when they thought that looking at the links between pages was too complex a task to be doable. Google then used that information, and it led them to the success we now know.
>> 
>> Search engines look at information statistically of course (as they have to, given the volume of information they work with). But the basis on which they work (HTML) is not initially put together that way. We can think of HTML as giving some lightweight semantics to the web (the <a href="...">  and so on), and the meaning of those tags was not created statistically: one has to tell an evolutionary tale about their development, which can draw on work in the philosophy of language from thinkers such as Ruth Garrett Millikan [2]. This is where I think the semantic web brings some key new elements to the discussion that are not clearly developed in the paper.
>> 
>> For example, I don't see there to be such a big issue between machine readable and human readable text. They are complementary: the semantic web shows how these can be linked. The links in RESTfully published ontologies (i.e. one can GET the meaning using HTTP on the URI) make it easy for developers to produce software, as they can quickly find the meaning of the terms - even if those meanings evolve. By making it easy for developers to find the meaning, they can produce useful software - if they are any good - which can then spread the vocabulary, making it more useful through the network effect. Software that reproduces itself and gains strength by duplicating the vocabulary it interprets (web browsers are the first example of this) is then where one finds the conceptualisers that solve the Kantian problem - "concepts without percepts are empty, percepts without concepts are blind" - that Yorick quotes at the end of the paper.
>> 
>> I tried to show how this is possible in my talk "Philosophy of the Social Web"
>> http://bblfish.net/tmp/2010/10/26/
>> where I describe this interdependence between data and the software that reads it to provide value to users (a distributed social web, for example). Given this, we can then also understand how we can pop out of the problem of "markerese" and enter a referential semantics: markup is still essential, but meaning as use in the context of a form of life (as Wittgenstein would have it) - a form of life that now includes computers, the network, browsers, linked data readers, humans, and cats - can allow us to have markup AND semantics. So we just have to build it.
>> 
>> Henry
>> 
>> [1] http://calenda.revues.org/nouvelle22666.html
>> [2] http://www.philosophy.uconn.edu/department/millikan/
>> 
>> Social Web Architect
>> http://bblfish.net/
>> 
> 
> 

Social Web Architect
http://bblfish.net/

Received on Thursday, 17 May 2012 17:33:43 UTC