Re: RDF vs the rest of the world

> I have run into a lot of problems of this kind. It seems to me that at
> the beginning the range of a property was a constraint on the property,
> but nowadays the new semantics says that if the property P has range
> X and (a P b), it implies that b is an instance of the class X. The
> range/domain are not constraints anymore, but just a way to declare
> the type of a variable. I heard that this could be changed in the
> future by the W3C.

Yes, nowadays it's clear that it just says that b is an instance of X,
*but* if you know that b is not an instance of X, as you should in a
reasonably well-defined system, you get your constraints back, along
with the original type-checking behavior you wanted.
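
To make that concrete, here's a minimal sketch in Python with rdflib
(the names ex:author, ex:Person, ex:book1, and ex:alice are invented
for illustration) of the RDFS reading: a range declaration never
rejects anything, it just licenses an extra rdf:type triple.

    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF, RDFS

    EX = Namespace("http://example.org/")
    g = Graph()

    # Schema: ex:author has range ex:Person.
    g.add((EX.author, RDFS.range, EX.Person))
    # Data: ex:book1 ex:author ex:alice .
    g.add((EX.book1, EX.author, EX.alice))

    # RDFS range rule: (P rdfs:range X) and (a P b) => (b rdf:type X).
    # Nothing is ever rejected; the object just gets typed.
    inferred = [(obj, RDF.type, cls)
                for prop, _, cls in g.triples((None, RDFS.range, None))
                for _, _, obj in g.triples((None, prop, None))]
    for triple in inferred:
        g.add(triple)

    print((EX.alice, RDF.type, EX.Person) in g)   # True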

This does kind of mean that RDFS is pretty useless by itself.  With
RDFS you have to do 80% of the work and you only get 20% of the payoff
(you can leave out some information for automatic inference to fill
in, and you have a standard for documentation).  OWL (or something
like it) is needed for the full 100% on both counts.
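
For example, the single OWL statement below (a disjointness axiom;
again with invented names, and with a hand-rolled check standing in
for a real OWL reasoner) is what turns the silently inferred type
from the sketch above into a detectable contradiction:

    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    EX = Namespace("http://example.org/")
    g = Graph()

    g.add((EX.author, RDFS.range, EX.Person))
    g.add((EX.Person, OWL.disjointWith, EX.Organization))

    # Data that violates the schema author's intent:
    g.add((EX.book2, EX.author, EX.acme))
    g.add((EX.acme, RDF.type, EX.Organization))

    # The RDFS range rule infers ex:acme rdf:type ex:Person ...
    g.add((EX.acme, RDF.type, EX.Person))

    # ... and the disjointness axiom makes that a detectable clash.
    for c1, _, c2 in g.triples((None, OWL.disjointWith, None)):
        both = set(g.subjects(RDF.type, c1)) & set(g.subjects(RDF.type, c2))
        for thing in both:
            print(f"Inconsistent: {thing} is typed as both {c1} and {c2}")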


>  Has anyone realized that the Semantic Web - the real one - is just a dream
>  for the next 10 years?

That depends on what you mean by it.  TimBL coined the term, and he
meant it (I think) in the rather simple sense of properly separating
content from presentation.  We can certainly do this with RDF now,
and some people have done it.  Is it well understood?  No.  Is much of
the data on the web available in RDF?  No.  But can you build
semantic web applications that read several different RDF web pages
from different authors and produce useful results?  Yes.
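
For what it's worth, here is a toy version of that kind of
application, again sketched in Python with rdflib; the two inline
documents stand in for pages fetched from two different sites (in
practice you would parse live URLs), and all the names are invented.

    from rdflib import Graph, Namespace

    EX = Namespace("http://example.org/")

    doc_from_site_a = """
    @prefix ex: <http://example.org/> .
    ex:book1 ex:author ex:alice .
    """

    doc_from_site_b = """
    @prefix ex: <http://example.org/> .
    ex:alice ex:homepage <http://alice.example.net/> .
    """

    g = Graph()
    g.parse(data=doc_from_site_a, format="turtle")
    g.parse(data=doc_from_site_b, format="turtle")

    # Join across the two sources: who wrote book1, and where is
    # their homepage?
    for author in g.objects(EX.book1, EX.author):
        for page in g.objects(author, EX.homepage):
            print(f"{author} wrote book1; homepage: {page}")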

>  Do you think that someone will start to rewrite all the pages that have
>  been written in a very simple language (HTML)? Have you ever tried to
>  take a random page on the web and thought about how it would be
>  translated for the Semantic Web?
>  
>  Personally I love the idea of the Semantic Web, and I think that an
>  extra amount of information stored in a web page could be useful, but
>  is this the way?
>  
>  Don't you think that machine learning and statistics will be more
>  powerful tools than structured information full of mistakes?
>  
>  In my personal opinion, the correct way could be to try to join
>  immediately the most powerful approaches in the world (logic, neural
>  nets, RDF, KR, statistics) to attack such a hard problem, the Semantic
>  Web.  Considering that the extra information is written by humans, and
>  the software to catch it (spiders, search engines, agents...) is
>  written by humans as well, why don't we try to think about how a web of
>  *cooperation* could really work?
>  
>  Sorry for the long letter, but sometimes I think that we have to see
>  the problem from 10,000 ft up (as Lee said)... but now it seems that we
>  are more concerned with small problems and have lost the general
>  picture of the real problem.  Do you think that with RDF we can beat
>  Google and its very simple ranking algorithm?

Recently, Google co-founder Sergey Brin said (speaking about RDF), "I'd
rather make progress by having computers understand what humans write,
than by forcing humans to write in ways computers can understand."[1]
And that's understandable -- that's Google's business model, and it's
a very reasonable stance for dealing with many people and many kinds
of information.  But there's already an enormous quantity of useful
information in forms that are perfectly understandable to computers
(e.g., SQL databases).  That information is not, however, available (in
computer-usable form) on the web!  In general we want the computers to
really know what we mean, not just make a good guess at it.  Google is
a rare and important exception....   (until computers get to be a lot
smarter than people, at least.)

   -- sandro

[1] http://weblog.infoworld.com/udell/2002/09/19.html#a415

Received on Friday, 20 September 2002 22:22:51 UTC