Re: Semantic Layers (Was Interpretation of RDF reification)

Harry,

Thanks for pointing to the "AI Winter" article, but the
timing details are way off.  The Minsky & Papert book
came out in 1969, and in the US, there was a recession
during Nixon's first term (around 1970), which led to deep
cuts in research funding.  And the Japanese 5th
generation project, which started in 1982, caused a big
boom in AI funding in the US.  So the Wikipedia article
saying that the AI winter came in the late 1970s was at
least 8 years off (for the US at least).

HH> Comments welcome - it's quite old and I'd like to revise
 > it at some point.

Your comparison to Hilbert's program is interesting, but
since the research was all done by mathematicians who
were using their own (or their universities') funding,
it didn't have much impact on the economy.  I'd quibble
with a lot of the details.  For example, the frame problem
is an artifact of the situation calculus, and most other
approaches (e.g., pi calculus, event calculus, Petri nets,
Pearl's belief networks, etc.) don't suffer from it.

HH> I think the 800-pound gorilla in the room is not
 > the comparison John's making to Future System...

FS started an IBM winter rather than an AI winter,
but the main point I was making is that putting all
your research eggs into a single basket with a single
technology is an open invitation to disaster.  The
lesson of FS is that the following strategy, which
has a lot of parallels to the SemWeb, is dangerous:

  1. Selecting a single technology base without any
     prior design competition and without any experience
     with the technology before it is mandated as the
     official foundation.

  2. Putting a management team/committee in place to guide
     and fund the research in lock step -- again without
     any design competition and without any practical
     experience before proposals are made into edicts.

  3. Ignoring prior R & D experience that has been well
     documented in the literature and failing to evaluate
     the new proposals against alternatives, both mature
     and innovative.

Many people have complained that the ANSI and ISO standards
efforts have been reactive rather than proactive -- i.e.,
they react by giving their blessing to de facto standards
with minor modifications instead of starting new projects
"proactively".  I have seen some proactive standards projects,
and they made me appreciate the reactive approach.  See my
law of standards:

    http://www.jfsowa.com/computer/standard.htm

HH> But right now I'd bet money on the Semantic Web Layer cake
 > due to their grasp of Web architecture.

My complaint about the layer cake is that it puts the emphasis
on syntax (the bottom layers) rather than semantics (logic).
Worse, it doesn't even recognize that there is something called
pragmatics.  I think that John McCarthy's Elephant proposal
(from the late 1980s) was a lot closer to what is needed for
the SemWeb than the layer cake.  See the following article:

    http://www.jfsowa.com/pubs/arch.htm

And by the way, we now have a very nice implementation of the
Flexible Modular Framework (FMF), which is described in that
article, and it really is a powerful tool for AI software
development and deployment.  The FMF supports any language
for message passing:  not only RDF and OWL, but also any
variant of Common Logic, controlled English, or any other
language, natural or artificial.
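
To make that concrete, here is a minimal sketch in Python (my
own illustration, not the actual FMF code; the names Message
and Module are invented for the example) of message passing in
which the notation is just a tag on the message, so RDF, Common
Logic, controlled English, or anything else can flow through
the same channel:

    from dataclasses import dataclass
    from queue import Queue

    @dataclass
    class Message:
        sender: str     # name of the sending module
        language: str   # e.g. "RDF/XML", "OWL", "CLIF", "ACE"
        content: str    # the text itself, uninterpreted by the framework

    class Module:
        """A module only enqueues and dequeues messages; it makes
        no commitment to any particular notation."""
        def __init__(self, name):
            self.name = name
            self.inbox = Queue()   # queue of Message objects

        def send(self, other, language, content):
            other.inbox.put(Message(self.name, language, content))

        def receive(self):
            return self.inbox.get()

    # Two modules exchanging messages in different notations
    # over the same channel.
    reasoner = Module("reasoner")
    crawler = Module("crawler")
    crawler.send(reasoner, "RDF/XML", "<rdf:RDF>...</rdf:RDF>")
    crawler.send(reasoner, "CLIF",
                 "(forall (x) (if (Cat x) (Animal x)))")
    print(reasoner.receive().language)   # RDF/XML
    print(reasoner.receive().language)   # CLIF

The framework itself never parses the content; only the module
that finally consumes a message needs to understand its notation.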

HH> However, the Web 2.0 and all that jazz have to show,
 > people *want to share* not just web-pages but data
 > of all sorts.

I agree.  An approach like the FMF lets you put a wrapper
around any system, new or old, to make it into a module,
and translation modules can be inserted as needed to
convert any format into any other.  Something like that is
necessary to give legacy systems a smooth migration path and
a way to coexist with newer ones.
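
As an illustration of that wrapper-and-translator idea, here is
another short Python sketch (again my own, not the FMF
implementation; the legacy inventory system, its CSV record
format, and the triple notation are all invented for the
example).  A wrapper makes the old system look like a module,
and a translation module in front of it converts triples into
the only format the old system understands:

    class LegacyInventory:
        """Stands in for an old system that only understands
        'SKU,QTY' records."""
        def update(self, record):
            sku, qty = record.split(",")
            print(f"legacy system updated: {sku} -> {qty}")

    class LegacyWrapper:
        """Wraps the legacy system so it can receive messages
        like any other module."""
        def __init__(self, system):
            self.system = system
        def handle(self, language, content):
            # The wrapped system accepts only its native format.
            assert language == "CSV"
            self.system.update(content)

    class TripleToCsvTranslator:
        """A translation module inserted in front of the wrapper:
        it converts a subject-predicate-object triple into the
        legacy CSV record and passes anything else straight
        through."""
        def __init__(self, downstream):
            self.downstream = downstream
        def handle(self, language, content):
            if language == "TRIPLE":      # e.g. "sku42 hasQuantity 7"
                subj, _pred, obj = content.split()
                self.downstream.handle("CSV", f"{subj},{obj}")
            else:
                self.downstream.handle(language, content)

    # New code speaks triples; the legacy system never changes.
    pipeline = TripleToCsvTranslator(LegacyWrapper(LegacyInventory()))
    pipeline.handle("TRIPLE", "sku42 hasQuantity 7")

The point of the design is that the legacy code never has to
change; new notations are accommodated by inserting another
translator in front of the wrapper.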

John

Received on Tuesday, 28 March 2006 04:32:53 UTC