Re: Semantic Layers (Was Interpretation of RDF reification)

Again, I'm not sure if a Semantic-Web interest list is the place to be
arguing against the Semantic Web per se, but John brings up some
interesting points.  I think the 800-pound gorilla in the room is not
the comparison John's making to Future System, but the historic collapse
of the AI industry known as the AI Winter. Classical AI was based
largely upon knowledge representation, and its failure had much to do
with KR systems lacking clear formal semantics and tractable inference,
and with people's inability to represent knowledge consistently,
problems the SemWeb has worked hard to fix. As with the SemWeb, there
was also lots of hype, involvement mostly from academia, and KR systems
that could only, as Wikipedia put it, do "parlor tricks." Ouch.  Could
the same happen to the nascent SemWeb community?

A few years ago, when I was trying to understand the SemWeb, I wrote a
point-by-point comparison of "classical" AI and the Semantic Web here. I
think many people have historically undervalued AI's contributions
(Emacs, anyone?), and the SemWeb has fixed many potential holes, mostly by
listening to the DL community and AI people like Pat Hayes:

Comments welcome - it's quite old and I'd like to revise it at some point.

However, I think time is on the SemWeb's side. The Web 2.0 quasi-bubble
will burst at some point (especially when everyone realizes their data
is trapped in proprietary formats and being data-mined by companies who
would sell their grandmother for a quick buck, no matter how "hip" they
seem), and hopefully standards bodies will pick up the pieces, as they
did after the original Web bubble, and make the Web a more open place.
It might be RDF, RDF 2.0, CGs, KIF, who knows? But right now I'd bet
money on the Semantic Web layer cake, due to its designers' grasp of Web
architecture. It is possible the SemWeb is a case of premature
optimization. If anything, it seems the SemWeb community should take
more care to make the SemWeb usable by the desperate Perl hacker on the
street, who is currently really enjoying AJAX :)
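To make that concrete for the hacker in question: the core of RDF really
is just subject-predicate-object triples. Here is a minimal, hypothetical
sketch in Python (a toy, not any official API, and ignoring literals and
blank nodes) of pulling triples out of N-Triples-style lines:

```python
import re

# Toy parser for the simplest N-Triples case: three URI references
# followed by a terminating dot. Real N-Triples also allows literals
# and blank nodes, which this sketch deliberately ignores.
TRIPLE = re.compile(r'<([^>]*)>\s+<([^>]*)>\s+<([^>]*)>\s*\.')

def parse_triples(text):
    """Return (subject, predicate, object) tuples from N-Triples-style lines."""
    triples = []
    for line in text.splitlines():
        m = TRIPLE.match(line.strip())
        if m:
            triples.append(m.groups())
    return triples

data = """
<http://example.org/alice> <http://xmlns.com/foaf/0.1/knows> <http://example.org/bob> .
<http://example.org/bob> <http://xmlns.com/foaf/0.1/knows> <http://example.org/alice> .
"""

for s, p, o in parse_triples(data):
    print(s, p, o)
```

The point is just that the data model is small enough to be scripted
against in a dozen lines, AJAX-hacker style, whatever one thinks of the
layers above it.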

Obviously there is a lot to learn from Conceptual Graphs; the SemWeb
has paid attention to the DL community, and with the Rules working group
it will probably (hopefully!!) be paying attention to CGs and other
formalisms as well. It would be great to have Sowa's experience, which
is considerable to say the least, guiding some of that effort. Again,
every AI formalism I've ever seen, from Conceptual Graphs to OWL-DL, has
its limits and specialisms in what it can accomplish, so to argue even
implicitly that any one of them is the "one true answer to knowledge
representation" is probably ill-founded at best. But as Web 2.0 and all
that jazz show, people *want to share* not just web pages but data of
all sorts. Maybe the devil's in the details, but I think TimBL's
intuition is being proven right.

John F. Sowa wrote:
> Adrian and Azamat,
> First a comment on Adrian's point:
> AW> ... languages and GUIs for people who use an application
> > are such that those people cannot *change* the application,
> > and cannot *write* new applications themselves.
> That's not what I assumed or implied.  The issue is much
> more complex than that, and the word "application" is
> rather old fashioned.  See the example of Sonetto, which
> I sent in a previous note to this list,
> Sonetto uses conceptual graphs under the covers, but the
> managers can update and modify the ontology and business rules
> without knowing anything about the underlying technology.
> AA> Indeed, the matter looks serious, both from the public and
> > scientific sides, beside the technical issues which Adrian
> > tries to point out for a long while. The first issue is
> > concerned with getting huge public funds, promising a sort
> > of magic technology as the Knowledge Society intellectual
> > technologies, without making foundational ontological
> > groundwork, such as SUO or ONTAC or USECS....
> The major flaw in the SemWeb was to plunge into standards
> without any preliminary experience on what kinds of things to
> standardize.  HTML, for example, was based on experience with
> GML and SGML from 1969.  Unicode was also based on decades of
> experience with various codes, and URLs were based on decades
> of experience with Unix-like file systems extended to Arpanet
> and the Internet.
> But RDF was developed by Tim Bray, an XML expert, working with
> Guha, a former associate director of Cyc, which was based on the
> vastly more complex Cycl language, of which triples were the
> tiniest of tiny subsets.  The result was an alpha-level prototype,
> which Tim said was a mistake.  Yet that mistake is now fossilized
> as the foundation for everything else.
> In my career at IBM, I saw some outstanding work and some total
> disasters.  One of the worst multibillion-dollar disasters was
> FS (IBM's Future System of the 1970s).  I am now in the process
> of scanning and posting some documentation about it.  See
> FS was chartered in 1971, and many of us could see right at the
> beginning that it was going to be a disaster, but management would
> not listen until (a) multi billions of dollars were spent on it,
> (b) the opportunity to do something much more significant was lost,
> (c) the lingering stench of FS made IBM management reluctant to do
> anything innovative for many years thereafter because it might turn
> into another FS, and (d) the world outside of IBM moved on in ways
> that caused IBM to lose its leadership in computer technology.
> What bothers me about the SemWeb is that I see a massive movement
> that has all the signs of another FS-like disaster.
> AA> In order to lay down the knowledge infrastructures of the upcoming
> > Information Society the EU’s Research Council and the European
> > Parliament allocated 3.8 billion Euro for Knowledge Technologies
> > within the 6th European Union Framework Programme (FP6) for Research
> > and Technological Development, with a total budget of 17.5 billion
> > Euro. Within the FP6 Programme, all the web-based knowledge technology
> > projects are largely concerned with ontology research, design,
> > learning, and management....
> If there is that much money going around, I would very *strongly* urge
> them to distribute it among a thousand *independent* projects with 3.8
> million each -- or perhaps a hundred projects with 38 million each.
> I stress the word *independent* because it's essential *not* to put all
> the eggs in one basket.  Perhaps two or three of the projects might be
> based on SemWeb technologies, but the other 97 (or 997) should *not*
> be tied to the SemWeb (although it would be OK to have import/export
> facilities to and from RDF and OWL -- however, the foundations should
> definitely be kept independent of any SemWeb technologies).
> The worst thing that happened to FS was that everything was tied
> to a single very seriously flawed base.  If IBM had funded three or
> four independent projects, one or two might have turned out to be
> outstanding.  But the manager at the top deliberately killed all
> competition.  When FS collapsed, IBM very quickly dusted off the
> older System/370 Model 168, mapped it into the circuit technology
> intended for FS, and shipped it to customers as the IBM 3033.  But
> it was not as good as the Amdahl machine (which was designed by
> Gene Amdahl, who quit IBM when his project had been killed to make
> way for FS).
> I am sure that some major new development will leapfrog the SemWeb
> during the next five to ten years.  There is a very slight chance
> that the new development might be based on current SemWeb projects,
> but I very seriously doubt that.  I would strongly urge any funding
> agency to support multiple *independent* projects -- not just one
> gigantic coordinated effort like FS.
> John



Harry Halpin,  University of Edinburgh 6B522426

Received on Tuesday, 28 March 2006 01:34:31 UTC