Re: [seweb-list] Re: A discussion: Is semantic web an old fashioned idea? Is it bubble, unworthy or an interesting research area - Post your comments

Before commenting on the comments, I'd like to comment
on the subject line by Miltiadis Lytras:

ML> Is semantic web an old fashioned idea? Is it bubble,
 > unworthy or an interesting research area.

First of all, there is nothing wrong with old-fashioned
ideas.  On the contrary, some of the oldest ideas are
some of the best and the most worthy of being developed,
either as commercial systems or as promising research.

My concern was about pumping too much hype into any
technology before it has been tested on practical
applications.  The hype raises expectations too high,
too soon.  Even if the results are useful, people tend
to dismiss them for their failure to meet expectations.
As a result, there is a backlash against anything else
that might use the same or similar buzzwords (even
when the other technologies have more meat than buzz).

Nick Gibbins wrote:

NG> A different characterisation of the growth of the
 > Web might be:
 >
 > 1. In 6 years (1989 to 1995) with some hype and not
 >    insignificant EU and US funding, the WWW evolved from
 >    Tim BL's original proposal to a widespread but simple
 >    system which is less advanced in certain ways than
 >    previous hypermedia systems, such as the Hypertext
 >    Editing System, Xanadu, NLS, OWL-Guide and Hypercard

In other discussions, I mentioned some of these earlier
systems, and I also mentioned SGML, a variant of the GML
technology that had been widely used at IBM since the 1970s.
The maturity of these technologies gave the WWW an important
head start.

Description logics have also been around since the 1970s,
and they have proved to be useful for many applications.
But they have never been as widely used as the SQL version
of logic, which is just as old.

I believe that a merger of DL-like and SQL-like technology
could be extremely useful, but of the two, the SQL part
has already proven its value by its overwhelming commercial
success.  Without the SQL part, the DL part is just the
tail without the dog.
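
To make the point concrete, here is a minimal sketch in Python
with SQLite (the tables, classes, and individuals are invented
purely for illustration).  It answers a DL-flavored question,
namely finding every instance of a class or any of its
subclasses, using nothing but the SQL side:

    # A minimal sketch: a DL-style subclass hierarchy stored in
    # ordinary SQL tables and queried with a recursive SQL query.
    # All table, class, and individual names are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE subclass_of (sub TEXT, sup TEXT);
        CREATE TABLE instance_of (ind TEXT, cls TEXT);
        INSERT INTO subclass_of VALUES
            ('Dog', 'Mammal'), ('Mammal', 'Animal');
        INSERT INTO instance_of VALUES
            ('Fido', 'Dog'), ('Felix', 'Cat');
    """)

    # Every individual in 'Animal' or any of its subclasses:
    # the kind of question a DL classifier answers, expressed
    # here as plain (recursive) SQL.
    rows = conn.execute("""
        WITH RECURSIVE sub(cls) AS (
            VALUES ('Animal')
            UNION
            SELECT s.sub FROM subclass_of s
              JOIN sub ON s.sup = sub.cls
        )
        SELECT i.ind FROM instance_of i
          JOIN sub ON i.cls = sub.cls
    """).fetchall()

    print(rows)    # [('Fido',)]

In such a merger, the DL side would contribute the classification
that fills the subclass_of table, and the SQL side would
contribute the storage and query machinery that is already
running the world's databases.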

And I mostly agree with what Peter Crowther wrote:

PC> 1) Ontological and mixed ontology/rule systems have,
 > in my opinion, shown that they can provide an appropriate
 > basis for single-machine and intranet applications.  Public
 > examples include PEN&PAD and AT&T's exchange configuration
 > system (there are others).  Note that there is control over
 > the application domain and the ontology in use in both of
 > these applications.

I agree.  But one of my major complaints about the RDF/OWL
work is that it was done without any integration with (in fact,
without any recognition of) existing work on application tools,
such as SQL, UML, EXPRESS, etc.

PC> 2) Practical approaches for dealing with larger and more
 > complex ontologies (in particular) are being developed...
 > ... However, approaches to reasoning in the face of
 > conflicting information are not as well developed, and this
 > required part of the Semantic Web architecture is, in my
 > view, woefully lacking.

Yes, indeed.  The current approach does absolutely nothing to
accommodate or record what information is missing, uncertain,
conflicting, or unreliable.   And if any metalevel information
about the information were available, none of the proposed
reasoning methods could do anything with it.
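
To show the sort of metalevel record I have in mind, here is a
minimal sketch in plain Python (every identifier and value is
invented for illustration).  Each assertion carries its source
and a confidence, and a lookup reports conflicts and gaps
instead of silently merging everything into a single "fact":

    # A minimal sketch of metalevel annotations on assertions:
    # each statement carries its source and a confidence, and a
    # lookup reports conflicts and gaps instead of hiding them.
    # All identifiers and values are hypothetical.
    from collections import defaultdict

    # Each entry: (subject, property, value, source, confidence)
    statements = [
        ("inv-1042", "quantityOnHand", 50, "warehouse-db", 0.95),
        ("inv-1042", "quantityOnHand", 38, "branch-count", 0.60),
        ("inv-1042", "unitPrice",      12, "price-list",   0.99),
    ]

    def lookup(subject, prop):
        """Group asserted values so that conflicting information
        is reported rather than flattened into one answer."""
        by_value = defaultdict(list)
        for s, p, v, src, conf in statements:
            if s == subject and p == prop:
                by_value[v].append((src, conf))
        if not by_value:
            return {"status": "missing"}
        if len(by_value) > 1:
            return {"status": "conflict",
                    "candidates": dict(by_value)}
        (value, provenance), = by_value.items()
        return {"status": "ok", "value": value,
                "provenance": provenance}

    print(lookup("inv-1042", "quantityOnHand"))  # a conflict
    print(lookup("inv-1042", "color"))           # missing data

None of this is difficult; the complaint is that the proposed
languages and reasoners give such annotations no place to live
and no role in the inferences.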

PC> 3) Far too much faith has been placed and is being placed
 > on one architecture slide that Tim Berners-Lee created,
 > showing RDF as the substrate for the entire Semantic Web.
 > This slide is at the root of many of the political and
 > practical problems of the Semantic Web, and should have
 > been ceremonially burned long ago.  Unfortunately, it's
 > probably too late to change now - at least within the
 > W3C-supported Semantic Web initiative.  RDF (any version)
 > is far too limited in its expressive power to be a useful
 > substrate, and the idea of building all the other layers
 > on top of it is akin to trying to build a communications
 > framework on Morse code when you have dots but no dashes.

Yes.  It is painful to look at that layer-cake diagram and
realize that it is the primary guideline for the semantic web.

PC> 4) Whether or not the Semantic Web is itself a realistic
 > endeavour (and I think that depends on your definition of
 > 'Semantic Web' as I think almost everyone has their own
 > view of what this means), the technologies and techniques
 > developing for/around it are finding practical applications.
 > Commercial organisations with some very hard-nosed investment
 > policies are putting money on good bets in this space, and
 > there's increasing evidence that they'll get a good return.
 > *However*, typical applications at this time are intranet
 > applications where there's good control over the problem domain.

There has been a long-felt need for something like this since
the ANSI/SPARC conceptual schema report in 1978 and the ISO
TR 9007 in 1987.  Those reports recognized that everything
related to application development had to be integrated, but
for that reason, the standards groups bit off much more than
anybody could chew or digest.  As a result, they left a vacuum,
which the W3C filled with Tim BL's diagram.  People are now
grasping at whatever straws are thrown their way, but the mud
huts they're building with them have no relationship to the
megaliths that support the world economy.

Danny Ayers wrote:

DA> btw, did anyone happen to save a copy of Ted Nelson's
 > "XML is Evil" piece?
 >
 > http://ted.hyperland.com/XMLisEvil.html

No, but the older report that Ted put in its place makes
at least one good point:

TN> I believe that embedded structure, enforcing sequence and
 > hierarchy, limits the kinds of structure that can be expressed.
 > The question we must ask is: What is the real structure of a
 > thing or a document? (And does it reasonably fit the allowed
 > spectrum of variation within the notational system?)
 >
 > You can always force structures into other structures and claim
 > that they're undamaged; another way to say this is that if you
 > smash things up it is easier to make them fit. Enforcing sequence
 > and hierarchy simply restricts the possibilities.

I have been a happy user of the GML-SGML-HTML-XML family of markup
languages for the past 30 years.  But I agree with Ted that they
are too limited in the kinds of structures they support.  Ted's
term "hierarchy" is another way of saying that they can express
only the tree structures that a context-free grammar can specify.
Yet every programming language since the original FORTRAN has
context-sensitive constructions that cannot be fully specified
by a context-free grammar alone.
Every compiler has to maintain a symbol table to support those
features, and XSLT does not support symbol tables.
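
Here is the kind of check I mean, as a minimal sketch in Python
(the miniature language is invented purely for illustration).
A context-free grammar can say that declarations and uses are
well-formed; only a symbol table can say whether a name was
declared before it is used:

    # A minimal sketch of a context-sensitive check that no
    # context-free grammar can express by itself:  every
    # variable must be declared before it is used.

    def check_declared_before_use(statements):
        """Each statement is ('declare', name) or ('use', name).
        The grammar accepts any sequence; the symbol table is
        what catches the context-sensitive errors."""
        symbol_table = set()
        errors = []
        for kind, name in statements:
            if kind == "declare":
                symbol_table.add(name)
            elif kind == "use" and name not in symbol_table:
                errors.append(name + " used before declaration")
        return errors

    program = [("declare", "x"), ("use", "x"), ("use", "y")]
    print(check_declared_before_use(program))
    # ['y used before declaration']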

The 1959 language called LISP was the most successful platform
for supporting new languages that anyone has ever invented.
Lex and YACC only support syntax, but LISP has been used to
implement any semantics anyone could imagine.  It is sad that
the W3C did not adopt LISP or at least something with equivalent
power for supporting language design.
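
The difference is easy to see in miniature.  The following
sketch, written in Python around an invented toy language, does
what a LISP programmer does routinely:  read s-expressions and
give them whatever semantics the language designer wants, which
is exactly the step that a parser generator by itself cannot
supply:

    # A minimal sketch of the LISP approach to language design:
    # read s-expressions, then give them whatever semantics you
    # like.  The toy language and its operators (let, +, *) are
    # invented for illustration only.

    def parse(text):
        """Turn an s-expression string into nested lists."""
        tokens = (text.replace("(", " ( ")
                      .replace(")", " ) ").split())
        def read(pos):
            if tokens[pos] == "(":
                lst, pos = [], pos + 1
                while tokens[pos] != ")":
                    item, pos = read(pos)
                    lst.append(item)
                return lst, pos + 1
            tok = tokens[pos]
            if tok.lstrip("-").isdigit():
                return int(tok), pos + 1
            return tok, pos + 1
        expr, _ = read(0)
        return expr

    def evaluate(expr, env):
        """The semantics live here, not in the grammar."""
        if isinstance(expr, int):
            return expr
        if isinstance(expr, str):
            return env[expr]
        op, *args = expr
        if op == "let":            # (let name value body)
            name, value, body = args
            new_env = dict(env)
            new_env[name] = evaluate(value, env)
            return evaluate(body, new_env)
        if op == "+":
            return sum(evaluate(a, env) for a in args)
        if op == "*":
            product = 1
            for a in args:
                product *= evaluate(a, env)
            return product
        raise ValueError("unknown operator: " + str(op))

    print(evaluate(parse("(let x 3 (* x (+ x 1)))"), {}))  # 12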

John Sowa

Received on Thursday, 17 June 2004 22:08:52 UTC