
Re: Relationship Taxonomy Questions

From: Murray Altheim <murray@spyglass.com>
Date: Thu, 23 Jan 1997 17:23:48 -0400
Message-Id: <v02140b06af0d7c1fbc15@[]>
To: cbullard@hiwaay.net
Cc: w3c-sgml-wg@www10.w3.org
Executive summary: (1) a digression on approaches to XML/HTML language design
and generalized parser error handling, and (2) the XML target audience.

>Murray Altheim wrote:
>> Len Bullard <cbullard@hiwaay.net> writes:
>> >And one not embraced by the majority of web applications.  They
>> >may know something.
>> I believe a more accurate answer here is that the majority of Web
>> applications only lex the documents, and therefore don't build parse trees
>> that would enable more complex link relationships.
>I accept that as a fundamental.  The question is why not and since
>Spyglass was the company responsible for Enhanced Mosaic, you should
>be able to tell me.  On the other hand, ActiveCGM objects have an
>application layer which they use.  VRML has a very powerful link
>type and script nodes with parameters.

Well, I can't speak for that development team, as I wasn't with the company
at that time, nor even in the same location. I can say that for the
Stonehand HTML Browser (which was based on nsgmls), we never got as far as
creating a generalized error handler: sometimes errors would literally blow
out the application. Then again, it wasn't a finished app. But ironically,
such an error handler would basically be Mosaic. In other words, build a
parse tree until an error occurs, then dump the tree and do the Mosaic
thing. Behavior also had to match the rest of the market, which was an
important design decision: our responses to broken markup had to be the
same as MSIE/Netscape's, or we weren't competitive.
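That fallback strategy can be sketched roughly as follows (a hypothetical
illustration, not the actual Stonehand or Mosaic code; all function names
are mine): attempt a strict parse, and on the first error abandon the tree
and drop down to lenient tag lexing.

```python
# Hypothetical sketch of "build a parse tree until an error, then dump
# the tree and do the Mosaic thing" -- strict parsing with a lenient
# tag-soup fallback. Not based on any actual browser source.
import re
import xml.etree.ElementTree as ET

def parse_strict(markup):
    """Build a real parse tree; raises ParseError on broken markup."""
    return ET.fromstring(markup)

def parse_lenient(markup):
    """Degraded mode: just lex out the tag names, no tree, no validation."""
    return re.findall(r"<\s*(/?\w+)[^>]*>", markup)

def handle_document(markup):
    try:
        return ("tree", parse_strict(markup))
    except ET.ParseError:
        # Dump the tree and fall back to lenient scanning.
        return ("soup", parse_lenient(markup))

mode, _ = handle_document("<p>valid</p>")     # strict parse succeeds
assert mode == "tree"
mode, tags = handle_document("<p>broken<b>")  # falls back to lexing
assert mode == "soup"
```

The design tension is exactly the one described above: the strict branch is
the only one that yields a tree rich enough for complex link relationships,
but on the open Web the lenient branch is the one that actually runs.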

So there wasn't much use for the nsgmls parser on the open Web, given the
minuscule number of valid documents. The market for the product was
corporate intranets and the SGML community (where validity is assumed).

The point being that there's little sense in building a complex product
that seldom gets used, particularly given the catch-22 of document authors
not even being able to assume any real UA ability to handle that content. I
think Spyglass' original interest in Stonehand's SGML browser was somewhat
quelled when they figured out that they'd still need Mosaic. Pretty damned
big footprint, too.

>> This design decision may
>> have been the result of the assumption of broken HTML markup and/or the
>> inability of the programmers to create something as complex as nsgmls for
>> WWW that could also provide error recovery. A whole lotta error recovery.
>That is good.  Historically, Marc Andreessen is also on record as
>saying, and his company saying, "we don't believe in SGML".  Berners-Lee
>is on record as saying he didn't believe people would type in all
>those tags, and I believe, the original model target was RTF.

I think Spyglass' (or at least Stonehand's) view is more accurately stated
as a pragmatic assessment of the HTML world as we knew it. Although
sometimes I must admit feeling like one of the few in the company who does
believe in SGML, especially now that we've moved into the 'device space'.

>That said, it is irrelevant.  I think we all agree that XML will
>not be targeted to the HTML user.

I don't know why you make that statement. The current "HTML user" base is
an enormously varied market, and I can see vast possibilities for XML where
HTML just isn't cutting it. My belief is that authoring tools, REAL
authoring tools, will be required. Not PageMill-XML Lite.

>The problem is in defining
>the requirements of the user it is targeted at, and getting
>those requirements written in a clear language that can be
>used as a basis for implementation.

I don't think we can assume a target user, just as Tim BL couldn't possibly
have known where the Web was headed in the beginning.

>My point is simple:  no normative linktypes.  A way to express a
>linktype is already available.  It is an element type.  Will these
>interoperate?  Only if the application programmer understands
>the behavior implied or noted.  But those are application conventions
>and do not belong in the normative parts of the XML specification.

Well, that is certainly what I'm trying to ascertain: where are we headed
with this discussion? I honestly like much of what's been discussed.
Eliot's recent description of relationship types (Thu, 23 Jan 1997 12:08:55
-0900, "Re: Relationship Taxonomy Questions"), the earlier discussion of TEI
pointers, etc., all seem like very interesting fodder for XML links. But I
wonder how much of it ends up in the spec. I still kinda like the idea of
having
a core spec and then adding recommendations as a separate layer, with some
type of conformance mechanism for UAs to negotiate. In this way,
alternative models could be developed and used, the whole idea behind

>If they are there, and they are not procedurally defined (that is,
>the operations for the data structure are not defined by function
>or axiomatically), then their originators deserve the good horse laugh
>the implementors will give them, and return to writing server-side
>scripts for LiveWire.

When we see Microsoft thumb their nose, we'll know. I must admit my
ignorance of arcforms (it's on the reading list), but I thought we could
define links more abstractly rather than as a set of normative ones.
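To make your earlier point concrete: a linktype expressed as an ordinary
element type might look like the fragment below (all names are hypothetical
and illustrative, not proposed syntax). The relationship lives in the
markup; what a UA does with it is an application convention, not something
the normative spec dictates.

```xml
<!-- Hypothetical fragment: a relationship carried by an ordinary
     element type, with semantics left to application convention. -->
<!ELEMENT annotates EMPTY>
<!ATTLIST annotates
          href   CDATA #REQUIRED
          role   CDATA #IMPLIED>

<annotates href="report.xml#sec2" role="commentary"/>
```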


    Murray Altheim, Program Manager
    Spyglass, Inc., Cambridge, Massachusetts
    email: <mailto:murray@spyglass.com>
    http:  <http://www.cm.spyglass.com/murray/murray.html>
           "Give a monkey the tools and he'll eventually build a typewriter."
Received on Thursday, 23 January 1997 17:19:38 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 20:25:06 UTC