Re: two failings of XLink

Hi Elliotte,

>>Just to be clear: you're advocating using XLink syntax (xlink:href
>>attributes etc.) but ignoring XLink semantics (namely the
>>distinction between simple and extended links).
>
> Not at all. I'm saying I do want to use XLinks, both syntax and
> semantics. However, I want to use simple link syntax and semantics
> rather than extended link syntax and semantics. I think multiple
> simple links are a good fit for XHTML's needs.

I'm not sure I agree with you, but perhaps I'm missing something.

Is it that you think multiple simple links can be used instead of an
extended link in all cases (in other words, that extended links are
superfluous in XLink)?

If not, could you give an example where you think that extended links
*would* be a good fit for a markup language's requirements, and
describe how your example is different from the example with the
<object> element in XHTML?
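
For concreteness, here's the kind of contrast I have in mind, using a
made-up <object>-ish element (the element names and URIs are just
placeholders, and I'm assuming the xlink prefix is bound to
http://www.w3.org/1999/xlink; this isn't meant to reproduce your
example exactly). Multiple simple links would look something like:

  <object>
    <data     xlink:type="simple" xlink:href="chart.png"/>
    <longdesc xlink:type="simple" xlink:href="chart-desc.html"/>
  </object>

whereas the extended-link version bundles the same resources, plus any
arcs between them, into a single linking element:

  <object xlink:type="extended">
    <loc xlink:type="locator" xlink:href="chart.png"       xlink:label="data"/>
    <loc xlink:type="locator" xlink:href="chart-desc.html" xlink:label="desc"/>
    <go  xlink:type="arc"     xlink:from="data" xlink:to="desc"/>
  </object>

I'd like to understand which requirements you think push a vocabulary
towards the second shape rather than the first.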

>>You're also advocating that XHTML is a "special case" as a markup
>>language and therefore should be treated differently from other
>>markup languages when it comes to tool support.
>
> Yes, I'm advocating that it's a special case, but not for the reason
> you cite. I think it's a special case because of the vast installed
> base of hypertext. Thus as a practical matter any reasonable tool
> will treat HTML first and generic XML second.

I heartily agree that there's a vast installed base of hypertext, and
that HTML will get special treatment by web browsers, spiders and so
on for many years to come.

At a theoretical level, it seems to me that if *X*HTML is to serve any
purpose at all, it should be treated in the same way as other XML
documents by those tools that are targeted at *generic* XML documents.
If I open an XHTML document in IE, of course I expect it to be treated
specially. If I try to validate XHTML using W3C XML Schema, I expect
it to be treated in the same way as any other XML. If I try to
validate HTML using W3C XML Schema, I expect it to bail. If I write a
webcrawler then I expect it to treat HTML documents specially, because
a webcrawler that bailed when it encountered an HTML document would be
pretty useless. But if I write an XLink harvester then I count that as
a *generic* XML tool: I expect it to treat all XML documents
(including XHTML) alike, to use the XLink semantics as defined in the
Rec., and to bail if it encounters something other than XML.
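
Just to pin down what I mean by a *generic* XLink tool: the sort of
harvester I have in mind is nothing more than, say, a little XSLT
stylesheet that knows about the XLink namespace and nothing at all
about XHTML. A rough sketch (assuming XLink 1.0's explicit xlink:type
attribute; it just reports the xlink:href of anything that claims to
be a simple link) might be:

  <xsl:stylesheet version="1.0"
                  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                  xmlns:xlink="http://www.w3.org/1999/xlink">
    <xsl:output method="text"/>

    <!-- report the target of every element declaring itself a simple link -->
    <xsl:template match="*[@xlink:type = 'simple']">
      <xsl:value-of select="@xlink:href"/>
      <xsl:text>&#10;</xsl:text>
      <xsl:apply-templates/>
    </xsl:template>

    <!-- suppress the default copying-through of text nodes -->
    <xsl:template match="text()"/>
  </xsl:stylesheet>

Handling extended links would just mean a couple of extra templates
for locator elements; the point is that such a tool only ever looks at
the xlink:* attributes, never at the host vocabulary.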

At a practical level, I guess I'm inclined to agree that XHTML will
always be treated differently from other XML-based markup languages
because it is so popular. I wouldn't be surprised if, for example, W3C
XML Schema validators had built-in schemas for XHTML.

However, the mismatch between what XLink does and what markup language
designers need it to do isn't actually local to XHTML. We can't expect
*every* markup language to be treated specially. So I do think we need
another solution, even if XHTML can get away without it.

Cheers,

Jeni

---
Jeni Tennison
http://www.jenitennison.com/
