Re: client side includes

On Wed, 24 Jan 2001, Daniel Hiester wrote:

>> 1. text/html is a full HTML document with title, stylesheet,
>> etc. and is not appropriate as a snippet of HTML

> If you don't like text/html, then, what, text/plain? 

Probably text/sgml (it would be nice if we could also attach a
public text class, in this case 'TEXT', as a parameter to the MIME
type).  But using <link> to point to fragments rather than complete
documents is semantically dubious, I'd say.
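
For concreteness, a purely hypothetical response for such a fragment
might look like this - the 'public-text-class' parameter name is made
up on the spot, just to illustrate the kind of labelling I mean:

  Content-Type: text/sgml; public-text-class="TEXT"

  <p>The fragment itself: a chunk of SGML text with no doctype,
  head or body of its own.</p>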

>> 2. Includes like this do not become part of the document's GROVE 
>> [...] Entities do not have the above problem.

> I don't understand the issues brought up [here]

I've already posted a reference to Dan Connolly's essay where he
distinguishes the processability (and validation) of documents from
programs.  The basic point is that the parsing process should be
viewed as a black box or interface.  What you can't do is feed
stuff back into the parser and get the parsed result out as part of
the *current* parsing context.  Among other things, this would defeat
the purpose of a *generic* parser (and thus a truly non-proprietary
format).  There is no such thing, for instance, as a generic parser
knowing the *meaning* of LINK or OBJECT (which is why all tag-based
proposals are broken).

There are actually two closely related but distinct issues here.  One
is compounding the *results* of parsing, where at the application
level you put together those results in semantically meaningful ways.
The other is assembling fragments beforehand and presenting this
*aggregation* as a *single* entity to an SGML parser.  The entity
mechanism that Russell and others are talking about is aimed at the
latter (in the sense of C's #include directive).  The former is not
only a much more general exercise; there is also no reason to seek a
solution where yet another bunch of tags has to be fed back into the
parser (which is why document.write() has always been a brain-dead
kludge).
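
To make the entity route concrete, here is a minimal sketch of what
one *should* be able to serve, assuming a conforming SGML parser on
the receiving end; the entity name and the file navbar.html are just
placeholders:

  <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" [
    <!ENTITY navbar SYSTEM "navbar.html">
  ]>
  <HTML>
    <HEAD><TITLE>Some page</TITLE></HEAD>
    <BODY>
      &navbar;
      <P>The rest of the page.</P>
    </BODY>
  </HTML>

The parser splices navbar.html into the entity structure *before*
the element structure is built - which is exactly the second case
above, and exactly what the wowsers never bothered to implement.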

> Is this the reason why client-side includes do not exist yet?
> Because of this conflict of interest?

No.  A bunch of jocks forgot to program the support, because for them 
RTFM was a firing offense.

> I don't know the ISO specs, or SGML, but I think I understand how
> a modern web browser parses HTML.

In such a fashion that applying the term "modern" to that savagery
is ridiculous.

> Let me ask the structuralists on this list: how would /you/ do
> client-side includes? Let's say, how would you do it in either a
> "future version" of XHTML, or maybe as a module for XHTML 1.1?
> (Will modules seriously ever become an implemented reality? I'm
> really interested, but uninformed.)

I think you're missing the point here.  The markup (and hence
technique) for this has been known and standardized since 1986.  Lo
and behold, even XML couldn't get *rid* of it - not that TPTB wouldn't
love to see that happen, in the name of KTWSFN - so the only thing
that's left is for wowser vendors to get off their butts.

If that isn't clear, try this: we "can't" use entities - the mechanism
*designed* for this problem, ferchrissake! - only REPEAT ONLY because
everyone's beloved Netploder doesn't support it.
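
And lest anyone think XML changed the picture, the same mechanism
survives in XML 1.0's internal subset - with the caveat that
non-validating processors are permitted to skip external general
entities.  Again only a sketch; the entity name and footer.xml are
placeholders:

  <?xml version="1.0"?>
  <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
      "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd" [
    <!ENTITY footer SYSTEM "footer.xml">
  ]>
  <html xmlns="http://www.w3.org/1999/xhtml">
    <head><title>Some page</title></head>
    <body>
      <p>Page content.</p>
      &footer;
    </body>
  </html>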

How wowser-whipped does one have to be to take Netploder's crippling
incompetence as a fact of life and exonerate - heck, perpetuate -
beer-and-pizza adhackery?

I really don't get it.


Arjun

Received on Wednesday, 24 January 2001 23:34:50 UTC