<base/>, @xml:base and DNS fallback (Was: Re: a recommendation...)


Hi Bill,

On Wednesday, 3 December 2003 19:43, Fastpitch Central - Bill wrote:
> Christian,
> You sure do have a wonderful imagination. 
thanks :-)
Imagine, you're not the first one telling me :-)

> I guess the problem is you're
> the first person that's told me I'm coding all wrong.  I'm still using
> <base . . . > in every page I write.  And I thought I was in good company
> since a majority of the websites I visit still use <base . . . >.
I really wonder what you use <base/> for in *every page* you write. I've been 
using HTML since 1992. I didn't use <base/> very often, and I never really 
needed <base/> at all.

Especially when making much use of <base/>, moving pages around can become 
very cumbersome.

I'd really like to see where and how you use <base/> so often.
Could you show me some examples?

> I guess we're all wrong and backwards and will never amount to anything
> since, even though our web sites work, our pages are wrong on the inside -
> so we're baddd!
Do you expect me to agree or to disagree? *g*

Ask http://validator.w3.org/ whether your pages are wrong.
And read about the Semantic Web and the WAI to find out how to amount to 
something.
Whether you're wrong and backwards I can tell you, too; just send me a photo 
of yourself ;-)

> As far as your examples are concerned, although I've visited
> http://www.w3.org/ I'm sure that 99 out of 100 websites I visit have never
> heard of the organization. 
I hoped for less, but yes, most people out there who "code" HTML have no clue 
what they're doing *g*

> When they started their website, they pulled up
> the old browser, copied a web page, similar to what they aspired to, into
> Notepad and began making modifications.

Yes. Or worse, they use FrontPage, NetObjects Fusion or similar programs that 
produce something that looks like HTML but isn't.

There's nothing that checks for wellformedness or validity in this process.

In contrast to a programming language, where the compiler, assembler or 
interpreter will kick their little asses on every syntactic mistake, there's 
no one telling them they're wrong.

At least as long as the last instance, the user agent, has such damned fault 
tolerance.

That's where XHTML comes in.
That's why HTML is discontinued.
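That check is exactly what XML brings to the table. As a quick illustration (the markup snippets are made-up examples, and expat here stands in for any conforming XML parser), here's the kind of "compiler" that tag-soup HTML never had:

```python
# A compiler-like check HTML authoring never had: XML wellformedness.
# expat is Python's stdlib XML parser; the snippets are illustrative.
from xml.parsers import expat

def is_well_formed(markup: str) -> bool:
    """Return True if the markup parses as well-formed XML."""
    parser = expat.ParserCreate()
    try:
        parser.Parse(markup, True)  # True = this is the final chunk
        return True
    except expat.ExpatError:
        return False

tag_soup = "<p>unclosed paragraph<br>"     # typical hand-written HTML
xhtml = "<p>closed paragraph<br/></p>"     # the XHTML equivalent

print(is_well_formed(tag_soup))  # False: the parser refuses it
print(is_well_formed(xhtml))     # True
```

A browser silently "repairs" the first snippet; an XML parser kicks it back, the way a compiler would.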

And the next version of XHTML has no <base/> element anymore. Instead it 
relies on XML Base which specifies an @xml:base attribute.
Discussing @xml:base is beyond the scope of this list. Look at the XML Base 
recommendation to find out where to discuss @xml:base.
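Just to illustrate the idea without starting that discussion here: relative URIs resolve against the nearest xml:base in scope. A minimal sketch (document and URLs are made-up examples, and it handles only a single level of nesting, not the full ancestor walk the recommendation requires):

```python
# Sketch of XML Base resolution: a relative href resolves against the
# xml:base of an ancestor element. Document and URLs are invented.
import xml.etree.ElementTree as ET
from urllib.parse import urljoin

XML_NS = "http://www.w3.org/XML/1998/namespace"  # the built-in xml: prefix

doc = ET.fromstring(
    '<doc xml:base="http://example.org/docs/">'
    '<link href="page2.html"/>'
    '</doc>'
)

base = doc.get(f"{{{XML_NS}}}base")        # "http://example.org/docs/"
link = doc.find("link")
resolved = urljoin(base, link.get("href"))
print(resolved)  # http://example.org/docs/page2.html
```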

> As far as HTML 4.0. and  XHTML are concerned, the last time I tried to
> conform to either was perhaps a couple years ago.  One day if I ever find
> the time perhaps I'll try Netscape and IE again and see if things work as
> expected.

Why did you only _try_?
What didn't work?
Ah, don't tell me.
It was Netscape 4.x. The worst thing that ever happened to the web so far.
The second worst thing probably will be Microsoft's refusal to continue IE's 
standalone development.
I haven't heard a single word about whether Internet Explorer will finally 
support XHTML (beyond parsing text/html sent XHTML 1.0 as tag soup) or not.

> If page one works then I can design a plan to convert the thousands of
> pages.
>  At least it won't be as tough a task as when I wrote a translator
> to convert from IBM 360 assembler to Univac 1100 series assembler.  But it
> will be more massive than the work to convert from Burroughs COBOL to
> Sperry COBOL.
Well, if you didn't think of a good concept of how to manage large sites that 
copes with new versions of XHTML as well as with old versions of HTML and has 
some quality assurance mechanisms... that's your problem, not mine. The W3C 
provides all technology needed for such a system: XML, XPath, XSLT...

Those copy&paste "web designers", well, they're not really bad. They're very 
cheap in the first place. And cheap are their results. As soon as there's a 
layout change or some other maintenance work like trying to get compatible 
with new user agents and their increasing market shares, e.g. Opera, Mozilla, 
Konqueror, they tend to become very expensive very quickly. So finally they 
are bad, yes, especially for the customers.

Perhaps you're good enough at regular expressions or similar tools to know how 
to write yourself some conversion tool with Perl, sed, awk or whatsoever to 
convert your 1500 pages from the old style to the new style.
On what percentage of those 1500 pages will it fail?

Every page it fails on is 1 page too many.

Or perhaps you use tidy to convert your pages to XHTML, then use XSLT to 
safely extract the proper content without extracting the non-content-related 
layout, generate a content-only variant, and then use another XSLT to give 
them a new layout.
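The extraction step is the interesting part. In practice you'd run tidy and then an XSLT; as a stand-in sketch, here's the same idea with Python's stdlib (the page structure and the id="content" marker are assumptions for illustration):

```python
# Sketch of the "extract content, drop layout" step of the pipeline.
# In practice: tidy -asxhtml, then an XSLT. The page and id="content"
# are invented for illustration.
import xml.etree.ElementTree as ET

XHTML_NS = "http://www.w3.org/1999/xhtml"
ET.register_namespace("", XHTML_NS)  # serialize without a prefix

page = ET.fromstring(
    f'<html xmlns="{XHTML_NS}"><body>'
    '<div class="navigation">old layout cruft</div>'
    '<div id="content"><h1>Page 2</h1><p>The real content.</p></div>'
    '</body></html>'
)

# Pull out only the content-bearing element; the layout stays behind.
content = page.find(f".//{{{XHTML_NS}}}div[@id='content']")
extracted = ET.tostring(content, encoding="unicode")
print(extracted)
```

The content-only variant can then be re-skinned by a second transform, so a layout change never touches the content again.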

What I wanna say is: The W3C ain't stupid and they aren't livin' in an ivory 
tower.
They know what they're doing.
And it's really worth reading their specs, finding some tools that implement 
them, and using them.

> Meanwhile, it appears obvious from your remarks that my idea on
> <base . . . > has a snowball's chance in hell.  So I'll stop wasting my
> time in trying to help.

Well, discussing it in the context of HTML has no chance, really.
Try discussing it in the context of XML Linking in general.
Maybe it has its chance there.

> If y'all ever come down from the ivory tower how about fixing a real
> problem - SPAM.
Sorry to tell you:
spam is not a topic of (X)HTML *g*

> And, next time DNS goes down and you want to tell someone how to get to
> page 2, look in your crystal ball and tell them when a fallback will be
> available during their future emergencies.

If I needed it that often, I'd tell 'em in an emergency how to modify their 
/etc/hosts file (which even exists on MS Windows), which is a far better 
fallback solution because it isn't only valid for HTML, but for everything 
that needs to resolve host names. HTML is just one tiny little thing on the 
Internet: XML in general, DTDs, schemata, SVG, SMIL, IRC, FTP, SSH, Telnet, 
mail (SMTP, POP3, IMAP), rsh, MathML, X-remote, databases, CDF, daytime, 
finger, talk, H.323 and many many more. In case of DNS failure, several of 
these (especially SSH and Telnet) are far more important than the WWW and 
those gaudy web pages.
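For the record, that fallback is nothing more than a couple of lines (hostnames and the address here are made-up examples; 192.0.2.0/24 is the documentation range):

```
# /etc/hosts -- static name-to-address mappings, consulted before DNS.
# On MS Windows: %SystemRoot%\System32\drivers\etc\hosts
127.0.0.1    localhost
192.0.2.10   www.example.org example.org
```

One entry, and every protocol on the machine resolves the name, not just the browser.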

What do you provide on your homepage that's so important that it must be 
available during a DNS failure?
The emergency numbers 911 and 112 for those that can't remember?

I can only repeat, the problems of DNS are the problems of DNS and not of high 
level protocols like XHTML.
There's already constant dispute about whether to include anything in XHTML 
that's related to HTTP.
The layers should be kept isolated and not intermixed whenever possible.

Trying to give HTML a DNS fallback solution will not solve the problems of 
e-mails not being sent and everything requiring IPs instead of domain names 
if communication shall be possible. It's just trying to fight one of the many 
symptoms a DNS failure would have. Fixing the cause instead of one of many 
symptoms is much better, but beyond the scope of www-html. DNS fallback 
solutions should be discussed with the IRTF / IETF, not the W3C. The W3C is 
responsible for the WWW, not DNS. The DNS is not part of the WWW.

Also, a widespread DNS failure is far less likely to happen than another power 
failure in the US. My DNS configuration knows 17 root servers. It's quite 
unlikely that all of them go down at the same time. And after the last DNS 
failure, several providers reconfigured their DNS servers to do more caching.
Also, next time it's very likely that providers will switch on their mirrors 
and reroute DNS-related traffic to the mirrors locally.

What I don't say is that DNS fallback solutions should not be discussed.
I only say they should be discussed with the right people at the right place.
And I say XHTML is the wrong place to discuss DNS fallback solutions.

(This reflects my personal opinion only, I'm not related to the W3C or 
moderator of this list)

- -- 
Christian Wolfgang Hujer
Geschäftsführender Gesellschafter (Shareholding CEO)
E-Mail: Christian.Hujer@itcqis.com
WWW: http://www.itcqis.com/


Received on Wednesday, 3 December 2003 16:00:08 UTC