Hi, Martin ...

 

> For a description of the problem, please see

> http://lists.w3.org/Archives/Public/www-validator/2001JulSep/0476.html

 

With respect to the entire thread of discussion cited above about the errant browser behavior: this level of threat doesn't appear to be a show-stopper, given all the conditions that would have to be fulfilled for actual (rather than hypothetical) unauthorized access to occur on a second server after validation on a prior one -- a point I believe Nick Kew attempted to make. Not to say it could never lead to a breach, but ...

 

I would think the odds might be lower than those of being struck stone dead by a meteor in broad daylight. And while that, too, is possible, and has actually happened, I know of very few people outside Hollywood who lie awake nights in mortal fear of deadly meteors raining down upon them.

 

> If you have a better idea of how to fix the problem, please

> send it to www-validator@w3.org.

 

Some issues are better solved by posted policy than by clever coding, and some fall straightaway into the "no-brainer" category:

 

------------------------------------------------

All your folks really had to do was offer a prominent notice informing local administrators of this negligible risk, LEAVING IT TO THEM to decide which protected pages, if any, should be included for validation based on local risk assessment.

------------------------------------------------

 

Taking global control into your own hands, without a request for comments and without prior notice, represents precisely the kind of "ivory tower" thinking (and no small measure of hubris) which has drawn complaints before. The broken code implementing this "security fix" has itself created a far larger problem, pointed out not only by me but also by Dom, the W3C Webmaster.

 

We don't normally discuss our internal security arrangements, but the present crisis compels a demonstration that, for some sites, this rather silly "solution", even were it not broken, is irrelevant to improved security:

 

In our own particular case, the danger of someone accessing a user account (each of which has a separate password, username, AND realm name -- the latter two unique on the system) is remote at best, and any financial data associated with an account is independently stowed behind a separate challenge/answer and PIN, transmitted only over an SSL connection. Individual actions within these accounts are authenticated by internal token-passing, and even in the unlikely event an account were breached and the work product defaced, it could quickly be restored from backups. So we'd be far more inclined to worry about meteor strikes than about such a random security breach. And because our realm names are unique per access and each contains only one "hard" page, the proposed solution, even if working correctly, would offer us no added security whatever. It's an academic exercise over a tempest in a teapot.
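
To make the realm point concrete, here is the general shape of such an arrangement -- a hypothetical Apache httpd.conf fragment, with every name invented for illustration rather than taken from our actual configuration:

    # One unique realm per protected area, each with its own user file,
    # so credentials for one realm are useless anywhere else on the system.
    <Directory /home/accounts/alice/private>
        AuthType Basic
        # AuthName sets the realm string; the browser keys its cached
        # credentials to this exact value.
        AuthName "acct-alice-7f3a"
        AuthUserFile /etc/httpd/passwd/alice
        Require user alice
    </Directory>

Under such a scheme, a stolen credential set opens exactly one small area and nothing more.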

 

We (and presumably others) now see the realm name grossly garbled when it is presented to the Apache server -- so the server is throwing up an authentication window rather than silently serving the validation request, as it previously did. It isn't even possible to authenticate the page manually, because the realm name is mangled and doesn't actually exist!
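
For those unfamiliar with the mechanics, the normal Basic Auth exchange (hypothetical host, realm, and credentials throughout) runs like this:

    C: GET /protected/index.html HTTP/1.1
    C: Host: www.example.com

    S: HTTP/1.1 401 Authorization Required
    S: WWW-Authenticate: Basic realm="acct-alice-7f3a"

    C: GET /protected/index.html HTTP/1.1
    C: Host: www.example.com
    C: Authorization: Basic dXNlcjpwYXNzd29yZA==    ("user:password", base64-encoded)

The realm string in the challenge is what the browser matches its stored credentials against, and what the user sees in the authentication dialog. Mangle it in transit and the cached credentials no longer apply, leaving the user staring at a prompt for a realm that exists nowhere.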

 

We committed significant time and effort to embracing and deploying validation of the pages on our sites, and have widely advertised our VERIFIABLE commitment to W3C standards to our customers and visitors. We wired this feature in from stem to stern and make numerous references to it in our documentation. It was all working fine. Then it mysteriously went haywire. I spent half a day trying to figure out what had changed on our servers to cause such a problem, and learned only by accident that the problem was apparently external. And absolutely inescapable.

 

While W3C has recently been moved by an inspired impulse toward "quality", it is plain to any aware observer that its nice folks, like most others in the IT industry, seem entirely removed from any actual experience of the real thing. Just for starters, here are two imperatives for assessing conformity (quality):

 

1. Reliability of standard

2. Reliability of measurement

 

The DTD provides reliability of standard for HTML. But where do we now obtain reliability of measurement? Certainly not from the Validator, which is apparently subject to change at the whim of a small cabal whose hit-and-run decisions are made on the fly, mandated by fiat, and subject to no independent review or appeal.
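
On the first imperative, at least, nothing is in doubt: a single public identifier points every tool at the same fixed grammar. The HTML 4.01 Strict declaration, for example:

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
        "http://www.w3.org/TR/html4/strict.dtd">

It is the second imperative, the measuring instrument itself, that has now been rendered unreliable.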

 

We are now confronted by a costly consequence resulting from just such an ill-considered decision: hastily conceived, inadequately implemented, and rigidly inflexible.

 

I'm unsure who W3C perceives its constituency to be, but plainly we and others like us have been unceremoniously hung out to dry -- left to twist slowly, slowly in the wind. If this is a sample of how W3C operates, it seems unlikely the organization will ever be taken seriously by serious people. Few adults, once informed by the experience of others, are fool enough to sign up for the dangers of a long voyage on a rudderless ship.

 

I'll gladly copy this to the email address you suggest above for whatever purpose it may serve before being routed to the bit bucket.

 

Regards.

---
Bud Hovell

"Let them eat cake." -- Marie Antoinette