
Re: legacy of incompetence? [was: a compromise to the versioning debate]

From: Preston L. Bannister <preston@bannister.us>
Date: Sun, 15 Apr 2007 13:42:37 -0700
Message-ID: <7e91ba7e0704151342j738b35a2vcb8795c7aeb753f5@mail.gmail.com>
To: "Dailey, David P." <david.dailey@sru.edu>
Cc: "Alexander Graf" <a.graf@aetherworld.org>, public-html@w3.org
On 4/15/07, Dailey, David P. <david.dailey@sru.edu> wrote:
> [snip]
> I thought the WG charter had language running counter to this perspective,
> but on reanalysis, the closest I could find was:
> The Group will define conformance and parsing requirements for 'classic
> HTML', taking into account legacy implementations;
> It would be a bit of a stretch to claim this means we have to support
> EVERY peculiar piece of HTML ever successfully rendered in some browser.

Speaking generally - there seems to be a distinction here that needs to be
made clear to all involved, one that muddies the discussion when missed.

One extreme interpretation of the proposed compatibility principles is that
HTML-next describes a parser and interpreter that can handle any past W3C
version or browser variant of HTML.  In this case, version specifiers become
unnecessary (and some of the prior discussion makes more sense).  This would
require a painfully large specification, a modest implementation effort for
mainstream browser vendors, a large effort for new browser implementations,
and offer a very messy model for someone trying to learn HTML.  Note that
larger specifications - just like larger programs - are more likely to
contain errors.

I suspect most folk in this discussion are not assuming the above extreme.

After writing the above, I would like to suggest another principle:
The HTML-next specification should be as short as possible.

Strictly speaking, "Don't Break the Web" is a non-issue.  Existing web pages
will be interpreted just as they are today.  The HTML-next specification
only applies when the HTML-next version specifier is seen by the browser.
(Just like XHTML did so [cough] successfully.)
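
To make the point concrete, here is a sketch of what an explicit version
specifier might look like.  The syntax is entirely hypothetical (nothing of
the sort has been decided); the point is only that a browser which does not
recognize the specifier falls back to its existing legacy behavior:

```html
<!-- Hypothetical syntax: an explicit HTML-next version specifier.
     A browser that recognizes it applies the HTML-next parsing rules;
     one that does not simply renders the page as it does today. -->
<!DOCTYPE html VERSION="next">
<html>
  <head>
    <title>Example</title>
  </head>
  <body>
    <p>Parsed under HTML-next rules only when the version
       specifier above is recognized.</p>
  </body>
</html>
```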

> If we break higher education -- we'll lose not only faculty and their
> delightfully illiterate incompetencies in the art and science of web
> development -- but we will lose the students as well -- since the faculty
> will just find other ways of doing things. If HTML gets too modular (CSS =
> presentation; JavaScript = function; HTML = some oddball concept of
> semantics that does not map to the semantics of "semantics" at all well for
> most folks)

Fair warning:  I am one of those folk who are going to push hard for separation
of concerns as an underlying principle.

> then faculty will find it too hard to do and will find other places to
> present their material (see http://blogs.law.harvard.edu/cyberone/ for
> example.) And those very students, some of you web developers and browser
> developers might like to be able to hire when it comes time to actually
> build browsers to do HTML6 (or will it be 7? ...  I've lost count) in 2015.
> And one of the problems the web now faces is that the same faculty who know
> rudimentary HTML have now (thanks to their wonderfully rich browsers) seen
> moving OWL diagrams for music proximities, and Flash demos that make them
> want to animate their lectures and are beginning to say -- hey I can
> display four-dimensional red-shift data on galaxies with that, or I can let
> students simulate heart surgery with that. It'd be nice if they could use
> HTML for that. But if not, being a resilient lot they'll figure out some
> way.
> But (the clouds begin to move) -- as long as we don't make HTML too weird
> and difficult, then I think most faculty won't really care too much if they
> have 15 years worth of lecture notes begin to look funny in new browsers.
> They'll learn how to make their new pages conform to the new browsers and
> every couple of years they'll fix a dozen or two of their old pages,
> grumbling a bit about the computing industry as they do so. No great
> catastrophe will in fact occur.
> So I have had a bit of change of heart, perhaps. Should our concern for
> preserving "ill-formed" legacy content on the web really cause an impasse
> between the major browser developers? I suppose not. Such an impasse breaks
> a whole lot more than the 772 million sites that Google gives me in answer
> to the query "education." At least so I suspect.
> But it does make me wonder... Folks in the user-interface community (like
> ACM-SigCHI) often grumble about how the software developers build some darn
> collection of algorithms and then come to the interface specialist and say
> "here .. build us an interface for this." According to them, the interface
> design really ought to begin earlier, rather than being tacked on as an
> afterthought. One way that good development efforts often begin is through
> some sort of constituency analysis. Who exactly are our prototypical web
> authors?

On the one hand, I believe the number of semi-skilled (at HTML) folk
writing HTML directly is going to diminish rather sharply.  In the early
days, end users writing documents directly in HTML and uploading them via
FTP were pretty much the rule.  Now we have lots of alternatives.  Google Docs,
Wikis, weblogs, CMS, Nvu, Amaya (when it works), and generic Word processors
that can output HTML or PDF ...  there are rather a large number of ways for
folk not intrinsically interested in HTML to write static documents for the
web.  Over time tools of this sort are only going to get better.

On the other hand, thinking about how to teach HTML for web applications may
prove useful. In teaching HTML we want to pick out a small number of
concepts and ignore the legacy baggage.  Perhaps this exercise makes sense
as part of, or in parallel with, the updated HTML specification.
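
As a sketch of what a "small number of concepts" might look like in a
teaching context (illustrative only - none of this reflects any WG
decision), a minimal document could demonstrate separation of concerns with
structure in HTML and presentation and behavior kept in external files:

```html
<!-- Illustrative teaching example: structure in the markup,
     presentation in an external stylesheet, behavior in an
     external script.  File names are placeholders. -->
<!DOCTYPE html>
<html>
  <head>
    <title>Minimal example</title>
    <link rel="stylesheet" href="style.css">
    <script src="behavior.js"></script>
  </head>
  <body>
    <h1>A heading</h1>
    <p>A paragraph of content, with no presentational markup.</p>
  </body>
</html>
```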
Received on Sunday, 15 April 2007 20:42:42 GMT
