Re: W3C Roadmap

Please note, this is my personal observation, effectively made as a
well-informed outsider (since while I do read this list I don't make W3C
policy).

On Sun, 12 Sep 2004 16:38:30 +0100 (BST), David Woolley  
<david@djwhome.demon.co.uk> wrote:

>
>> that they all have their rightful place, and there is no redundancy
>> happening as a consequence of simple ignorance of the left hand not  
>> being aware of the right hand.

I don't think this is a practical approach to building a system on the
scale of the Web. As evidence I would offer the various systems which are
much cleaner in design (that is, don't have redundancies and overlapping
developments), such as Xanadu or HyperCard, and their relative lack of
acceptance on a global scale. I explain below how I think W3C manages to
address the problems that its pragmatic approach can allow...

> My impression is that SVG, at least, is trying to go it alone, with the
> result that it is busily competing with Flash at the same time that there
> is still no usable way of including simple vector diagrams on universal
> web pages.

This particular example is a bad one. About 5 years ago HTML became XHTML
[1] (XHTML 1.0 was published on 26 January 2000 as a Recommendation, but
obviously the basis was available before that as working drafts). SVG was
published as a Recommendation about 18 months later [2], but was
pretty much the same as the 1999 drafts [3] (certainly for basic vector
graphics). The point of doing these things in XML was to work towards
being able to include them in each other. SVG got pretty close pretty
quickly with the foreignObject element [4], which could specify a required
namespace - if you understood the namespace (such as XHTML's namespace)
then you would render the content. There was a cascade mechanism, so if
you didn't understand it, you could look for an alternative that you could
understand. (This is how object was designed in HTML around 1997...).
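
A minimal sketch of how that cascade looks (the requiredExtensions value
here is illustrative - implementations differed in exactly what identifier
they expected):

  <svg xmlns="http://www.w3.org/2000/svg" width="400" height="120">
    <switch>
      <!-- rendered only by viewers that understand embedded XHTML -->
      <foreignObject x="10" y="10" width="380" height="100"
                     requiredExtensions="http://www.w3.org/1999/xhtml">
        <body xmlns="http://www.w3.org/1999/xhtml">
          <p>This paragraph is XHTML, inside the graphic.</p>
        </body>
      </foreignObject>
      <!-- fallback for viewers that don't -->
      <text x="20" y="40">Plain SVG text as the alternative.</text>
    </switch>
  </svg>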

At this point there were tools that could mix the two together, but you
couldn't validate mixed HTML and SVG easily. Two solutions were provided
for that. The first was the possibility of using XML Schema [5] (May 2001),
which allows slightly different validation from what is possible using DTDs.
The second came about through the work on modularisation of the XHTML DTD
(first published as a Recommendation in April 2001 [6]). All well and
good: you could now write a DTD or schema that allowed you to validate
XHTML and SVG mixed together. (The same applies to MathML and XHTML, by the
way, which was another common use case.) You could mix them - in SVG's
case in Amaya or X-Smiles or DENG (I have never tried the SVG-enabled
Mozilla so don't know if it handles this or not), in MathML's case in Amaya
or Mozilla/Netscape or Internet Explorer (thanks to some nasty IE hacking
from IBM) or ... All that was missing was the actual DTD/Schema.

Thank you Masayasu Ishikawa for doing it [7] (in 2002). Amaya has, for some
time now, been able to produce verifiably valid mixed SVG, XHTML and
MathML with a WYSIWYG interface. The basics were demonstrated in 1999
(then using a slightly different vector graphics language), and the idea
was not new. But standardisation is very slow work.
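
To make that concrete, here is a rough sketch of a document using that
DTD (from memory, so check the identifiers against [7] before relying
on them):

  <?xml version="1.0" encoding="UTF-8"?>
  <!DOCTYPE html PUBLIC
      "-//W3C//DTD XHTML 1.1 plus MathML 2.0 plus SVG 1.1//EN"
      "http://www.w3.org/2002/04/xhtml-math-svg/xhtml-math-svg.dtd">
  <html xmlns="http://www.w3.org/1999/xhtml">
    <head><title>Mixed XHTML, MathML and SVG</title></head>
    <body>
      <p>An inline formula
        <math xmlns="http://www.w3.org/1998/Math/MathML">
          <msup><mi>x</mi><mn>2</mn></msup>
        </math>
        next to a graphic:</p>
      <svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
        <circle cx="50" cy="50" r="40" fill="blue"/>
      </svg>
    </body>
  </html>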

There is another piece to the puzzle - it's all well and good to have SVG
and MathML mix with HTML, but what if I want to add ChemML? Or a music
language? (The one I wrote is called CAML - there are a number of others
around.) Or some other specialised markup language?

In 2001 the W3C first published work on how to handle compound documents
in general [8]. It turns out that even if it isn't impossible, it does
require substantial amounts of work - standardised interfaces that pass
quite a lot of information - and it quite possibly means that no tool can
expect to be "The" browser. This work does continue, albeit almost
imperceptibly at times.

>> developed in ignorance of another already available technology which are
>> themselves designed to deliver what these others are also trying to  
>> address?
>
> It almost certainly is doing this.  This seems to be the fate of all
> technological standards.  Often some feature gets addressed at a low  
> level and layers of abstraction get added.  As the abstraction adds,  
> users
> insist that they can't understand the details, so the low levels get
> forgotten.  Eventually someone re-invents the requirement and layers a
> solution on top of the current abstraction level, even though there is
> support at a lower level.  (This sort of cycle also occurs in marketing,
> in that after a few years you can re-market old concepts as new, just
> by using a new name.)

The mathematics of this is obvious. If each of 10 people spends 10% of
their time following the development of one new technology, and 5% of
their time dealing with administrative things like paying taxes and
publishing minutes, and 30% of their time sleeping, and 10% of their time
having fun, then in order to make sure they know what the other 9 people
are doing they have to follow 9 new technologies in the 45% of their time
that is left. Given that we assume it takes someone 10% of their time to
actually follow a development, it's pretty clear that a bunch of the
development is just going to pass them by.
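
Spelled out:

  time left over: 100% - 10% - 5% - 30% - 10% = 45%
  time needed:    9 other technologies x 10% each = 90%

Half of what would be required.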

Actually W3C develops a lot more than 10 different things, and while most
staff members can track 4 or 5 reasonably closely, nobody can follow the
details of all of them if the work is to proceed at the pace of
development. So there is a process that provides for continuous review,
for harmonisation, and for "reintegration". This can be seen, for example,
in the development of XPath, XSLT, and XQuery, which merged together the
things they had in common as they developed (in some cases at version 2),
or in the re-use SVG made of work from SMIL and CSS, or in other examples.

Sometimes this happens before a spec is published. Sometimes there is
resistance in a developer community to things "not invented here" (note
that there are a number of languages for writing schemas for XML, several
syntaxes for RDF, and various varieties of HTML in common use) - and if
that is the case on both "sides" of a particular redundancy, the choice is
either to stop all development until everyone agrees (which is not always
practical) or to let it go along and work to bring the developments
together as that becomes possible.

> Although not standardised, you can see this in email signatures.  A lot
> of the RFC 822 headers carry information that people now put in  
> signatures,
> but people first started suppressing headers because they thought them
> too noisy, then started treating them as technical mysteries, even though
> they are actually designed to look like the headings on a military
> memo.

This is a good example of how the real world works. If military memos were
an ideal way of carrying information for all purposes, we would still use
email headers and not signatures. But it turns out they are not. So headers
are used "silently" to carry a lot of important semantic information that
can be processed by email programs, and signatures are used to carry
human-readable versions. (There are parallels here with HTML's head
section, and with the use of rich media in the presentation of a document -
if we took away the possibility of using multimedia, somehow, in HTML
documents, HTML would die as a format more or less immediately.)
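
For illustration (the names are made up), the same information both ways:

  From: Jo Bloggs <jo@example.org>
  Reply-To: jo@example.org
  Organization: Example Corp

versus the human-readable version in a signature:

  -- 
  Jo Bloggs, Example Corp
  jo@example.org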

[1] http://www.w3.org/TR/2000/REC-xhtml1-20000126
[2] http://www.w3.org/TR/2001/REC-SVG-20010904
[3] e.g. http://www.w3.org/1999/08/WD-SVG-19990812/
[4] http://www.w3.org/TR/SVG/extend.html#ForeignObjectElement
[5] http://www.w3.org/TR/2001/REC-xmlschema-0-20010502/
[6] http://www.w3.org/TR/2001/REC-xhtml-modularization-20010410/
[7] http://www.w3.org/TR/XHTMLplusMathMLplusSVG/
[8] http://www.w3.org/TR/2001/NOTE-CX-20011211

Hope this is worth 2 cents, Geoff :-)

cheers

Chaals

-- 
Charles McCathieNevile         charles@sidar.org
FundaciĆ³n Sidar             http://www.sidar.org
