Re: Question about forward and backward compatibility

Robert O'Callahan wrote:
> Chris Lilley wrote:
>> You should take a look at the switch element and the requiredFeatures
>> attribute to see how to do this.
> I know about that. My question is, doesn't the letter of the 1.1 spec 
> require 1.1 user agents to show error when they encounter 1.2 tags and 
> attributes, even if they're under a switch?

No, it would be an error only if the unknown elements/attributes were in 
the rendered branch of the switch. The non-rendered branches can contain all 
sorts of stuff that the implementation doesn't understand. Otherwise the 
implementation would have to know what it doesn't implement, which is 
obviously impossible when you factor in new versions of the language.
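For instance (a sketch; the feature string is the SVG 1.1 animation 
feature, and the fallback content is illustrative), a 1.1 viewer renders 
the first branch whose test attributes evaluate true and simply skips the 
rest, errors and all:

```xml
<switch>
  <!-- Rendered only by UAs that claim animation support -->
  <g requiredFeatures="http://www.w3.org/TR/SVG11/feature#Animation">
    <circle r="10">
      <animate attributeName="r" from="10" to="40" dur="2s"/>
    </circle>
  </g>
  <!-- Non-rendered fallback: unknown content in a skipped branch
       does not put the document in error -->
  <circle r="25"/>
</switch>
```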

Question: would you be helped by a scheme in which unrecognized SVG 
elements and attributes would cause only their subtree to be in error and 
not rendered at all, leaving the rest of the document alone? The 
entire document would not be considered in error unless a mustUnderstand 
attribute were set to true on an ancestor of the subtree in error. This 
would simplify the evolution of SVG versions, as well as incremental 
implementations.

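Hypothetically (mustUnderstand is not part of SVG 1.1, and the element 
names below are made up), a 1.1 viewer hitting unknown 1.2 markup would 
drop just that subtree:

```xml
<svg xmlns="http://www.w3.org/2000/svg">
  <!-- Unknown future element: only this subtree is dropped -->
  <someFutureElement>
    <rect width="10" height="10"/>
  </someFutureElement>

  <!-- mustUnderstand on an ancestor: failing to recognize the
       child would put the whole document in error -->
  <g mustUnderstand="true">
    <someCriticalElement/>
  </g>

  <!-- The rest of the document renders normally -->
  <circle r="20"/>
</svg>
```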
>> Implementing random subsets is not encouraged, no. Recognizing that a
>> given implementation might not go from 0% to 100% in its first release,
>> we recommend first implementing SVG Tiny 1.1, then gradually improving.
>> In addition, some classes of implementation (eg printers, server-side
>> transcoders) can usefully be static rather than dynamic (no animation, no
>> interactivity).
> Unfortunately being SVG Tiny would force us to disable a bunch of 
> features but still do animation, which we won't be able to do before our 
> next release. But otherwise it sounds like a good idea to me.

In your case I don't think it's an option since IIRC you have everything 
except animation and fonts, both of which are Tiny features (for fonts 
it's only the basic subset, but still). I agree with Chris that random 
subsets are not encouraged, but the union of complete feature-sets isn't 
completely random, and it's better than a) totally random features and b) 
nothing. It's easy to tell users that a given implementation supports 
"everything except animation and fonts"; it gets a lot harder to say 
that it "supports everything minus <insert long list of random deltas>".

>> I agree that things like -moz-opacity were a good implementation 
>> strategy. Batik took a similar
>> approach when implementing SVG 1.2 features on an experimental basis (it
>> required using a Batik extension namespace and a special build where the
>> extensions were recognized).
> Why is using an extension namespace better than using an extension MIME 
> type?

The extension namespace means that within the document itself it is 
clear which parts are extensions. It also means that on the file system 
you can call your document .svg instead of .mozsvg. In fact, in XML 
using different namespaces is the equivalent of using vendor prefixes 
like -moz- in CSS. You wouldn't want to have to rename a CSS file 
.mozcss just because it contains a single Mozilla extension, and to 
serve it using the text/x-mozilla-css media type.
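Concretely (the extension namespace URI and attribute name below are 
invented for illustration), the experimental bits are flagged in the 
document itself, while the file stays a plain .svg served as SVG:

```xml
<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:moz="http://www.mozilla.org/svg-extensions"><!-- hypothetical URI -->
  <!-- A conforming viewer that doesn't know the moz: namespace
       just ignores the extension attribute -->
  <rect width="100" height="50" moz:experimentalShading="frost"/>
</svg>
```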

That being said, that's a strategy that works well for implementing 
features of a spec that's in development (which is the case both for the 
-moz- CSS properties and Batik's SVG 1.2 extensions). I don't think it 
flies for in-development implementations of a stable spec. You don't 
wait to have 100% of CSS implemented before turning it on; I suggest 
doing the same for SVG.

> We'd try to avoid dividing the market by advocating the special MIME 
> type just for Mozilla-specific application authors (e.g., XUL authors). 
> Think application/xml+moz-vg. We'd recognize the real MIME type (and 
> mixed documents) later this year --- before there's much usage, I guess 
> --- and encourage Web authors to use it. I understand the special MIME 
> type is still suboptimal but everything looks suboptimal.

Right, but I think that a vendor-specific media type is the most 
suboptimal of all. It creates a high barrier to deployment in that 
people have to reconfigure their web servers, something that they can't 
always do (or know how to do). And content served with that media type 
will only work in Mozilla, which is silly because other implementations 
could use it too -- and they'll be tempted to recognize it as well.
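To illustrate the deployment barrier (Apache mod_mime syntax; the 
vendor-specific type is the one quoted above), authors would need server 
access to add something like the first line, whereas the second is 
already configured on most servers:

```apacheconf
# Vendor-specific: requires reconfiguring the server, works in one UA
AddType application/xml+moz-vg .mozsvg

# Standard SVG media type: typically already present
AddType image/svg+xml .svg
```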

A partial implementation with the right media type and the right 
namespace is IMHO the least suboptimal of all. People know how to deal 
with a common interoperable subset far better than they know how to deal 
with the alternatives.

Robin Berjon
   Research Scientist

Received on Monday, 17 January 2005 20:00:49 UTC