Re: XHTML 2.0 - 3.2 Conformance Requirements (PR#7651)

My replies are embedded below.  Note that while I speak with some 
authority on these topics, I have not vetted these comments with the 
HTML Working Group and they should not be considered a formal reply.

Bjoern Hoehrmann wrote:

>Could the HTML Working Group please either point out the sections in the
>draft under discussion that clearly answer such questions or explain in
>detail the changes that have been made in response to the comment? There
>is no point in simply answering questions as the reviewer is hardly the
>only one wondering about the specific item, or, if that is the position
>of the Working Group, this needs to be pointed out in the response.
>  
>
First, let's be clear.  I was and am answering the reviewer.  I am 
pleased that there are others interested in our specification, and 
imagine some of them might be interested in the answers to these 
questions.  But I can only answer the questions posed, and then only 
with information that is public.  Working group internal deliberations 
are private to the W3C until such time as a draft is published.

With regard to your question:  XHTML 2 and 1.1 are XML dialects, much as 
HTML 4 was an SGML dialect.  XHTML refers NORMATIVELY to the XML 
specification, and that specification defines the rules for 
well-formedness checking and validation.  XHTML does not require that a 
user agent be a validating processor.  XHTML does require that strictly 
conforming XHTML documents be valid and, by inference, that they be well 
formed.  XHTML does not define the behavior of user agents in the face 
of non-conforming XHTML documents.  A user agent could, for example, 
attempt to render the document anyway, as most browsers do today.  That 
is outside the scope of XHTML - the document in question is NOT XHTML.
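
To make the distinction concrete (a minimal sketch of my own, not text 
from the specification): a strictly conforming XHTML 1.1 document 
declares the DOCTYPE and the XHTML namespace so that a validating 
processor can check it against the DTD, for example:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
        "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
      <head><title>Example</title></head>
      <body><p>Valid, and therefore also well formed.</p></body>
    </html>

A document that omitted the DOCTYPE or used an undeclared element could 
still be well-formed XML, but it would not be strictly conforming XHTML.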

>>From your response I understand that XHTML 2 does not define processing
>of non-compliant content at all, neither directly nor indirectly through
>normative reference. That's not acceptable, please define processing in
>case of error conditions in detail, otherwise user agents cannot easily
>interoperate.
>
>  
>
I am sorry you find this unacceptable.  Our task is to define 
interoperability of valid content, not invalid content.  Invalid content 
is not XHTML, and can therefore be treated in any way a user agent sees 
fit.  The message to document authors is simple - write valid content 
and it will be portable.  Write invalid content at your peril.  This is 
true today, and it will be true in the future.

>>>Does the Fragment identifier constraint mean that with mixed namespace
>>>content, I cannot use the fragment identifiers of the other namespaces
>>>in an XHTML document to identify part of an SVG image say?
>>>      
>>>
>>Any attributes of type "ID" are legitimate fragment identifiers.  If SVG has
>>such an attribute, it would be a legal target.
>>    
>>
>
>The requirement only applies "When a user agent processes an XHTML 2
>document as generic XML", no XHTML 2 user agent will ever do that as
>it is required to "process" XHTML 2 documents as defined in XHTML 2;
>clearly, user agents that do not conform to XHTML 2 are out of scope
>of XHTML 2, please remove the requirement.
>  
>
I think you are misunderstanding the constraint. It is a hold-over from 
XHTML 1.0, which could be processed as either "XML" or "HTML", depending 
on the user agent.  All conforming user agents for XHTML 1.1 and beyond 
should be generic XML processors, or should have such a processor at 
their heart.  However, since this constraint seems confusing to you, I 
am confident we can rephrase it (it has already been rephrased in the 
current draft).
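
To make the fragment identifier point concrete (again a sketch of my 
own, not normative text): if an XHTML document embeds SVG, and SVG's id 
attribute is declared to be of type ID in whatever DTD or schema is in 
use, then a fragment identifier such as "#logo" identifies the SVG 
element when the document is processed as generic XML:

    <html xmlns="http://www.w3.org/1999/xhtml">
      <head><title>Mixed namespace fragment target</title></head>
      <body>
        <p>See the <a href="#logo">logo</a> below.</p>
        <svg xmlns="http://www.w3.org/2000/svg" id="logo"
             width="100" height="100">
          <circle cx="50" cy="50" r="40"/>
        </svg>
      </body>
    </html>

Whether such a document is valid depends, of course, on a hybrid DTD or 
schema that permits the svg element at that point; see my comments on 
hybrid document types below.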

>Semantics of fragment identifiers are defined in the XHTML 2.0 media
>type registration, no media type for XHTML 2.0 has been proposed yet,
>please propose a media type for XHTML 2.0 by following the relevant
>IETF procedures and define fragment identifier processing there.
>  
>
Thanks for reminding us that we have this task ahead.  It is not clear 
that XHTML 2 will need a different media type registration than XHTML 
1.1, but we are considering that as an option.

>>>Is processing children of unknown elements sensible - this is what led
>>>to the script cargo-cult of -- hide from old browsers gobbledygook.
>>>      
>>>
>>The purpose of child content rendering is to enable extensibility without
>>breaking compatibility with legacy browsers.
>>    
>>
>
>As Jim pointed out, it does not enable but rather prevent that. Please
>change the draft to enable introduction of elements with content that
>should not be rendered as text.
>  
>
This is readily achievable today through the use of a stylesheet, should 
you wish to suppress the PCDATA content of an element (e.g., display: 
none or visibility: hidden).  The working group has debated introducing 
an attribute that would mean "do not present the content of this 
element".  There was no consensus for this proposal, as far as I know.

>So the HTML Working Group's position is that SVG needs to define how
>XHTML 2 content is processed in SVG's foreignObject element? SVG 1.2
>does not do that and the HTML Working Group did not send comments on
>SVG 1.2 to this effect, so it seems neither SVG nor XHTML 2.0 will
>define how to use them together; that does not make much sense to me.
>Please define XHTML 2 processing when XHTML 2 content is included in
>SVG documents.
>
>The same goes for SVG content included in XHTML 2 documents of course.
>Clearly, per the latest draft 
>
>  <html ...>...<svg ...><script>script content ...
>
>XHTML 2 user agents that do not support inline SVG would render the
>script content, that makes no sense at all as pointed out above.
>  
>
Without the namespaces here, I am not sure what you are talking about.  
However, if script is the XHTML element "script", then it wouldn't be 
rendered.  The script element's content is not rendered by definition.  
The XHTML 2 draft says "it must continue to process the content of that 
element", not "it must render the content of that element".  You could 
easily embed SVG content and ensure that its content was not presented, 
as I have already outlined. 
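
To make your example concrete (a sketch of my own, with the namespaces 
filled in as I assume they were intended), the fragment might look like:

    <html xmlns="http://www.w3.org/1999/xhtml">
      <head>
        <title>Inline SVG</title>
        <style type="text/css">
          @namespace svg url("http://www.w3.org/2000/svg");
          svg|script { display: none }
        </style>
      </head>
      <body>
        <svg xmlns="http://www.w3.org/2000/svg">
          <script type="text/ecmascript">/* script content ... */</script>
        </svg>
      </body>
    </html>

If the script element is in the SVG namespace, a user agent that does 
not support inline SVG can be told not to present its character data 
with a rule like the one above; if it is the XHTML script element, its 
content is not rendered in the first place.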

The more important issue here is how hybrid (or so-called compound) 
documents are handled.  XHTML Modularization introduced the mechanism 
for defining hybrid markup languages so that disparate collections of 
elements and attributes could be merged into a single "language", 
usually with elements from multiple XML namespaces.  If you define a 
language using that mechanism, we attempt to guarantee that documents 
which are valid in that XHTML Family markup language will be processable 
by any XHTML Family user agent.  The onus is on the language designer to 
create a content model that makes sense given the various different 
technologies that are being merged.  The XHTML Working Group has done 
this for XHTML 1.1, MathML, XForms, and others.  We have also done base 
work on SVG, although I do not personally know the status of that 
activity.  Regardless, if such a hybrid markup language is defined, you 
can be confident that valid documents are XHTML conforming and, as such, 
that they will be processed correctly by XHTML Family user agents.  
That's the whole point.
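
To illustrate what such a hybrid document looks like in practice (an 
illustrative sketch of my own, not taken from any particular driver), a 
document in an XHTML+MathML hybrid language simply mixes the namespaces, 
and its validity is established against the driver DTD or schema that 
the language designer produced through Modularization:

    <html xmlns="http://www.w3.org/1999/xhtml">
      <head><title>Hybrid document</title></head>
      <body>
        <p>An inline formula:
          <math xmlns="http://www.w3.org/1998/Math/MathML">
            <mrow>
              <mi>E</mi><mo>=</mo><mi>m</mi>
              <msup><mi>c</mi><mn>2</mn></msup>
            </mrow>
          </math>
        </p>
      </body>
    </html>

An XHTML Family user agent that does not implement MathML should still 
be able to process such a document; one that does implement it can 
render the formula natively.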

Thanks for your continued interest in our work.  It is gratifying to see 
non-members put in this kind of energy.

-- 
Shane P. McCarron                          Phone: +1 763 786-8160 x120
Managing Director                            Fax: +1 763 786-8180
ApTest Minnesota                            Inet: shane@aptest.com
