
RE: [minutes] CT Call 6 january 2009

From: Rotan Hanrahan <rotan.hanrahan@mobileaware.com>
Date: Tue, 13 Jan 2009 13:30:52 +0000
To: Luca Passani <passani@eunet.no>
Cc: "public-bpwg-ct@w3.org" <public-bpwg-ct@w3.org>
Message-ID: <58AFDE5C-CAC1-4B77-822F-419587CF4C93@mimectl>
Here's a temporary fix for the Firefox problem where it can't understand entities in XHTML Basic 1.1 documents...

Where you have FF installed in a directory/folder called (for example) "Mozilla Firefox", look for a sub-sub-directory called "Mozilla Firefox/res/dtd", wherein you will find a file called "xhtml11.dtd".

Copy this file to one called "xhtml-basic11.dtd" in the same directory. Now refresh the page that caused you the problem.

This enables FF to recognise any XHTML 1.1 entities in XHTML Basic 1.1 documents.
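In script form, the workaround amounts to copying one file. Here is a minimal Python sketch; the install path in the comment and the helper name are my assumptions, not part of Firefox:

```python
import os
import shutil

def install_basic_dtd(ff_res):
    """Copy Firefox's bundled XHTML 1.1 DTD under the XHTML Basic 1.1
    name, so the browser finds the entity definitions it otherwise
    misses when rendering XHTML Basic 1.1 documents."""
    src = os.path.join(ff_res, "xhtml11.dtd")
    dst = os.path.join(ff_res, "xhtml-basic11.dtd")
    shutil.copyfile(src, dst)
    return dst

# On a real install you would point it at the res/dtd folder, e.g.
# (path is an assumption, adjust to your machine):
# install_basic_dtd(r"C:\Program Files\Mozilla Firefox\res\dtd")
```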

Hope this helps.


From: Rotan Hanrahan
Sent: Tue 13/01/2009 12:47
To: Luca Passani
Cc: public-bpwg-ct@w3.org
Subject: RE: [minutes] CT Call 6 january 2009


The document is valid. But it is XHTML Basic 1.1, which is new and not "well known" to FF. Unfortunately FF does not process external DTDs, so it mis-parses the document: having missed the entity definitions in the referenced DTD, it breaks when it sees something it did not expect (but should have expected, had it processed the DTD). In this case the DTD is http://www.w3.org/TR/xhtml-basic/xhtml-basic11.dtd

This is my understanding of the problem.
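Any non-validating XML parser reproduces the failure. A minimal sketch in Python, with expat standing in for Firefox's parser (the sample document is illustrative; like Firefox, expat never fetches the external DTD, so the named entity is undefined):

```python
import xml.etree.ElementTree as ET

# A well-formed XHTML Basic 1.1 fragment that declares its DTD and
# uses a named entity defined only in that (external, unfetched) DTD.
doc = """<?xml version="1.0"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML Basic 1.1//EN"
    "http://www.w3.org/TR/xhtml-basic/xhtml-basic11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><title>demo</title></head>
<body><p>a &middot; b</p></body>
</html>"""

try:
    ET.fromstring(doc)
except ET.ParseError as err:
    # The parser never reads the external DTD, so &middot; is unknown.
    print("XML Parsing Error:", err)
```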

Perhaps the FF insiders can give a better explanation.

This shows that fragility is not just in the creation of "better quality" markup, but also in its consumption. An adaptive solution would know about browser weaknesses and work around them pragmatically.


From: Luca Passani
Sent: Tue 13/01/2009 12:20
To: public-bpwg-ct
Subject: Re: [minutes] CT Call 6 january 2009

Please pardon my jumping into an old thread, but this is funny, because 
the point I was making in my one-week-old post below just 
materialised in front of me on the W3C website:


Here is what I am getting (Firefox):

XML Parsing Error: undefined entity
Line Number 42, Column 518:            Comment from: pravin [Visitor] <a 
title="Send email to comment author"><img 
src="http://www.w3.org/blog/rsc/icons/envelope.gif" width="13" 
height="10"  class="middle" title="Send email to comment author" 
alt="Email"/></a>  &middot; <a 
rel="nofollow">http://localhost/wurfl/wurfl_php.php</a>            </div>

It is honorable that W3C tries to eat its own dogfood, but, as I was 
saying, XHTML breaks way too easily to be viable for the big web. The 
risk that someone somewhere injects a poisonous entity into your site is 
just too high....
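One defensive measure, if XHTML must be served, is to never emit named entities at all and to use numeric character references instead, which every XML parser accepts without a DTD. A hedged sketch of that idea (the function name is mine, not from the thread):

```python
import re
from html.entities import name2codepoint

# XML's five predefined entities need no DTD and can stay as-is.
XML_BUILTIN = {"amp", "lt", "gt", "quot", "apos"}

def to_numeric_refs(markup):
    """Rewrite HTML named entities (e.g. &middot;) as numeric
    character references (e.g. &#183;) so a non-validating XML
    parser never sees an undefined entity."""
    def repl(match):
        name = match.group(1)
        if name in XML_BUILTIN or name not in name2codepoint:
            return match.group(0)
        return "&#%d;" % name2codepoint[name]
    return re.sub(r"&([A-Za-z][A-Za-z0-9]*);", repl, markup)

print(to_numeric_refs("pravin [Visitor] &middot; email"))
# → pravin [Visitor] &#183; email
```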


Luca Passani wrote:
> Tom Hume wrote:
>> On 7 Jan 2009, at 15:27, Luca Passani wrote:
>>>>> >  sean: Sometimes there's content for high-end phones tagged as
>>>>> >  "mobile" that may not work on a low-end phone. We already have a
>>>>> >  method for keeping proxies away from content, "no-transform"
>>>> [snip]
>>>> Which bit of Seans comment do you disagree with here Luca?
>>> I disagree with the idea that whoever runs the network is entitled 
>>> to know better than those who created the application and own the 
>>> copyright. Can I?
>> Course you can :) I don't see any assertion to the contrary in the 
>> comment from Sean that you quoted.
> Sean's comment reveals that Novarra feels entitled to reformat mobile 
> content to make it better (for their definition of better). I disagree 
> with that notion. What's your problem?
>>> While I'm here, it still does not make sense that the XHTML MIME 
>>> type is not accepted as an indication that a site is mobile. This is 
>>> the situation with 99%+ of the content out there 
>>> (application/xhtml+xml == MOBILE), so there you have a perfectly 
>>> simple and effective way to detect mobile.
>> This is not universally true though - you and I discussed this back 
>> in March last year on my blog posting at
>>     http://www.tomhume.org/2008/03/guidelines-for.html
>> Where Russ Beattie popped up to point out that whilst this MIME type 
>> is a decent heuristic (and it's noted as such in CT), it's not absolute. 
> OK, so, since you ask for it, I will repeat all the arguments here 
> (and by the way, Russ wrote that comment when he was still trying to 
> make Mowser fly, so he was heavily biased at the time).
> The XHTML MIME type can be used for web content only in theory. 
> In practice nobody uses that MIME type for full-web content, simply 
> because it would break way too easily in all browsers (a save-as dialog 
> for MSIE users, catastrophic error messages and no content at all for 
> Firefox, Opera and Mozilla). Nobody uses XHTML for full-web content, 
> not even those who think they are using XHTML (somewhere they will be 
> doing something which makes every browser revert to quirks mode 
> and treat their XHTML-ish markup as nothing more than tag soup).
> Because of this, application/xhtml+xml is an excellent heuristic for 
> detecting mobile content (the only place where the MIME type is adopted).
> Now, I can understand that W3C would find the idea of accepting that 
> MIME type as an indicator of mobile content embarrassing (it could be 
> read as an admission that XHTML did not get very far on the web). On 
> the other hand, this is not my problem, and it is simply not OK to 
> discard application/xhtml+xml as a good heuristic for CTG, because 
> the following holds in virtually all cases:
>     application/xhtml+xml => mobile content
> Luca
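The heuristic argued for above reduces to a one-line check on the Content-Type header. A sketch under the thread's assumption, using the registered MIME type application/xhtml+xml (the function name is illustrative):

```python
def looks_mobile(content_type):
    """Thread's heuristic: content served with the XHTML MIME type
    (application/xhtml+xml) is, in practice, almost always mobile.
    A heuristic only -- the thread itself notes it is not absolute."""
    mime = content_type.split(";")[0].strip().lower()
    return mime == "application/xhtml+xml"

print(looks_mobile("application/xhtml+xml; charset=utf-8"))  # → True
print(looks_mobile("text/html"))                             # → False
```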
Received on Tuesday, 13 January 2009 13:32:19 UTC
