Re: [minutes] CT Call 6 January 2009

yes, sure. When you get back to reality, you will realize that 99.84% 
are HTML documents (some of which may carry whatever DTD, but all of 
which will invariably be handled by the respective tag-soup parsers).
The remaining 0.16% are in great majority mobile-optimised sites, i.e. 
exactly the ones that the heuristic is intended to positively identify 
(which has been my point all along).
What remains after you have taken 2% of that 0.16% is a handful of 
full-web sites which use application/xhtml+xml for demo purposes. You can 
probably also find the site of some XML pasdaran who intends to make a 
point about XHTML in the face of reality (someone who is intimately 
familiar with the complete W3C technology stack, I am sure, but who does 
not need to run a website that depends on its popularity with large 
audiences...)
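To be concrete, the heuristic I am talking about boils down to something 
like the sketch below (illustrative only; the function name and the 
example Content-Type values are my own, not from any spec or tool 
discussed here):

```python
# Sketch of the heuristic under discussion: a response advertised with
# the application/xhtml+xml MIME type is, per the argument above,
# almost certainly a mobile-optimised site, because virtually all
# full-web content is served as tag-soup text/html.
def looks_mobile_optimised(content_type):
    """Return True if the Content-Type header advertises XHTML as XML."""
    if content_type is None:
        return False
    # Strip parameters such as "; charset=utf-8" before comparing.
    mime = content_type.split(";")[0].strip().lower()
    return mime == "application/xhtml+xml"

# Tag-soup HTML, the ~99.84% case:
print(looks_mobile_optimised("text/html; charset=utf-8"))  # False
# The 0.16% case the heuristic is meant to catch:
print(looks_mobile_optimised("application/xhtml+xml"))     # True
```

Of course, the whole argument is about how small the false-positive set 
is (the demo sites and XML zealots above), not about the check itself.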

Luca

Tom Hume wrote:
>
> I don't agree I'm afraid. Looking at Eduardo's stats, at least 99.84% 
> of documents are XHTML but not advertised as such. This leaves 0.16% 
> of sites as XHTML and advertised as such.
>
> Combine this with NetCraft's figures and that's just shy of 300,000 sites 
> - a pretty significant number to discount, even if a small proportion 
> of the overall web IMHO.
>
> On 7 Jan 2009, at 22:20, Luca Passani wrote:
>
>> I'll try to find some data. But do we agree that if less than 1% of 
>> full-web-only sites adopt xhtml+xml as a MIME type, then we can call 
>> xhtml+xml an absolute heuristic for detecting mobile sites?
>>
>> (of course, I still believe that those sites are way less than 1% of 
>> the full web, but I say 1% to leave some margin for error in the 
>> unlikely event that whoever collected the stats found two or three 
>> of those sites out of a pool of only a few hundred)
>
> -- 
> Future Platforms Ltd
> e: Tom.Hume@futureplatforms.com
> t: +44 (0) 1273 819038
> m: +44 (0) 7971 781422
> company: www.futureplatforms.com
> personal: tomhume.org
>

Received on Wednesday, 7 January 2009 23:05:47 UTC