Re: [css3] [css21] browser specific CSS

From: Boris Zbarsky <bzbarsky@MIT.EDU>
Date: Fri, 01 Apr 2011 14:18:44 -0400
Message-ID: <4D961704.7070002@mit.edu>
To: Glenn Linderman <v+html@g.nevcal.com>
CC: "Tab Atkins Jr." <jackalmage@gmail.com>, www-style@w3.org
On 4/1/11 1:51 PM, Glenn Linderman wrote:
> It is certainly true that UA string browser detection is hard, and error
> prone, and that began back when IE tried to pass itself off as Mozilla,
> if I recall correctly, so that web sites that were designed to take
> advantage of Mozilla features would work in the new version of IE also.
>
> To me, this says that it would be helpful to standardize on a new method
> of browser detection that can't be spoofed from Javascript, rather than
> UA parsing, and which returns the right answer, in a form that is easier
> to use correctly than to use incorrectly.

Glenn, I think you misunderstood my argument.

The problem is not that sites can't parse UA strings (though there's 
that too; try browsing around with a browser with the string "pre" 
somewhere in its user-agent, or try using CapitalOne's site with an 
Irish localization of a browser that is not MSIE).

The problem is that authors misuse the information they extract from the 
UA string.  And it's _hard_ to not do that.  For example, say it's 2010 
and you correctly extract that the user is using Firefox 3.6.3, which is 
based on Gecko 1.9.2.3, and their Gecko build was created on May 1, 
2010.  And this has a bug you want to work around.  Which of these do 
you use the workaround for?

* Firefox version 3.6
* Firefox versions 3.6 and later
* Firefox versions 3.6 and earlier
* Gecko 1.9.2
* Gecko 1.9.2 and later
* Gecko 1.9.2 and earlier
* Gecko builds created on May 1, 2010
* Gecko builds created on or after May 1, 2010
* Gecko builds created on or before May 1, 2010
* Firefox version 3.6.3
etc, etc

I've seen people using all of these and more.
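To make the ambiguity concrete, here is a sketch using the example UA data above (the regex and predicate names are illustrative, not a recommendation): several different "should I apply the workaround?" conditions all agree on Firefox 3.6.3, so testing against that one browser cannot tell you which condition matches the actual scope of the bug.

```javascript
// A 2010-era Firefox 3.6.3 UA string, as in the example above.
var ua = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) " +
         "Gecko/20100501 Firefox/3.6.3";

// Correctly extract the Firefox version.
var m = /Firefox\/(\d+)\.(\d+)(?:\.(\d+))?/.exec(ua);
var major = +m[1], minor = +m[2], patch = +(m[3] || 0);

// Four of the gating choices from the list above. Every one of them
// is true for this UA, yet they diverge for earlier and later releases.
var gates = {
  exactly36:   major === 3 && minor === 6,
  atLeast36:   major > 3 || (major === 3 && minor >= 6),
  atMost36:    major < 3 || (major === 3 && minor <= 6),
  exactly363:  major === 3 && minor === 6 && patch === 3
};
```

Nothing in the UA string itself distinguishes these; choosing correctly requires knowing where the bug came from and when it will be fixed, which is exactly the resource cost described below.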

The right one to use depends on the bug you're working around. 
Understanding which one to use involves understanding the relevant 
specifications and how stable they are, and sometimes knowing something 
about what the UA vendor's plans are.  Deciding which one to use also 
depends on your testing policies (e.g. whether you will test your site 
with the Firefox 4 betas when they appear).

So people get this wrong all the time, and I can't blame them!  But the 
problem is they _think_ this is a matter of a simple version check.

Now in this case the problem can be ameliorated by providing less 
information in the UA string; e.g. only providing "Gecko 1.9.2.3".  But 
that still leaves a number of the above options for how to apply the 
workaround, and authors will still guess wrong.

> No, no, no. You have _stated_ that browser detection is a bad thing, but
> not _explained_ why. Since this keep coming up over and over, perhaps
> there is a picture from the mile-high view that you could refer me to,
> I'd be glad to look at it, understand the issues that you and Boris seem
> to agree on, and possibly change my opinion.

Well, one problem with browser detection as practiced right now that Tab 
mentioned is that it raises a barrier to users using any browser that 
wasn't in use when the detection code was written, because sites break. 
This includes both using new browsers and using new versions of 
existing browsers.

Now maybe you don't think this is a bad thing, of course.  Tab and I 
both think it's a bad thing.

I'll let Tab speak for any other issues he's run into; the above is the 
big one for me.

> 1) it is hard to implement them well, given the available facilities for
> doing sniffing... this could certainly be improved, with boilerplate
> Javascript or CSS features to assist.

I don't think those would help the implementation difficulties I mention 
above.

> 2) some web site authors are bad coders.

I don't think the _coding_ is the main problem here (though it's 
certainly _a_ problem).  The main problem is that correctly doing UA 
sniffing just requires resources beyond what people are willing to put 
into it.  In particular, it requires continuous testing of new releases 
of things you're sniffing for and updating of your sniffing.  Most 
people just don't want to deal with that, and I don't blame them.

-Boris
Received on Friday, 1 April 2011 18:19:21 GMT