
Re: [css3] [css21] browser specific CSS

From: Tab Atkins Jr. <jackalmage@gmail.com>
Date: Fri, 1 Apr 2011 11:27:52 -0700
Message-ID: <AANLkTi=ghjRJ7YZZJhLJP23XSXVuLcyOiuXJSO0abUD5@mail.gmail.com>
To: Glenn Linderman <v+html@g.nevcal.com>
Cc: www-style@w3.org
On Fri, Apr 1, 2011 at 10:51 AM, Glenn Linderman <v+html@g.nevcal.com> wrote:
> On 4/1/2011 9:43 AM, Tab Atkins Jr. wrote:
>> Glenn Linderman wrote:
>>> It would be highly friendly if CSS required the browsers to "man up" to
>>> what
>>> version of what browser they actually are, so that as their deficiencies
>>> come to light they can easily be detected and compensated for rather than
>>> forcing the use of browser-specific CSS, Javascript, or ugly hacks.
>>
>> Boris answered this - browsers aren't going to do this, for the
>> reasons I outlined above.  Most people use this information in
>> incorrect ways, and this hurts current users of niche browsers and
>> future users of popular browsers.
>
> Sorry, you didn't explain it, and Boris didn't explain... you only stated
> that there were such reasons.

You're right; in an attempt to keep my email from getting too long, I
didn't go into a lot of detail.  Allow me to rectify that, then.

There is one, and only one, decent way to do browser-detection and use
that information.

First, one must craft a test that is sufficiently precise that it only
targets a single version from a single browser, or a well-defined
range of existing versions from a single browser.  One must *never*
attempt to detect future browser versions, or use a test that has a
decent chance of accidentally detecting such, or that similarly
detects new browsers.  Crafting this sort of thing requires a decent
bit of cleverness; the simple and commonly-used browser detection
hacks pretty much uniformly fail this metric.
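To illustrate, here's a rough sketch of what such a closed-range test might look like in JavaScript (the Opera 9 check and version cutoff are examples, not a recommendation):

```javascript
// Sketch: a detection test scoped to a known, closed range of versions.
// It deliberately matches only Opera 9.x releases up to 9.64 (the last
// 9.x release) and nothing newer, so any future Opera falls through to
// the default code path.
function isOpera9(ua) {
  var m = /Opera\/9\.(\d+)/.exec(ua);
  // Refuse to match any version we have not actually tested against.
  return m !== null && Number(m[1]) <= 64;
}
```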

Second, one must use this information only to deploy *exceptions* to
the default style and behavior, never to deploy new behavior.  If you
ever deploy new behavior based on a detection hack, then a new version
of a currently-bad browser, or a new/niche browser with sufficient
capabilities, won't get the sexy new behavior.  This rule is, again,
very commonly violated.
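Concretely, the shape of the rule is: detection subtracts behavior, it never adds it.  A sketch, using a hypothetical buggy-browser check purely for illustration:

```javascript
// Sketch: the detection result only subtracts behavior for one
// known-bad browser version; everyone else, including browsers that
// don't exist yet, gets the full-featured default.
function pickCodePath(ua) {
  if (isBuggyOldBrowser(ua)) {
    return "fallback";  // exception: work around a known bug
  }
  return "full";        // default for everyone else
}

// Hypothetical detector, for illustration only; in practice this must
// be a precise, closed-range test.
function isBuggyOldBrowser(ua) {
  return /Opera\/9\.2\d\b/.test(ua);
}
```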

(Note that feature-testing gets around both of these issues - it's
totally fine to feature-test in a way that will detect future/unknown
browsers with the right functionality, and to deploy special sexy
functionality based on the results.)
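For comparison, a feature test probes for the capability itself.  A minimal sketch (the `borderRadius` property is just an example; the `style` parameter stands in for an element's style object so the check is easy to exercise):

```javascript
// Sketch of feature-testing: probe for the capability itself rather
// than the browser's name.  In a real page `style` would be
// document.createElement("div").style.  Any browser, present or
// future, that supports the feature gets the enhanced path.
function supportsBorderRadius(style) {
  return "borderRadius" in style;
}
```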

Violating either of these rules has bad consequences.  If your
detection algo will fire on future versions of a browser, then fixing
the bug or adding the functionality that you're using a hack to get
around won't help them - users of the new version will still get the
old/sucky version of the code, despite being full-featured.

The same applies if your detection algo is insufficiently precise,
such that it will detect new/niche browsers: Opera, for example, has
run into this problem throughout its existence; Chrome did as well
when we did early experiments with radically simplifying the UA
string.  Don't even get me started on all the niche Linux browsers.
Again, users suffer by being fed a set of hacks that don't actually
apply to their browser, and probably screw things up worse.

Even if you *think* you're being precise, it's still easy to do this
badly.  For example, there's a lot of detection code on the web that
successfully finds Opera and extracts its version.  However, a lot of
this code is badly written, such that it just grabs the *first digit*
of the version number.  This caused so many problems for Opera when
they went from version 9 to version 10 (detected as version 1!) that
they had to give up, freeze the old version number at 9.80, and list
their *real* version number in a new place in the UA string.  IE will
probably have the same problem with IE10.  Chrome, luckily, is young
enough that we were able to power through our own version of this
issue.  Again, users suffer from receiving the wrong set of hacks.
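The first-digit bug is easy to reproduce.  A sketch of the broken parser next to a correct one (UA strings simplified for illustration):

```javascript
// Sketch of the first-digit bug: grabbing a single digit reads
// "Opera/10.00" as version "1"; taking the whole number is correct.
function sloppyOperaVersion(ua) {
  var m = /Opera\/(\d)/.exec(ua);   // one digit only: breaks at v10
  return m && m[1];
}
function carefulOperaVersion(ua) {
  var m = /Opera\/(\d+)/.exec(ua);  // the full major version
  return m && m[1];
}
```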

If you use the detection results to deploy new features, you run into
similarly obvious problems.  New versions of old browsers and new/niche
browsers get the sucky old version of the site rather than the sexy
new version, just because they weren't successfully detected as being
a "conforming" browser version.  You (luckily) don't see many
"Please use IE6!" notices on the web these days, but that's just
because people are quieter about their hacks; quite a lot of sites
still simply work worse, or incorrectly, because of this effect.

In general, crafting a good detection algorithm is hard.  Crafting
your site to be full-featured by default but gracefully degrading in
properly-detected old browsers is hard.  When you fail at either of
these, users suffer.

Does that help answer your question?


> If browser sniffing and resultant workarounds are implemented poorly, that
> either means that
>
> 1) it is hard to implement them well, given the available facilities for
> doing sniffing... this could certainly be improved, with boilerplate
> Javascript or CSS features to assist.

Boilerplate can make the first problem (accurately detecting) somewhat
better.  It can't solve things entirely, and it does nothing for the
second problem.


> 2) some web site authors are bad coders.  This is certainly true... there
> are many web sites that suffer from bad coder syndrome.  Lots of sites are
> authored by people by just whacking at the HTML until it works for them in
> one browser, one screen size, and then they claim it is done.  Others may do
> bad browser detection, and support two browsers, and make things worse for
> the third, and not care.

It's not "some".  It's a large majority.  Most people simply aren't
good coders in general; programming on the web brings its own unique
challenges that even more people simply don't understand.  Boris put
it better - our definitions of "good" and "bad" are a little unusual
here; a "good" coder in this instance is someone with fairly intimate
knowledge of the development of all the browsers.  Those
people are *very* few and far between; even being a highly skilled and
intelligent coder doesn't mean you're "good" for the purpose of doing
good UA detection.


> 3) If a single browser is used for web site development, and it has bugs,
> the site may depend on those bugs, and no other browser may even want to
> display that site properly, because to do so would require implementing bugs
> instead of standards.

Yup, though this can be true without any browser detection at all.


> Problem 1 could be cured, eventually, with appropriate features in the
> specifications.  Problems 2 and 3 will never go away, but if browser
> detection were easier/standardized, and available in CSS without resorting
> to Javascript (and in Javascript in an easier manner, and to CGI scripts in
> an easier manner), then it would be lots easier to test with multiple
> browsers, and good web site coders could benefit.
>
> Don't cater to the bad coders, but rather make it easy for good coders to do
> useful things in easy and effective ways, and provide documentation for
> doing it right.  If it is easy enough, even the bad coders might learn how.
>  But right now there is a huge barrier to using CSS: it doesn't work
> cross-browser, without heavy investment in learning arcane browser hacks.

We want to offer features that let good coders do awesome things that
help users.  We don't want to offer features that let bad coders do
things that hurt users.  Every feature has a tension here, because
everything can be misused.  Every feature, then, has to be evaluated
separately, to see if the gain from exposing it is worth the harm from
it being misused.  Browser detection has a long history of being very
bad, and there's no reason to think that the parts we can solve in the
browser will offset the parts that are still dependent on people
always doing the right thing, because doing the "right thing" is *very
hard*.

~TJ
Received on Friday, 1 April 2011 18:28:44 GMT
