
Re: [css3] [css21] browser specific CSS

From: Glenn Linderman <v+html@g.nevcal.com>
Date: Fri, 01 Apr 2011 10:51:05 -0700
Message-ID: <4D961089.7060709@g.nevcal.com>
To: "Tab Atkins Jr." <jackalmage@gmail.com>
CC: www-style@w3.org
On 4/1/2011 9:43 AM, Tab Atkins Jr. wrote:
> On Thu, Mar 31, 2011 at 8:20 PM, Glenn Linderman <v+html@g.nevcal.com> wrote:
>> Thanks for your response, Tab.  I was unsure where or how to raise this
>> issue.  If there is a better place or better technique, please let me know.
>
> Nope, this is just fine.


Thanks for the confirmation.


>> On 3/31/2011 4:24 PM, Tab Atkins Jr. wrote:
>>> Browser-specific hacks are a strong anti-pattern that we shouldn't
>>> propagate, or even worse, officially bless in CSS.  This does mean
>>> that us web authors have a somewhat more difficult job, either
>>> designing good CSS that still works in the presence of bugs, or making
>>> our code ugly with hacks on occasion, but I think that's an acceptable
>>> cost.
>>
>> I understand about the anti-pattern... but that is clearly a purist point of
>> view, with little practicality.  Forcing sites to use JavaScript just to
>> detect which buggy browser one is dealing with is a significant negative,
>> particularly when users turn Javascript off, and not all web hosts support
>> CGI scripting (or limit it to particular scripts they install).
>
> Sorry, but you're wrong here.  Using browser detection feels like the
> easiest solution from the author's POV - after all, you know what
> version is buggy, so why not just deliver a hack for that specific
> version? - but when you're looking at the problem from a mile up, you
> see the nasty effects that all these reasonable local decisions have
> on the global environment.
>
> Boris's latest response is spot-on in this regard.  The sad truth is,
> a large percentage, probably even a strong majority, of authors simply
> don't know how to use browser detection properly.  It's not that
> difficult, but there are some subtle issues to worry about when making
> the detection reasonably future-proof, and these are often skipped
> over since it doesn't make a difference *right now*.  We can see the
> results of this with UA-string detection, and the *horrifying* pain it
> causes browsers *and users* when badly-done UA detection breaks a site
> because some new browser, or a new version of an existing browser,
> comes out that is improperly sorted by the detection routine.
>
> In other words, browser detection is bad for the web.  It sounds good
> to you, the author, right now, but it hurts users in the future, and
> the standard ordering of constituencies puts users above authors.
> Thus, making browser detection continue to be somewhat difficult and
> painful is a Good Thing(tm), because it means that people will
> hopefully use it less.


That's an interesting statement, and Boris's is too.  I won't answer 
his separately, to reduce the clutter in the thread.

It is certainly true that UA-string browser detection is hard and error 
prone.  If I recall correctly, that began back when IE tried to pass 
itself off as Mozilla, so that web sites designed to take advantage of 
Mozilla features would also work in the new version of IE.

To me, this says it would be helpful to standardize a new method of 
browser detection to replace UA parsing: one that can't be spoofed from 
JavaScript, that returns the right answer, and that comes in a form 
that is easier to use correctly than incorrectly.
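
Purely as an illustration of what I mean (nothing like this exists 
today; every name below is invented), such a facility might hand 
authors structured data instead of a free-form string:

```javascript
// Hypothetical sketch only: no browser provides this API.
// The point is structure: the engine would report discrete fields,
// so authors compare numbers instead of parsing a UA string.
function describeBrowser(env) {
  // env stands in for the object such an API might supply.
  return env.product + " " + env.version.join(".");
}

// Made-up environment object for illustration:
var env = { vendor: "Mozilla", product: "Firefox", version: [4, 0, 1] };
var label = describeBrowser(env); // "Firefox 4.0.1"
```

With structured fields, a version comparison is a numeric test rather 
than a fragile regular expression, which is the "easier to use 
correctly than incorrectly" property I'm after.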


>>> The one area where we *do* want to do something, imo, is in making CSS
>>> capable of doing feature-testing.  This has been proposed in the group
>>> before, via an @supports rule.  This doesn't let you work around
>>> browser bugs (except insofar as you can abuse this to do browser
>>> selection similar to how you can abuse vendor-specific selectors), but
>>> it does let you work around less powerful older browsers that don't
>>> understand some new functionality.
>>
>> So this recognizes that full implementation of the standard in all browsers,
>> at best, will require some amount of implementation and migration delay, but
>> fails to assist in compensating for bugs and/or variant and/or faulty
>> interpretations of the standards in those implementations... which there
>> will no doubt be, based on historical trends and the state of software
>> development.
>>
>> Recognition of the problem without providing a convenient solution to it
>> does not make for a friendly standard.
>
> Indeed, it doesn't account for bugs.  You can't do so without browser
> detection, which I've already explained is a bad thing for users and
> thus is bad for the web.


No, no, no.  You have _stated_ that browser detection is a bad thing, 
but not _explained_ why.  Since this keeps coming up over and over, 
perhaps there is a picture from the mile-high view that you could refer 
me to.  I'd be glad to look at it, understand the issues that you and 
Boris seem to agree on, and possibly change my opinion.

However, bugs do exist, and I predict they will continue to exist.  So 
what I'm hearing is that the standards body is happy to produce a spec 
in which any feature implemented differently, or buggily, across 
browsers cannot be used effectively by web sites, because web authors 
are prevented from compensating.  Even the users who chose a browser 
that got a feature right cannot benefit from it, because web authors 
can't tell their browser from the one that got it wrong, and therefore 
have to skip using the feature altogether.
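
For contrast, the feature-testing approach behind the @supports 
proposal can be approximated today from JavaScript by probing what a 
style object actually supports.  A minimal sketch (the fake style 
object below merely stands in for a real element's 
document.createElement("div").style):

```javascript
// Feature-detection sketch: decide based on what the style object
// actually supports, rather than on which browser claims to be running.
// In a real page, styleObject would be document.createElement("div").style.
function supportsCssProperty(styleObject, prop) {
  return prop in styleObject;
}

// Stand-in for an element's style object (illustrative only):
var fakeStyle = { borderRadius: "", color: "" };
var hasRadius = supportsCssProperty(fakeStyle, "borderRadius"); // true
var hasBehavior = supportsCssProperty(fakeStyle, "behavior");   // false
```

Note that this answers "does the browser claim to understand the 
property?", which is exactly the gap I'm pointing at: it cannot tell a 
correct implementation from a buggy one.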


>> There is yet one more alternative, of course, and that is to feed
>> browser-specific CSS files based on the User Agent string... but spoofing
>> the User Agent is rampant in certain circles, which could be disastrous to a
>> well-designed site.
>
> Ironically, spoofing the UA string is rampant *precisely because*
> browser detection based on the UA string is usually done badly.


That may be ironic, and it may even be true in some cases; however, in 
the circles I travel in, UA-string spoofing is done *mostly because* 
some web sites try to require the use of a particular browser, and try 
to enforce it via the UA string (that being the only way they know of 
to do such detection and enforcement); they do that, rather than test 
their site with a sufficient selection of browsers, to reduce their 
costs.  Happily, the number of sites I frequent that "only support IE" 
has been reduced (at least percentage-wise) over the years, and most 
sites that enforce restrictions now support "both IE and Firefox".  But 
I've talked to a number of people recently who claim that "my school 
work can only be done using IE", or "my kids' school web site can only 
be accessed using IE" (yep, my perspective is that it is mostly schools 
that are still in the backwoods... "we can't afford to upgrade the site 
to handle other browsers...").  These days I generally don't tell them 
to do UA spoofing, although I used to.  I tell them to complain to the 
webmaster, then use IE for that site if they have to, but to use 
anything else for all their other web browsing.

Whether such sites correctly parse the UA string is a different story.  
As a user, I want the web site to work with my browser.  I don't want 
to be told to switch browsers; in particular, I don't want to be told I 
can't use a site with my preferred browser... when I am, I simply skip 
to a different site.

As an author, though, I want to accommodate all browsers, with fallback 
solutions for those that are limited due to bugs.

And as an author, I'd like the W3C to tell me how best to do that, 
using UA-string parsing or better facilities.  I'm not enamored of the 
idea of limiting the presentation of a site to the features available 
in Mobile IE for Windows CE, although there are still a fair number of 
users on it; I want to accommodate those users, but provide a better 
experience for users with better browsers.  And I can't see how 
providing new features that can't be used, because a web site can't 
detect whether the user's browser _correctly_ supports a feature, 
advances the state of the art.

Maybe that's because I don't see the mile-high picture... so show it to 
me; don't just tell me there is such a picture.


>> It would be highly friendly if CSS required the browsers to "man up" to what
>> version of what browser they actually are, so that as their deficiencies
>> come to light they can easily be detected and compensated for rather than
>> forcing the use of browser-specific CSS, Javascript, or ugly hacks.
>
> Boris answered this - browsers aren't going to do this, for the
> reasons I outlined above.  Most people use this information in
> incorrect ways, and this hurts current users of niche browsers and
> future users of popular browsers.


Sorry, you didn't explain it, and Boris didn't explain it either... you 
both only stated that there were such reasons.

If browser sniffing and the resulting workarounds are implemented 
poorly, that means one or more of the following:

1) It is hard to implement them well, given the available facilities 
for doing sniffing... this could certainly be improved, with 
boilerplate JavaScript or CSS features to assist.

2) Some web site authors are bad coders.  This is certainly true... 
there are many web sites that suffer from bad-coder syndrome.  Lots of 
sites are authored by people just whacking at the HTML until it works 
for them in one browser, at one screen size, and then claiming it is 
done.  Others may do bad browser detection, support two browsers, make 
things worse for a third, and not care.

3) If a single browser is used for web site development, and it has 
bugs, the site may come to depend on those bugs, and no other browser 
may even want to display that site properly, because to do so would 
require implementing bugs instead of standards.

Problem 1 could be cured, eventually, with appropriate features in the 
specifications.  Problems 2 and 3 will never go away; but if browser 
detection were easier and standardized, available in CSS without 
resorting to JavaScript (and available to JavaScript and to CGI scripts 
in an easier manner as well), then it would be much easier to test 
with multiple browsers, and good web site coders could benefit.

Don't cater to the bad coders; rather, make it easy for good coders to 
do useful things in easy and effective ways, and provide documentation 
for doing it right.  If it is easy enough, even the bad coders might 
learn how.  But right now there is a huge barrier to using CSS: it 
doesn't work cross-browser without heavy investment in learning arcane 
browser hacks.

Until you show me the mile-high picture, what I'm hearing is that in 
the future it will be more difficult to do browser detection, and 
therefore the barrier to using advanced CSS will be even higher.  The 
perception will be that the spec has lots of nice features, but because 
there is no way to work around browser bugs (which will exist), the 
payback for learning a new feature will be ripping out uses of it when 
not all browsers support it correctly.
Received on Friday, 1 April 2011 17:51:46 GMT
