W3C home > Mailing lists > Public > www-style@w3.org > April 2011

Re: [css3] [css21] browser specific CSS

From: Boris Zbarsky <bzbarsky@MIT.EDU>
Date: Sat, 02 Apr 2011 00:15:24 -0400
Message-ID: <4D96A2DC.301@mit.edu>
To: Glenn Linderman <v+html@g.nevcal.com>
CC: www-style@w3.org
On 4/1/11 8:25 PM, Glenn Linderman wrote:
> The "big boys" will do adequate testing of the features in a number of brands and versions of popular
> browsers

I would be interested in your definition of "big boys".

So far we have encountered issues in Firefox 4 due to broken browser 
detection (where the site correctly detected the browser and version but 
then misused the information, not where the detection was just 
incorrect) on live.com, amazon.com, and bankofamerica.com.  That's just 
the sites I remember offhand that are clearly "big" and clearly didn't 
test in Firefox 4 betas or RCs much.  For example, live.com broke things 
with a change on their end two weeks before the first RC of Firefox 4, 
reintroducing a problem that we'd pointed out to them a few months 
before that and which they had already fixed once.  It's clear that they 
did not test this change adequately.

My general experience is that sites for the most part don't start 
testing in a new browser version until after it's released; many have 
explicit policies to this effect and will ignore feedback from 
their users that the site is broken in a browser which is in RC and will 
be shipping in two weeks.

Fixing after the new browser has shipped, of course, means that users 
are loath to update because updating means broken sites.

> Any attempts to predict the future are foolishness. Bug workarounds
> should be applied to versions known to have the bugs, no more, no less.

This is _very_ rarely done.
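For concreteness, the version-scoped workaround being described might look like the following sketch.  The bug, the affected version range, and the UA strings are invented for illustration; the point is only that the check names specific known-bad versions rather than guessing about future ones.

```javascript
// Hypothetical sketch: scope a workaround to the exact engine versions
// known to have a bug -- no more, no less.  The version range is made up.
function needsWorkaround(userAgent) {
  // Match e.g. "rv:1.9.2" in a Gecko UA string.
  var m = /rv:(\d+)\.(\d+)(?:\.(\d+))?/.exec(userAgent);
  if (!m) return false; // unknown engine: assume the bug is absent
  var major = parseInt(m[1], 10);
  var minor = parseInt(m[2], 10);
  // Pretend the bug exists only in Gecko 1.9.x and was fixed in 2.0.
  return major === 1 && minor === 9;
}
```

As the text notes, this only stays correct if someone retests when a new version ships and updates the range.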

> This does require testing when new versions arrive to see if the bugs
> have been fixed or not, so the check can be updated. That is annoying

Annoying to the point that no one does it, actually.  I wouldn't either, 
if I were a web author....  It's just too much work.  Then again, maybe 
you don't sprinkle browser-conditioned things in your code as much as 
some sites do, so the burden is less for you.

> I'll admit it is not clear to me when to draw the line between Gecko
> versions and Firefox versions. I rather doubt that there are multiple
> versions of Gecko used in the same version of Firefox, though

There are not.

> so checking the Firefox (it is the brand) version seems to provide more
> precision.

That's more or less the wrong answer.  ;)

> Whether checking Gecko version allows the same checks to be
> used for multiple brands of browsers that all use (different) versions
> of Gecko

It does, yes.
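A hedged sketch of what that means in practice: keying off the Gecko version (the "rv:" token in the UA string) covers every Gecko-based browser (Firefox, SeaMonkey, Camino, and so on), whereas keying off the "Firefox/x.y" token misses the rest.  The UA strings in the test cases are illustrative.

```javascript
// Sketch: extract the Gecko engine version rather than the brand version.
// Any browser whose UA string carries a "Gecko/" build token plus an
// "rv:" token is Gecko-based, whatever its brand name.
function geckoVersion(userAgent) {
  if (userAgent.indexOf("Gecko/") === -1) return null; // not Gecko
  var m = /rv:(\d+(?:\.\d+)*)/.exec(userAgent);
  return m ? m[1] : null;
}
```

(WebKit browsers say "like Gecko", with no slash, so they fall through to the null case.)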

> And whether the bug
> encountered is in Gecko, or pre- or post- Gecko processing by the
> branded browser is also not made real clear when something works
> differently.

True, but non-Gecko stuff affecting the rendering area is _very_ rare.

Point is, this is the sort of question to which very few people know the 
answer in spite of years of outreach efforts (see 
http://geckoisgecko.org/ for example).

> If they attempt to predict the future, they are playing the fool.

Yep.  Many do just that.

> This seems obvious to me, but may not be obvious to everyone. If I'm
> wrong, please correct me in detail; if I'm right, it is an educational
> issue, and some good blog posts and developer best practices
> documentation would probably help some.

You're right that the right way to sniff is for the current version, but 
that means updating your browser checks, which is time-consuming.  As 
another data point, there _are_ sites we've encountered that sniff for 
what they think is the current version of Gecko by conditioning on the 
build date string (which is conceptually similar to your proposed best 
practice).   They used to all break on the first security update of the 
year and take an average of 3-4 months to get fixed, by adding another 
line in their sniffing code for the new year.  In theory, if they 
retested with a beta build as soon as a new year started, they would 
have been fine, but in practice they didn't do that.

I say "used to" above, because we've removed this particular footgun by 
just lying about the build date in release builds of Firefox... it's 
always the same now.
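The build-date footgun described above might look roughly like this (the exact dates and code are hypothetical): sniffing for "the current Gecko" by whitelisting the years that appear in the "Gecko/YYYYMMDD" build token.

```javascript
// Hypothetical sketch of the fragile pattern described above: a whitelist
// of build-date years.  It breaks the moment a build dated in a new year
// ships, until someone adds another line -- exactly the failure mode in
// the email.
function isCurrentGecko(userAgent) {
  return userAgent.indexOf("Gecko/2010") !== -1 ||
         userAgent.indexOf("Gecko/2011") !== -1;
}
```

With the build date frozen in release builds, the token no longer changes from year to year, which is why this particular breakage stopped recurring.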

> I'll agree that buggy browsers are a problem. I'll agree that poorly
> coded sites are a problem. I'll agree that poorly coded browser sniffing
> is a problem.

OK, good.  Common ground so far.

> I don't think the CSS committee can solve the buggy browser problem. I
> don't think the CSS committee can solve the poorly coded sites problem.
> It would be nice if, in the maelstrom of buggy browsers and sites, the
> CSS committee could look to see where it could help reduce the
> complexity and confusion.
>
> I think it could help solve poorly coded browser sniffer problem... if
> it wasn't so hard to figure out how to detect the browser, the site
> coders would have more brainpower left to figure out the best range
> of versions for which to use a fallback CSS...

I think this is where we disagree.  If it were easier to detect the 
browser, I think site coders would think even less than they do now 
about the problem, more of them would use detection without really 
understanding what they're doing, and we would have less "poorly coded" 
browser sniffing but a lot more poorly coded sites.

> OK, I don't blame you for not wanting to blame them, and the minute you
> code a browser sniff, you do open up the requirement to continuously
> test new releases (or have a good feedback system for users to report
> things like "Hey, I upgraded to version ABC of browser XYZ, and
> encountered <description of problem>". Of course, the site author has
> seen that description before, and can then go tweak the check to now
> include version ABC in the fallback case, and the problem can be solved
> in as little as minutes.

Empirical evidence suggests months if not more.  See above.  That sort 
of timeframe seems pretty typical for large commercial sites in my 
experience; small sites tend to be more nimble for obvious reasons.

> And if you decide not to browser sniff, or if it becomes impossible to
> browser sniff, then web authors simply aren't going to use features that
> don't work in any one of the browsers they have chosen to support.

This may not be the worst thing in the world, actually...  It'll delay 
adoption of those features by a year or two maybe, right?  Depending on 
how long browsers take to get their act together.

-Boris
Received on Saturday, 2 April 2011 04:16:01 GMT
