
Re: [css3] [css21] browser specific CSS

From: Glenn Linderman <v+html@g.nevcal.com>
Date: Sat, 02 Apr 2011 01:59:10 -0700
Message-ID: <4D96E55E.9070309@g.nevcal.com>
To: Boris Zbarsky <bzbarsky@MIT.EDU>
CC: www-style@w3.org
On 4/1/2011 9:15 PM, Boris Zbarsky wrote:
> On 4/1/11 8:25 PM, Glenn Linderman wrote:
>> The "big boys" will do adequate testing of the features in a number of
>> brands and versions of popular
>> browsers
> I would be interested in your definition of "big boys".

"big boys": noun, slang.  The well-financed corporations that have a 
have good development processes and practices, consider their web 
presence an important and significant part of their business, and 
realize that browser selection is a user choice.

> So far we have encountered issues in Firefox 4 due to broken browser
> detection (where the site correctly detected the browser and version but
> then misused the information, not where the detection was just
> incorrect) on live.com, amazon.com, and bankofamerica.com. That's just
> the sites I remember offhand that are clearly "big" and clearly didn't
> test in Firefox 4 betas or RCs much. For example live.com broke things
> with a change on their end two weeks before the first RC of Firefox 4,
> reintroducing a problem that we'd pointed out to them a few months
> before that and which they had already fixed once. It's clear that they
> did no adequate testing of this change.

OK, so I'm surprised at such a report regarding amazon.com.

> My general experience is that sites for the most part don't start
> testing in a new browser version until after it's released; many have
> explicit policies to this effect and will ignore feedback from
> their users that the site is broken in a browser which is in RC and will
> be shipping in two weeks.
> Fixing after the new browser has shipped, of course, means that users
> are loath to update because updating means broken sites.

It also means that those sites don't use the new features until after 
they are shipped, making it hard to fix bugs during the browser 
development cycles, because the bugs go undetected.  And that exacerbates 
the variations later detected in shipped browsers, and increases the 
need for browser sniffing, as in: "This CSS feature has been out for 3 
years now, maybe we should use it... oh, it works differently in different 
browsers when we lean on that side of the feature?  Stupid CSS.  Has 
someone figured out a workaround?"

>> Any attempts to predict the future are foolishness. Bug workarounds
>> should be applied to versions known to have the bugs, no more, no less.
> This is _very_ rarely done.

I posit this is _very_ rarely done because browser brand detection is 
hard enough without having to code version detection too.  "Oh well, 
browser Q is screwed up anyway, just screw the users that use it" 
probably seems like an adequate justification to avoid the added 
complexities of a more precise detection.  Making sniffing hard 
encourages this sort of attitude. "Our site works best in browser R.  If 
browser Q can't hack the code, screw it, just use the fallback solution 
we use for the handhelds."
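To make the point concrete, here is a sketch (hypothetical; the version 
numbers are made up, and this is not code from any site discussed) of 
scoping a workaround to exactly the versions known to have the bug, no 
more, no less:

```javascript
// Hypothetical sketch: apply a fallback only for browser versions
// where the bug was actually observed.  Re-test when a new version
// ships and update the table -- the annoying part discussed below.
function firefoxMajorVersion(ua) {
  var m = /\bFirefox\/(\d+)/.exec(ua);
  return m ? parseInt(m[1], 10) : null;
}

// Made-up example: bug seen only in Firefox 3.x.
var BUGGY_VERSIONS = { 3: true };

function needsWorkaround(ua) {
  var v = firefoxMajorVersion(ua);
  return v !== null && BUGGY_VERSIONS[v] === true;
}
```

The point is that Firefox 4 (and 5, and 6...) falls through to the 
standard code path by default, instead of being lumped in with the 
known-bad versions forever.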

>> This does require testing when new versions arrive to see if the bugs
>> have been fixed or not, so the check can be updated. That is annoying
> Annoying to the point that no one does it, actually. I wouldn't either,
> if I were a web author.... It's just too much work. Then again, maybe
> you don't sprinkle browser-conditioned things in your code as much as
> some sites do, so the burden is less for you.

I try to isolate browser-specific things to a small number of places, 
yes.  And I don't always jump on the latest browser the day I hear it 
has been released, and test everything.  But eventually, and especially 
if someone reports it.

>> I'll admit it is not clear to me when to draw the line between Gecko
>> versions and Firefox versions. I rather doubt that there are multiple
>> versions of Gecko used in the same version of Firefox, though
> There are not.
>> so checking the Firefox (it is the brand) version seems to provide more
>> precision.
> That's more or less the wrong answer. ;)
>> Whether checking Gecko version allows the same checks to be
>> used for multiple brands of browsers that all use (different) versions
>> of Gecko
> It does, yes.
>> And whether the bug
>> encountered is in Gecko, or pre- or post- Gecko processing by the
>> branded browser is also not made real clear when something works
>> differently.
> True, but non-Gecko stuff affecting the rendering area is _very_ rare.
> Point is, this is the sort of question to which very few people know the
> answer in spite of years of outreach efforts (see
> http://geckoisgecko.org/ for example).

I looked at that site once :)  And my current sniffer totally ignores 
the Gecko build date.  So I'm guilty of that lack of knowledge here.  On 
the other hand, if I do a proper job of checking Firefox versions, that 
is just as good for Firefox users.  But it means I've not leveraged the 
Gecko engine to give equal support to other Gecko browsers.
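Leveraging the engine rather than the brand might look like the 
following (a hypothetical helper, assuming the 2011-era Gecko UA 
format, e.g. "Mozilla/5.0 (X11; Linux x86_64; rv:2.0) Gecko/20100101 
Firefox/4.0"):

```javascript
// Hypothetical sketch: key the check off the Gecko engine version
// (the "rv:" token) instead of the Firefox brand token, so other
// Gecko-based browsers get the same treatment as Firefox.
function geckoVersion(ua) {
  // WebKit UAs say "like Gecko" with no slash, so require "Gecko/".
  if (ua.indexOf("Gecko/") === -1) return null;
  var m = /\brv:(\d+(?:\.\d+)*)/.exec(ua);
  return m ? m[1] : null;
}
```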

>> If they attempt to predict the future, they are playing the fool.
> Yep. Many do just that.

"There's a sucker born every minute."

>> This seems obvious to me, but may not be obvious to everyone. If I'm
>> wrong, please correct me in detail; if I'm right, it is an educational
>> issue, and some good blog posts and developer best practices
>> documentation would probably help some.
> You're right that the right way to sniff is for the current version, but
> that means updating your browser checks, which is time-consuming. As
> another data point, there _are_ sites we've encountered that sniff for
> what they think is the current version of Gecko by conditioning on the
> build date string (which is conceptually similar to your proposed best
> practice). They used to all break on the first security update of the
> year and take an average of 3-4 months to get fixed, by adding another
> line in their sniffing code for the new year. In theory, if they
> retested with a beta build as soon as a new year started, they would
> have been fine, but in practice they didn't do that.
> I say "used to" above, because we've removed this particular footgun by
> just lying about the build date in release builds of Firefox... it's
> always the same now.
>> I'll agree that buggy browsers are a problem. I'll agree that poorly
>> coded sites are a problem. I'll agree that poorly coded browser sniffing
>> is a problem.
> OK, good. Common ground so far.
>> I don't think the CSS committee can solve the buggy browser problem. I
>> don't think the CSS committee can solve the poorly coded sites problem.
>> It would be nice if, in the maelstrom of buggy browsers and sites, the
>> CSS committee could look to see where it could help reduce the
>> complexity and confusion.
>> I think it could help solve poorly coded browser sniffer problem... if
>> it wasn't so hard to figure out how to detect the browser, the site
> coders would have more brainpower left to figure out the best range
>> of versions for which to use a fallback CSS...
> I think this is where we disagree. If it were easier to detect the
> browser, I think site coders would think even less than they do now
> about the problem, more of them would use detection without really
> understanding what they're doing, and we would have less "poorly coded"
> browser sniffing but a lot more poorly coded sites.

Yes, probably this is where we yet disagree.  I don't think site coders 
would think less than they do now about the problem.  I don't think 
there would be more poorly coded sites, but I certainly can't guarantee 
there would be fewer.

I think the developers of browser sniffer hacks, having been put out of 
a job by decent syntax and APIs in CSS, Javascript, and HTML, would turn 
their attention to producing better solutions to the various browser 
discrepancies that turn up, and the code that would be available to 
copy-n-paste would be, in general, better quality.  Right now, it is so 
hard to do the browser detection that the actual workarounds get short 
shrift.

>> OK, I don't blame you for not wanting to blame them, and the minute you
>> code a browser sniff, you do open up the requirement to continuously
>> test new releases (or have a good feedback system for users to report
>> things like "Hey, I upgraded to version ABC of browser XYZ, and
>> encountered <description of problem>". Of course, the site author has
>> seen that description before, and can then go tweak the check to now
>> include version ABC in the fallback case, and the problem can be solved
>> in as little as minutes.
> Empirical evidence suggests months if not more. See above. That sort of
> timeframe seems pretty typical for large commercial sites in my
> experience; small sites tend to be more nimble for obvious reasons.

I was careful to say "can be", realizing that large sites probably have 
a requirement to do a month of internal testing before shipping a new 

>> And if you decide not to browser sniff, or if it becomes impossible to
>> browser sniff, then web authors simply aren't going to use features that
>> don't work in any one of the browsers they have chosen to support.
> This may not be the worst thing in the world, actually... It'll delay
> adoption of those features by a year or two maybe, right? Depending on
> how long browsers take to get their act together.

No, it'll delay adoption of the features by many years, because lots of 
users don't upgrade their browsers until they get a new computer, 3-5 
years down the road.  So those older browsers that can't be sniffed will 
delay adoption of new CSS features.  Particularly problematic are 
older browsers that implement a new feature with a bug.  That feature 
becomes dead until that browser is dead.  Feature detection can help 
some, if all the browsers get it right the first time (ROFL).
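For reference, the feature-detection approach mentioned above can be 
sketched like this (a hypothetical helper; note the caveat it 
illustrates -- it only shows the browser *parses* the property/value 
pair, not that it renders it correctly, which is exactly the gap when 
a browser ships a new feature with a bug):

```javascript
// Feature-detection sketch: probe whether a CSS property/value pair
// survives a round trip through the CSSOM, instead of sniffing the
// UA string.  A parsed-but-buggy implementation still passes this
// test, which is the limitation discussed above.
function supportsCssValue(property, value) {
  var probe = document.createElement("div");
  probe.style.cssText = property + ":" + value;
  return probe.style.getPropertyValue(property) !== "";
}
```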
Received on Saturday, 2 April 2011 08:59:49 UTC
