Re: [css3] [css21] browser specific CSS

I'm concatenating the three most recent responses.  I have to say that 
for 5 days after my first posting, I wondered if there would be any 
response, and I do appreciate the current responses.  If this has come 
up multiple times, probably you are all tired of trying to educate each 
person one by one.  And I'm probably harder to educate than the previous 
ones :)

I'll summarize here, and spew details inline below.

* Web sites will use any facilities at hand to achieve a site that is 
"fancier" than the site next door, because they hope it will attract 
more eyeballs, to keep them in business.  The "big boys" will do 
adequate testing of the features in a number of brands and versions of 
popular browsers, but may not bother with the bottom 5% of users using 
archaic or alternative browsers.  Providing standard facilities for 
browser detection allows innovation to continue among the leading edge 
of site designers, because they can use new features as they become 
available in various browsers.  Sadly, a new feature is likely to be 
buggy even after it advertises itself as "available", so feature 
detection is insufficient to determine whether a feature can be used 
correctly and successfully.

* browser manufacturers risk their browser being sidelined if they ship 
a buggy browser, unless it is named IE or ships with Windows.  If users 
encounter numerous sites for which it doesn't work, that encourages 
bug-fix releases, but when there are 14 million downloads in one day, 
a _lot_ of users are going to get a bad taste from some of the bugs that 
show up.  And then it is going to be hard to get all those copies 
updated regularly, so the bugs live in the wild for years.  Automatic 
updates are nice, but users with limited connectivity turn them off, for 
good reason... the updates consume bandwidth at inopportune times.  And 
so, when they finally decide they are far enough behind that they should 
upgrade, they do the big download at an opportune time, rediscover that 
bad taste, and either go back to their previous version, or back to IE.

* new features in the standard risk being unusable if a major browser 
produces a buggy implementation.  That's not to say that the buggy 
implementations are, themselves, unusable, just non-standard in ways 
that become important to site developers.  Dealing with multiple bugs 
and deficiencies is hard today.  New features in the standard should 
make that easier.  So far I've not gotten any responses about how to 
work around the problems, other than "recode to use different CSS", 
which either indicates that there are a lot of redundant features, or 
forces a site redesign because the features needed aren't uniformly 
available.


On 4/1/2011 11:18 AM, Boris Zbarsky wrote:
> On 4/1/11 1:51 PM, Glenn Linderman wrote:
>> It is certainly true that UA string browser detection is hard, and error
>> prone, and that began back when IE tried to pass itself off as Mozilla,
>> if I recall correctly, so that web sites that were designed to take
>> advantage of Mozilla features would work in the new version of IE also.
>>
>> To me, this says that it would be helpful to standardize on a new method
>> of browser detection that can't be spoofed from Javascript, rather than
>> UA parsing, and which returns the right answer, in a form that is easier
>> to use correctly than to use incorrectly.
>
> Glenn, I think you misunderstood my argument.


That's always a possibility.


> The problem is not that sites can't parse UA strings (though there's
> that too; try browsing around with a browser with the string "pre"
> somewhere in its user-agent, or try using CapitalOne's site with an
> Irish localization of a browser that is not MSIE).


Um. Yes, CapitalOne's site is probably the most recent one I've had 
specific problems with that can be attributed to browser detection.  I 
was using Mozilla, then SeaMonkey, at the time I had the problems... 
couldn't log in.  Spoofed with a Firefox UA string, it 
worked for a time.  Eventually, it quit working.  Had to use a real 
Firefox.  I finally switched from SeaMonkey to Firefox for most 
browsing, and haven't had problems since, partly because Firefox is one 
of their supported browsers (apparently), and partly because I don't use 
their site much anymore...

It is pretty clear that site authors are going to do browser sniffing as 
a workaround for the problems of buggy browsers.  Making it easier to do 
correct browser sniffing would help avoid sites being created with bad 
detection checks like the ones CapitalOne has or had.


> The problem is that authors misuse the information they extract from the
> UA string. And it's _hard_ to not do that. For example, say it's 2010
> and you correctly extract that the user is using Firefox 3.6.3, which is
> based on Gecko 1.9.2.3, and their Gecko build was created on May 1,
> 2010. And this has a bug you want to work around. Which of these do you
> use the workaround for?
>
> * Firefox version 3.6
> * Firefox versions 3.6 and later
> * Firefox versions 3.6 and earlier
> * Gecko 1.9.2
> * Gecko 1.9.2 and later
> * Gecko 1.9.2 and earlier
> * Gecko builds created on May 1, 2010
> * Gecko builds created on or after May 1, 2010
> * Gecko builds created on or before May 1, 2010
> * Firefox version 3.6.3
> etc, etc
>
> I've seen people using all of these and more.
>
> The right one to use depends on the bug you're working around.
> Understanding which one to use involves understanding the relevant
> specifications and how stable they are, and sometimes knowing something
> about what the UA vendor's plans are. Deciding which one to use also
> depends on your testing policies (e.g. whether you will test your site
> with the Firefox 4 betas when they appear).


Any attempt to predict the future is foolishness.  Bug workarounds 
should be applied to the versions known to have the bugs, no more, no 
less.  This does require testing when new versions arrive to see whether 
the bugs have been fixed, so the check can be updated.  That is 
annoying, but once the bug is fixed, the check need no longer be 
updated; it just remains in place for any future visits from the buggy 
browser versions.
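
To make that concrete, here is a rough sketch in Javascript of the shape 
of check I have in mind (the version range, the regex, and the fallback 
class name are all hypothetical; the point is that the test is bounded 
above, applying only to versions actually tested and found buggy):

    // Sketch only: the version range and the fallback class are
    // made up; the point is the check never matches future versions.
    function needsLayoutWorkaround(ua) {
      var m = /Firefox\/(\d+)\.(\d+)(?:\.(\d+))?/.exec(ua);
      if (!m) return false;                  // not Firefox: no workaround
      var major = +m[1], minor = +m[2], patch = +(m[3] || 0);
      // Only the versions tested and known to have the bug,
      // hypothetically 3.6.0 through 3.6.3, get the workaround.
      return major === 3 && minor === 6 && patch <= 3;
    }

    if (needsLayoutWorkaround(navigator.userAgent)) {
      document.documentElement.className += " ff36-fallback";
    }

When 3.6.4 shows up, you test it; if the bug is fixed the check is 
already correct, and if not, you bump the upper bound.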

I'll admit it is not clear to me where to draw the line between Gecko 
versions and Firefox versions.  I rather doubt that multiple versions of 
Gecko are used in the same version of Firefox, though, so checking the 
Firefox (i.e. the brand) version seems to provide more precision.  
Whether checking the Gecko version allows the same checks to be used for 
multiple brands of browsers that all use (different) versions of Gecko 
is not something I've figured out.  And whether an encountered bug is in 
Gecko itself, or in the branded browser's pre- or post-Gecko processing, 
is also not made very clear when something works differently.


> So people get this wrong all the time, and I can't blame them! But the
> problem is they _think_ this is a matter of a simple version check.


If they attempt to predict the future, they are playing the fool.  If 
they test a number of versions of the browser and figure out the range 
of versions that fail, that is best.  Coding for "now and earlier" 
versions is somewhat reasonable, because the feature they are using is 
new (or they'd have encountered the bug sooner), and the number of 
feature-diminished older browsers they encounter going forward will 
decrease as people upgrade.

This seems obvious to me, but may not be obvious to everyone.  If I'm 
wrong, please correct me in detail; if I'm right, it is an educational 
issue, and some good blog posts and developer best practices 
documentation would probably help some.


> Now in this case the problem can be ameliorated by providing less
> information in the UA string; e.g. only providing "Gecko 1.9.2.3". But
> that still leaves a number of the above options for how to apply the
> workaround, and authors will still guess wrong.


If the Gecko version is precise enough, and the various Gecko-based 
browsers that expose only the Gecko version cannot somehow introduce 
other bugs in their pre- and post-processing, then that may be a good 
thing, reducing the number and complexity of the version checks.  As 
soon as Gecko-based browsers introduce bugs unrelated to Gecko, it 
becomes a problem.


>> No, no, no. You have _stated_ that browser detection is a bad thing, but
>> not _explained_ why. Since this keep coming up over and over, perhaps
>> there is a picture from the mile-high view that you could refer me to,
>> I'd be glad to look at it, understand the issues that you and Boris seem
>> to agree on, and possibly change my opinion.
>
> Well, one problem with browser detection as practiced right now that Tab
> mentioned is that it raises a barrier to users using any browser that
> wasn't in use when the detection code was written, because sites break.
> This includes both using new browsers and using new versions of existing
> browsers.
>
> Now maybe you don't think this is a bad thing, of course. I think Tab
> and I think it's a bad thing.
>
> I'll let Tab speak for any other issues he's run into; the above is the
> big one for me.


I'll agree that buggy browsers are a problem.  I'll agree that poorly 
coded sites are a problem.  I'll agree that poorly coded browser 
sniffing is a problem.

I don't think the CSS committee can solve the buggy browser problem.  I 
don't think the CSS committee can solve the poorly coded sites problem. 
It would be nice if, in the maelstrom of buggy browsers and sites, the 
CSS committee could look to see where it could help reduce the 
complexity and confusion.

I think it could help solve the poorly coded browser sniffer problem... 
if it weren't so hard to figure out how to detect the browser, site 
coders would have more brainpower left to figure out the right range of 
versions for which to use fallback CSS... especially if, in the 
documentation for the new browser detection feature, some best practices 
for buggy-browser workarounds were documented.

The state of the art today is a jumbled mass of sites trying to explain 
all the issues, each of which starts from a different browser being 
declared "correct", each of which uses different detection mechanisms to 
sniff the browser, each of which proclaims its own practices as best 
practices, and none of which (that I have found) mentions how best to 
choose the appropriate range of versions.




>> 1) it is hard to implement them well, given the available facilities for
>> doing sniffing... this could certainly be improved, with boilerplate
>> Javascript or CSS features to assist.
>
> I don't think those would help the implementation difficulties I mention
> above.


I think it could, but only indirectly, as hinted at above.  Reducing 
complexity of detecting the browser would allow more time to analyze 
what sort of check should actually be made, instead of spending lots of 
time figuring out how to parse the UA string effectively.


>> 2) some web site authors are bad coders.
>
> I don't think the _coding_ is the main problem here (though it's
> certainly _a_ problem). The main problem is that correctly doing UA
> sniffing just requires resources beyond what people are willing to put
> into it. In particular, it requires continuous testing of new releases
> of things you're sniffing for and updating of your sniffing. Most people
> just don't want to deal with that, and I don't blame them.


OK, I don't blame you for not wanting to blame them, and the minute you 
code a browser sniff, you do take on the requirement to continuously 
test new releases (or to have a good feedback system for users to report 
things like "Hey, I upgraded to version ABC of browser XYZ, and 
encountered <description of problem>").  Of course, the site author has 
seen that description before, and can then go tweak the check to include 
version ABC in the fallback case, and the problem can be solved in as 
little as minutes.  But if the site author says "sorry, I can't tell 
whether you are using version ABC of browser XYZ", then the user has to 
complain to the browser vendor, who isn't nearly as interested in the 
problem web site as the user or site author is (unless it's Google or 
Facebook), who takes days or weeks to research the issue and weeks or 
months to come out with a new version, and meanwhile lots of people are 
upset.

And if you decide not to browser sniff, or if it becomes impossible to 
browser sniff, then web authors simply aren't going to use any feature 
that doesn't work in even one of the browsers they have chosen to 
support.


On 4/1/2011 11:27 AM, Tab Atkins Jr. wrote:
 > On Fri, Apr 1, 2011 at 10:51 AM, Glenn Linderman <v+html@g.nevcal.com>
 > wrote:
 >> On 4/1/2011 9:43 AM, Tab Atkins Jr. wrote:
 >>> Glenn Linderman wrote:
 >>>> It would be highly friendly if CSS required the browsers to "man
 >>>> up" to what version of what browser they actually are, so that as
 >>>> their deficiencies come to light they can easily be detected and
 >>>> compensated for rather than forcing the use of browser-specific
 >>>> CSS, Javascript, or ugly hacks.
 >>>
 >>> Boris answered this - browsers aren't going to do this, for the
 >>> reasons I outlined above.  Most people use this information in
 >>> incorrect ways, and this hurts current users of niche browsers and
 >>> future users of popular browsers.
 >>
 >> Sorry, you didn't explain it, and Boris didn't explain... you only
 >> stated that there were such reasons.
 >
 > You're right; in an attempt to keep my email from getting too long, I
 > didn't go into a lot of detail.  Allow me to rectify that, then.


Thanks.


 > There is one, and only one, decent way to do browser-detection and use
 > that information.
 >
 > First, one must craft a test that is sufficiently precise that it only
 > targets a single version from a single browser, or a well-defined
 > range of existing versions from a single browser.  One must *never*
 > attempt to detect future browser versions, or use a test that has a
 > decent chance of accidentally detecting such, or that similarly
 > detects new browsers.  Crafting this sort of thing requires a decent
 > bit of cleverness; the simple and commonly-used browser detection
 > hacks pretty much uniformly fail this metric.


This "clever crafting" requirement certainly contributes to the 
frustration and complexity of any solution.  Yet a simple standardized 
API could completely eliminate the need for it.
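
To be clear about what I mean, something along these lines would do.  
This is purely hypothetical: nothing like navigator.browserInfo or the 
helpers below exists today; it just illustrates how the cleverness 
disappears when the browser answers the question itself.

    // Hypothetical API sketch -- none of these names are real.
    // The browser reports its own identity, so no UA parsing is needed.
    var info = navigator.browserInfo;
    //   e.g. { brand: "Firefox", version: "3.6.3",
    //          engine: "Gecko",  engineVersion: "1.9.2.3" }
    if (info && info.brand === "Firefox" &&
        versionInRange(info.version, "3.6", "3.6.3")) { // hypothetical helper
      applyKnownBugWorkaround();                         // hypothetical
    }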


 > Second, one must use this information only to deploy *exceptions* to
 > the default style and behavior, never to actually deploy new behavior.
 >   If you ever deploy new behavior based on a detection hack, then a new
 > version of a currently-bad browser, or a new/niche browser that has
 > the sufficient capabilities, won't get the sexy new behavior.  This
 > is, again, very commonly misused.


Total agreement with this point.  If point one weren't so hard, more 
effort could be spent correctly addressing this issue.


 > (Note that feature-testing gets around both of these issues - it's
 > totally fine to feature-test in a way that will detect future/unknown
 > browsers with the right functionality, and to deploy special sexy
 > functionality based on the results.)


Feature testing does not circumvent both of these issues.  It is a much 
better way of deciding whether the feature is available, and of choosing 
the implementation of code that uses the feature.  But for any feature, 
there is still the exact same issue of whether the feature is 
implemented correctly in every browser that advertises it.  If not, then 
only browser sniffing allows a rapid response to fix the site.

Without feature testing, new features could only be detected by 
forward-looking version checks, reliant on vendor assurances of 
backward compatibility.  While I stated above that predicting the future 
(regarding bugs and bug fixes) is foolishness, relying on vendor 
assurances of backward compatibility can allow forward-looking version 
checks to substitute for feature tests.  But feature tests are the 
better solution... they just need to be augmented with browser sniffing 
to compensate for bugs.
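
In other words, the two techniques complement each other; roughly like 
this (a sketch only: the property tested and the browser/version singled 
out are illustrative, not a real known bug, and the site functions are 
made up):

    // Feature detection decides whether to use the feature at all...
    var supportsFeature =
        "borderRadius" in document.createElement("div").style;
    if (supportsFeature) {
      enableFancyLayout();          // hypothetical site function
    }

    // ...and a narrow, version-pinned sniff handles the one tested
    // browser that advertises the feature but renders it wrongly.
    var m = /Version\/(\d+\.\d+)/.exec(navigator.userAgent);
    if (supportsFeature && window.opera && m && m[1] === "10.50") {
      enableFancyLayoutFallback();  // hypothetical exception path
    }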


 > Violating either of these rules has bad consequences.  If your
 > detection algo will fire on future versions of a browser, then fixing
 > the bug or adding the functionality that you're using a hack to get
 > around won't help them - users of the new version will still get the
 > old/sucky version of the code, despite being full-featured.


Agreed.


 > The same applies if your detection algo is insufficiently precise,
 > such that it will detect new/niche browsers: Opera, for example, has
 > run into this problem throughout its existence; Chrome did as well
 > when we did early experiments with radically simplifying the UA
 > string.  Don't even get me started on all the niche Linux browsers.
 > Again, users suffer by being fed a set of hacks that don't actually
 > apply to their browser, and probably screw things up worse.
 >
 > Even if you *think* you're being precise, it's still easy to do this
 > badly.  For example, there's a lot of detection code on the web that
 > successfully finds Opera and extracts its version.  However, a lot of
 > this is badly made, such that it just grabs the *first digit* of the
 > version number.  This causes so much problem for Opera when they went
 > from version 9 to version 10 (detected as version 1!) that they had to
 > just give up, set the old version number to 9.80, and then list their
 > *real* version number in a new place on the UA string.


I wondered why Opera had two versions.  I correctly handle that in my 
browser sniffer, because the guy I borrowed it from had figured it out. 
But I did notice his code, the Opera UA string, and the multiple 
versions, and wondered :)
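
From memory, the relevant bit of that borrowed sniffer boils down to 
something like this (treat it as a sketch): Opera 10 and later freeze 
the leading token at "Opera/9.80" and report the real version in a 
separate "Version/x.y" token, so you check for that token first.

    function operaVersion(ua) {
      if (ua.indexOf("Opera") === -1) return null;
      var real = /Version\/(\d+\.\d+)/.exec(ua);   // Opera 10 and later
      if (real) return real[1];
      var old = /Opera[\/ ](\d+\.\d+)/.exec(ua);   // Opera 9.x and earlier
      return old ? old[1] : null;
    }
    // "Opera/9.80 (Windows NT 6.1; U; en) Presto/2.6.30 Version/10.63"
    // yields "10.63", not "9" and certainly not "1".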

Again, this whole issue could be sidestepped in the future with a 
browser sniffer API.  It would also have helped if the browser vendors 
had documented the version scheme they planned to use in the UA string 
(that it could be multiple digits, etc.).  I've never found any 
documentation that describes best practices for parsing UA strings.

There are a bunch of requests for some, mostly unanswered, on StackOverflow.

Mozilla documents what it uses at
https://developer.mozilla.org/en/User_Agent_Strings_Reference

Microsoft documents what it uses at
http://msdn.microsoft.com/en-us/library/cc817582.aspx#Y337
and even has some sample Javascript... but it is an IE pass/fail check 
rather than a complete sniffer, so it is useless in practice.

Microsoft also documents "version vectors" at
http://msdn.microsoft.com/en-us/library/cc817577.aspx
and makes the hilarious comment that their conditional comments are not 
a form of scripting.  Well, they are not Javascript, anyway.


 > IE will
 > probably have the same problem with IE10.  Chrome, luckily, is young
 > enough that we were able to power through our own version of this
 > issue.  Again, users suffer from receiving the wrong set of hacks.
 >
 >  If you use the detection results to deploy new features, it has
 > similar obvious problems.  New versions of old browsers and new/niche
 > browsers get the sucky old version of the site rather than the sexy
 > new version, just because they weren't successfully detected as being
 > a "conforming" browser version.  You (luckily) don't see very many
 > "Please use IE6!" notices on the web these days, but that's just
 > because people are quieter about their hacks; instead, quite a lot of
 > sites still just work worse or wrongly due to this effect.
 >
 > In general, crafting a good detection algorithm is hard.  Crafting
 > your site to be full-featured by default but gracefully degrading in
 > properly-detected old browsers is hard.  When you fail at either of
 > these, users suffer.
 >
 > Does that help answer your question?


This all helps me understand that, because the standards assume 
conformant implementations, they provide no help or assistance to the 
site author trying to navigate the two hard problems you mention just 
above... but blindfolding the users and the site authors isn't going to 
promote use of CSS in general or of new CSS features.  Providing 
APIs/syntax and documenting best practices would.



 >> If browser sniffing and resultant workarounds are implemented
 >> poorly, that either means that
 >>
 >> 1) it is hard to implement them well, given the available facilities for
 >> doing sniffing... this could certainly be improved, with boilerplate
 >> Javascript or CSS features to assist.
 >
 > Boilerplate can make the first problem (accurately detecting) somewhat
 > better.  It can't solve things entirely, and it does nothing for the
 > second problem.


Correct.  And simpler APIs/syntax and documentation would all help with 
the first problem.  They only help the second problem indirectly, by 
leaving the site author more time to address it once the first problem 
is much easier to handle.  One hard problem is easier to think about 
than two, concurrently.  And it makes it less likely that the solutions 
to the two problems are inappropriately mixed.


 >> 2) some web site authors are bad coders.  This is certainly true...
 >> there are many web sites that suffer from bad coder syndrome.  Lots
 >> of sites are authored by people by just whacking at the HTML until it
 >> works for them in one browser, one screen size, and then they claim
 >> it is done.  Others may do bad browser detection, and support two
 >> browsers, and make things worse for the third, and not care.
 >
 > It's not "some".  It's a large majority.  Most people simply aren't
 > good coders in general; programming on the web brings its own unique
 > challenges that even more people simply don't understand.  Boris puts
 > it better - our definition of "good" and "bad" are a little unique
 > here; a "good" coder in this instance is someone who has fairly
 > intimate knowledge of the development of all the browsers.  Those
 > people are *very* few and far between; even being a highly skilled and
 > intelligent coder doesn't mean you're "good" for the purpose of doing
 > good UA detection.


But the detection could be reduced in complexity by adding a couple of 
APIs or pieces of syntax, and the documentation could encourage best 
practices in this area.  Leaving the first problem insoluble doesn't 
help solve problem 1 or problem 2 in your list... it prevents solutions, 
instead of encouraging them.


 >> 3) If a single browser is used for web site development, and it has
 >> bugs, the site may depend on those bugs, and no other browser may
 >> even want to display that site properly, because to do so would
 >> require implementing bugs instead of standards.
 >
 > Yup, though this can be true without any browser detection at all.
 >
 >
 >> Problem 1 could be cured, eventually, with appropriate features in
 >> the specifications.  Problems 2 and 3 will never go away, but if
 >> browser detection were easier/standardized, and available in CSS
 >> without resorting to Javascript (and in Javascript in an easier
 >> manner, and to CGI scripts in an easier manner), then it would be
 >> lots easier to test with multiple browsers, and good web site coders
 >> could benefit.
 >>
 >> Don't cater to the bad coders, but rather make it easy for good
 >> coders to do useful things in easy and effective ways, and provide
 >> documentation for doing it right.  If it is easy enough, even the bad
 >> coders might learn how.  But right now there is a huge barrier to
 >> using CSS: it doesn't work cross-browser, without heavy investment in
 >> learning arcane browser hacks.
 >
 > We want to offer features that let good coders do awesome things that
 > help users.


That's a very laudable goal.  And achievable.


 > We don't want to offer features that let bad coders do
 > things that hurt users.


That's a very laudable goal.  But I'm not sure it is achievable, except 
by throwing out the baby with the bathwater.


 > Every feature has a tension here, because
 > everything can be misused.  Every feature, then, has to be evaluated
 > separately, to see if the gain from exposing it is worth the harm from
 > it being misused.


So, your statement "everything can be misused" is in direct conflict 
with "we don't want to offer features that let bad coders do things that 
hurt users".  You can't do both.  Just withdraw the CSS spec, it can't 
meet its goals!  :)


 > Browser detection has a long history of being very
 > bad, and there's no reason to think that the parts we can solve in the
 > browser will offset the parts that are still dependent on people
 > always doing the right thing, because doing the "right thing" is *very
 > hard*.


The above paragraph highlights the thing that is still missing from your 
"long form" explanation... while browser detection has a long history of 
being very misused, you have as yet offered nothing as an alternative 
for working around the bugs that are to be expected in new features in 
future browsers.

So that leaves site authors avoiding new features in which they find 
bugs or inconsistent implementations.




On 4/1/2011 12:39 PM, Anton Prowse wrote:
 > Hi Glenn,
 >
 > Most authors also tend to assume that when their page looks wrong in
 > Browser X it's because Browser X is wrong.  Often, however, Browser X
 > isn't wrong; instead Browser X is exercising free choice given to it via
 > the CSS spec, either granted explicitly or more usually through making
 > the rendering undefined when certain conditions are satisfied.  (Search
 > for the string "define" in CSS21 to catch the instances of "is not
 > defined", "does not define", "undefined" etc to see just how many there
 > are!)


In the standards committees I've been involved in, such specifications 
of "free choice" were usually ways to paper over preexisting variations 
in extant implementations so that variant implementations could all 
conform to the standard.

I'm unaware of any case where that actually benefits users, except for 
backward compatibility with a particular vendor's proprietary features.

Do the validators point out (with warnings) the places where the CSS in 
use leads to "free choice", so that authors can avoid them?


 > Making browser sniffing easy is just begging for less knowledgeable
 > authors to "fix up" Browser X instead of reviewing their understanding
 > of the language (eg via tutorials, books) and switching to different CSS
 > which does render the same cross-browser.  This is a problem, because
 > unbeknown to the author, Browser Q also (legitimately) renders the page
 > in the same way as Browser X... but the author didn't test in Browser Q
 > so they didn't notice.  Users of Browser Q now see broken pages.


Users of Browser Q should report the problem to the site author.  He can 
choose to support Browser Q or not.  Whether he makes good or bad coding 
decisions along the way will determine how easy or hard it is for him to 
support Browser Q in addition to the others.

So is there a list of which CSS is expected to be cross-browser and 
which is not?  The point of a standard is to achieve conformance.  It 
should be clear from reading the standard what conforms and what 
doesn't, and where the "undefined" parts are.  A lot of site authors are 
going to cut-n-paste without much understanding, and without reading the 
standard.  They'll no doubt get things wrong a lot of the time.


 > Alternatively, perhaps Browser X is correct and in fact it's Browsers W,
 > Y and Z that are wrong.  This doesn't happen so often any more, but back
 > in the era of Firefox 2, for example, IE6 got some common aspect of
 > stacking contexts correct when other major browsers all got it wrong.
 > Most authors assumed it was IE6 up to its usual tricks, and hacked to
 > "fix" that browser.  They were pretty mystified when the other browsers
 > updated their rendering in later versions, resulting in page breakage in
 > all modern non-IE browsers!  Again, if the author had researched the
 > problem instead of opting for a quick fix for Browser X, they would have
 > realized that their chosen technique isn't yet reliable and would have
 > changed to a different one.


So they had to recode.  Since the discrepancy existed, they had to code 
two ways anyway.  So all the "spots" were identified, and it was a 
matter of changing the conditions under which the two paths were taken.

Better had they read and understood the standard.  But at least they had 
the option of detecting the browser, and doing conditional coding, 
although it was (and still is) extremely cumbersome.
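
(For reference, that cumbersome conditional coding usually ends up 
looking something like the sketch below; the class name and the CSS 
rules in the comment are made up, but the pattern, sniff once, tag the 
root element, fork the stylesheet on the tag, is the common one.)

    // Sketch: sniff once and tag the document, then fork the CSS.
    if (/MSIE 6\./.test(navigator.userAgent)) {
      document.documentElement.className += " is-ie6";
    }
    // The stylesheet then carries both paths, e.g.:
    //   .stack-fix         { z-index: 1; }     /* standard path */
    //   .is-ie6 .stack-fix { z-index: auto; }  /* IE6-only override */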


 > Hence browser sniffing makes it really easy for authors to
 > unintentionally give certain users a bad experience.


It also makes it really easy for authors to intentionally give certain 
users a better experience, if they are using a decent browser brand and 
version.

Making it hard to leverage the good browsers is not going to advance the 
state of the art in site development.  Why would anyone bust their butt 
to implement some new CSS feature, only to discover that it doesn't work 
in the 10th of the 10 browsers on the required compatibility list, and 
then have to discard it completely because they can't differentiate that 
browser from the others?


 > It also makes CSS much more difficult to maintain because the CSS is
 > forked; the consequence of some later change elsewhere in the stylesheet
 > has to evaluated in multiple ways, once for each fork.  Even at the best
 > of times authors seem to find it rather hard to understand from a global
 > perspective the many many interactions going on inside their CSS.  I'm
 > not hopeful that expecting them to hold /multiple/ global perspectives,
 > one of each fork, is realistic.


Every conditional does, indeed, make for harder understanding.  So, like 
Tab, you don't offer an alternative to achieve the desired functionality 
in the presence of some browsers that act differently than others.
