Re: Proposal of @ua

On Nov 22, 2007, at 7:29 AM, Stewart Brodie wrote:

> capable of browsing real world web sites.  Our browser has to lie in both
> its HTTP User-Agent identity strings and in Navigator.userAgent in order to
> get sites to function - sites that our customers tell us work fine in
> Firefox, but don't work in our browser, and therefore our browser is
> "broken".  I'd rather not have to develop yet another set of lies that I
> have to configure to make CSS "work" on different sites.

I feel your pain. I just feel that CSS is a fundamentally different  
situation, with a different sort of error handling, and a different  
history of how it is written.

Is not the primary reason you have to lie in those two places because  
of Web authors that are trying to exclude incompatible browsers,  
usually because of some sort of JavaScript incompatibility? OK, so  
they are doing it the wrong way; I completely agree. But their  
motivation at least is valid. In the early days, JavaScript written  
for Netscape would often fail miserably in IE, and vice versa. So the  
authors would branch between the 2 browsers they were aware of, to  
either make it work in both, or to display a "sorry, your browser is  
not compatible" message.

In CSS, you should be much less motivated to lie. CSS is designed to
skip past the parts it doesn't understand, and authors don't need to
branch between major browsers for most of their code, but rather just
for a few lines here and there. The only time branching covers large
sections of code is when dealing with IE. If you are using the Trident
engine (for instance; I really don't know anything about Fresco) and
you get branched because the page asked whether you are using Trident,
then this isn't a problem. Everyone else gets the standards-based
code. No need to lie so far.
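
For instance, a minimal illustration of that forward-compatible parsing
(the selector and the not-yet-universal property are just examples):

div.note {
  padding: 1em;        /* understood by every engine */
  border-radius: 6px;  /* engines that don't know this declaration simply skip it */
}

A browser that has never heard of border-radius still renders the
padding; nothing breaks, and no branching is needed.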

There are some people who have argued against me because they think  
that as a designer I should not worry about "pixel perfect" code, and  
let anyone who can't understand some CSS I've written end up with  
overlapping text or button links that are behind some other element  
as a result. Well, if that is true, then this outcome (which they  
don't consider important) is the worst that would happen to your  
browser if the author used CSS UA detection to inappropriately deny  
you rules you can deal with fine. Whereas lying would be more likely  
than not to cause you to see rules that deal with browser-specific  
shortcomings that your browser may not share.

In other words, I should not want UA detection because getting the
inappropriate rule for your browser is not that bad, but I should
also not want UA detection because, if it isn't perfect, you might
receive an inappropriate rule for your browser. The worst case for
honestly reporting your rendering engine in CSS (getting the wrong
rules) is no worse than having no detection at all (and getting rules
that don't take into account the idiosyncrasies of your software).
Whereas the best case is that problems a layout may have with your
browser can be corrected by adding a rule for your engine that does
not risk messing up other UAs.


>> Even with HTTP User-Agent and navigator.userAgent, for all its abuses
>> and infuriating use for white-listing browsers on sites, and as much
>> as I too hate that, I would still not do away with it. It is still a
>> valuable tool on the server and in JavaScript.
>
> If it is used in the right hands, that might be true.  However, far too
> often it is used as a blunt instrument to avoid having to consider writing
> correct HTML, JS and CSS by limiting the clients that are permitted to
> interact with the service.
>
> Have you ever tried talking to, for example, an IIS server when it doesn't
> have one of those browser capability descriptors for your user agent?  We
> have customers complaining that their services and third party services do
> not work with our browser which turn out to be because the server is
> deciding that it hasn't ever heard of our browser, and thus it obviously
> doesn't support JavaScript or CSS and it sends us content containing simple
> unscripted, basically non-functional widgets.  That is just one example
> which has caused me the most pain recently, but it is by no means an
> isolated example.

I do not deny that it is often used inappropriately, and that when it  
is, it can be especially infuriating. Yet if there are some special  
issues that a site might have with Fresco that cannot be dealt with  
via simple object detection in JavaScript, then how else can the  
author deal with them without detecting your UA?

But this is an aside. This sort of all-or-nothing server-based  
approach will not be changed by my proposal. If the page never shows  
you its CSS then you will never see its CSS UA query either.

>> What will happen in that case is that the author will create a site that
>> sucks in some browsers, and people will complain about it. The
>> company that receives those complaints will either do something about
>> it, or they will not, depending on their level of customer service.
>
> The company owning the badly-authored website will not receive the
> complaints, though.

Don't be so sure. You know about the complaints you get, but we get
them too. And the larger the company, the more likely it is that the
smaller percentage of people using the less well-known browser will
amount to a noticeable number of calls regularly coming in to the IT
or Marketing department.

And the more likely that they will have resources to deal with it. If
the fix involves changing how the Web app parses through all the lies
in the UA string without negatively affecting other browsers, then the
fix is less likely to be done immediately, or ever. But an easy fix,
like adding a Fresco-specific (or Webkit-specific or Presto-specific)
CSS rule based on @ua or a media query, would be more likely to be
quickly actionable (especially if testable).
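
To be concrete, such a fix might be as small as this (the "renderer"
media feature is the hypothetical syntax from my proposal, and the
selector and declaration are made up purely for illustration):

@media screen and (renderer: fresco) {
  #nav li { float: none; }  /* hypothetical Fresco-only adjustment */
}

One rule, clearly labeled, with no risk of leaking into other engines.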

> Our customers will complain to *us* that the sites do
> not work properly in our browser, on the basis that it functions on IE or
> Firefox (and it's only recently that Firefox has even come into the
> equation, tbh).  Of course, sometimes the very large browser makers may have
> enough clout to get companies to fix their websites.

Hopefully, over time, more and more will realize the folly of
"white-listing" browsers and driving all others away. Since it is
self-defeating for the most part, I have at least as much hope for
that trend as I do for all browsers to magically support all of the
same CSS rules in exactly the same way at the same time, even as new
ones are being added to the list of recommendations.


> I think it's completely different.  I believe that the most common use of
> @ua would be to send different stylesheets based on the authors' opinion of
> current capabilities of the browsers that they can be bothered to test
> with/have ever heard of.

Well, perhaps we will have to agree to disagree on that point. I
don't think current practices in CSS support your point of view,
though. Those who would do that are doing so already by parsing the
HTTP header, and that would not change if there were a way to address
individual lines of CSS to individual UAs.

I think that most current authoring of CSS shows a tendency towards a
different behavior, and that is to only branch CSS files between
standards-based browsers and IE (with IE sometimes further branched
between IE6 and IE7, especially if a doctype higher than 3.2 is
used), with no UA detection required for the most part.

Every non-IE UA is assumed to be very capable of handling CSS
standards in about the same way, with only small tweaks needed. The
only reason authors write CSS rules specific to IE is because of its
large market share. If there is a UA out there that renders CSS worse
or more inconsistently than IE (no offense, MS folks) but has a tiny
market share, then its makers cannot reasonably expect pages to look
good in it anyway.

So, given that, most authors who write for non-IE browsers do so by
writing rules that any user agent can be assumed to understand. Only
the IE-specific parts are dealt with as a separate case. This is done
either via conditional comments in the HTML, which only IE understands
(presumably other Trident-based browsers too), or with IE-vs.-standards
prefixes on the rules, such as the valid but non-selecting "* html"
that IE6 and IE7 without a doctype see, and the valid "html>body" that
more standards-oriented browsers see.
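
For example (the selectors and declarations here are made up; only the
hack patterns themselves are the familiar ones):

/* Applied only by IE6, and by IE7 without a doctype: per the spec
   "* html" should match nothing, since html has no parent element */
* html #sidebar { height: 1%; }

/* Applied by browsers that support the child combinator; IE6 ignores it */
html>body #sidebar { min-height: 200px; }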

Sure, there are always those who will detect the UA in the HTTP
header, and branch based on that. But they are a declining exception
to the rule. Once any CSS file loads, there is really no need to do
major branching of strongly-standards-following agents based on what
they report in their UA string. As a designer/author, I have no
motivation to do so. It would still be easier to NOT include @ua
where it isn't needed, than to surround my default rules with one
that limits their use to Firefox. And those just learning about the
new rule can be strongly warned off improper use by a message on the
same W3C page that describes it.

Even if I as an author did decide to branch to 2 separate files,  
using @media, it is very likely to look something like this:

@media screen and (renderer: trident) { @import url(http://www.example.com/my_styles_ie.css) }
@media screen and not (renderer: trident) { @import url(http://www.example.com/my_styles.css) }

If I learned about the feature by reading about it, then I would  
likely have also read that this is the preferred way to use it. If I  
learned about the feature by viewing the examples of other people  
that had read about the preferred way to use it, then I also would  
have a good example.

Any other branching would be only for those occasions where some
rendering engine needs something different from the general
non-Trident file. Personally, I wouldn't want that in a separate
file, due to difficulty in maintenance, but if others felt it was
easier to maintain that way, then they could add another line like
this, and let it override rules in the general non-Trident file:

@media screen and (renderer: webkit) { @import url(http://www.example.com/my_styles_safari.css) }

If they didn't want to override styles, but have a completely
different branch for Webkit, they could also change the "not
(renderer: trident)" to "not (renderer: trident) and not
(renderer: webkit)".

Why would I possibly use something like "and (renderer: gecko)"  
INSTEAD OF "not (renderer: trident)" to branch the CSS into 2 files?  
The only reason would be that my company did not want me to
support anything other than Trident and Gecko (maybe influenced by
lawyers or the call center's desire to limit their "support"). In
which case, one way or another, I would block other browsers from  
using the site, to the best of my ability (much as I would not want  
to). Not having rendering-engine-detection in CSS would not stop me  
from doing so.


> Consumers blame their suppliers, who are our customers.  Our customers then
> blame us (the browser maker) if our browser cannot browse sites of interest
> to them and their customers.  Generally, end consumers are only interested
> in interacting with sites successfully, on whatever device they are using at
> the time.  As these consumers obtain more and more different types of device
> around the home and office that are web-enabled, they expect the same
> services to be delivered on all the devices.  If it works on one but not
> another, then they will deem the device to be faulty, not the website.

Believe me, we get the calls too. It works both ways. While the user  
may be used to their browser having problems at some sites, they do  
not hesitate to call us if it is our site they are having problems  
with and other less ambitious sites in the same field don't have  
similar problems. And when it happens, it is up to me to fix it  
quickly, in order to relieve pressure on our call center. And usually  
we are more likely to hear from minority browser users than majority  
browser users, due to something not being tested enough or because of  
some more advanced capability of the browser (which in the past has  
involved pop-up blockers and settings for third party cookies, for  
instance).

>> If an author does take that "Only BrowserABC version 1.234 and later
>> implement this new standard feature XYZ, so I'll check for that
>> browser" tact, then here is how that happens today:
>>
>> 1. Server detection of HTTP User-Agent, and serving different style
>> sheets based on browser detected. I've seen this as a way to "warn"
>> people using browser software that was not officially supported
>> (white-listed), to tell them that their experience on the "advanced
>> design" site might suffer as a result. It was pretty much a slap in
>> the face, since they supported Netscape 4.x but not Safari or other
>> Webkit-based browsers. They used BrowserHawk for detecting. I was
>> unable to convince them to change, and they will continue in their
>> misguided ways regardless of what this group decides. But even so,
>> after the user closes the warning, they can continue to use the Web
>> app with Safari, because they allowed it to use the default
>> standards-based CSS and JavaScript. There were a few places where it
>> didn't look quite right (due more to problems with their horrid tables
>> than anything else), but it worked.
>
> A lot of the time, the problems being solved like this are perfectly
> solvable with standard CSS.  The commonest problem I come across is a
> failure to understand how the cascade works.  Specificity is not well
> understood in general.  It is disappointing to see so much content that
> could actually be written in a completely valid, standards-compliant manner
> and that would provide the same functionality, including bug workarounds.

I wouldn't disagree with that statement.

>> 2.  Server detection of HTTP User-Agent, and serving different lines
>> of CSS within a dynamically generated style sheet based on what's
>> detected. I have done this myself in order to show a platform-
>> appropriate progress indicator. I also made sure that platforms I
>> didn't detect would get a default. In my case I didn't actually use
>> the browser name or version number, but it could be done and probably
>> is.
>
> I consider this very similar to point 1.  *You* have taken care to provide a
> (functional?) default for unrecognised UAs.  I still think good designers
> are in the minority.

I got that. With what I was proposing, I believe it would be easier
to do the right thing than it is today, and more obvious what the
choices are, because it would be a very clearly spelled out,
easy-to-use CSS rule, rather than server-side black-magic parsing of
UA strings which are filled with lies. If we limited the rule to
identifying the rendering engine and not the software name, then it
would further encourage doing the right thing. It would not guarantee
it, but then, we don't have that now either.
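
To make that concrete, here is a sketch of the kind of rule I have in
mind (the exact @ua syntax is of course still up for discussion, and
the selector is purely illustrative):

@ua (renderer: gecko) {
  /* a small, clearly labeled workaround for one engine's quirk */
  #toolbar { overflow: hidden; }
}

Everything outside the block stays engine-neutral; the special case is
isolated and easy to remove later.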

>> 3. JavaScript: I've seen JavaScript that does a "document.write" of a
>> style sheet in order to create separate styles based on either the
>> "navigator.userAgent" or on whether or not "document.all" was
>> detected. I don't think I need to go into the fact that there are
>> obvious difficulties with both approaches.
>
> We had to remove support for document.all because it was causing so many
> websites to detect us as Internet Explorer and feed us all sorts of
> Windows-only content.  We have to claim to be Mozilla/5.0 in order to get
> standards-compliant content sent to us.  :-(

Exactly why I think this typical current approach is no better than
what I proposed.

>> The DHTML JS itself also sometimes has to detect IE as part of what it
>> does, in order to work around its specific foibles, but treats all other
>> browsers equally. If there is something that is known to not work in a
>> particular version of FireFox (for instance), then it might apply separate
>> styles based on the parsing of that string. If it is a current version
>> then the author will need to update the JavaScript later.
>
> This "update later on" doesn't happen very often in real life,  
> IME.  Quite
> the opposite, as the cruft just comtinues to accumulate as hack  
> upon hack of
> different browser detections are added and our browser has to be  
> extremely
> careful to tell the server the correct set of lies in order to get the
> website to function at all.

I'm not really arguing with that, just listing the options people  
currently use. In this case, applying a class name instead of  
applying the styles directly can avoid the need to look at  
navigator.userAgent for special cases in DHTML, assuming that the  
appropriate rules are being assigned to the class.
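
For instance, the script might just add a hypothetical class such as
"oldie" to the body element, and the style sheet then carries the
special case (class name and rules here are purely illustrative):

/* default rule for every browser */
#progress { position: fixed; bottom: 0; }

/* applied only when the script has added the class for the problem browser */
body.oldie #progress { position: absolute; }

That keeps the browser-specific knowledge in one place in the script,
and the CSS itself stays declarative.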

>> 4. CSS hacks & filters: Sometimes they can be written in valid ways,
>> sometimes not, but they almost always require coming up with newer
>> creative hacks when a new version of a browser comes along. Yet we do
>> it anyway, because it is generally less cumbersome than the options
>> listed above.
>
> Some of these are intriguingly creative.
>
>
>> 5. IE's conditional comments: At least these attempt to address the need,
>> and I am thankful for that.
>
> They can certainly help isolate the non-standard content that needs to be
> sent to IE.  That might be a good thing - provided that they bother to build
> a standards-compliant version of the content too.

Exactly. The author is either going to create some standards-compliant
CSS or not. There's not much we can do about the ones that choose
"not".

>> They don't help that much when you want to change the CSS used by hundreds
>> of HTML pages that already access a single CSS file, and when IE7 came
>> out, that was one of the primary criticisms in the comments section of the
>> IE blog that suggested using them for IE7-specific CSS.
>
>> It is a reasonable expectation to have to update some CSS files when a
>> major browser gets an update.
>
> I consider it totally unreasonable, but alas unavoidable given the low
> quality of many of the stylesheets deployed today.  Properly written
> stylesheets do not require updates any time one of the major browsers gets
> an update.

It's unavoidable as long as there is a changing standard and changing
levels of support for those standards in the browsers. Bert Bos
estimated 4-5 years before all the current modules are implemented
consistently, but somehow I doubt that all CSSWG activity will cease
at that time and there won't be new modules continuing to flow in.
Meanwhile, I KNOW that when IE8 comes out I will have to either
update my style sheets or else have pages that don't look right in
it. I knew it when IE7 was coming out and the developers were saying
they would fix all the selectors without fixing very many of the
rendering bugs, of which there are legion.

By reasonable expectation, I mean that it is an expected
responsibility of my job, just as it is that I test my pages in
various browsers when I make a design change that involves possibly
complex CSS, complex inheritance, or page elements that have to work
together harmoniously even if a particular browser doesn't understand
some rule that another one does. Of course I am going to test when a
new version of IE, Firefox, Safari, or Opera comes out.

> How do you determine which browsers are the "major browsers" anyway?
> Can you trust the server's agent log stats given that most browsers are
> compelled to lie about what they are?


I trust them well enough. I use what information I have available to  
me.  I look at my Google Analytics, which I believe does its best to  
parse the unique parts of the User-Agent identity strings. The  
percentages are not that far off from what is reported for general  
populations in Wikipedia, for instance.

I have to prioritize and concentrate my efforts where I see the most
need, and when one of the big 3 or 4 that account for 99% of the
traffic comes out with a new version, I know that more and more of
the thousands of page views on my company's sites will be using the
newer version.

I suppose it's possible that ANT Fresco is actually 25% of our traffic
and that its User-Agent identity strings are just very well disguised
as Firefox or something, but if so, well, that's what you get. I did
not mean to slight your company or Fresco in any way, just because it
seems to represent a less significant portion of the traffic to my
company's site.

>> It is less reasonable to have to update every HTML file, or to have a
>> whole separate sheet for every UA or even for just one browser (rather
>> than just a few extra lines in a single file).
>
>> So perhaps you can explain again why it would be worse than what I've
>> described, to have a reliable, implementor-supported way to supply
>> rendering-engine-specific lines of CSS from within my CSS file? It is
>> clear to me, at least, why it would be better. I am sure I am not
>> alone amongst Web designers. And don't tell me that I should just
>> avoid using any CSS that is not supported by all the UAs out there.
>
> You cannot do that, because you can't possibly be aware of all the UAs out
> there.

Exactly right, but at least I can do what I can in order to mitigate  
problems with dominant UAs that I can test on.

> If you could guarantee that authors would write standards-compliant CSS, and
> then that @ua would only ever be used to single out specific versions of
> specific broken browsers (or engines) to provide additional rules to work
> around them, then that would be less harmful.  However, based on prior
> experience with User-Agent and navigator.userAgent, I simply don't believe
> that authors will use it like that.

It would be a tool that authors would use well or not, to their own
benefit or harm.

Received on Saturday, 24 November 2007 02:52:15 UTC