W3C home > Mailing lists > Public > www-style@w3.org > March 2004

Re: Proposal: useragent at-rule

From: Chris Moschini <cmoschini@myrealbox.com>
Date: Tue, 30 Mar 2004 10:47:33 -0500
To: www-style@w3.org
Message-ID: <1080661653.9810f07ccmoschini@myrealbox.com>

Dave Shea put it rather elegantly:
> So we can't query for the user agent string, because it's unreliable.
> We can't query for CSS support levels, because manufacturers bend the truth.
> We can't query for CSS features because support may be flawed.
> What about asking nicely and/or buying it flowers?

I agree that feature-detection in CSS isn't reasonable (but if someone has a
brilliant way to do it, please say so). Failing that, UA-detection is the next
best bet, and although browsers can lie, it isn't useless. If it were, the entire 
browser statistics discussion that's just gone on in this thread would be moot!

If you look for the right things, you can figure out which browser you're dealing 
with. That changes only if the user deliberately tells their UA to lie through its 
teeth (click Opera>UserAgent>LieThroughTeeth). But as a Web Developer, when I catch 
this sort of statement, it reads as bragging, comparable to "I develop .NET applications 
in vi on OS X" or some other technical backwardness for its own sake 
- a cool challenge maybe, but never popular.

In fact, this ironic statement from Felipe Gasper sums it up nicely:
> I still just think it's a bad idea to start coding for particular UAs.
> ...
> I use server-side scripting to identify UAs and then modify the CSS using PHP -
> not the prettiest solution, but it works.

Certainly. So you think it's best not to write CSS for specific UAs, yet you use 
an elaborate server-side sniffer to do exactly that. In fact, you're one of many 
who use this tactic - which is evidence that it works.
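For concreteness, the tactic Felipe describes might look something like this - a hypothetical Node-style sketch in JavaScript rather than his actual PHP, with illustrative patterns and file names:

```javascript
// Hypothetical sketch of server-side UA sniffing: pick a stylesheet
// based on the User-Agent header. Not Felipe's actual code.
function stylesheetFor(userAgent) {
  // Order matters: Opera's default UA string of this era also
  // claimed "MSIE", so test for Opera first.
  if (/Opera/.test(userAgent)) return "opera.css";
  if (/MSIE \d+/.test(userAgent)) return "ie.css";
  if (/Gecko\//.test(userAgent)) return "gecko.css";
  return "default.css";
}

console.log(stylesheetFor(
  "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)")); // ie.css
```

The server then emits a link tag pointing at whichever file comes back - the same idea as the PHP approach, just sketched in a different language.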


So UA strings, while not wonderful, *are* usable.

But Christoph Päper contends:
> but adding new features to CSS will not help avoiding bugs and bad
> implementations of existing browsers.

We're not trying to fix bugs in browsers. You seem to be implying that writing 
certain features into the spec will make browsers less buggy. I can't see that, 
and no browser will ever be perfect - at least not as long as NASA isn't shipping 
one. There will always be new browsers with new bugs, cluttering up our CSS, 
slowing its maintenance, and steepening the learning curve for beginners who 
learn by example.


I'll continue to ride Dave Shea's coattails:
> Simply: we need a way to gracefully degrade our style for flawed and older
> browsers, so as not to hinder usability/accessibility for our end users.

Adding features to a spec so that pages can degrade gracefully is *not* a new or 
heretical idea. Going back to the Javascript example used earlier: Netscape 2.0 
had a bug where anything it found inside script tags, it would just barf out all 
over the page as if it were body text. So what happened? ECMAScript (the JS 
standard) explicitly allowed an HTML comment tag inside the script tags, even 
though that comment tag is, technically speaking, invalid JS. When Netscape 2.0 
was living large, this was very important for the adoption of Javascript: the 
spec made it easy for *authors* (not purists) to make their pages work the way 
they wanted.
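The comment hack from that era looked like this (reconstructed from common practice of the time, not quoted from any spec):

```html
<script type="text/javascript">
<!-- hide the script body from browsers that render unknown tags' content as text
document.write("Hello, world.");
// end the HTML comment; the leading // keeps this line valid for the JS engine -->
</script>
```

Engines were required to tolerate the comment markers even though they aren't valid JavaScript on their own - a spec concession made purely so authors could degrade gracefully.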

A UA test would be even better: the spec would not target any one browser, but 
would instead give authors full control over which browser gets which CSS - and 
the syntax could be much clearer.
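For concreteness, the proposed at-rule might look something like this - purely illustrative syntax, never part of any CSS specification, with made-up UA strings and selectors:

```css
/* Hypothetical sketch of the proposed @useragent at-rule.
   Illustrative only; nothing like it was ever standardized. */
@useragent "MSIE 6" {
  #content { width: 500px; } /* work around the broken box model */
}
@useragent "Gecko" {
  #content { width: 480px; padding: 10px; }
}
```

A UA that doesn't match the string would skip the block entirely, which is exactly the graceful degradation Dave is asking for.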

-Chris "SoopahMan" Moschini
http://hiveminds.info/
http://soopahman.com/
Received on Tuesday, 30 March 2004 10:48:43 GMT
