- From: Stewart Brodie <stewart.brodie@antplc.com>
- Date: Thu, 22 Nov 2007 15:29:34 +0000
- To: www-style@w3.org
Brad Kemper <brkemper@comcast.net> wrote:

> On Nov 21, 2007, at 2:36 AM, Stewart Brodie wrote:
>
> > > Any UA spoofing another in a media query, in order to pick up CSS meant for dealing with another UA's rendering bugs (or differences of interpretation), would be doing so to their own detriment. I just don't see it happening. The implementors in this WG could keep it from happening in the prevalent browsers.
> >
> > Of course it will happen. Implementors may not want it to happen, but stylesheet authors will do it anyway. Then we'll be in the same total mess that we're in with HTTP User-Agent and navigator.userAgent.
>
> Stewart,
>
> Thanks for taking the time to respond. I realize I am fighting an uphill battle against deeply set prejudices regarding the risk of repeating the messiness of HTTP User-Agent and navigator.userAgent.

They are experiences gained from implementing browser technologies that are capable of browsing real-world web sites. Our browser has to lie in both its HTTP User-Agent identity strings and in navigator.userAgent in order to get sites to function - sites that our customers tell us work fine in Firefox but don't work in our browser, and therefore our browser is "broken". I'd rather not have to develop yet another set of lies that I have to configure to make CSS "work" on different sites.

> Even with HTTP User-Agent and navigator.userAgent, for all their abuses and infuriating use for white-listing browsers on sites, and as much as I too hate that, I would still not do away with them. They are still a valuable tool on the server and in JavaScript.

In the right hands, that might be true. However, far too often it is used as a blunt instrument to avoid having to write correct HTML, JS and CSS, by limiting the clients that are permitted to interact with the service.
Have you ever tried talking to, for example, an IIS server when it doesn't have a browser-capability descriptor for your user agent? We have customers complaining that their services and third-party services do not work with our browser, which turns out to be because the server decides it has never heard of our browser, concludes that it therefore obviously doesn't support JavaScript or CSS, and sends us content containing simple, unscripted, basically non-functional widgets. That is just the example which has caused me the most pain recently, but it is by no means an isolated one.

> Even PPK, a staunch proponent of object detection in JS, admits that there are times when it is needed, and even published a function for parsing the string. Well, in CSS there is a clear and present need for being able to include some rendering-engine-specific code. Today that need is only being met by oft-fragile hacks and filters, which is what I was trying to show in my examples. There is nothing to show that having a more sanctioned way of dealing with the situation is going to cause more problems than the way people are dealing with it today.
>
> > What will happen is that an author will think to themselves "Only BrowserABC version 1.234 and later implement this new standard feature XYZ, so I'll check for that browser", and it'll be downhill from there.
>
> What will happen in that case is that the author will create a site that sucks in some browsers, and people will complain about it. The company that receives those complaints will either do something about it, or they will not, depending on their level of customer service.

The company owning the badly-authored website will not receive the complaints, though. Our customers will complain to *us* that the sites do not work properly in our browser, on the basis that they function in IE or Firefox (and it's only recently that Firefox has even come into the equation, tbh).
Of course, sometimes the very large browser makers may have enough clout to get companies to fix their websites.

> While it is appropriate to publish guidelines on how to use what I am proposing, I would rather see this group address authors' needs with useful tools than try to police best practices or include only things that can't be used in ways we don't like.
>
> What will happen when an author thinks to themselves "I want to have bright green text on a pink background"? It'll be downhill from there. Should we therefore also limit the ability of authors to choose colors for text or backgrounds that would make a page unbearable? In my mind, this is just an extreme example of the same sort of thinking.

I think it's completely different. I believe that the most common use of @ua would be to send different stylesheets based on the author's opinion of the current capabilities of the browsers that they can be bothered to test with, or have ever heard of. Like any feature, there will be intelligent, high-quality individuals who use features like this in a responsible manner, as you suggest. I believe they will be in a small minority. However, I also believe that that small minority would encompass most, if not all, of the authors on this mailing list, as subscribers here do tend to care about getting things right.

> When I author something that doesn't work in a particular browser, I am the one that is called upon to fix it. There may have been a time when people asked "How come site X renders in this browser, but not site Y?" and concluded that there was something wrong with the browsers. Nowadays that is a question more likely to be asked by the author than by the user. It is much more likely now, at least among the general population, to ask "How come all the other sites work in my browser, and not yours?" In other words, the consumer blames the site author, not the browser-maker.
Consumers blame their suppliers, who are our customers. Our customers then blame us (the browser maker) if our browser cannot browse sites of interest to them and their customers. Generally, end consumers are only interested in interacting with sites successfully, on whatever device they are using at the time. As these consumers obtain more and more web-enabled devices around the home and office, they expect the same services to be delivered on all of them. If a service works on one device but not another, they will deem the device faulty, not the website.

> If an author does take that "Only BrowserABC version 1.234 and later implement this new standard feature XYZ, so I'll check for that browser" tack, then here is how that happens today:
>
> 1. Server detection of HTTP User-Agent, and serving different style sheets based on the browser detected. I've seen this as a way to "warn" people using browser software that was not officially supported (white-listed), to tell them that their experience on the "advanced design" site might suffer as a result. It was pretty much a slap in the face, since they supported Netscape 4.x but not Safari or other WebKit-based browsers. They used BrowserHawk for detecting. I was unable to convince them to change, and they will continue in their misguided ways regardless of what this group decides. But even so, after the user closes the warning, they can continue to use the Web app with Safari, because they allowed it to use the default standards-based CSS and JavaScript. There were a few places where it didn't look quite right (due more to problems with their horrid tables than anything else), but it worked.

A lot of the time, the problems being solved like this are perfectly solvable with standard CSS. The commonest problem I come across is a failure to understand how the cascade works. Specificity is not well understood in general.
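To illustrate that point, here is a minimal (hypothetical) sketch of the kind of specificity mistake that gets "fixed" with browser sniffing instead of with an understanding of the cascade:

```css
/* The id selector gives this rule specificity 1-0-1, so links inside
   #nav stay red even though another rule appears later in the sheet. */
#nav a { color: red; }

/* Specificity 0-0-1: source order only breaks ties at EQUAL
   specificity, so this later rule does not override the one above. */
a { color: blue; }
```

The author who expects the second rule to win often reaches for a hack targeting the browser, when the real fix is simply a selector of equal or greater specificity.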
It is disappointing to see so much content that could actually be written in a completely valid, standards-compliant manner and still provide the same functionality, including bug workarounds.

> 2. Server detection of HTTP User-Agent, and serving different lines of CSS within a dynamically generated style sheet based on what's detected. I have done this myself in order to show a platform-appropriate progress indicator. I also made sure that platforms I didn't detect would get a default. In my case I didn't actually use the browser name or version number, but it could be done and probably is.

I consider this very similar to point 1. *You* have taken care to provide a (functional?) default for unrecognised UAs. I still think good designers are in the minority.

> 3. JavaScript: I've seen JavaScript that does a "document.write" of a style sheet in order to create separate styles based on either "navigator.userAgent" or on whether or not "document.all" was detected. I don't think I need to go into the fact that there are obvious difficulties with both approaches.

We had to remove support for document.all because it was causing so many websites to detect us as Internet Explorer and feed us all sorts of Windows-only content. We have to claim to be Mozilla/5.0 in order to get standards-compliant content sent to us. :-(

> The DHTML JS itself also sometimes has to detect IE as part of what it does, in order to work around its specific foibles, but treats all other browsers equally. If there is something that is known not to work in a particular version of Firefox (for instance), then it might apply separate styles based on the parsing of that string. If it is a current version then the author will need to update the JavaScript later.

This "update later" doesn't happen very often in real life, in my experience.
Quite the opposite: the cruft just continues to accumulate as hack upon hack of different browser detections is added, and our browser has to be extremely careful to tell the server the correct set of lies in order to get the website to function at all.

> 4. CSS hacks & filters: Sometimes they can be written in valid ways, sometimes not, but they almost always require coming up with newer creative hacks when a new version of a browser comes along. Yet we do it anyway, because it is generally less cumbersome than the options listed above.

Some of these are intriguingly creative.

> 5. IE's conditional comments: At least these attempt to address the need, and I am thankful for that.

They can certainly help isolate the non-standard content that needs to be sent to IE. That might be a good thing - provided that authors bother to build a standards-compliant version of the content too.

> They don't help that much when you want to change the CSS used by hundreds of HTML pages that already access a single CSS file. When IE7 came out, that was one of the primary criticisms in the comments section of the IE blog post that suggested using them for IE7-specific CSS.
>
> It is a reasonable expectation to have to update some CSS files when a major browser gets an update.

I consider it totally unreasonable, but alas unavoidable given the low quality of many of the stylesheets deployed today. Properly written stylesheets do not require updates every time one of the major browsers gets an update. How do you determine which browsers are the "major browsers" anyway? Can you trust the server's agent-log stats, given that most browsers are compelled to lie about what they are?

> It is less reasonable to have to update every HTML file, or to have a whole separate sheet for every UA or even for just one browser (rather than just a few extra lines in a single file).
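The accumulated detection cruft described above usually boils down to something like the following hypothetical sketch (the function name and UA strings are made up for illustration): a substring test against navigator.userAgent decides what gets served, so a browser that identifies itself honestly falls through to an arbitrary fallback, and its only practical remedy is to lie.

```javascript
// Hypothetical sketch of the userAgent-sniffing pattern from point 3.
// Browsers the author never tested get the fallback, functional or not.
function pickStylesheet(userAgent) {
  if (userAgent.indexOf("MSIE") !== -1) {
    return "ie.css";      // IE-specific workarounds
  }
  if (userAgent.indexOf("Gecko") !== -1) {
    return "gecko.css";   // the "standards-compliant" sheet
  }
  return "default.css";   // everyone else
}

// An honestly-identified minority browser is not recognised...
console.log(pickStylesheet("MyBrowser/1.0"));
// ...so in practice it claims to be Mozilla/Gecko to get the good sheet.
console.log(pickStylesheet("Mozilla/5.0 (X11; Linux) Gecko/20071122 MyBrowser/1.0"));
```

Each new browser quirk adds another branch, and none of them are ever removed - which is exactly why the lies have to be configured so carefully.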
> So perhaps you can explain again why it would be worse than what I've described to have a reliable, implementor-supported way to supply rendering-engine-specific lines of CSS from within my CSS file? It is clear to me, at least, why it would be better. I am sure I am not alone amongst Web designers. And don't tell me that I should just avoid using any CSS that is not supported by all the UAs out there.

You cannot do that, because you can't possibly be aware of all the UAs out there.

If you could guarantee that authors would write standards-compliant CSS, and that @ua would only ever be used to single out specific versions of specific broken browsers (or engines) to provide additional rules to work around them, then it would be less harmful. However, based on prior experience with User-Agent and navigator.userAgent, I simply don't believe that authors will use it like that.

-- 
Stewart Brodie
Software Engineer
ANT Software Limited
Received on Thursday, 22 November 2007 15:29:53 UTC