- From: Brad Kemper <brkemper.comcast@gmail.com>
- Date: Mon, 11 Aug 2008 08:28:54 -0700
- To: Mikko Rantalainen <mikko.rantalainen@peda.net>
- Cc: CSS 3 W3C Group <www-style@w3.org>
On Aug 11, 2008, at 2:58 AM, Mikko Rantalainen wrote:
> should be enabled for selected versions only. That is, one should
> not write
>
> if (isOpera) { ... }
>
> but instead
>
> if (ua.brand == "Opera"
> && ua.version >= 9 && ua.version <= 9.50) { ... }
>
> Note the difference: I'm targeting the versions I've detected the bug
> in. I'll not *guess* that this browser will contain the same bug in
> the *future*, too. I'll not guess which version had the problem the
> first time. I assume that if I've seen the problem in 9.0 and 9.50,
> the problem exists in every version in between. (That might not
> always be true.)
> This way the vendor does not take a hit for *fixing* the bug in the
> future, and only users of this vendor's future *broken* releases will
> suffer. That is, until I have time to test that future release, the
> users of that version will suffer even though I've enabled a
> workaround for the very same bug in older releases.
Well, that could be your choice as an author, if you've got the
ability to target a UA or rendering engine and its version. But this
is when it actually does become more of a maintenance nightmare,
because you then have to check after each minor update, no matter how
insignificant, and update your filter for 9.51, 9.511, 9.52b, and so on.
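To put the churn in code (a sketch that reuses the hypothetical ua
object from your example; applyWorkaround() is likewise made up):

    // Day one: the bug is confirmed in Opera 9.0 through 9.50.
    if (ua.brand == "Opera" && ua.version >= 9 && ua.version <= 9.50) {
        applyWorkaround(); // hypothetical function containing the fix
    }

    // If 9.51 ships still broken, the filter must be re-tested and
    // re-released with a new upper bound, and again for 9.511, 9.52b,
    // and so on. (A suffixed version like "9.52b" does not even
    // compare as a number, so the scheme itself needs patching.)
    if (ua.brand == "Opera" && ua.version >= 9 && ua.version <= 9.51) {
        applyWorkaround();
    }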
>>> While it may seem like a quick and simple shortcut to work around a
>>> bug in the short term, browser sniffing creates a maintenance
>>> nightmare further down the road.
>>
>> What maintenance nightmare? Do people even consider what they are
>
> The maintenance nightmare is that a UA released by the same vendor,
> one that works to the spec, *will not work correctly with the content
> you provide*. I think it's a maintenance nightmare if a perfectly
> working UA cannot show the content correctly because of a bug
> workaround inserted in the content. However, the fact that an
> incorrectly working UA (released in the future) shows the content
> incorrectly is not a maintenance nightmare. It's only a matter of
> supporting that incorrectly working piece of crap, if that's really
> desired. That's something one could call regular maintenance.
>
> In short: a maintenance nightmare is something done today that
> prevents doing things correctly in the future without extra work to
> undo what has been done today.
That's very simple, not a nightmare. The "extra work" is changing or
adding a version number, something that is much simpler than the bug
detection presented in the article. I'm not against bug detection, but
sometimes a quick fix is more practical, and sometimes it is the only
thing possible.
Your choices are as follows:
1. Do nothing until the bug is fixed, and have your site screwed up in
that UA until then. It might be years before people viewing your page
in that UA see it as you intended.
2. Write something that allows that UA to render your page correctly
now (or at least well enough), and when the bug is fixed, update your
site to no longer filter for that browser (see the sketch below). The
change only takes a few minutes to make, and if it takes you a week to
get to it, that is only one week during which people viewing your page
are not seeing it as you intended.
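A sketch of what I mean (every name here is hypothetical: the user
agent token, the workaround function, and the normal code path):

    // Keep the sniff in exactly one place, behind one flag, so that
    // removing the filter after the vendor ships a fix is a one-line
    // edit.
    var needsBrowserXWorkaround =
        navigator.userAgent.indexOf("BrowserX") != -1; // hypothetical token

    if (needsBrowserXWorkaround) {
        applyBrowserXWorkaround(); // hypothetical function with the fix
    } else {
        renderNormally();          // hypothetical standard code path
    }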
I'd rather see my site messed up for a week in Browser X than for
years. And if Browser X represents thousands of visits per day to my
site, then it will more likely be no more than a day. And if I were
running a big site like Apple or Yahoo, or if I were just on the ball,
then I would probably have the fix ready for the beta versions and
beyond, with no downtime. And the fact that I considered Browser X at
all in my original code implies that I am aware of it and pay
attention to how it works on my site.
> The maintenance-nightmare content is the stuff that *prevents* users
> from using the latest, *correctly* working releases of the UAs, only
> because the content is *intentionally* authored to work *incorrectly*
> on those UAs!
>
>> The bug detection routine that the article proposes is much more
>> complex, assumes the author would have the time and inclination to
>> investigate the problem to the degree needed to write something like
>> that, and also assumes that all UAs that might need this routine
>> would flawlessly support other modern JavaScript methods like
>> document.implementation.createDocument. It's impractical to test for
>> possible flaws in every built-in method you might use, but a little
>> browser sniffing for known problems in known rendering engines can
>> provide workarounds that help more than they hurt.
>
> I agree. It is not realistic to start normal scripting with a
> conformance test suite, then try to detect any random bug in the UA,
> and then try to work around that previously unknown bug in a possibly
> previously unknown UA. How do you do that? Will you write tests to
> make sure that the workaround will not hit some *other* bug in the
> said UA?
>
> The bug detection routine is good for enabling the workaround
> automatically for browsers with exactly the same bug.
Right. And in many cases the only browser that is likely to have
EXACTLY the same bug is one based on the same rendering engine.
> Assuming, of course, that the bug detection is perfect and that the
> original bug is *correctly* diagnosed in a usually closed-source UA.
> Often you cannot assume that last part.
Exactly. Just because some other UA may share the same bug doesn't
mean your "fix" will actually fix it, since that UA might have other
bugs, or a slightly different version of the same bug.
But UA detection works for the problem you do know about, in the UA
you do know about.
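To put the two approaches side by side (a sketch: the engine token and
fallback function are made up, and the object test just echoes the
kind of check the article's routine depends on):

    // Bug/feature detection asks "does this capability exist here?"
    // but assumes the detection machinery itself works in this UA.
    var canCreateDocument = !!(document.implementation &&
                               document.implementation.createDocument);

    // UA detection asks "is this the browser I already diagnosed?"
    // It catches nothing new, but it is precise about the one problem,
    // in the one UA, that you actually know about.
    var isKnownBuggyEngine =
        navigator.userAgent.indexOf("KnownEngine") != -1; // hypothetical token

    if (!canCreateDocument || isKnownBuggyEngine) {
        useFallback(); // hypothetical function
    }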
>
>
>>> Whenever you feel tempted to solve a problem with the inelegant
>>> browser sniffing hack, take a moment to ask yourself if there is a
>>> simple way to detect the bug instead.
>>
>> In the case of CSS, the answer is always "no". The only method
>> currently available is using parsing hacks (or IE's comment hacks,
>> if you want to detect IE specifically). And those are mostly going
>> away in newer and newer versions of modern (i.e., non-IE) user
>> agents. Proposals for feature detection have much worse problems
>> than UA detection (the browser will say it supports something even
>> if its support is buggy). We do have media queries, which would be a
>> natural place to query for a particular rendering engine, UA, and/or
>> version number. But implementors refuse to implement it.
>
> To prevent the problem we already have with, e.g., user agent strings.
>
> For example, MSIE says that it is "Mozilla/4.0" and provides the
> actual version in the comment part. Other browsers say "Mozilla/4.0"
> at the start of the user agent string, then repeat the magic letters
> "MSIE" in the comment part, and then follow with "just kidding, not
> really"....
>
> If the implementors could enforce that sniffing be used for
> workarounds only, and that a *correctly* working UA can display
> *everything* correctly if it doesn't execute any workarounds, I guess
> they would happily implement it. However, content authors have
> repeatedly demonstrated that the actual content will be surrounded
> with those safeguards, and that the only way to get the content is to
> claim to be THE ONE "supported" browser for that content.
This is the part where you are basing your suppositions on the
JavaScript situation of the '90s, which led to a situation that
continues today. However, it doesn't happen with CSS. Today IE has the
largest market share by far, and authors can use IE conditional
comments to block out all but IE. But they don't. They either put
non-IE content in a regular style block where all UAs can see it, or
they just use the conditional comments for IE version control.
In the latter case, the authors don't care about having their site
work with any other browsers anyway, and so other UAs are not taking
responsibility for trying to render code that is clearly not for them.
Nor should they. Nor would they need to if authors used some other
method to block them out of the CSS (hacks or @ua or whatever). That's
the author's choice, no matter how wrong-headed it might be.
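For reference, that version-control use of conditional comments looks
something like this (a sketch; the stylesheet names are hypothetical):

    <!-- a regular style sheet that every UA sees -->
    <link rel="stylesheet" href="main.css">

    <!-- only IE parses these; other UAs see plain HTML comments -->
    <!--[if lte IE 6]><link rel="stylesheet" href="ie6-fixes.css"><![endif]-->
    <!--[if IE 7]><link rel="stylesheet" href="ie7-fixes.css"><![endif]-->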
Firefox has the second-largest market share. There are a couple of CSS
hacks that target Gecko only, yet no author I know of uses them for
all of their non-IE content. And no UA that I know of has duplicated
the Gecko selector differences for the express purpose of being able
to read those CSS-hack rules.
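One such Gecko-only hook, for example, is Mozilla's proprietary
@-moz-document at-rule (the rule inside is a hypothetical tweak):

    /* Only Gecko parses @-moz-document; other engines drop the block. */
    @-moz-document url-prefix() {
        .sidebar { width: 20em; } /* hypothetical Gecko-specific tweak */
    }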
> After saying that, I do support allowing browser engine sniffing in
> CSS, but I'm afraid that the syntax must be made so hideously hard to
> use that a casual web author would not ever use it.
We already have the hideously hard part, via CSS hacks. And it keeps
getting harder. In fact, I watch this list for ideas for new ones when
I learn that there are still some differences in interoperation of
selectors, because I know that one day I will need them.
> Perhaps require that the CSS embedded in the workaround part must be
> inserted inside a base64-encoded string or something. And even then,
> some authors would probably try to use that mechanism for "copy
> protecting" their style.
That kind of obfuscation is more likely to lead to misuse, because
people would be copying and pasting something that is not easy to
understand.
>
>
> --
> Mikko
>
Received on Monday, 11 August 2008 15:29:34 UTC