- From: Boris Zbarsky <bzbarsky@MIT.EDU>
- Date: Mon, 17 Nov 2008 18:48:40 -0500
- To: "Roy T. Fielding" <fielding@gbiv.com>
- CC: Jonas Sicking <jonas@sicking.cc>, HTML WG <public-html@w3.org>
Roy T. Fielding wrote:
> I disagree with your logic. When Firefox came out with a more
> standards-based parser, a lot of our customers were happy to switch
> to it.

Note that significant work on the Gecko parser more or less stopped
well before Firefox 1.0 was released. It's been in maintenance mode
only.

> But now that Firefox is getting just as buggy and complex as
> the other major browsers

It's an easily verifiable fact that the rendering engine part of
Firefox has gotten significantly less buggy since the bad old days of
Firefox 1.0. I suggest trying to run Firefox 1.0 on the regression
test suites that Firefox 3 passes, for example.

I won't speak to complexity, though the parser hasn't changed much in
that respect. Other parts have (some got simpler, some got more
complex).

It seems to me that you have preconceived notions of how "buggy" and
"complex" various browsers are, notions that don't necessarily have
much basis in reality, and you're basing your decisions on those
preconceptions.

> Firefox usage hasn't increased since it decided to be no better
> than the others.

The usage data I've seen don't support that claim.

> Instead, the original firefox team has moved on

If you're talking about the rendering engine, then that either
happened before Firefox 1.0 was released (when most of Netscape's
remaining employees got laid off) or hasn't happened at all. The
folks who worked on the original pre-1.0 Firefox UI have moved on,
true. What does that have to do with web standards?

> and, in a year or two, there will be other fresh ideas on browsing
> implementations.

I have yet to see "fresh ideas" on HTML _parsing_. What I see instead
is that anyone who actually cares about compat with existing content
has to reverse-engineer the behavior of existing browsers.

> Because "current browsers" change every six months. In order for me
> to design my content for testing on current browsers, I'd have to
> regenerate it every six months (more frequently during the cycles
> when competition between browser vendors is relevant).

You're assuming that new browser releases will break your content?
That's something browsers try very hard NOT to do for reasonable
content. Of course if you do UA sniffing for a particular release
date, you might lose.

> If there is nothing to differentiate your software from others,
> then there is no reason to build the software in the first place.

Hold on. Who said there is nothing to differentiate? There can and
should be differentiation in user-facing functionality (performance
vs. memory usage tradeoffs, user-centric features like history
handling, search, spell-check, etc., etc.). There can and will be
differences in the extra web-exposed features while those are in the
process of being experimented with on the way to standardization.
Heck, there are differences in licensing, support lifetimes, and
other such details.

But the whole point of standardizing things is to eliminate
differences in a certain area between all the implementors of a
standard. That's why standards exist. Standards that permit
differences in the area they cover are simply poorly written. In my
opinion, of course.

-Boris

P.S. Note the work that's been happening recently with JavaScript
performance. While the various ECMAScript implementations are all
implementing the same standard, one that clearly defines behavior in
all sorts of cases, they are certainly differentiating themselves
while still behaving the same way from a correctness standpoint on
identical input. You seem to think this is not a competitive
landscape?
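To make the parsing point above concrete: the practical alternative to
reverse-engineering is a specification that defines a single recovery
for every malformed input. A minimal sketch, assuming a browser
environment and the standard DOMParser API (the markup is a made-up
example of mis-nested tags):

```typescript
// Mis-nested tags: <b> and <i> overlap instead of nesting properly.
const malformed = "<p><b>bold <i>bold-italic</b> italic?</i></p>";

// Any parser implementing the HTML5 parsing algorithm must build the
// same tree from this input; the error-recovery steps are specified,
// not left for each vendor to reverse-engineer from the others.
const doc = new DOMParser().parseFromString(malformed, "text/html");

// Serializing the recovered tree shows the defined repair: the <i>
// element is split so that everything nests, i.e.
// <p><b>bold <i>bold-italic</i></b><i> italic?</i></p>
console.log(doc.body.innerHTML);
```

Run in any modern browser console, this prints the same repaired tree
everywhere, which is exactly the point of writing the recovery steps
into the standard.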
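The UA-sniffing failure mode looks something like the following sketch
(enableFancyFeature is a hypothetical stand-in for whatever a page
might gate on the browser version):

```typescript
// Hypothetical feature toggle, standing in for page-specific logic.
declare function enableFancyFeature(): void;

// Brittle: keyed to one release string. The moment a new version
// ships (or another browser adopts the token), the guess is wrong
// and the page "breaks" through no fault of the browser.
if (navigator.userAgent.indexOf("Firefox/3.0") !== -1) {
  enableFancyFeature();
}

// Robust: test for the capability itself, so releases that keep
// supporting the feature keep working without regenerating the page.
if (typeof document.querySelector === "function") {
  enableFancyFeature();
}
```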
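And a sketch of the P.S. point about competition within a strict
standard: every conforming ECMAScript engine must compute the same
answer below, and only the elapsed time is its own (the function and
iteration counts are made up for illustration):

```typescript
// Same input, same spec-mandated result in every conforming engine.
function sumOfSquares(n: number): number {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i * i;
  }
  return total;
}

const start = Date.now();
let result = 0;
for (let rep = 0; rep < 1_000; rep++) {
  result = sumOfSquares(100_000); // small enough to stay exact in doubles
}
const elapsed = Date.now() - start;

// The first number is fixed by the standard; the second is where
// SpiderMonkey, V8, JavaScriptCore, etc. differentiate themselves.
console.log(result, `${elapsed}ms`);
```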