
[whatwg] several messages (fwd)

From: Ian Hickson <ian@hixie.ch>
Date: Tue, 14 Jun 2011 07:03:55 +0000 (UTC)
Message-ID: <Pine.LNX.4.64.1106140700370.14203@ps20323.dreamhostps.com>

During a minor audit of some of my work on this list, I discovered that 
the e-mail below from 2009 never made it to the mailing list; it likely 
got stuck in moderation due to the size limit. In the interests of 
completeness, I'm forwarding it now. Apologies for the quite ludicrous 
delay in publicly replying to some of the e-mails in this batch reply!

Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

---------- Forwarded message ----------
Subject: Re: several messages
From: Ian Hickson <ian@hixie.ch>
To: WHAT WG List <whatwg at whatwg.org>
Date: Mon, 7 Sep 2009 07:51:44 +0000 (UTC)

On Wed, 10 Nov 2004, Jim Ley wrote:
> On Wed, 10 Nov 2004 01:34:28 +0000 (UTC), Ian Hickson <ian at hixie.ch> wrote:
> > On Wed, 10 Nov 2004, Jim Ley wrote:
> >> Equally, there's no problem at all achieving what you want by having 
> >> 2 extra em elements surrounding the quotes themselves.  (the size 
> >> presumably emphasising the quote mark, or perhaps a span if you don't 
> >> agree that it's emphasis)
> > 
> > As I said earlier in the thread, I'd rather drop the entire <q> 
> > element than introduce that kind of verbosity. I disagree that there 
> > is no problem here. I think it is quite horrible.
> It's semantically clean, backwards degradable, and imposes no burden on 
> implementors, I'm not sure what's horrible about it.  Your use case is 
> a very rare one, it's not something I've seen on the web more than about 
> once - it may be because of the difficulty in doing it but 2 extra 
> elements is rather simple, so I can't believe this.

Assuming this is talking about quote marks, we ended up requiring the UAs 
to include them in the rendering, since that's what all browsers do now.
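
A sketch of what that rendering requirement amounts to (this mirrors how 
UA default stylesheets handle <q>, via the CSS `quotes` property and 
generated content; the sample sentence is invented for illustration) -- the 
quote marks come from the stylesheet, never from the document text:

```html
<style>
  /* Outer quotes, then inner quotes for nested <q> elements. */
  q { quotes: "\201C" "\201D" "\2018" "\2019"; }
  q::before { content: open-quote; }
  q::after  { content: close-quote; }
</style>
<p>He wrote: <q>I'd rather drop the entire q element than
introduce that kind of verbosity.</q></p>
```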

On Sat, 13 Nov 2004, Jim Ley wrote:
> On Fri, 12 Nov 2004 22:17:10 +0000 (UTC), Ian Hickson <ian at hixie.ch> wrote:
> > On Fri, 12 Nov 2004, Jim Ley wrote:
> > > Could you please explain how you arrived at this conclusion?  It's 
> > > not supported by HTML or WCAG specifications.
> > 
> > On the other hand, content which is key to the application -- such as 
> > the logic behind a calculator -- clearly can't be optional.
> The logic behind a javascript calculator is predicated on the choice of 
> javascript - a calculator does not need to use javascript, you're 
> presupposing a technology, just as I am pre-supposing a technology in my 
> "CSS Map" - the applications in each case is "calculator" and "map" 
> neither rely on javascript or CSS, yet by your argument the calculator 
> is allowed, yet the map is not.  You've not explained what is different 
> about the two technologies.

The difference is a design difference: CSS is intended to be an optional 
layer above the logic that changes the presentation, whereas scripting can 
be used either for that purpose or for defining the actual logic. HTML is 
just for the logic part.

Or to put it another way: CSS is media-dependent; script can be either 
media-dependent or media-independent; and HTML(5) is media-independent.

On Tue, 4 Jan 2005, Jim Ley wrote:
> On Tue, 4 Jan 2005 10:23:54 +0200, Henri Sivonen <hsivonen at iki.fi> 
> wrote:
> > Shipping FooML over the network is not more Semantic Web friendly, 
> > since software written by others is not aware of the semantics of 
> > FooML.
> Yet there are a huge number of known XML formats that could be used 
> instead of FooML that do have well defined and well known semantics, 
> these can be very sensibly used.  Masahide Kanzaki and Morten 
> Friedrichson's work on XSLT transformations of RDF/XML shows how possible 
> this is.
> > Eh? WF 2.0 is adding more declarativeness compared to WF 1.0 + JS.
> Yes, but it's not adding enough to really make a difference, and is 
> actually lengthening the life of the javascript mess, now I'm happy with 
> that, I generally get paid for sorting out just such messes, but really, 
> I'd still like to see it go away.
> WF2 still needs it, in fact, it's almost certainly going to increase the 
> need of it, as people are going to want the features in IE and will 
> start writing large shims to try and make it work.
> Scraping the presentation layer to ensure there's no spamming, and that 
> it's consistent with the data layer, is a much better problem for search 
> engines than trying to infer the semantics from the presentation layer, 
> which has hardly been a great success.

Indeed, I hope that with HTML5 we will increase the usage of scripting 
overall, while decreasing the need for scripting for features that have 
often been implemented in script in the past.

On Tue, 4 Jan 2005, Jim Ley wrote:
> On Tue, 04 Jan 2005 09:15:46 -0500, Matthew Raymond
> <mattraymond at earthlink.net> wrote:
> >    ??? Okay, let me get this straight. Browsers MUST support 
> > client-side XSLT because a couple of guys did some really interesting 
> > work with it?
> I can't understand how you went from what I wrote to that conclusion. 
> The issue at hand was whether sending FooML was good or bad; I was 
> trying to illustrate that it was an irrelevant question, as there's no 
> need to send FooML: there are mark-up languages with known semantics.
> >    I'd also like to point out that webmasters have far more control on 
> > the quality and reliability of server-side XSLT than they do for 
> > client-side XSLT.
> I have no idea what triggered this rant on XSLT, I would never use it on 
> the client, but that wasn't the issue at hand, I was simply using it as 
> an illustration of successful delivery of known XML semantics today.  
> As we're discussing the future, it seems odd that people are wishing to 
> hobble next generation user agents with what is already used today based 
> on straw man arguments about FooML.

XSLT is out of scope for this work.

> >    If you feel that specific elements and attributes could be added to 
> > WF2 to decrease the use of complicated scripting,
> I don't, I don't see the point of Web Forms at all as currently 
> proposed, they don't go far enough to be useful if they're not supported 
> everywhere, which they won't be simply because IE6 users won't be 
> upgraded in their lifetimes unless binary plugins are used (which is 
> Bill's original point in the thread)

It appears that even small steps are useful in practice.

> >    What, _specifically_, is "it"? Why would IE with working WF2 
> > support require more additional Javascript than another browser?
> I assume you saw Dean Edwards' attempt at a Behavior that implemented a 
> tiny part of the Web Forms 2.0 specification?  It was a lot of 
> javascript, loads more than most people ever put on a form to do 
> validation.

But it only has to be written once, and it is only necessary for the years 
of transition.

On Tue, 4 Jan 2005, Jim Ley wrote:
> On Tue, 04 Jan 2005 15:29:16 +0000, James Graham <jg307 at cam.ac.uk> 
> wrote:
> > It's not only an extreme example, it's a terrible example. Google have 
> > repeatedly shown that they have no interest in using the semantics 
> > available in HTML,
> That's because there is no semantics in HTML other than web document 
> semantics, something it actually highly likely does use - since most 
> other search engines do: Hn elements carry more weight, lists are read 
> from OL/UL, etc.
> > Even allowing that your example is misguided, I'm not sure I believe 
> > the rest of the argument either. In order to believe that HTML should 
> > be starved in order that XForms should flourish,
> It's not a case of starving one to boost another, the point is that 
> incremental edge additions to HTML won't achieve anything, would XForms, 
> who knows, I personally doubt it until there's a rendering model beyond 
> HTML in the mix.

It appears to be incorrect that incremental edge additions don't achieve 
anything.

> > you have to take the position that there will be a migration to XForms 
> > (and XHTML) from HTML. In order to believe that this will offer a 
> > significant advantage to the semantics of the web, you have to believe 
> > that authors will tend to use the new features offered by XForms in 
> > the way that they are designed to be used. In practice, I'm thoroughly 
> > unconvinced of the first and skeptical of the second.
> However you have the same problems with the migration to WebForms and 
> WebApplications from HTML, you have to believe that it'll offer 
> significant advantage (I can see none, since it simply doesn't work on 
> any user agents yet, and there's no likelihood of it happening on IE - 
> especially if binary plugins are rejected as a solution.)
> You can't claim the migration to XForms won't happen, but somehow the 
> migration to WebForms will, they both suffer from the same fundamental 
> problems - You can create compatible WebForms docs within the single 
> document, but it's far from trivial, and you miss out on quite a few of 
> the benefits.  I don't in fact believe it will be easy enough for your 
> normal developer, just like XForms isn't - It took Ian a few attempts to 
> create the few basic examples on the site, and he's hardly your average 
> developer.

History seems to have shown that the bandwagon that HTML5 is predicated on 
does in fact have enough momentum to carry it forward.

On Tue, 4 Jan 2005, Jim Ley wrote:
> On Tue, 04 Jan 2005 16:54:37 +0000, James Graham <jg307 at cam.ac.uk> 
> wrote:
> > Sorry, I meant that Google don't use appropriate semantics in their 
> > own HTML documents, not that they don't use semantics when calculating 
> > search relevance of other HTML documents. View - Source on the results 
> > of a Google query indicates a bunch of <font> tags, tables and various 
> > other things but no heading elements, for example.
> A Heading element is the only thing missing from googles front page, 
> they're using LABEL for etc. on their forms.  HTML simply doesn't have 
> enough semantics to do more.  People seem to me very confused about 
> semantic mark-up here, HTML has virtually no semantic mark-up, it has 
> the semantics for web-documents, nothing else.  For the data layer of 
> web-applications, web-documents are irrelevant, we're transferring other 
> semantics (accounts data in salesforce, email data in GMail, photo data 
> in flickr etc.)
> So when we're talking about semantics in the data layer, HTML semantics 
> are not going to cut it.

There are a variety of alternative mechanisms one can use now, such as 
Microformats, to provide more detailed semantics if they truly are 
needed.

> > > the point is that incremental edge additions to HTML won't achieve 
> > > anything
> >
> > Achieve in what sense? It certainly has the possibility of making many 
> > existing documents "more semantic" than they were before (by enabling 
> > new functionality without author-JS) and offering a better user 
> > experience for ordinary people. That seems to be achieving something.
> I don't agree, there's nothing in Web Forms, even Web Applications 2.0 
> that changes GMail, it's here today, it may make it easier to do some of 
> it, but not a great deal so and that advantage will be irrelevant due to 
> the huge legacy environments.

Based on feedback from the GMail team, this doesn't seem to have been 
shown to be the case. While there will naturally be a transition 
period, numerous developers have indicated an interest in using HTML5 
technologies today.

> Web Forms, like any technology will take a long time to get popular, the 
> browsers need to get authored, the authors need to educated, the bugs 
> need to be worked out etc.


> It seems to me that so many of the people here are thinking in the 1998 
> mindset when the growth was such that new browsers quickly swamped old 
> browsers so you could keep introducing these tiny improvements.  We're 
> not, we've got a stable environment that we all, and all our toolsets 
> know how to work with, developing web documents costs a fraction of what 
> it did 3 years ago, not because of great standards support, but because 
> the dominant browser hasn't changed in all that time.  Web forms are 
> very unlikely to suddenly make IE change, and without that, there is no 
> reason to increase your costs, buy new tools and re-learn all the 
> techniques to change nothing about what the end user sees.

It appears that incremental improvements do still work.

> So what is it?  What's the significant advantage - will it reduce my 
> development costs?  Will it improve my users experience? or ....

All of the above, it appears.

> > I thought the new consensus was that implementations before 
> > specifications had reached a stable Call For Implementations phase were 
> > a bad thing anyway?
> Of course it's a bad thing, but that doesn't change the fact it's not 
> implemented, and that real commercial viability of the features is a 
> very long way in the future, and the more Safari, Opera and Mozilla 
> penetrate the market in the mean-time, the less the use case to using it 
> will be, since as well as the legacy aspect of IE to consider, there's 
> the legacy aspect of all these installed users.

The future appears to be coming along quickly now. Patience was all that 
we needed.

> > They don't suffer from the same fundamental problems. Webforms allows 
> > you to extend existing documents. In simple cases this will be 
> > effortlessly backward compatible.
> but in those simple cases, the vast majority of your users get a crap 
> user experience.

For now.

> > XForms requires that you ditch everything you know, learn a bunch of 
> > complex specs, find a CMS that will deal with XML in a sane way and 
> > then start again with all your content. It precludes any possibility 
> > of backwards compatibility. These are hardly the equivalent situations 
> > you make out.
> However, you're selling webforms over XForms, you've not yet sold the 
> case for WebForms over HTML4 forms.  I'm no XForms fan, I have less 
> belief in what the HTML WG are doing than this organisation, but at 
> least they've realised playing at HTML ages isn't really too profitable.

Profit doesn't have to be the ultimate motive.

> > > - You can create compatible WebForms docs within the single 
> > > document, but it's far from trivial, and you miss out on quite a few 
> > > of the benefits.
> >
> > So there are benefits to WebForms after all?
> There's benefits to all sorts of things, they need to outweigh the cost 
> though to be used.  If I thought there was no value in Web Forms at all, 
> I wouldn't be wasting my time here, there's some value, and there could 
> be a miracle that meant it succeeded, I very much doubt it, but if it 
> did, then I need to ensure that it meets my needs as much as I possibly 
> can.
> At the moment WebForms offers very little, for average cost.  XForms 
> combined with other XML workflows offers a lot for an absolutely huge 
> cost.  Neither are providing a reason to move.  Web Forms has always 
> felt like a defensive measure from HTML browser vendors so as not to do 
> work on creating a real next generation user agent, and decrease the 
> reasons to switch to other richer UAs.  Formsplayer that combines SVG, 
> XBL, XForms and IE HTML browsing is a much more persuasive sale of what 
> a next generation user agent might look like.  Yes the cost of moving to 
> it is large, but it offers reasons to change, I've yet to find a reason 
> to change to Web Forms 2.

Many people have now found reason to be excited about HTML5 features.

On Sat, 8 Jan 2005, Jim Ley wrote:
> On Sat, 08 Jan 2005 22:50:52 +0100, Olav Junker Kjær <olav at olav.dk> 
> wrote:
> > Therefore it must be possible to implement the WHAT specs on top of 
> > Internet Explorer, using only non-binary extensions. XHTML, SVG, 
> > XForms etc. is simply out of the picture, although we might all agree 
> > that they are technically better for building rich applications.
> The problem with this argument is that you're pretty much saying "we 
> can't build a browser as good as IE"  If you can do it in script in IE, 
> we don't need web-forms, we'll just do it in script, if we can only do 
> it in script at the moment in IE, then that's a huge limitation in these 
> other UA's and they really should focus on getting them up to standard 
> and not waste their time trying to get everyone else to author 
> differently to cater for the less capable user agents.
> Because the technology is solving nothing that hasn't already been 
> solved in script (by your own definition above), what's the motivation 
> for it?  Web Applications are almost all script only currently, so 
> authors obviously aren't concerned about using script.

Making things easier and better in the long term is a good goal, even if 
it is already possible, though hard, to do these things using script 
today. For example, <canvas> could have been implemented using <div>s, but 
that wouldn't be anywhere near as sane, and indeed authors waited for 
<canvas> before doing this kind of thing.
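
A minimal sketch of the contrast (the drawing itself is invented for 
illustration): a few lines against the 2D context API replace what would 
otherwise be a large number of absolutely positioned <div>s, one per pixel 
or per segment:

```html
<canvas id="plot" width="200" height="100"></canvas>
<script>
  // Fill a rectangle and stroke a diagonal line. Emulating this with
  // <div>s would mean positioning and sizing an element per shape edge.
  var ctx = document.getElementById("plot").getContext("2d");
  ctx.fillStyle = "#ccc";
  ctx.fillRect(10, 10, 80, 50);
  ctx.beginPath();
  ctx.moveTo(10, 90);
  ctx.lineTo(190, 20);
  ctx.stroke();
</script>
```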

On Sun, 9 Jan 2005, Jim Ley wrote:
> On Sun, 09 Jan 2005 15:19:11 +0100, Olav Junker Kjær <olav at olav.dk> 
> wrote:
> > Well, the motivation is to make it easier to build web applications, 
> > by having a standard declarative way to build forms with validation, 
> > menus etc.
> These things aren't what's difficult right now, they're things you do at 
> most once (you either do it yourself, or you pick it off the shelf from 
> other people, Google and others re-use a lot of code they find on the 
> web.)

It may be possible with script, but it is easier without script.
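
As a sketch of what "without script" means here -- the declarative 
validation Web Forms 2 proposed, which later shipped in HTML5 (the form's 
action and field names are invented for illustration):

```html
<!-- Declarative validation: no script. The UA blocks submission and
     reports an error for an empty or malformed address. -->
<form action="/subscribe" method="post">
  <label>Email: <input type="email" name="addr" required></label>
  <input type="submit" value="Subscribe">
</form>
```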

> > This may give a tremendous increase in productivity for web authors.
> Not at all, there's not much being added to the Web Forms, and 
> importantly whilst this theoretical you can implement it in IE with 
> script is much talked about, there's no actual announcements of a 
> commercial quality implementation ever coming about.  I really can't see 
> it coming in less than 18 months, unless it is actually done as a 
> commercial product, or with big contributions from the WHAT-WG members.  
> Neither of which I think is likely, and neither of which I think would 
> be a good idea.

This is now under way.

> Without a WHAT-WG library to do all the web-forms stuff in IE, the web 
> application authors have just increased their work, as now, rather than 
> just having to implement the widgets they need, they will have to 
> implement them in a way compatible with Web Forms 2.0.

Sure, but the long term vision here is native implementations, not shims.

> > Of course it would be cool if WHATWG could extend the underlying 
> > platform with things you can't do with script now, but this is not 
> > going to happen as long as Microsoft are not part of WHAT (and even 
> > if they were, it would take years).
> Not at all, I'd encourage you to go and read Bill McCoy's points again.  
> The big problem is that building a bit better product to fight the 
> entrenched one is never going to work, you need to build something much, 
> much better to overcome the inertia that the product has.

It seems to be working well.

On Sun, 9 Jan 2005, Jim Ley wrote:
> On Sun, 9 Jan 2005 16:38:48 +0000 (GMT), J. Graham
> <jg307 at hermes.cam.ac.uk> wrote:
> > On Sat, 8 Jan 2005, Jim Ley wrote:
> > > The problem with this argument is that you're pretty much saying "we 
> > > can't build a browser as good as IE"
> >
> > That has nothing to do with whether the competition is better or not. 
> > Your statement is a good example of ignoring the context of technology 
> > and blindly assuming that success or failure is based entirely on 
> > technical merit.
> I'm disappointed you got that impression, that's actually the opposite 
> of what I was trying to get across, as soon as I read it back after 
> posting I was disappointed with how it sounded.  The non-IE user agents 
> represented here are all really good user agents, however in the 
> specific situation of extensibility with just scripting, they're pretty 
> weak (with the exception of mozilla of course) and rather than giving us 
> some elements that we can go away and implement using the extensibility 
> mechanisms of IE, I'd much rather see the effort spent purely in 
> developing extensibility mechanisms in all the user agents, then we 
> could achieve the same as Web Forms 2.0, and more.

While making Web Forms 2 and HTML5, I also wrote XBL2, which I hope 
browser vendors will implement. It provides an extension mechanism like 
the one you describe.

> I'm disappointed that the developers of 3 very good browsers are wasting 
> their time tweaking things around the edges of things that scripters can 
> already do, when there are already many interesting technologies out 
> there they could be implementing instead.

It appears that those other technologies were not considered as 
interesting as you describe.

On Sun, 9 Jan 2005, Jim Ley wrote:
> On Sun, 9 Jan 2005 23:42:04 +0100, Håkon Wium Lie <howcome at opera.com> 
> wrote:
> > Neither do I. Street HTML is a slightly humorous term we use at Opera 
> > to describe the mess we wade through. We don't encourage it, nor 
> > propose to build web applications on it. Indeed, our efforts in WHAT 
> > WG is meant to ensure that tomorrow's applications are *not* written 
> > in an undefined dialect.
> Current web applications use HTML almost exclusively as a rendering 
> language, they're not even using the document semantics available in 
> HTML, it's just script and CSS dangling off of the HTML elements you 
> need.
> Increasing the amount of HTML elements and form types out there doesn't 
> change this fact, they're not going to do quite enough - There's the 
> eternal problem of the declarative, it can only go 80% of the way there, 
> so you end up employing scripters who are much happier doing it all in 
> script, the disciplines being different.
> If the WHAT WG's aim is to improve Web Application authoring, then it's 
> scripting that needs to be helped, tweaking at the edge isn't going to 
> do anything.

We're improving scripting too.

> If the WHAT WG's aim is to discourage what they call street HTML, then 
> removing the ambiguity and the mess of the existing HTML and de-facto 
> specifications into something well grounded will be a lot more useful 
> than simply introducing more stuff that'll end up in the variously 
> implemented bin.


> > XML's draconian "stop-processing!" rule does not mix well with the 
> > natural laziness of authors or the last-minute quick fix required by 
> > their managers.
> Just because there is one ridiculous rule in XML when used for user 
> centric languages, doesn't mean that just building on a mess of ill 
> defined HTML is appropriate.  Sure the XML processing rules make it near 
> useless for use on the web if they're adhered to, but we can get more 
> rigorous than the HTML mess.

So you're arguing that instead of taking what browsers do and defining it 
rigorously, we should take something you admit is near useless, and then 
make browsers not do it?

That doesn't seem like a good strategy.

On Mon, 10 Jan 2005, Jim Ley wrote:
> On Mon, 10 Jan 2005 20:36:39 +1300, Matthew Thomas <mpt at myrealbox.com> wrote:
> > On 10 Jan, 2005, at 12:51 PM, Jim Ley wrote:
> > > ... Current web applications use HTML almost exclusively as a 
> > > rendering language, they're not even using the document semantics 
> > > available in HTML, it's just script and CSS dangling off of the HTML 
> > > elements you need.
> > 
> > Sure. If Web applications were semantic they'd need HTML block 
> > elements such as <login>, <register>, <order>, and <post>.
> We have all agreed HTML only has document semantics so web-applications 
> can never do more.  However I was meaning they don't use strong/em, or 
> p, or hn etc. So the HTML that is rendered is almost semantically empty 
> for example most web-mail products don't put the title of the email in 
> an Hn, this is what GMail thinks
> <DIV id=tt><SPAN style="FONT-SIZE: larger"><B>Re: [whatwg] Web Forms
> 2.0 - what does it extend , definition of same,</B></SPAN>
> The GMail page I'm typing on contains layout tables, span, div, b, this 
> frame doesn't even contain a title.  Web-applications can never with an 
> HTML base contain web-application level semantics - a good reason why we 
> shouldn't be looking to take HTML beyond any sort of stop-gap measures, 
> especially when XUL/XAML and more already exist to provide application 
> level mark-up.  They could however carry web document semantics to aid 
> non visual understanding, the fact they don't isn't something that needs 
> more specs to help.
> > If the What-WG's work increases the average fraction of any particular 
> > application that is written in HTML or XHTML rather than script and/or 
> > arbitary XML, we do benefit.
> Could you please clarify who the we are?  and why we benefit, for 
> example if the we is web application authors, then you need to talk in 
> terms of reduce development cost, or reduced testing cost, or better 
> result to our users etc.  (I think this is very tough given the 
> IE/scripting issue.)  Or if it's web-application consumers, how do they 
> benefit.  You're also missing one of the elements, non-arbitrary XML - 
> For example Bill McCoy's RSS reader won't be consuming arbitrary XML.

Web applications and documents are just two points on a continuum. I don't 
see why they couldn't share a vocabulary.

On Mon, 10 Jan 2005, Jim Ley wrote:
> On Mon, 10 Jan 2005 10:33:19 +0000, James Graham <jg307 at cam.ac.uk> wrote:
> > However the existence of non-semantic uses of HTML only proves that 
> > these are possible in the language, not that well written examples are 
> > not common.
> When looking at what Web-Application Developers need to create better 
> web-applications, surely looking at case studies of example 
> web-applications is highly relevant to the discussion?  I don't create 
> web-applications, I create intranet applications using web-technologies, 
> so I can't produce mine as an example, using things like GMail seems to 
> be a good way of approaching it.
> Where else are you looking at for examples of web-applications, the 
> techniques used and the problems faced by developers?
> It's my conclusion from seeing the available web-applications today, 
> that the web document semantics of HTML are almost completely useless 
> and un-used, what's needed is application level semantics.

Web application developers have been deeply involved at all stages of 
HTML5's development.

On Mon, 10 Jan 2005, Jim Ley wrote:
> On Tue, 11 Jan 2005 01:15:14 +1300, Matthew Thomas <mpt at myrealbox.com> wrote:
> > On 10 Jan, 2005, at 9:36 PM, Jim Ley wrote: .... Which are two 
> > separate issues, because Gmail could be using a <title> etc despite 
> > being an application.
> > 
> > However, Gmail requires that you log in, so search engine indexing 
> > isn't an issue;
> I don't believe search engine indexing to be relevant to 
> web-applications, web-applications simply aren't indexed, we don't want 
> search engines indexing our email, or our aggregated RSS or ... The 
> IMDB or Amazon's product pages are not web-applications, they're brochure 
> pages, they're very different use cases IMO.
> As I seem to have a very different idea on what Web Applications are, 
> could someone please help me out with the definitions being used by the 

A Web application is an interactive document, basically. I don't know how 
else to really define it.

> > And what would be the benefit of that? Most authors don't care about 
> > semantics.
> One of the main arguments I repeatedly see for the WHAT-WG stuff, is 
> more semantics in the mark-up, yet you're now arguing that authors don't 
> use it anyway, what's the point of having it if the authors aren't going 
> to use it?  The cost of developing this stuff takes implementation time 
> away from more useful things.  The reasons GMail doesn't support 
> minority browsers, is the weakness of their scripting engines, maybe the 
> time spent doing Web-Forms was invested in those, we'd see more 
> web-applications using it.  GMail is a good example of a Web-Application 
> supported in many user agents, almost all are written for IE alone.

In practice browser vendors work on all these things.

> > > They could however carry web document semantics to aid non visual 
> > > understanding, the fact they don't isn't something that needs more 
> > > specs to help.
> > 
> > How are those issues related? The rest of the Web needn't grind to a 
> > halt while we wait another decade for Google to fix their markup.
> GMail is an example, it's not IME a bad example, it's a typical example 
> of the web-application, so I think it's a very useful thing to look at 
> when looking at what features are relevant when building specifications 
> for future ones.  Now I realise use cases etc. for the various features 
> of Web Forms 2 and Web Applications are lost in the pre-public portion 
> of the WG, and no-one seems willing to re-visit them, but I still like 
> to think of the development in terms of what is useful to 
> web-application developers.

The GMail team has been actively involved in the HTML5 work.

> > It is quite impressive for you to have snipped my answer to your 
> > question and then to have asked it.
> Unfortunately, that's not an answer to who the WE are, in terms of 
> web-applications, Amazon and IMDB are certainly not relevant, they're 
> not web-applications, they publish information, that's a very different 
> use case, and isn't a use case that Web Forms 2.0 or Web Applications is 
> addressing.

They're all part of the Web. They're all relevant.

> They're all in the interaction sphere which is irrelevant to search 
> engine ranking.  Equally the arguments over search engine ranking and 
> semantic mark up really help, IMDB puts the title of the movie in a 
> STRONG element, and Amazon in a B element, not a heading as would seem 
> appropriate.

Things will improve over time. Indeed, there has been much progress over 
the past few years. (Still a long way to go.)

> So still we don't have examples of Web-Applications that use the 
> document semantics of HTML, even if we extend the definition of web 
> application to include amazon and imdb.  In any case using scripting for 
> behaviour (form validation etc.) does not prevent search engines finding 
> content, very few people use javascript to include content.


> > > You're also missing one of the elements, non-arbitrary XML - For 
> > > example Bill McCoy's RSS reader won't be consuming arbitrary XML.
> > 
> > That's not relevant. We're talking about what applications are written 
> > in, not about what they consume.
> Are we?  I thought we'd been discussing Bill McCoy's excellent points on 
> how web-applications are put together, one of which is that consuming 
> appropriate XML semantics in your web-application is a good thing - 
> that's something that XForms and XBL and others allow, it's not 
> something that the work here is contributing to.

How would you like it to contribute to that aspect?

On Mon, 10 Jan 2005, Jim Ley wrote:
> On Mon, 10 Jan 2005 14:22:46 +0000, James Graham <jg307 at cam.ac.uk> 
> wrote:
> > They're certainly web applications in my definition - they provide an 
> > interface which allows me to retrieve, view and manipulate data.
> What manipulation can you do on IMDB, and if we ignore the purchasing 
> part of amazon, what manipulation do you have there?  To me these are 
> simply websites, if they're not, then just about everything is a 
> web-application.

Yes, exactly.

> > I feel I must have missed your point here. Why does it matter if a 
> > (proprietary) web application (under your definition) consumes semantic 
> > markup, non semantic markup, binary data or anything else?
> Because consuming semantic mark-up means that user agents can understand 
> the semantics of the mark-up; just rendering an HTML'ised web-document 
> view isn't particularly useful, as it makes it extremely difficult to 
> re-purpose the data into other views.  Keeping the data semantic until 
> the last possible layer is a very good idea.


> > What matters is whether the markup sent to the client is semantic, so 
> > that a user can interpret it without requiring a visual rendering.
> Except of course you're ignoring yet again that web-document-level 
> semantics are near useless. You can guess that the <h1> is the subject 
> of the email, but how do we guess which is the from address? Hmm, I 
> guess that's the bit that matches a valid email address production - oh 
> no, that might be the to, or maybe the cc. Web-document semantics don't 
> do a great job of really delivering the semantics of applications.

Microformats are a way to address this.
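
A rough sketch of the idea (the "vcard", "fn" and "email" class names are 
real hCard vocabulary; the surrounding structure is invented here for 
illustration):

```html
<!-- An email's sender marked up so a scraper can key on class names
     instead of guessing which string is the from address: -->
<div class="vcard">
  From: <a class="fn email" href="mailto:jim@example.org">Jim Ley</a>
</div>
```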

> An html version of a blog is indeed an alternate of the RSS, yet, 
> without knowing the HTML template used you cannot create an RSS view 
> from the HTML rendering, you can however create an HTML rendering from 
> the RSS - the RSS has sufficient semantics, the HTML doesn't.  This is 
> why I believe the semantic level arguments are so weak for the WHAT-WG 
> work, HTML simply doesn't cut it.

HTML5 now includes a defined mapping to Atom.

> > Web Forms and more generally "HTML 5"
> HTML 5, where can I see the drafts of this specification?


On Mon, 10 Jan 2005, Jim Ley wrote:
> On Mon, 10 Jan 2005 17:07:32 +0000, James Graham <jg307 at cam.ac.uk> 
> wrote:
> > Jim Ley wrote:
> >>On Mon, 10 Jan 2005 14:22:46 +0000, James Graham <jg307 at cam.ac.uk> 
> >>wrote:
> > Isn't that a bit like saying "If you ignore the email part of 
> > GMail..."?
> No it's not, because purchasing is completely irrelevant to Matthew's 
> benefit of not using script in the web-application, since the only 
> benefit he gave was search engine discovery (indeed that's the only 
> benefit given anywhere in recent threads).

Not requiring authors to hand-implement everything just makes things 
easier.

> The web-forum and comment on data version of web-application is very 
> well addressed in existing HTML, could you provide exactly how the 
> WHAT-WG work is solving particular use cases in this scenario - yep, I'm 
> still asking for use cases months down the line, because I'm still not 
> hearing any.

In the case of forums, HTML5's new elements help by making it easier to 
style the forum without an excess of <div>s and class="" attributes.
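
For instance, a post might be sketched like this (the element names are 
HTML5's; the specific structure is just an illustration):

```html
<article>
  <header>
    <h2>Thread title</h2>
    <p>Posted <time datetime="2005-01-10">10 January 2005</time> by Jim</p>
  </header>
  <p>Post body.</p>
  <footer><a href="#reply">Reply</a></footer>
</article>
```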

> > >Because consuming semantic mark-up means that user agents can 
> > >understand the semantics of the mark-up
> >
> > I understood that it was the web application itself that was consuming 
> > the markup, not the UA.
> The ideal is both; users with different needs have big problems using 
> generic UAs. By providing semantic mark-up, it's trivial to create new 
> views on the same data - think how easy Matthew Somerville's Accessible 
> Odeon scraping would've been had they based their system on iCal.

Do you have any suggestions for how to apply this to HTML5?

> > In this case, the UA has no need to ever come into contact with the 
> > RSS, just with the HTML (or XForm or XUL or whatever).
> but that's bad, it puts extra layers in the system. If we don't need 
> another transformation where information is lost, we shouldn't have it; 
> we're supposedly building something for the future here. Email 
> transformed to javascript, transformed to HTML, transformed to a 
> particular CSS-based rendering - that introduces extra points of 
> complication, and we lose semantics at each level.  This isn't a good 
> idea!

How would you keep the data in the original format?

> > As a user I know the from address because it has the string 'from:' 
> > before it (or some other such thing).
> In a particular rendering you know that, if you have problems accessing 
> that particular rendering there's nothing you can do other than hope 
> someone who can understand is able to "trivially scrape" the 
> information.  Something of course that breaks as soon as the format 
> changes.  If they used standard semantics for the transport that would 
> be simple.

Microdata is intended to aid with such scraping.
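
As a minimal sketch (itemscope, itemtype and itemprop are the actual 
microdata attributes; the vocabulary URL and property names are made up 
here for illustration):

```html
<article itemscope itemtype="http://example.org/email">
  <h1 itemprop="subject">Meeting notes</h1>
  <p>From: <span itemprop="from">jim@example.org</span></p>
</article>
```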

> > in most situations that's not what I want to do; I just want to read 
> > my email.
> Of course, in almost all situations providing a jpeg would be fine too; 
> the motivations for semantic mark-up etc. aren't based on the most 
> common situations, they're based on the ability it gives to enable users 
> who aren't covered by that standard solution.


> > The problem for language design is that there's no way to provide 
> > those semantics for every possible format of information that one 
> > might want to transmit.
> How do you mean? It may mean private vocabularies for certain things, 
> but most web-applications are well covered by some sort of common format 
> for the majority of their data.  Even social networking sites have 
> agreed formats for it, it's really not difficult.

Could such data just be exposed by giving access to the underlying files 
using <link>, say?
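
Something like the following, say (the file names are hypothetical; the 
rel and type values are standard):

```html
<link rel="alternate" type="text/calendar" href="listings.ics">
<link rel="alternate" type="application/atom+xml" href="feed.atom">
```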

> > There's not going to be an existing language that meets my 
> > requirements so I have to invent one. At the point that I've invented 
> > a language, semantics are irrelevant because no general purpose client 
> > application is going to know them anyway.
> No, because there'll be a lot of commonality, and as long as your base 
> language is extensible enough to allow re-use of the common elements, 
> then you have no problem. A user agent that can understand "email 
> subject" and "email body" won't die if it also gets "email from" - it 
> may not make use of it, but there's value in the email subject and 
> email body.

Microdata will hopefully enable this kind of extension mechanism.

On Mon, 10 Jan 2005, Jim Ley wrote:
> On Mon, 10 Jan 2005 20:47:03 +0200, Henri Sivonen <hsivonen at iki.fi> 
> wrote:
> > It wouldn't make sense for browsers to support any and all possible 
> > internal formats of the server-side apps.
> I never suggested it would, I suggested that it would make sense that a 
> web-application framework provided for the ability to support such 
> formats, similar to how XBL and XForms and similar allow it for XML 
> formats.
> Rather than what is being proposed here: that we continue down the 
> no-public-semantics-at-all path of current web-applications.

I'm certainly open to proposals to expose such data in a way that Web 
authors would be willing to use.

On Tue, 11 Jan 2005, Jim Ley wrote:
> On Tue, 11 Jan 2005 10:09:59 +0000, James Graham <jg307 at cam.ac.uk> 
> wrote:
> > Jim Ley wrote:
> > >The web-forum and comment on data version of web-application is very 
> > >well addressed in existing HTML, could you provide exactly how the 
> > >WHAT-WG work is solving particular use cases in this scenario - yep, 
> > >I'm still asking for use cases months down the line, because I'm 
> > >still not hearing any.
> > >
> > Reread section 2 of the Web Forms spec for some of the more obvious 
> > improvements.
> I'm looking for use cases, I note you fail like everyone else to 
> actually deliver one.  The ability to restrict input client side to your 
> productions exists, there's no missing functionality on the web today, 
> certainly data entry in such systems already does it!

It's easier now, though.

> > The required attribute (section 2.7) provides a convenient mechanism 
> > for indicating that users cannot post without a valid email address 
> > (for example). Again, this will be possible without needing any client 
> > side javascript.
> Yet you've not explained the use case for not requiring client-side 
> javascript, quite apart from the fact that all of this absolutely 
> requires javascript in all current user agents, and in effect all user 
> agents for the lifetime of most pages authored in the next 2 years.

Authors like not having to hand-write such code.
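
A minimal sketch of the declarative version (the form action and field 
names are made up for illustration):

```html
<!-- No validation script: the UA refuses submission without a valid,
     non-empty email address. -->
<form action="/post-comment" method="post">
  <label>Email: <input type="email" name="email" required></label>
  <input type="submit" value="Post">
</form>
```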

> At the same time, things like email get less rigorous validation than 
> is currently used (even if current email validation is almost always 
> syntactically incorrect), since such scripts generally test that the 
> TLD is a valid one.

That kind of validation won't scale with ICANN's recent forays.

> > Indeed. They could have used HTML 4's <link rel="alternate"> to point 
> > to the iCal data from the HTML page. Sadly, the convenience of 
> > building up HTML directly from the underlying database (not to mention 
> > the incompetence of Odeon) meant they didn't feel the need to insert 
> > an extra layer of abstraction between their db and the web page.
> Exactly, which is why it makes sense to create a web-application 
> language that can consume more complicated formats directly; then it's 
> not an extra page to provide and an extra level of abstraction, you are 
> simply rendering the semantic data.  It's a good use case: with the 
> current HTML crop we have to create n documents, one for each view - 
> iCal, voice, HTML, etc.  If web-application languages had things such as 
> XBL, we would be creating transformations from a single rich data 
> source.


> > But that's a parallel problem to the problem of a suitable language 
> > for creating a Web-based interface to the data (the actual topic at 
> > hand).
> I understood the topic at hand is improving the robustness and ease of 
> authoring of web-applications? Web-based interfaces do not mean HTML, 
> even today.

I'm not sure what you mean here.

> > Not at all. There are two questions here - can we make the front ends 
> > to web documents and applications accessible and can we make the 
> > underlying data available for repurposing.
> This isn't true, the questions are very much linked, if you're learning 
> disabled and use a symbolic language like BLIS as your interface to the 
> world, no amount of HTML tweaking is going to make a service accessible, 
> a rich data format does make it possible.
> The argument about WHAT WG work has focussed a lot on accessibility 
> benefits, yet these are only tweaking areas that are already solved 
> problems (the accessibility problems of current Web Applications are 
> mostly down to laziness).  Real accessibility benefits for the 
> harder-to-reach members aren't being addressed here.

If you have any proposals to address such mechanisms, please let us know.

> > But making the base language extensible enough that it can be used 
> > for all possible situations also makes it unwieldy, unoptimised and 
> > hard to use or understand.
> I assume you're stating this as a fact having reviewed the currently 
> available solutions?  Some of which are being used by an awful lot of 
> developers using good frameworks that work well.  There are even W3C 
> standards on some of it.  Could you point me at some justification for 
> your statements?

Henri's statements here seem pretty uncontroversially true. The more 
extensible a mechanism, the more complicated and unwieldy it ends up 
being.

On Wed, 12 Jan 2005, Jim Ley wrote:
> On Wed, 12 Jan 2005 10:15:20 -0800, Brad Fults <bfults at gmail.com> wrote:
> > You seem to be completely missing the point that WF2 support is not 
> > required for the same user experience that you get right now from any 
> > given site.
> No, but if you don't then you actually increase the complexity of 
> authoring, not decrease it.  Currently form validation is performed by 
> script; to still have this script in a Web Forms world means you have to 
> start including checks to see if you have a Web Forms UA on top of the 
> existing script (and we've not had a way to detect Web Forms 2 UAs yet, 
> other than the DOM Implementation hasFeature method, which requires 
> full support; as I imagine partial support arriving first, it's not 
> going to help).
> So whilst you could offer identical user experiences the cost will 
> increase rather than there being any benefit in using Web-Forms 2.0 
> features.

This is true during a transition, yes.

> > Also, I would submit that users of Firefox, Opera, and Safari are more
> > conscious of updates and will upgrade voluntarily (or if urged by an
> > update manager), so delivery of the technology is not a problem.
> Interesting submission; if it's a sustainable one then the 16 million
> reported Firefox downloads equates to considerably fewer than that
> number of users.  However we measure things, though, non-IE browsers on
> the desktop are a minority, so just having Opera, Firefox and Safari
> users upgrade more isn't actually all that likely to achieve a large
> penetration for Web Forms 2.0 clients, which is what you need before
> you can start dropping the script for validation of existing clients.
> As it also would cost me 15 USD to upgrade Opera and at the moment my
> version does a great job, I'm not completely sure I'd bother.

Opera's now free, and Opera, Safari, Firefox, and Chrome now have more 
market share than IE in many markets.

> > As far as wasting time on WF2 instead of XUL or whatever else: you 
> > make the very large assumption that if those technologies were fully 
> > integrated into Opera and Safari, they would indeed be used by a large 
> > percentage of web authors.
> I'm not making that assumption, I don't think it's all that likely that 
> any of the many proposed future technologies will attain much traction 
> for a long time.  However technologies that offer real benefits have a 
> better chance than WF2.

Which technologies did you have in mind?

On Wed, 19 Jan 2005, Jim Ley wrote:
> On Wed, 19 Jan 2005 13:55:30 +0000 (UTC), Ian Hickson <ian at hixie.ch> 
> wrote:
> > Note the current (work in progress) proposal to allow iCal content to 
> > be mechanically integrated into HTML documents in a 
> > backwards-compatible manner:
> Yes, but developing mappings to HTML with magic classnames and encoding 
> everything in HTML is a lot more work than using just the native format; 
> sure, it's likely to be useful for some things, maybe even iCalendar, 
> but not for all possible richer data formats.

Certainly, but supporting all data formats is not really scalable compared 
to having a standardised syntax like XML or Microdata for carrying data 
in HTML documents.

> > > Sure I would like it it everyone communicated everything freely 
> > > using standard languages correctly. Realistically, it isn't going to 
> > > happen.
> > 
> > Sad to say, but you're right.
> I don't think of it as sad (there's good reasons not to expose rich 
> data); however, not having the ability to do it, as with the current Web 
> Forms proposals, is something I do feel sad about.

How would you suggest we enable such data to be exposed?

On Wed, 5 Jan 2005, Jim Ley wrote:
> On Wed, 05 Jan 2005 09:19:17 -0500, Matthew Raymond 
> <mattraymond at earthlink.net> wrote:
> > ...should look like this...
> > 
> > <img src="image.png">
> > 
> >    What are everyone's thoughts on this?
> It makes quality assurance harder: since the visual indication of alt is 
> not obvious from testing, automated scripts are used which can easily 
> ensure that no needed alt attributes are missed.  Making it implied 
> makes this harder.
> It also makes user agents that use the absence of an alt attribute as a 
> trigger for fix-up behaviour unable to tell when they should carry out 
> the fix-up, either leading them not to bother attempting it or to 
> attempt it so aggressively that they have to spend loads of time doing 
> it on each and every image.
> With both of these reasons, and not a single reason from you as to why 
> it would be beneficial, I think it's an easy decision to leave it as is.

It's not clear to what you are referring here.

On Thu, 6 Jan 2005, Jim Ley wrote:
> On Thu, 6 Jan 2005 16:34:28 +0000 (UTC), Ian Hickson <ian at hixie.ch> 
> wrote:
> > What you are describing sounds quite similar to XSLT and XBL. Are 
> > there requirements not met by those technologies that you had in mind?
> Well the requirements for most of Web Forms work is met by other 
> technologies, what's relevant is the lack of support and integration 
> with legacy HTML content and browsers.

Agreed. I think we are addressing these issues adequately.

On Thu, 6 Jan 2005, Jim Ley wrote:
> On Thu, 6 Jan 2005 17:12:17 +0000 (UTC), Ian Hickson <ian at hixie.ch> 
> wrote:
> > This is a problem with the UAs, though, not with the specs.
> The fact Mozilla's XHTML implementation is too poor to be used, and 
> IE's XBL engine is absent, and ... are all problems with UAs, meaning 
> authors have to be hobbled with Web Forms 2.0 rather than good 
> replacements grounded in well-specified technology, unlike HTML.
> The whole rationale of Web Forms 2.0 seems to me based on overcoming UA 
> problems in legacy specifications.

Not the whole rationale, but certainly there is a lot of that, yes. One 
cannot ignore the real world and be successful, generally speaking.

> > I assume you are asserting that a use case for this feature is 
> > allowing users to upload images of a specific size so that those 
> > images can then be targetted at specific UAs for use as wallpapers?
> More likely sent direct: for the 4-megapixel camera in my cell phone 
> delivering images to other cell phones, it is not useful to use the full 
> 4 megapixels, neither is it useful to the user to upload hundreds of 
> megabytes for it.  Of course, at the same time, publishing to the web 
> may well usefully want the 4 megapixels uploaded.
> The reason why server sampling is not a good idea is that uploading huge 
> files is not a user experience people enjoy.

HTML5 now provides for ways to do client-side sampling and subsequent 
upload.

> > If so, then it would seem to me that a better, more forward-looking 
> > design for such a service would accept images of any size, the bigger 
> > the better, and would then use high quality resampling to provide 
> > users with images of the appropriate size for their device.
> I really don't understand how a company who I understand makes most of 
> its income in an environment where bandwidth is so expensive are happily 
> suggesting the "bigger the better" approach.

In the WHATWG, unless explicitly stated, one should assume that comments 
are made on behalf of the person sending the comments, not their employer.

On Fri, 7 Jan 2005, Jim Ley wrote:
> On Fri, 7 Jan 2005 07:45:51 +0100, Håkon Wium Lie <howcome at opera.com> 
> wrote:
> > I think you raise a valid point; ideally the specification will become 
> > smaller rather than larger from now on.
> How?  There are still lots of places that are under-defined by my 
> reading; does this mean that there are features at risk that we'll 
> likely see removed, and could we be told what they are?

Any feature not implemented is at risk.

> > WF has similar dependencies on DOM but this seems less scary since DOM 
> > is already deployed.
> WF also has dependencies on completely unspecified or under-specified 
> HTML that, rather than being specified at all, is being relied on to be 
> "what browsers are doing at the moment".  I agree the page count 
> comparison was a cheeky one when the explicit dependencies are included, 
> but it would be nice to see the HTML you're building on well defined 
> before it's built on.

I hope you are now satisfied that we have done that.

> > Perhaps. Personally, I don't hear the thunder.
> I'd be interested as to where you see the thunder for the Web Forms 
> stuff, as outside this list I mostly hear boredom or derision.

There is much excitement now, it appears.

On Fri, 7 Jan 2005, Jim Ley wrote:
> On Fri, 07 Jan 2005 09:00:21 -0500, Matthew Raymond 
> <mattraymond at earthlink.net> wrote:
> >    Semantics don't disappear simply because there's no standardized 
> > pronunciation.
> Correct, so Ian's suggestion to tell if an element is semantic or not by 
> looking at what you'd do in an aural stylesheet is really a complete 
> red herring.

It's only intended to be a rule of thumb, but I do think it's pretty good 
at that. No?

On Sat, 8 Jan 2005, Jim Ley wrote:
> On Sat, 8 Jan 2005 23:12:40 +1300, Matthew Thomas <mpt at myrealbox.com> 
> wrote:
> > On 8 Jan, 2005, at 3:47 AM, Ian Hickson wrote:
> > > would need to implement all kinds of funky things in the rendering 
> > > engine (e.g. how to handle a drag when the elements in question are 
> > > in a 0.5 opacity block rotated 47 degrees
> > 
> > Implementation complexity circa 2005: use a dragging-in-progress 
> > cursor.
> Implementation circa 2000 in javascript in a popular user agent has no 
> problem with the last....  I've written lots of drag and drop form 
> widgets which use opacity under the cursor.  I'd never done one with a 
> 47-degree rotated box, but a quick hack here demonstrated that IE had 
> little problem dragging such an element.
> http://jibbering.com/2005/1/drag-rotate.html
> The extension to dragging an element in a table or similar, is simply a 
> matter of cloning the element at the start of the drag, having that 
> absolutely positioned under the cursor, adding something to suggest 
> where it's dropped then organising the dropping - It's trivial.
> I certainly don't see any problem with the implementation in an IE 
> javascript implementation of web-forms 2.0. Now, I appreciate Ian is 
> from Opera, and maybe Opera's engine is significantly poorer than IE's 
> at achieving this sort of thing, but we can certainly prove useful 
> things like move can be done, rather than just the really trivial 
> move-up and move-down.

This particular topic became moot when we removed the repetition block 
feature altogether.

> > If not, why will it be any great catastrophe that the SVG plug-in 
> > produced by Bill McCoy's colleagues at Adobe doesn't support Web Forms 
> > 2 in its embedded HTML either?
> I also don't see any real problem with that; an SVG rendering engine 
> will be pretty optimised at rendering such changes - more optimised than 
> IE is, graphics compositing being its core functionality.  I've never 
> tried such a form widget in an SVG UA, but IE's filters, which achieve 
> something similar to SVG's, aren't a problem.

The aforementioned plugin is no longer under development, as I understand 
it.

On Thu, 20 Jan 2005, Jim Ley wrote:
> On Thu, 20 Jan 2005 19:56:04 +1300, Matthew Thomas <mpt at myrealbox.com> 
> wrote:
> > If not, back to my previous question: Why will it be any great 
> > catastrophe that <input type="move">, like the whole of the rest of 
> > Web Forms 2, is not supported in the embedded HTML of a plug-in 
> > implementation of SVG? And if it will not be any great catastrophe, 
> > then why are you raising the prospect of SVG filters applied to 
> > draggable items (and why did you mention opacity earlier), if not in 
> > an attempt to make <input type="move"> seem more complex than it is?
> input type move is pretty simple to implement in IE with just scripting; 
> even with inline SVG provided by the plugin, it would still just work, 
> and the IE filters etc. again would still just work.  I certainly do not 
> feel this should be rejected based on implementation complexity in IE, 
> when it's considerably easier than many other features.

Again, this point is now moot with the removal of that feature.

On Fri, 21 Jan 2005, Jim Ley wrote:
> On Fri, 21 Jan 2005 12:36:34 +0000 (UTC), Ian Hickson <ian at hixie.ch> 
> wrote:
> > I love the idea of type="move". It's the kind of thing I would love to 
> > put in WF3. But it's simply not realistic to expect WF2 implementors 
> > to spend the time it would require to make that feature work 
> > correctly.
> Why presuppose this? I simply do not understand the point of declaring 
> that implementors are not able to do something, when the whole point of 
> the implementation phase of the spec-authoring process is finding out 
> whether authoring it is practical or not.
> Put it in; when there's real implementation feedback, then you can 
> judge. Don't reject it simply based on your own gut feelings on how much 
> effort it would be for you to test it on one or two user agents.

I guess we have different ways of writing specs.

> I am confident input move is easier to implement than many other things 
> in the spec on my implementation platform. I'm not asking for the tough 
> bits on mine to be removed because they're too tough yet; I might give 
> that feedback in the future, but right now I'm willing to suck it and 
> see.  You agree it's a valuable feature: put it in, and let's find out 
> when we're actually implementing this stuff if it's practical.

Unfortunately we won't know either way, since the feature is now gone.

On Tue, 11 Jan 2005, Jim Ley wrote:
> On Tue, 11 Jan 2005 10:47:11 -0500, Matthew Raymond
> <mattraymond at earthlink.net> wrote:
> > martijnw wrote:
> > > See:
> > > https://bugzilla.mozilla.org/show_bug.cgi?id=102695 - Treat some
> > > transparent elements as "transparent to events"
> >    I think that CSS3 would be a better target for this, and thus it
> > should probably be addressed on the W3C www-style mailing list:
> I agree.
> > CSS:
> > | #special-shape {
> > |   background-image: url(special.png);
> > |   crop: background;
> > | }
> re-use the appropriate parts from how it's done in SVG maybe:
> http://www.w3.org/TR/SVG/interact.html#PointerEventsProperty

I've left this issue up to the CSSWG.

On Fri, 21 Jan 2005, Jim Ley wrote:
> On Fri, 21 Jan 2005 12:42:27 +0000 (UTC), Ian Hickson <ian at hixie.ch> 
> wrote:
> > I'm reluctant to add new features at this late stage. Does this demo:
> > 
> >   http://whatwg.org/demos/date-01/
> > 
> > ...not handle the case well enough?
> No, please see all the previous unconcluded discussion; I'm very much in 
> favour of Matthew's proposal here, it does a very good job of addressing 
> the fallback problems with dates.

I disagree that we need elaborate schemes intended only to handle the 
transition period.

On Sun, 23 Jan 2005, Jim Ley wrote:
> On Sun, 23 Jan 2005 02:07:49 +0100, Olav Junker Kjær <olav at olav.dk> 
> wrote:
> > Matthew Raymond wrote:
> > >    Some webmasters, however, may want everyone to enter the date in 
> > > the exact same format, for the sake of consistency (which would be 
> > > useful to simplify employee training, for instance). This can be 
> > > accomplished by assigning the value "entry" to the |applyon| 
> > > attribute:
> > 
> > This defeats the idea that the datetime control should be localized 
> > and feel native.
> feel native?!  The majority of Web UI features don't feel native; 
> Mozilla and Opera use non-native controls (a sensible approach for 
> multi-platform development, but it means they often don't follow native 
> conventions).

It is clear that UA implementors would like their controls to feel native, 
even if their execution may sometimes leave something to be desired.

> I understood the idea of form controls was consistency, not native 
> control. If it is native control (which I'd much prefer, as I find it 
> very hard to learn new UIs) then I think there'll be huge difficulty in 
> implementing it, as of IE+script, Opera, Mozilla etc., only Safari has 
> really easy access to native controls.

As you said earlier, let's see what they can do, before prematurely 
deciding something is a huge implementation burden.

> > By constraining the date control to follow a simple format string, a 
> > lot of UI power is lost. (Btw. the point of hiding the hint for WF2 
> > compliant browsers is lost if WF2 users should enter data in the exact 
> > same format.)
> but "UI power" isn't always the only important thing. Ease of data 
> entry, which I think is Matthew's main point here, is such that forcing 
> the format is useful - of course you don't need to use input 
> type="date" in this scenario (do you in any?), but it is an important 
> use case. I've yet to see a date control on any platform that is as fast 
> for entering a date as an <input type="text"> - of course that's only 
> relevant if you know the date to input; lots of the time you don't, 
> which is why we have the richer controls, because they provide 
> information.

For the native control, the spec doesn't preclude the UA from using this 
kind of data entry, but the Web author can't know what format the UA will 
expect, and so can't provide the format string.

> > But I think it's a good idea to be able to specify the exact 
> > submission format, since this will make adoption and backwards 
> > compatibility easier.
> Indeed, and picking a format string vocab shouldn't be hard, there's 
> lots of prior art.

I ended up picking just a simplified form of ISO 8601's formats. This 
doesn't give as much flexibility, but it's relatively easy to add a 
conversion step on the server side so it shouldn't be a huge issue.
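
As a sketch of that conversion step (the dd/mm/yyyy target format is 
chosen purely for illustration):

```javascript
// Convert the simplified ISO 8601 value a date control submits
// (e.g. "2005-01-23") into a local display format on the server side.
function toLocalFormat(submitted) {
  var m = /^(\d{4})-(\d{2})-(\d{2})$/.exec(submitted);
  if (!m) throw new Error("not a YYYY-MM-DD date: " + submitted);
  return m[3] + "/" + m[2] + "/" + m[1]; // dd/mm/yyyy
}

console.log(toLocalFormat("2005-01-23")); // "23/01/2005"
```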

On Sun, 23 Jan 2005, Jim Ley wrote:
> On Sun, 23 Jan 2005 19:32:01 +0100, Olav Junker Kjær <olav at olav.dk> 
> wrote:
> > However, I think implementations should use a date picker that looks 
> > and feels more or less native by default.
> The problem with this of course is that it's near impossible to do that 
> and allow for the customisation required in form elements via CSS.  
> Whilst theoretically you could look at all the registry settings that 
> configure how Windows rendering and behaviour work, and then spend lots 
> of code authoring a native-where-relevant but otherwise stylable date 
> picker, it would be ridiculous.

The theoretical solution to this is XBL, but I agree that it's not an 
easy fix. I don't know what to do at this point though to make this 
easier. If anyone has any suggestions, please let me know.

> Equally I don't think most people have an expectation that a web-page 
> behave as their application, people I've talked to do have different 
> paradigms, I'd be interested in some real studies on this, but I think 
> we should avoid talking about nativeness as an attraction.

I agree that studies on this subject would be helpful.

> > In the case of date pickers, the sensible default is to have users 
> > pick or enter dates in the format of their own culture, and display 
> > the dates in an unambiguous format (that is, with named months).
> except of course this destroys any sort of even gross control by the 
> designer. I've mentioned this before, and I believe it's not been 
> resolved: as a designer you need to know if the date element is going 
> to take up 15 ems square, or 1 line 5 ems wide.  Even the gross 
> difference between "23/1/05" and "Sunday 23rd January 2005" is something 
> I'm concerned about.

Agreed. I'm not sure how to fix this; we can't very well dictate UI.

> > I think it's more important that a date is unambiguous than that it's 
> > easy to enter.
> That depends on the use case.

Does it?

> > It's fast to type a date in your native format; however, it's not as 
> > fast if you have to parse a format hint and rearrange the day and 
> > month in your head, because the page author decided that every user 
> > should use the same date format regardless of their culture.
> Except of course that a significant number of web applications are not 
> cross-cultural, so in these, moving away from the default understood by 
> the users of the application to a default which no-one is known to 
> understand as a first format doesn't actually improve the usability of 
> the application.

It seems reasonable to assume that a user will be familiar with their 
locale's own conventions, though.

On Sun, 23 Jan 2005, Jim Ley wrote:
> On Sun, 23 Jan 2005 19:48:41 +0100, Olav Junker Kjær <olav at olav.dk> 
> wrote:
> > Date entry should be either unambiguous (e.g. picking from a calendar) 
> > or consistently in the same format across sites. Otherwise users will 
> > buy plane tickets for the wrong dates and will be unhappy.
> The current state of the web does not involve unambiguous dates, yet the 
> problem you describe is not widespread.

I do not recall recently purchasing tickets from a service that did not 
use unambiguous dates, so I reject the premise of your statement.

On Mon, 24 Jan 2005, Jim Ley wrote:
> On Mon, 24 Jan 2005 00:51:10 +0100, Olav Junker Kjær <olav at olav.dk> wrote:
> > Jim Ley wrote:
> > >>Date entry should be either unambiguous (e.g. picking from a 
> > >>calendar) or consistently in the same format across sites. Otherwise 
> > >>users will buy plane tickets for the wrong dates and will be 
> > >>unhappy.
> > >
> > > The current state of the web does not involve unambiguous dates, yet 
> > > the problem you describe is not widespread.
> > 
> > I just took a quick survey of different airline sites, and all of them 
> > use dropdowns and calendars to ensure unambiguous date entry.
> Which I think illustrates my point...  Switching these sites to Web 
> Forms 2.0 datetime as is currently defined won't happen because the 
> fallback behaviour is no good.

Yup, they'll wait until browsers support it more widely.

> the format proposal begins to make this more likely as you're guaranteed 
> formatting hints, but even then I think it will be a struggle.
> I know my proposal that did allow rich fallback to the current state was 
> rejected which is fair enough, but I think something is needed to 
> improve the fallback from the current state!

I think we should see how long it takes for the controls to get 
implemented and adopted before spending too much time designing features 
specifically for the transition period.

On Thu, 27 Jan 2005, Jim Ley wrote:
> On Thu, 27 Jan 2005 13:04:52 +0000 (UTC), Ian Hickson <ian at hixie.ch> 
> wrote:
> > Having a single format means libraries can be written that can then 
> > just be used directly, instead of having to handle dates individually 
> > for each site as we do now.
> Libraries could already have been written, indeed many have, but they've 
> never proved popular; the reasons aren't on the client side, and trying 
> to change the behaviour of server-side authors is surely not within the 
> remit of the WHAT-WG?

I reject the premise of your question; libraries are quite successful.
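
To make the point concrete, here is a minimal sketch (mine, not from the
thread) of the kind of reusable parser a single wire format enables: with
every site submitting ISO 8601 dates, one small function covers them all.

```javascript
// Sketch: parse a WF2-style ISO 8601 date ("YYYY-MM-DD") into a Date,
// returning null for anything else, including site-specific formats
// like "30/01/2005" that would otherwise need per-site handling.
function parseISODate(s) {
  const m = /^(\d{4})-(\d{2})-(\d{2})$/.exec(s);
  if (!m) return null;
  const d = new Date(Date.UTC(+m[1], +m[2] - 1, +m[3]));
  // Reject out-of-range values like 2005-02-30, which Date would
  // silently roll over into the next month.
  if (d.getUTCMonth() !== +m[2] - 1 || d.getUTCDate() !== +m[3]) return null;
  return d;
}
```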

> > For example, it assumes that a UA either supports all of the new 
> > date/time input types as well as the <format> element, or it supports 
> > none of them.
> Simply make a conformance requirement that any UA supports all sections, 
> much as the test requirements do; this would overcome the problem.

Your faith in the power of conformance requirements surpasses mine.

> Such a requirement would also help a lot in CSS, and make it truly 
> degradable (rather than the current situation where supporting 
> position:absolute, but not background-color say can have results which 
> do not degrade)

Could you elaborate on this?

> > Given past experience with the way UAs implement specs a bit at a 
> > time, I really don't think this is a good assumption.
> but as all the likely implementors of the spec are members of the group, 
> that can be controlled pretty easily.

Browser vendors are not an easily controlled group. :-)

> > I also am not too happy about the idea of introducing an element 
> > purely for the purpose of hiding content from new UAs -- effectively 
> > deprecating the element straight away.
> Good point, using OBJECT for this would be much better, as we're 
> re-using an existing element with just the semantics we want...

<object> seems to have little to no relation to date picker widgets, 
unless I'm missing something fundamental here.

> > Most of the JS was added at Jim's insistence, so as to degrade 
> > gracefully in UAs with two or three users.
> Except you've still not done so, you're still using lots of code that is 
> used on browsers with significant user share, the use of toFixed is 
> particularly crazy.

How so?

> > Also, the code takes care of the timezone problem (which <format> 
> > doesn't). It also handles hiding the format just for supported types 
> > so it works with incomplete WF2 UAs, and has graceful fallback in both 
> > WF2 and non-WF2 UAs when JS is disabled.
> No it doesn't; it makes no real attempt to see if the UA already 
> supports Web Forms 2.0.

I beg to differ.

> > Not really, the code is generic and can be cut-and-pasted.
> Not at all, that code relies on a particular HTML structure, it's hardly 
> generic.

I do not think this is an undue burden.

> > In most UAs the current value is selected when you tab into a control, 
> > so that doesn't seem like a serious problem. Also, as you say, it's 
> > only an issue when JS is disabled.
> You've also still not addressed the fallback situation when you want to 
> pre-populate the fields with data, rather than the format, something 
> which is the default behaviour for a huge number of usecases.

If you prepopulate with data, the format is likely obvious.

> > A company that requires that all its employees have the exact same 
> > date and time settings for display purposes has much bigger problems.
> The problem would not appear if a company did indeed mandate such, that 
> wasn't the issue raised though, they may want a common input format, not 
> a common display format.

I don't follow. Could you elaborate?

> > You'll almost certainly have to anyway, since without type="date", 
> > etc, authors are more likely to use a number of <select>s than a 
> > single field.
> So we really should be looking at a method which can fallback to the 
> existing well understood, well supported format - there's even been such 
> proposals on the list.

Again, I think you are overestimating the need for fallback here. Sure, we 
need fallback so that once 99% of users are covered using either script 
libraries or native implementations, the remaining 1% can still _use_ the 
page even if not ideally so; however, it's unlikely that authors would 
want to adopt these features until that was possible, so graceful fallback 
for more users isn't that necessary, IMHO.

> > By supporting most formats automatically, like the demo does, I don't 
> > really see that there is a problem.
> I can assume you've never done a lot of QA on JavaScript-intensive sites 
> if you do not see lots of javascript as a problem.

Your assumption is mistaken.

> > > | <label>New Meeting Time:
> > > | <input type="time" value="11:00:00.0Z">
> > > | <format>Format: hh:mm</format></label>
> > > | </form>
> A good example of where localisation is problematic.  You localise 
> that to my timezone, and I then book the meeting in Norway at the wrong 
> time, because I want to book it in local time.

Naturally, one would need to adapt this as appropriate for each app.

> > The <format> elements above could also, IMHO, be replaced by <span> 
> > elements that are removed by JS in WF2 UAs, or by a more comprehensive 
> > solution like in the demo.
> No it couldn't, this suffers from the fact that there is no way to 
> identify if the UA is a WF2 one or not, since you cannot know that, 
> there is no way to make the javascript modify the page.  The WF2 UA is 
> the only entity which knows if it supports input type=date - it has to 
> be its responsibility to remove stuff, you cannot leave it to the 
> author.

There are plenty of ways to detect support, as many of these demos in fact 
do (for example, testing .type's value).
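
The trick alluded to above, still standard today, is to set the type and
read it back: a UA that doesn't recognise the type leaves it at "text". A
sketch, with the document object passed in so it can be exercised outside
a browser:

```javascript
// Sketch: detect <input type="date"> support by assigning the type and
// reading it back; UAs that don't recognise the value fall back to "text".
// `doc` is a parameter (normally `document`) purely for testability.
function supportsInputType(type, doc) {
  const input = doc.createElement("input");
  try {
    input.type = type; // some legacy UAs throw on unknown types
  } catch (e) {
    return false;
  }
  return input.type === type;
}
```

In a page one would call `supportsInputType("date", document)` and only
then remove the format hint.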

> > This just seems way over the top, especially given that the only real 
> > reason to have it at all is for legacy UAs.
> There are hundreds of millions of legacy UAs, and no WF2 UAs; legacy 
> UAs will be the main audience for WF2 pages for years to come.

We're not in a rush. HTML5 is a long-term project.

> > >    date, input[type=date] { format-date: "%m/%d/%Y" }
> > 
> > It's quite likely that the CSS working group will do something like 
> > this.
> Could you point me at the CSS WG charter and where it's chartered for 
> things like formatting dates, I didn't believe it was?

I did not say it was chartered to do it, I said it was likely to do it.

On Thu, 27 Jan 2005, Jim Ley wrote:
> On Thu, 27 Jan 2005 11:39:33 -0500, Matthew Raymond 
> <mattraymond at earthlink.net> wrote:
> > > Good point, using OBJECT for this would be much better, as we're 
> > > re-using an existing element with just the semantics we want...
> > 
> >    Let's not start with the object stuff again. Any control defined 
> > with <object> has absolutely no semantics beyond those of <object>.
> Which depends on the other bits, just like input - OBJECT is already a 
> form control, just like input.

Only for plugins.

> > Also, it's nigh impossible to use your <object> approach with an HTC 
> > implementation.
> No it's not, it's perfectly possible.

Do you have an example? My experiments show that IE often drops <object> 
elements from the DOM entirely.

On Fri, 28 Jan 2005, Jim Ley wrote:
> On Thu, 27 Jan 2005 12:33:33 -0500, Matthew Raymond 
> <mattraymond at earthlink.net> wrote:
> > I'm not certain that every browser supports the form control aspect of 
> > <object>.
> I'm almost certain no browser supports the datetime proposals :-)

Opera has support for some features already; WebKit is getting support for 
some too.

> There's also the question as to why we can't just do this instead:
> > | <dateinput id="date" value="2005-01-30">
> Absolutely no reason, I'd be happy with this too, but Ian, without
> really giving his reasons, has been very opposed to introducing new
> elements, I assume he has good ones, hence the reason why I've been
> suggesting re-using existing ones that come with the exact ability we
> want.

If I could encourage you to use punctuation other than commas, that would 
be great. Run-on sentences like the above are hard to read.

The problem with a new element is that we don't instantly get good 
fallback, unless we allow the element to contain form controls, in which 
case we end up with legacy and future UAs having different numbers of 
elements in the form.elements array.

> > In fact, I've half talked myself into using this format just having 
> > typed it right now...
> I'd be happy with it, the current fallback of date means I could never 
> use it.


On Sat, 29 Jan 2005, Jim Ley wrote:
> On Sat, 29 Jan 2005 00:57:57 +0200, Henri Sivonen <hsivonen at iki.fi> 
> wrote:
> > On Jan 28, 2005, at 20:57, Matthew Raymond wrote:
> > >    It still means that the webmaster has to alter all server-side 
> > > scripting involving dates/times.
> > 
> > Webmaster starts using a new version of forms and has to tweak the 
> > server side. Isn't that expected? What's the big deal?
> The problem isn't that they have to tweak the server; the problem is 
> that all of the legacy clients (and the vast majority of people using 
> your site for at least the next 2 years will not be on WF2 browsers) 
> will get a severely degraded experience compared to what they currently 
> get.

Your e-mail was sent more than 2 years ago, and the situation is 
improving. Another 10 years, and will it be an issue? We can wait.

> Far from improving the user experience, WF2 will be severely harming it.  
> Whilst Ian managed to find a few sites that had plain text inputs for 
> dates, the vast majority use multiple entry elements, simply because 
> that is all that's usable.  The input type=date does not degrade 
> usefully.  Ian hasn't even been able to make it degrade usefully with 
> lots of javascript!
> Comparing the situation to Netscape 4 authoring is completely wrong; the 
> situation was very different: there weren't over 500 million existing UAs 
> with no motivation to upgrade.

There's good evidence that users _are_ upgrading; even Microsoft is 
pushing hard for the IE6 userbase to update.

On Mon, 31 Jan 2005, Jim Ley wrote:
> On Mon, 31 Jan 2005 17:27:30 +0000 (UTC), Ian Hickson <ian at hixie.ch> 
> wrote:
> > * It is easy for authors to not include any fallback, which makes it
> >   worse than the <input> equivalent.
> Considering the current fallback of date requires bucketfuls of script, 
> I don't see that as a particularly relevant problem.

I do.

> > * The fallback and non-fallback controls have different names.
> This could equally be considered an advantage - seeing as WF2 has a 
> controlled submission format, it now gives the fallback behaviour 
> consistent results.

It seems like a disadvantage to me -- it means serious changes on the 
server side to handle both sets of names.

> >     2. <select> controls, which do not need to be replaced at all, and
> So replacing the vast majority of date entry widgets on the web today
> is not a use case of the input type="date" it's specifically for the
> much rarer case of input type=date.
> Can I say that failing to address the use case currently implemented
> with select boxes would be a terrible failing of WF2, it's a much
> commoner use case than the single text entry box.

I don't follow.

> > ...not to mention the extra complexity and the implementation difficulty
> > compared to just using a new "type".
> How do you know how much harder it is to implement?

I've worked for three UA implementors, and worked closely with two more.

> This is a valuable feature, leave it in, during the implementation phase 
> we can find out how difficult it is to implement.

As noted earlier, I don't think that's the best way to write a spec.

> It's very disappointing to have features (which I believe are simple to 
> implement, certainly on any codebases I'm likely to implement this on) 
> denied simply because the editor of the spec, and no-one else, believes 
> it to be hard.

I'm sorry to disappoint.

On Mon, 7 Feb 2005, Jim Ley wrote:
> On Mon, 07 Feb 2005 11:09:02 -0500, Matthew Raymond
> <mattraymond at earthlink.net> wrote:
> > > Detecting whether a UA supports type="date" is easy (I do so in the 
> > > demo script). I don't really see what you mean here.
> > 
> >       You're cycling through all the <input> elements to find one that 
> > has type="date". Most of your script is involved in that very task, so 
> > clearly a good amount of script is loaded and executed beforehand. 
> > That's hardly an efficient means of detection.
> It's also not a successful means of detection, it _DOES NOT_ detect 
> support for input type=date, and in the previous discussion (before I 
> gave up listing the flaws in the script, of which there are still many) 
> I mentioned the fact along with some cases where it fails.

It seems to work pretty well to me.

> Please Ian, stop saying that script is perfect when it is so clearly 
> not.  It's obvious you've decided the input type=date stays, and as 
> you're the only person who has change control on the specification your 
> decision is final, the fact that no-one else supports you is completely 
> irrelevant, but please don't use that script to create pretend 
> justification of the degradability of input type=date.


On Fri, 21 Jan 2005, Jim Ley wrote:
> On Fri, 21 Jan 2005 12:47:07 +0000 (UTC), Ian Hickson <ian at hixie.ch> 
> wrote:
> > On Wed, 19 Jan 2005, Jim Ley wrote:
> > > >
> > > > Not a very big deal IMHO, I don't think hasFeature really works 
> > > > anyway.
> > >
> > > It doesn't, can we please not bother with it?
> > 
> > I'd be more than happy to drop hasFeature(), but I've been asked to 
> > have it by DOM people. It probably won't do any harm. (FWIW, the spec 
> > says basically any UA can return true; it's not a test of conformance, 
> > but of intention. As you say, you wouldn't be able to test 
> > conformance.)
> Please include a big warning in the specification stating that returning 
> true is possible even if not a single part of Web Forms 2.0 is 
> supported, indeed it's possible even if the browser is guaranteed to crash 
> when WF2 DOM methods are used.

I added that warning, and then subsequently removed hasFeature()'s new 
strings altogether, making this moot.

On Thu, 27 Jan 2005, Jim Ley wrote:
> On Tue, 25 Jan 2005 14:41:46 +0000 (UTC), Ian Hickson <ian at hixie.ch> 
> wrote:
> > > I think we should overwrite this statement of HTML 4.01 in section 
> > > 1.8[3] of the web applications specification.
> > 
> > That would mean giving up the fiction that HTML is an SGML 
> > application.
> You're using:
> does the NONSGML mean something else then?

We've since given up that fiction, and simplified the DOCTYPE further, 
making this moot.

On Mon, 31 Jan 2005, Jim Ley wrote:
> On Mon, 31 Jan 2005 15:31:47 +0000 (UTC), Ian Hickson <ian at hixie.ch> 
> wrote:
> > of a sweep through the document before firing the "load" event (which 
> > can be implemented as a load event listener assuming you can guarentee 
> > it runs first), and as part of an interface that extends HTMLElement 
> > and applies to all elements.
> the load event is way too late, you can't have forms exhibiting 
> different behaviour before onload and after onload, it's a horrible user 
> experience.
> Equally you need to be able to deal with repetition elements that are 
> created through some script mechanism after your onload event might have 
> fired, the way to do this is of course through the stylesheet - either 
> with HTCs or expression().

Neither of these options is standards-based, unfortunately.

> > but that could be done as part of the sweep for repeat-start 
> > attributes on load
> So you're expecting the users to see the templates until the load event 
> fires?

This is moot now that the feature is removed.

On Wed, 6 Apr 2005, Jim Ley wrote:
> On Apr 6, 2005 11:22 AM, Lachlan Hunt <lachlan.hunt at lachy.id.au> wrote:
> > However, I disagree with that statement anyway.  Validators should not 
> > be non-conformant simply because they only do their job to validate a 
> > document and nothing else.
> Absolutely, if there is a continued use of a doctype, then a validator 
> is absolutely correct to validate to it, so either the validator should 
> remain conformant, or the doctype should be dropped (or explicitly 
> marked as "this is not an SGML or XML doctype, it is simply some 
> cargo-cult string you should include as your first line").


> > I don't see any reason why such a statement needs to be included at 
> > all.
> Neither do I, it's completely unreasonable to say that an incredibly 
> useful QA tool is non-conformant, simply because the editor doesn't 
> consider those benefits in the same way.

I'm not sure to what you are referring here.

> > > HTML5 will most likely stop the pretense of HTML being an SGML 
> > > application.
> > 
> > What the?  I disagree with that.  HTML should remain an application of 
> > SGML, and browsers should be built to conform properly.
> Fully agree.

We dropped the pretense. Browser vendors didn't want to comply.

On Wed, 6 Apr 2005, Jim Ley wrote:
> On Apr 6, 2005 11:41 AM, Anne van Kesteren <fora at annevankesteren.nl> 
> wrote:
> > Lachlan Hunt wrote:
> > > and the mostly undefined error handling, what about HTML 5 will be 
> > > so incompatible with SGML to warrant such a decision?
> > 
> > One example:
> > 
> > <http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2005-January/002993.html>
> the specification has not currently taken this into account, and 
> there has been no other support in the mailing list for doing this?

The spec does now take this into account, and there has been plenty of 
support for doing so since.

> This is clearly an example of how existing browsers are non-conformant, 
> and simply making it conformant just blesses browsers in the future to 
> continue violating specs safe in the knowledge that the spec will get 
> changed to suit them, rather than the reverse.


> Exactly what's happened with CSS, do we really want to do it with HTML 
> too?


On Wed, 6 Apr 2005, Jim Ley wrote:
> On Apr 6, 2005 3:41 PM, Olav Junker Kjær <olav at olav.dk> wrote:
> > Lachlan Hunt wrote:
> > There are three types of conformance criteria:
> > (1) Criteria that can be expressed in a DTD
> > (2) Criteria that cannot be expressed by a DTD, but can still be checked
> > by a machine.
> > (3) Criteria that can only be checked by a human.
> > 
> > A conformance checker must check (1) and (2). A simple validator which
> > only checks (1) is therefore not conformant.
> One of the motivations of the WHAT-WG stuff, is that existing users 
> don't have to change their existing tools, processes and understanding, 
> now all of a sudden we're removing one of the most valuable QA tools 
> available today, based on some spurious notion that all these existing 
> users don't understand the QA tools limitations.

I think it's very clear that many of these users do indeed not understand 
those limitations. The notion isn't spurious.

> Firstly I don't think the conclusion that the audience for WHAT-WG stuff 
> doesn't understand the limitations of the validator is sustainable - 
> where's the evidence?

The www-validator archives speak for themselves.

> And secondly, there won't be any QA tools at all if the validator isn't 
> one of them, so we'll be getting even more crap published, and far from 
> cleaning up the correctness, we'll just have a whole new load of crud to 
> rubber-stamp as valid in WF2. Now, I realise it's to the advantage of 
> existing browser manufacturers to rubber-stamp complicated heuristic 
> behaviour they've already solved into a spec (it prevents new entrants 
> from coming along), but how is it to the advantage of the rest of us? 
> Understanding specifications becomes harder and harder, and relies on 
> knowing what happened before...

Your initial assumption in that paragraph is wrong.


> I simply cannot see the point in removing one of the few QA tools that 
> actually exists for HTML, and would like to hear the actual argument for 
> doing so (as this is a separate issue from whether it should be an 
> application of SGML).

The point is to make things better. I believe we are succeeding here.

On Thu, 7 Apr 2005, Jim Ley wrote:
> On Apr 7, 2005 10:24 AM, Olav Junker Kjær <olav at olav.dk> wrote:
> > Jim Ley wrote:
> > > Firstly I don't think the conclusion that the audience for WHAT-WG stuff 
> > > doesn't understand the limitations of the validator is sustainable - 
> > > where's the evidence?
> > 
> > People putting small icons on their pages to indicate that the page is 
> > valid. Also, lots of articles on the web about jumping through hoops 
> > to e.g. make a flash embed validate.
> Which doesn't show that these users believe HTML validation to be 
> anything more than it is: a very important _part_ of QA.  Given that 
> there are no complete HTML conformance checkers in existence today for 
> existing HTML technologies, it seems very strange to remove one of the 
> few parts of QA available, so what have we then got?  Or are the 
> WHAT-WG members going to step up and implement one?

Apparently so, yes.

> > As HTML applications becomes more complex
> I thought the whole point of the WHAT work was to make HTML applications 
> simpler, not more complex, are you suggesting the current specs are 
> failing in this area?

Applications will get more complex even as individual parts get simpler. 
Making individual parts simpler in fact enables the applications to get 
more complex at a reduced cost.

> > A conformance checker would be much more valuable since it might catch 
> > real errors which might cause the page to stop working.
> But who's going to write it?  There's no point talking about perfect 
> tools when no-one's writing it...

Henri and Wakaba have both written validator code now. I encourage others 
to do the same. I rather want to do one myself, though it'll take a while.

On Wed, 6 Apr 2005, Jim Ley wrote:
> On Apr 6, 2005 10:05 PM, Henri Sivonen <hsivonen at iki.fi> wrote:
> > On Apr 6, 2005, at 15:10, Lachlan Hunt wrote:
> > > XHTML variants of HTML 5 must be a conformant XML document instead, 
> > > though I noticed that is not the case with square brackets in ID 
> > > attributes in section 3.7.2 of WF2
> > 
> > That's not a problem if you don't claim they are ID attributes but 
> > attributes that happen to be named id.
> Which would mean we also have to start redefining the DOM, so 
> document.getElementById(...) is defined to work against things that 
> happen to be named id and not just things that are IDs.
> Is it really worth going down this road?


On Thu, 7 Apr 2005, Jim Ley wrote:
> On Apr 7, 2005 12:03 PM, Ian Hickson <ian at hixie.ch> wrote:
> > They trigger standards mode in modern browsers. The current one for 
> > WHATWG specs is:
> Will the spec explain this some more, in particular could you document 
> what "standards mode" is, and exactly how user agents should use this 
> doctype to trigger it?


> Would it not be better to just require WF2/WA user agents to render it 
> in this "standards mode" you talk of?  Or at the very least use 
> something that would not confuse people into thinking that it is an 
> application of SGML or XML.

Done, as much as possible.

On Thu, 7 Apr 2005, Jim Ley wrote:
> > > Or at the very least use something that would not confuse people 
> > > into thinking that it is an application of SGML or XML.
> > 
> > Do you want to replace "NONSGML" with "THIS-IS-NOT-SGML"?
> No, I want to replace <!DOCTYPE with something completely different; 
> the whole point is that anything that looks like an SGML (or XHTML) 
> doctype will confuse users into thinking that it is an application of 
> SGML.

Not much we can do about the <!DOCTYPE string, for legacy reasons.

> I see no reason to continue only the odd model of rendering mode 
> switching - especially without what this is exactly being defined in the 
> spec. when as only new implementations will be written supporting WF2 a 
> simple <html WHATversion="2"> like mechanism can be used, this will 
> leave it in a much stronger position for going forward.

The reason to keep this ridiculous model is compatibility with legacy 
content. It's defined in the spec now, though.

On Thu, 7 Apr 2005, Jim Ley wrote:
> On Apr 7, 2005 12:04 PM, Ian Hickson <ian at hixie.ch> wrote:
> > On Thu, 7 Apr 2005, Anne van Kesteren wrote:
> > > You should know the purpose I guess. (Standards mode.) I agree that 
> > > it should be documentated.
> > 
> > Actually come to think of it there is also a second purpose, namely, 
> > telling conformance checkers what version of the specification to 
> > check against. (Which I guess is basically the original purpose of the 
> Would a version parameter not be more appropriate?  It is simpler, less 
> confusing to users, easier to parse, easier to understand, doesn't 
> confuse users into thinking that it's really an application of SGML, 
> and doesn't cause problems for legacy user agents like the HTML 
> Validator, etc.
> It seems a very poor and odd choice for versioning a document.

Agreed. I've removed all mention of versioning of this nature, except for 
allowing older DOCTYPEs to trigger older modes in validators.

On Fri, 8 Apr 2005, Jim Ley wrote:
> On Apr 8, 2005 8:18 AM, Henri Sivonen <hsivonen at iki.fi> wrote:
> > No. The proposed doctype <!DOCTYPE html PUBLIC "-//WHATWG//NONSGML 
> > HTML5//EN"> activates the standards mode in IE6.
> The proposed string that MUST appear as the first line of a WHAT-WG 
> document is... please do not call it a doctype unless it is a doctype, 
> see, even people on the list are confused by it!

It's called a "DOCTYPE" in the HTML5 syntax. It has nothing to do with 
SGML or XML DOCTYPEs, except for having the same historical source and 
the same name.
On Thu, 7 Apr 2005, Jim Ley wrote:
> On Apr 7, 2005 6:59 PM, Henri Sivonen <hsivonen at iki.fi> wrote:
> > I don't think SGML validation is part of What WG conformance 
> > requirements. I thought Hixie has specifically said he doesn't bother 
> > with DTDs.
> Hixie is simply the editor of the spec, this thread has shown clearly 
> that many people contributing to the WHAT-WG work do use DTDs, indeed 
> we already have a volunteer for creating a doctype, in fact it's only at 
> this (supposedly) late stage that we've suddenly been told there's not 
> one.

DTDs aren't adequate to describe HTML.

On Thu, 7 Apr 2005, Jim Ley wrote:
> On Apr 7, 2005 8:30 PM, Henri Sivonen <hsivonen at iki.fi> wrote:
> > On Apr 7, 2005, at 21:49, Jim Ley wrote:
> > > this thread has shown clearly that many people contributing to the 
> > > WHAT-WG work do use DTDs
> > 
> > To me it seemed that you argued that DTD validation is more useful 
> > than other conformance checks as long as the other checks are 
> > vaporware
> From which you can clearly conclude I do use DTD validation as part of 
> my QA process.  All the people who have said that DTD validation is 
> absolutely useless haven't bothered to describe their QA processes at 
> all.

Henri's validator.

> Maybe we could hear about these QA techniques, rather than just hearing 
> how crap the existing tools are alongside the sudden proposal to 
> seriously reduce the amount of automated QA available to WHAT-WG 
> adopters.  If there were a different proposal on how WHAT-WG documents 
> should be QA'd then I'd certainly be happy to see DTD validation 
> disappear.

Glad to hear it.

On Thu, 7 Apr 2005, Jim Ley wrote:
> On Apr 7, 2005 9:22 PM, Ian Hickson <ian at hixie.ch> wrote:
> > On Thu, 7 Apr 2005, Jim Ley wrote:
> > > From which you can clearly conclude I do use DTD validation as part 
> > > of my QA process.  All the people who have said that DTD validation 
> > > is absolutely useless haven't bothered to describe their QA 
> > > processes at all.
> > 
> > Nobody is stopping anyone from using DTDs.
> If it's not an SGML application, you most certainly are.

It's not been an SGML application in practice since long before the WHATWG.

> However, the main issue is: how are people going to ensure they're 
> producing valid WHAT-WG documents?  Your proposal is to throw away all 
> the existing QA resources and leave a user with none, unless they happen 
> to have the time and the resources to understand a lot of dense prose 
> and author a DTD from it.  Something which very few people are going to 
> be able to do.
> So I'll ask once again, how do the WHAT-WG believe authors of WHAT-WG 
> documents will produce conformant ones?

Henri's validator.

On Tue, 19 Apr 2005, Jim Ley wrote:
> On 4/19/05, Ian Hickson <ian at hixie.ch> wrote:
> > > Indeed, a browser that assumed every <table> was data-bearing and 
> > > should have controls displayed would be all but useless.
> > 
> > Sadly, from a pragmatic point of view this is indeed the case.
> Why? There is no WHAT-WG content available today.

All HTML content is "WHAT-WG content". So yes, there is.

> Assuming it for the WHAT-WG doctypes (or those things that say <!DOCTYPE 
> but are not doctypes in any defined sense at all) seems a perfectly 
> feasible proposition.
> Why do you say it's not?

Because HTML5 applies to all HTML documents, not just new ones or old ones 
that have had their top line changed.

On Tue, 19 Apr 2005, Jim Ley wrote:
> On 4/19/05, Ian Hickson <ian at hixie.ch> wrote:
> > The request, as I understood it, was to provide exactly what XPointer 
> > provides, by requiring XPointer support. That's already possible, by 
> > simply using the XPointer spec.
> The request was to improve the ability to link into web documents - 
> that's the use case.
> The proposal to achieve this was by requiring XPointer.  Please actually 
> respond to proposals made, and issues raised with sensible responses, 
> and do not just dismiss them with inaccurate comments saying it is not 
> appropriate for the specification.  Improving linking into web 
> documents is a perfectly reasonable use case to be addressed here; if 
> it's not, why is it inappropriate, how do we know what use cases are 
> appropriate and what aren't?

I think it's an appropriate use case, I just think it's already addressed 
by XPointer.

On Tue, 19 Apr 2005, Jim Ley wrote:
> On 4/19/05, Håkon Wium Lie <howcome at opera.com> wrote:
> > Personally, I'd like to keep the list of requirements to an absolute 
> > minimum. I do not want to include XPointer on that list, even if it 
> > starts with an X.
> I would too, I don't think requiring XPointer is necessarily a good 
> idea, but the reason given that "it's not appropriate for the spec to 
> require things" is simply misleading and was done from a position of 
> power to stop debate, rather than as a sensible argument of why it's not 
> appropriate.
> Seeing as both Opera and Mozilla already have an XPointer implementation 
> (or Opera will have as soon as they have a conformant SVG viewer) I 
> hardly think the requirement is a particularly onerous one, and should 
> certainly be discussed on its merits not on simply "it's not 
> appropriate".

Specs should survive on their own merits, not because another spec 
requires support for it without that support being actually necessary for 
the referencing spec's own technologies.

On Wed, 20 Apr 2005, Jim Ley wrote:
> On 4/20/05, Dean Edwards <dean at edwards.name> wrote:
> > Speaking of setTimeout, where is this defined?
> Nowhere, and in fact the string method is the commoner implementation, 
> there are a number of implementations which do not support a function 
> reference.
> uniqueID is very useful, I too use it all the time for patterns such as 
> your hashtable of objects.  I certainly support the idea, and with the 
> strong issues that closures of DOM objects have in IE, it's even more 
> valuable.  It's certainly a pattern I would rather encourage in the 
> dabblers who are always on the team.

I specced setTimeout().

I haven't specced uniqueID, since it is IE-only and would likely be 
useful across different vocabularies rather than just HTML. It seems 
like something for Web DOM Core, if anything.
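
For context, the two calling conventions under discussion look like 
this; a minimal sketch (the injectable `timer` parameter is only a 
device to make the example exercisable, not part of any API):

```javascript
// HTML5 specs setTimeout() with a function reference as the primary form;
// the legacy string form is evaluated like eval():
//
//   setTimeout("greet('web')", 100);                 // string form (discouraged)
//   setTimeout(function () { greet("web"); }, 100);  // function form
//
// The function form can close over local variables, which a string cannot.
function scheduleGreeting(name, delay, timer) {
  timer = timer || setTimeout;  // injectable so the sketch can be exercised
  return timer(function () { return "hello, " + name; }, delay);
}
```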

On Thu, 21 Apr 2005, Jim Ley wrote:
> > >
> > >    http://whatwg.org/specs/web-apps/current-work/#settimeout
> It's rather odd though, as it's been defined such that the Mozilla 
> implementation will be non-conformant: either the Mozilla implementation 
> will need to change to be conformant, breaking compatibility with 
> existing scripts, or Mozilla will not be able to be conformant.

Could you elaborate? Is this fixed now?

On Fri, 22 Apr 2005, Jim Ley wrote:
> On 4/22/05, Henri Sivonen <hsivonen at iki.fi> wrote:
> > On Apr 21, 2005, at 01:08, dolphinling wrote:
> > > What semantics does canvas have?
> > 
> > It has the semantics of a rendering context to which scripts can draw.
> So it only has presentational semantics, so should be in a rendering 
> language like CSS?

No, it's content; it's just media-specific content, like <img>.
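
As a concrete illustration of that script-drawn content, a minimal 
sketch (the element lookup in the trailing comment is browser-only; the 
drawing calls are the standard 2D context API):

```javascript
// Minimal <canvas> sketch: the element is content, the pixels come from
// script via the 2D rendering context.
//   <canvas id="c" width="100" height="100">fallback text</canvas>
function drawBox(canvas) {
  var ctx = canvas.getContext("2d");  // standard 2D context
  ctx.fillStyle = "teal";
  ctx.fillRect(10, 10, 50, 50);       // x, y, width, height
  return ctx;
}
// In a browser: drawBox(document.getElementById("c"));
```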

On Sat, 23 Apr 2005, Jim Ley wrote:
> On 4/22/05, Henri Sivonen <hsivonen at iki.fi> wrote:
> > On Apr 22, 2005, at 18:00, Jim Ley wrote:
> > > so should be in a rendering language like CSS?
> > 
> > If you value hard-line anti-presentationalism over pragmatism.
> Er, there are very good reasons why the presentation is split, the most 
> important of course being accessibility, it's clear from this list that 
> people prefer being able to draw on top of any element, or perhaps just 
> an img element, and I'm sure that's not from any anti-presentationalism, 
> but simply because they don't see efficient ways to author a canvas 
> element in a backwards-compatible, accessible manner.
> Repeatedly in WF2, new elements have been rejected due to their 
> difficulty in implementing in IE6, why is canvas different? (and yes we 
> could implement it in IE6 without much difficulty)

<canvas> is already implemented, so its ease of implementation is a done 
deal. (It is pretty easy to implement since it maps straight to OS APIs.)

On Sat, 23 Apr 2005, Jim Ley wrote:
> On 4/22/05, Anne van Kesteren <fora at annevankesteren.nl> wrote:
> > As there are already implementations and implementors are not likely 
> > to change it all back
> Until today, the spec was very clear that it wasn't appropriate to 
> implement any feature of the spec; today, for some unexplained reason, 
> it's changed to just a general it could change warning.  Either way any 
> implementor would've been aware of the highly draft nature of the 
> specification, so should have been expecting it to change.  There are 
> certainly no sites relying on the functionality.

The warning was changed because it was wrong -- implementations can 
implement and ship stuff before the spec is done, and if that happens, it 
basically freezes the spec.

> > I don't think this is going to work, but if this somehow gets through 
> > then please choose the OBJECT element instead.
> OBJECT is indeed a good idea, for any extension that needs fallback 
> behaviour.

Not sure to what you and Anne are referring here.

On Sun, 24 Apr 2005, Jim Ley wrote:
> On 4/24/05, Henri Sivonen <hsivonen at iki.fi> wrote:
> > On Apr 23, 2005, at 22:16, dolphinling wrote:
> > > There's one implementation, and one implementation in testing 
> > > builds. It would also be an easy change to make for those 
> > > implementations (and they could still keep support for the "old way" 
> > > if they need).
> > 
> > The release date of Tiger is very near. Safari will ship with canvas.
> So?  What's that got to do with the Web Applications Standard?

The Standard defines what ships.

> > Once it is out, you can't pull it back.
> It's never been in a published standard, the specification still states 
> that it's subject to change. I'm very disappointed that the "do not 
> implement in released software" has been removed without any discussion 
> on the list of the maturity of the specification, but that's just the 
> normal high-handed approach of the working group.  But even without 
> that, there's no need to bless a poor implementation decision simply 
> because one minority browser has implemented it and used it solely in 
> non-web content.
> If successful shipped implementations is what matters, then there's lots 
> of successful IE extensions that do the same as canvas and other 
> elements which it would be much more sensible to go with.

Like what? If there are IE features worthy of standardisation, other than 
those already standardised, let's do them.

On Sun, 24 Apr 2005, Jim Ley wrote:
> On 4/24/05, Kornel Lesinski <kornel at ldreams.net> wrote:
> > canvas doesn't belong to CSS, because CSS can't use it.
> Neither can HTML - it's always blank unless script is supported, so by 
> that argument, Script, and only Script is the appropriate place.

I don't think that the ECMAScript group would agree to defining HTML 
elements in their specification.

> > Enabling via JS IMHO doesn't work either. Just adds unnecessary code:
> > 
> > <div id="canvas"></div>
> > <script
> > type="text/javascript">document.getElementById('canvas').drawable=true</script>
> You've made these seem bloated, but you're ignoring the fact that the 
> only "extra" code in that example is the .drawable=true - if that really 
> is a problem, then it would be trivial to not require it and just allow 
> drawing to start on top of any element.

Overloading existing elements has not proved very wise; it makes solid 
interoperability difficult to achieve.

> > It would be possible to modify prototype of HTMLCanvasElement to add 
> > functions that are missing in some implementations or
> The existence of an HTMLCanvasElement prototype is not standard 
> currently - are you suggesting that the Web Application specification 
> should require the prototyping of these objects?  I would be very much 
> opposed to this, requiring a particular coupling to javascript is not a 
> good idea.

WebIDL now does this, despite your wishes to the contrary.

On Sun, 24 Apr 2005, Jim Ley wrote:
> On 4/24/05, Kornel Lesinski <kornel at ldreams.net> wrote:
> > On Sun, 24 Apr 2005 16:14:29 +0100, Jim Ley <jim.ley at gmail.com> wrote: 
> > Drawable <img> is pretty easy to implement (change to internal 
> > bitmap),
> So the proposal to have img alone switch to drawable seems a good one. 
> The WHAT-WG members have previously said that new elements are a bad 
> idea as they're more complicated to implement - re-using image seems a 
> good option.  Especially as it would give us the ability to use the 
> image itself as a background - and to provide fallback support for the 
> user.

New elements are preferable to overloading orthogonal functionality on 
existing elements. Reusing elements is preferable to new elements when the 
new features do not interact with the existing functionality (e.g. adding 
new types to <input> is ok).
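
One practical consequence of that reuse: an unknown type value falls 
back to "text", so authors can feature-detect new input types with a 
one-line check. A sketch (the `doc` parameter stands in for the 
browser's `document` so the logic can be exercised anywhere):

```javascript
// Unknown <input> type values fall back to "text", so detection is a
// single property read after setting the attribute.
function supportsInputType(doc, type) {
  var input = doc.createElement("input");
  input.setAttribute("type", type);
  return input.type === type;  // reflects back only if the UA knows it
}
// In a browser: supportsInputType(document, "email")
```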

> Look at google maps, it draws on top of img elements, adding in extra 
> canvas elements would seem to be highly redundant?

Speaking for Google, we would rather have the separate <canvas> element 
than overload existing elements with new features like this.

> > > The existence of an HTMLCanvasElement prototype is not standard 
> > > currently
> > 
> > It's in current draft, with width/height properties and getContext 
> > method.
> Could you point to where?  Or was I not clear enough about talking about 
> the _prototype_ that's the thing that is not currently specified and I 
> believe is hugely unwarranted.
> [Prototypes]
> > But such coupling is already there for every current form element.
> > Prototypes are required by ECMA script already.
> Could you point me to the part of the specification?  Because by my
> reading of the ECMA spec prototypes are not required on host objects
> such as the DOM in a webbrowser.

WebIDL now defines prototypes.
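
Because WebIDL exposes interface prototype objects, exactly the kind of 
patching Kornel describes is possible. A hedged sketch (the fallback 
body below is a deliberately trivial placeholder, not a real 
implementation; `HTMLCanvasElement` is the browser's interface object):

```javascript
// Sketch: fill in a missing method on an interface prototype object.
function ensureToDataURL(proto) {
  if (typeof proto.toDataURL !== "function") {
    proto.toDataURL = function () { return "data:,"; };  // empty data: URL stub
  }
  return proto;
}
// In a browser: ensureToDataURL(HTMLCanvasElement.prototype);
```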

On Sat, 23 Apr 2005, Jim Ley wrote:
> On 4/22/05, Olav Junker Kjær <olav at olav.dk> wrote:
> > Brad Neuberg wrote:
> > > Whenever I implement a DHTML (Ajax?) type app that needs to talk to 
> > > the server without refreshing the client, such as through a hidden 
> > > iframe or an XmlHttpRequest object, I always wish that I could 
> > > update the window location bar to show a bookmarkable and copyable 
> > > URL, but update it in such a way that it _doesn't_ refresh the 
> > > browser or change its location (window.location.href changes the 
> > > location).
> > 
> > You can do this by changing the fragment, e.g. set window.location to: 
> > http://www.rojo.com/manage-subscriptions#sortby=TAG  This is useful 
> > for changing to a new bookmarkable state on the client side without 
> > reloading the page.
> Hmm, but then you need client-side intelligence to test the hash portion 
> of the string, and then make subsequent requests to get the relevant 
> data from the server.  The original suggestion is much more powerful 
> than that, as it allows the server to respond directly to a request.
> I've certainly wanted this, a sensible compromise is to only be able to 
> set the query portion of the url.

pushState() now does this.
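
The shape of the pushState() solution, sketched with a pure helper for 
the query-portion compromise Jim suggests (the `hist` parameter is 
injectable only so the sketch can be exercised outside a browser):

```javascript
// history.pushState(state, title, url) changes the visible URL without a
// reload, so the server can later respond to the bookmarked URL directly.
function buildQueryURL(base, params) {
  var parts = [];
  for (var key in params) {
    if (Object.prototype.hasOwnProperty.call(params, key)) {
      parts.push(encodeURIComponent(key) + "=" + encodeURIComponent(params[key]));
    }
  }
  return parts.length ? base + "?" + parts.join("&") : base;
}

function rememberState(base, params, hist) {
  hist.pushState(params, "", buildQueryURL(base, params));
}
// In a browser: rememberState("/manage-subscriptions", { sortby: "TAG" }, history);
```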

On Mon, 25 Apr 2005, Jim Ley wrote:
> On 4/25/05, Brad Neuberg <bradneuberg at yahoo.com> wrote:
> > > If successful shipped implementations is what matters, then there's 
> > > lots of successful IE extensions that do the same as canvas and 
> > > other elements which it would be much more sensible to go with.
> > 
> > I'm not against that; I thought one of the ideas behind the WHAT 
> > working group is to take already working defacto standards and simply 
> > specify them and implement them in other browsers, such as innerHTML 
> > and XmlHttpRequest.  I'd much rather choose an already existing, if 
> > not perfect, canvas or drawable surface type defacto standard than 
> > create an imaginary "perfect" one like we seem to be doing on this 
> > list. Running code is king....
> Great, let's go with VML, supported on the majority of desktops out 
> there and used by high-profile sites such as Google. It's a much better 
> option, albeit more complicated than canvas.

VML is so unpopular that even Microsoft have been dropping support for it. 
I don't think it's necessarily the best model to follow.

We've added SVG support to HTML instead; it's more widely supported 
than VML.

On Tue, 26 Apr 2005, Jim Ley wrote:
> On 4/26/05, Brad Neuberg <bkn3 at columbia.edu> wrote:
> > How about for Web Applications 1.0?  If there are SHOULD and MAY 
> > portions of the spec, would all SHOULD elements be supported in IE 
> > while all MAY elements would not?
> We don't want optional things in specifications, optional bits and 
> profiles etc. just fragment the supported parts even more than the 
> simple incomplete/buggy implementation reality.


On Tue, 26 Apr 2005, Jim Ley wrote:
> On 4/26/05, Ian Hickson <ian at hixie.ch> wrote:
> > On Tue, 26 Apr 2005, Brad Neuberg wrote:
> >    Example:
> > 
> >     * a 3D context for <canvas> is probably not something we can
> >       realisticly expect to see implemented in IE using JS, but it's
> >       still something we've had demand for and thus something we'll
> >       likely be working on.
> A 3D canvas is just as, if not more, implementable than a 2D one.  Once 
> again, rather than invent strange new things which then make us need to 
> add shims to make it work in IE, why not take the existing technology, 
> massively supported on hundreds of millions of desktops, and use that?

A 3D canvas API is now being specced by the Khronos group.

> > (<canvas> for example is something we've heard a lot of demand for 
> > from people wanting to write games and the like),
> Could you provide more details?  Surely Flash meets all the use cases
> for games developers, what's missing?

Flash is vendor-specific.

> > Web Forms 2 is basically done and will be going to Call For 
> > Implementations shortly.
> When will we see a last call for comments?

Hopefully next month. (WF2 was in LC for a while, and then got integrated 
into HTML5, which is reaching LC around next month.)

On Wed, 27 Apr 2005, Jim Ley wrote:
> On 4/27/05, Boris Zbarsky <bzbarsky at mit.edu> wrote:
> > This makes it clearer that the form elements are reset in the _target_ 
> > document. I also think that "document in the frame or window targeted 
> > by the form submission" is clearer than "the document from which the 
> > submission initiated".
> What if the document in the target window has changed?  What if the 
> document in the target window is in a different domain, what if another 
> document with a form in it is partway through being rendered in the 
> other window? What about the situation where 2 separate form posts 
> target the same window, one of which sends a replace values, the other a 
> reset - which is honoured, and what does it depend on: the order of 
> submission, the order of receiving, random?

On Wed, 27 Apr 2005, Jim Ley wrote:
> Oh, and the other thing, what's the use case for the 205, I realise it's 
> mostly tidying up the hinted at HTTP spec, but I'm not really sure 
> there's much of a use case, especially as you can achieve the same with 
> a replace post which uses almost the same amount of bandwidth on typical 
> pages.
> I can't think of a single good case where just resetting is appropriate, 
> a result with no feedback doesn't strike me as useful - especially when 
> there's replace which can provide the same not reloading page, but can 
> provide feedback in an output element.
> I really think this is complicating the specification without providing 
> anything of use.

I dropped support for HTTP 205 from the spec, given lack of support 
outside the HTTP WG for this feature.

On Fri, 29 Apr 2005, Jim Ley wrote:
> On 4/28/05, Matthew Raymond <mattraymond at earthlink.net> wrote:
> >    I've been pondering how someone would have 3D graphics inside a Web 
> > application using current web standards and some in development (XBL2, 
> > HTML5, et cetera), and while I have a general idea, I'm not exactly 
> > sure how it would work.
> There are hundreds of millions of browsers that support 3D graphics in 
> their default configuration, it's been successfully deployed with 
> implementation experience going back over 6 years.
> Please do not re-invent the wheel, but standardise this (or a subset 
> of this) functionality.
> The supposed motivation of WHAT-WG is compatibility with IE6, VML and 
> DirectAnimation provide 2D and 3D drawing contexts that are compatible 
> with Internet Explorer, use them, or start coming up with some reasons 
> why not to.
> As always, I'm still waiting to hear the use cases for both 2D and 3D 
> javascript drawing - "Quake like games" which is the only example I've 
> heard so far, may be a use case, but it's not yet been explained why an 
> HTML document is appropriate for such a game.

I decided to punt on 3D since other groups are working on it. I encourage 
you to send your queries to those groups.

On Wed, 4 May 2005, Jim Ley wrote:
> On 5/4/05, Matthew Raymond <mattraymond at earthlink.net> wrote:
> > Jim Ley wrote:
> >      Plug-ins are by their very nature optional. Why would we want to
> > move functionality into <object> elements, which are by definition
> > external objects like plug-ins?
> OBJECT is not by definition a plug-in, and Opera/Mozilla/Safari/IE all
> currently use native renderers to render OBJECT elements, there is no
> reason why this should not continue.

The spec now defines this.

> >     How so? A "3d" context would undoubtedly have functions for loading
> > complete textures and models from files. 
> "Creating a 3D markup language is somewhat outside the purview of this 
> working group"


> >    If by declarative you mean like X3D, then WHATWG clearly shouldn't 
> > add such markup to HTML because it would duplicate the work of another 
> > group unnecessarily.
> Just like it's not necessary to embed JPEGs inside HTML documents to 
> get images in them, it's not necessary to embed 3D markup inside HTML 
> documents to get 3D images.

You should tell the Web 3D consortium, since they are now working on doing 
exactly that with X3D.

On Wed, 4 May 2005, Jim Ley wrote:
> On 5/3/05, Jon Udell <judell at gmail.com> wrote:
> > what: a password option for window.prompt
> > 
> > why: http://weblog.infoworld.com/udell/2005/05/03.html#a1227
> Modal dialogs prompt/alert/confirm etc. are bad from an accessibility
> context, and should not be used in real applications.


> As what you want is not part of a web application, but something 
> specific to your browser's chrome, request something from them; it 
> should not be standardised anywhere else.

I disagree that .prompt() and company should not be standardised.

On Sun, 8 May 2005, Jim Ley wrote:
> On 5/8/05, Ian Hickson <ian at hixie.ch> wrote:
> > On Sun, 8 May 2005, Ben Meadowcroft wrote:
> > >
> > > There are two types of help that I think are appropriate for web 
> > > applications, full page help and element sensitive help.
> > 
> > The problem with both is discoverability. Unless we can solve that, 
> > there is not much point having anything in the spec.
> For the context-sensitive one we use CSS with cursor:help; that's the 
> way we usually implement it from a visible perspective. I'm not sure if 
> there's particular value in defining an attribute to take the help 
> contents though, especially as this would mean it couldn't then have 
> mark-up, which most of the context help systems I've done do use.
> Adding in a link rel of help would seem a pretty low-rent thing to 
> define; how that may be exposed in a UI though I'm less clear, I don't 
> like the idea of adding it to the browser's regular help chrome - there 
> needs to be a distinction between browser and web-application.


On Mon, 30 May 2005, Jim Ley wrote:
> On 5/30/05, Charles Iliya Krempeaux <supercanadian at gmail.com> wrote:
> > 
> > I (and others) have used the <noscript> tag quite a bit for displaying 
> > (what a lot of people seem to call) "rich media" (for some reason) and 
> > having graceful fallbacks.  Basically, the code looks something like 
> > this:
> > 
> >    <script src="..." type="text/javascript"></script>
> >    <noscript>
> >        <iframe src="...">
> >            <a href="..."><img src="..." /></a>
> >        </iframe>
> >    </noscript>
> > 
> > So,... at first we try and run the JavaScript code to display the 
> > "rich media".  If that doesn't work, then we try and use the iframe. 
> > If that doesn't work (because the browser is too old) then we try and 
> > display the image.
> this isn't what happens in the above case at all, if the script code 
> doesn't work, then the fallback content is not displayed, it's only 
> displayed if script is not supported at all; script-capable user agents 
> like Nokia's internal browser will still execute your script, not manage 
> to display any "rich media", yet the noscript will also not be 
> displayed.
> As Kornel has said, noscript is useless for fallback, as it only falls 
> back in the case of script/noscript, yet script-capable user agents 
> are so varied that there's no way any non-trivial script is going to 
> successfully work in them all.
> Removing noscript is an excellent idea.

I've left it in, since it's so widely used and doesn't seem that
harmful in text/html. I have, however, made it non-conforming in XHTML.
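
The pattern Jim is arguing for, as opposed to <noscript>, can be 
sketched like this: ship the fallback in static markup, and have the 
script replace it only once the rich media has actually initialised, so 
partially script-capable browsers still degrade gracefully (the 
browser-only wiring and the `player`/`playerReady` names in the trailing 
comment are invented for this example):

```javascript
// Replace static fallback content only after the "rich" replacement has
// actually initialised; on any failure, the fallback simply stays.
function upgrade(container, richNode, initialised) {
  if (!initialised) return false;  // script ran but the media didn't come up
  while (container.firstChild) {
    container.removeChild(container.firstChild);
  }
  container.appendChild(richNode);
  return true;
}
// In a browser: upgrade(document.getElementById("media"), player, playerReady);
```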

On Wed, 22 Jun 2005, Jim Ley wrote:
> On 6/22/05, Matthew Raymond <mattraymond at earthlink.net> wrote:
> > Jim Ley wrote:
> > > On 6/21/05, Matthew Raymond <mattraymond at earthlink.net> wrote:
> > >
> > >>Matthew Raymond wrote:
> > >>   Now that I think about it, wouldn't the following be valid also?...
> > >>
> > >>| <?xml version="1.0" encoding="UTF-8"?>
> > >>| <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
> > >>|  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
> > >
> > >>|   <X3D xmlns="http://www.web3d.org/specifications/x3d-3.0.xsd"
> > >
> > > Of course not, there is no X3D element in the above dtd.
> > 
> >    Oh, I see. This was a suggestion in the XHTML 1.0 spec of how
> > namespaces MIGHT be used. I guess it's out of date.
> It is of course Well-Formed, but it's not valid, you could indeed mix an 
> XHTML 1.0 or 1.1 document with other elements and use your own dtd to 
> make it valid (if you felt a need to).  This wouldn't work with XHTML 
> 2.0 as is currently drafted though as XHTML 2.0 must use a particular 
> doctype, I imagine that will change though.

You could say that, yes.

On Sun, 3 Jul 2005, Jim Ley wrote:
> On 7/3/05, Hallvord Reiar Michaelsen Steen <hallvord at hallvord.com> wrote:
> > So, should we not tell the JavaScript that a 304 response was 
> > returned, but show the original response including headers? Views?
> Absolutely not!  The actual status received should be made available, 
> nothing else is appropriate (I'd also like to see the IE reference 
> implementation non HTTP status headers also made available)
> Whilst I agree, that the issue can catch people out, that's not a good 
> reason to hobble the interface and make the javascript implementation 
> have to be different to other implementations that are accessing the 
> same content.
> It's trivial to work around, and more than usable, if you are interested 
> in your scripts status codes.

On Sun, 3 Jul 2005, Jim Ley wrote:
> On 7/3/05, Hallvord Reiar Michaelsen Steen <hallvord at hallvord.com> wrote:
> > On 3 Jul 2005 at 21:30, Jim Ley wrote:
> > > It's trivial to work around
> > 
> > That is obvious. However, *will* people work around it, or will the 
> > browser that is better at caching documents be at a disadvantage 
> > because web apps will mysteriously appear broken to the end user..?
> XMLHTTP in IE is fully wired into the cache, and works appropriately, I 
> can't see how the behaviour will differ in different caching scenarios 
> in any case.  IE also returns the exact status code received from the 
> server.
> > I don't think IE ever sends a conditional request for a document 
> > requested via XMLHttpRequest (I don't know every corner of the HTTP 
> > caching spec though).
> It does if the cache is appropriately configured.
> > I think faking 200 would be in the interest of smaller browsers
> I don't see how hobbling smaller browsers helps them in any way.  I also 
> certainly don't see the point in writing a specification just for 
> smaller browsers, but we've discussed that before.
> > and make life simpler for JS authors under most conditions (I don't 
> > see much of a use case for wanting to know about the 304 response..)
> I've deployed a number of systems that rely on getting a 304 response to 
> the xmlhttp request object - Generally the client has been polling a 
> server for the state of a remote thing (the example most recently in my 
> mind was the temp of a freezer) and if it's not changed since the last 
> response, then I quite rightly send a 304, and test for it on the 
> client, it was an embedded IE solution, and it worked fine in IE.

On Fri, 8 Jul 2005, Jim Ley wrote:
> On 7/8/05, Hallvord Reiar Michaelsen Steen <hallvord at hallvord.com> 
> wrote:
> > This may imply that a client with a cached document should return a 
> > status 200 when the requested document matches one in the cache 
> > (whether or not the UA has checked with the server if the resource is 
> > current).
> I wouldn't be against this, if the resource is cacheable, then I'm happy 
> that what comes back could be a 200 or a 304; all my implementations, 
> and indeed any situation I can imagine where knowing about a 304 on the 
> client matters, are for resources that are "must-revalidate".  If it's 
> just naturally cacheable, I'm not sure the fact it's been checked for 
> freshness is relevant.
> Consider a cache which updates itself every 20 minutes for a resource 
> (without any request from the user agent), first time it gets a 200, 
> then each of the next requests it gets a 304, when the user agent then 
> makes a request to it, it's going to return the resource with 200, 
> that's reasonable.
> So yes, I would be happy with the above interpretation, as long as a 
> specific request from the script, results in that value being what's 
> actually returned.  I'm happy that the cache itself operates separately 
> and simple freshness checks for a resource could stay as a 200 certainly.
> simple freshness checks for a resource could stay as a 200 certainly.
> The arguments make sense.

XMLHttpRequest issues should be sent to the public-webapps group now.
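
Jim's freezer-polling pattern from the earlier messages reduces to a 
small status check; a sketch (the XHR wiring is shown only as a comment, 
and the `handle` name there is invented for the example):

```javascript
// Interpreting the status of a polling XMLHttpRequest, as in Jim's
// freezer example: 200 carries a new reading, 304 means "unchanged".
function interpretPollStatus(status) {
  if (status === 200) return "updated";    // render the new body
  if (status === 304) return "unchanged";  // keep the displayed value
  return "error";                          // anything else: report failure
}
// In a browser:
//   xhr.onreadystatechange = function () {
//     if (xhr.readyState === 4) handle(interpretPollStatus(xhr.status));
//   };
```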

On Mon, 18 Jul 2005, Jim Ley wrote:
> On 7/18/05, Ian Hickson <ian at hixie.ch> wrote:
> > Why would you suspend a timer? (And why would the UA not suspend the 
> > timers itself?)
> You're saying that when a user prints, an HTML5 user agent MUST stop all 
> setTimeout counters, I don't see that in the spec, nor why it would be 
> an expectation of a scripter.
> The common use of onbeforeprint/onafterprint is to add content to a 
> document that is only relevant to printed media, this is something that 
> cannot be done with CSS, since CSS is optional, so if we just hide 
> content with CSS, we're stuck with the situation that users without CSS 
> or with an appropriate user stylesheet get it and get confused.
> Of course for showing temporarily hidden stuff with script, as has been 
> mentioned, there's no problem doing it with CSS.

On Tue, 19 Jul 2005, Jim Ley wrote:
> On 7/19/05, Dean Edwards <dean at edwards.name> wrote:
> > Matthew Raymond wrote:
> > > For instance, such events could be combined with AJAX to force 
> > > people into a pay-to-print scenario.
> > 
> > What's wrong with paying to print a high quality version of an image? 
> > If you ask me this is a great example of why we should allow these 
> > events.
> This is another of the use cases I've used "enhanced" printing for - I 
> actually generally used ScriptX http://www.meadroid.com/scriptx/ rather 
> than simply the IE methods, but the events are all that's needed.  Not 
> paying for printing images, but swapping out images with higher quality 
> images suitable for print.
> Someone will probably suggest CSS background-images as suitable for 
> this as well, yet again ignoring the fact that CSS is _optional_, and 
> content-images must not be in background images as they simply won't be 
> seen without CSS or if background images are disabled.
> I can appreciate the viewpoint that onbeforeprint/onafterprint aren't 
> architecturally brilliant, but then most of the web-applications spec isn't 
> brilliant from an architectural perspective - it's shoe-horning 
> web-application functionality into a document mark-up language, however 
> I don't see what's especially bad about onbeforeprint/onafterprint, and 
> they're very commonly used in intranet applications - if Opera and 
> Mozilla are going to make any headway there we need the kind of high 
> quality print control that is obtainable with IE.

I've since specced those events.
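
Those events support Jim's image-swapping use case directly; a sketch 
(`data-print-src` and `data-screen-src` are attribute names invented for 
this example, not anything specced):

```javascript
// Swap each image to a print-quality source before printing, and restore
// the screen source afterwards.
function swapForPrint(img) {
  var printSrc = img.getAttribute("data-print-src");
  if (printSrc) {
    img.setAttribute("data-screen-src", img.getAttribute("src"));
    img.setAttribute("src", printSrc);
  }
}
function restoreAfterPrint(img) {
  var screenSrc = img.getAttribute("data-screen-src");
  if (screenSrc) img.setAttribute("src", screenSrc);
}
// In a browser, wire these to the beforeprint/afterprint events on window,
// calling them for each <img> of interest.
```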

On Tue, 19 Jul 2005, Jim Ley wrote:
> On 7/19/05, Matthew Raymond <mattraymond at earthlink.net> wrote:
> > Jim Ley wrote:
> > > You're saying that when a user prints, an HTML5 user agent MUST stop 
> > > all setTimeout counters, I don't see that in the spec, nor why it 
> > > would be an expectation of a scripter.
> > 
> >   So wait, we need to add new events because user agent vendors may be 
> > too stupid to solve print-related problems on their own? I'd rather 
> > not have events just to fix random user agent problems.
> As a scripter, I do not have an expectation that print will cause any 
> effects on my scripts - Ian just said that it should have something that 
> is the opposite of my expectation, as this is not defined, it needs to 
> be defined - there are no user agent problems here.

It is now defined.

> > > The common use of onbeforeprint/onafterprint is to add content to a 
> > > document that is only relevant to printed media, this is something 
> > > that cannot be done with CSS, since CSS is optional, so if we just 
> > > hide content with CSS, we're stuck with the situation that users 
> > > without CSS or with an appropriate user stylesheet get it and get 
> > > confused.
> > 
> >   What about the browsers that don't support Javascript, or have it 
> > turned off?
> They don't get anything at all; this isn't necessarily a problem - 
> having content there which is visible on screen but not understandable 
> is a problem.  A requirement from a previous project was simply the date 
> of printing; this was required by the process, and the normal footers of 
> the browser were suppressed.  Another common one is adding links 
> explicitly in the page - to do this with CSS requires CSS3 features, or 
> for external links to be in a different class, and of course neither are 
> available in the most important Web Application platform.

Media-specific CSS is indeed the way to do things that are generally done 
using the aforementioned events, even if IE doesn't support the parts of 
CSS you would like yet.

On Tue, 19 Jul 2005, Jim Ley wrote:
> On 7/19/05, Ian Hickson <ian at hixie.ch> wrote:
> > On Tue, 19 Jul 2005, Dimitri Glazkov wrote:
> > > However, I think I am starting to see what you're seeing. Basically, 
> > > your approach is to provide all content in the DOM tree and then 
> > > flip switches as needed to present it to various media types. Right?
> > 
> > Right.
> This is flawed though, as it requires all the content to be in the page, 
> including media-specific content. CSS cannot remove content, CSS is 
> optional, consider:
> This page <span id="viewed">viewed</span><span 
> id="printed">printed</span> on ...
> This is a contrived example of how people want web-applications to have 
> media specific content - printed media particularly, although it would 
> also apply to web applications deployed over interactive voice systems, 
> but it shows how relying on optional methods to change content is simply 
> flawed.

We have the "hidden" attribute to remove this content from all media now, 
which can be used with the events to show them at the right time, if you 
are so inclined.
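
Applied to the viewed/printed example above: ship both spans with the 
hidden attribute set, and flip them per medium from the event handlers 
(a sketch; the browser wiring is shown only as comments):

```javascript
// Reveal exactly one of the two spans from the example, using the hidden
// attribute plus the beforeprint/afterprint events.
function showForMedium(spans, medium) {
  // spans: { viewed: element, printed: element }; medium: "screen" | "print"
  spans.viewed.hidden = medium !== "screen";
  spans.printed.hidden = medium !== "print";
}
// In a browser:
//   window.addEventListener("beforeprint", function () { showForMedium(spans, "print"); });
//   window.addEventListener("afterprint", function () { showForMedium(spans, "screen"); });
```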

On Tue, 19 Jul 2005, Jim Ley wrote:
> On 7/19/05, J. Graham <jg307 at hermes.cam.ac.uk> wrote:
> > On Tue, 19 Jul 2005, Jim Ley wrote:
> > > Someone will probably suggest CSS background-images as a suitable 
> > > for this as well, yet again ignoring the fact that CSS is _optional_, 
> > > and content-images must not be in background images as they simply 
> > > won't be seen without CSS or if background images are disabled.
> > 
> > Er.. script is (in practice) at least as optional as CSS since more 
> > people actually disable script than use alternate stylesheets.
> Yes and no, we can only make these changes to the content with script so 
> a script print solution is acceptable, for example in the above example 
> the static non-scripted document would not have any of this media 
> specific content in, it would only be added with script when 
> appropriate, that way the media specific nature can be relied upon - the 
> script is unobtrusive.  CSS however is optional at the fine grained 
> level.

Agreed, more or less.

On Thu, 21 Jul 2005, Jim Ley wrote:
> On 7/20/05, Matthew Raymond <mattraymond at earthlink.net> wrote:
> > Jim Ley wrote:
> > > This is another of the use cases I've used "enhanced" printing for - 
> > > I actually generally used ScriptX http://www.meadroid.com/scriptx/ 
> > > rather than simply the IE methods, but the events are all that's 
> > > needed.  Not paying for printing images, but swapping out images 
> > > with higher quality images suitable for print.
> > 
> >   Once again, I don't understand why you can't simply provide the user 
> > with a button on the web page that either calls up a printable version 
> > or clones the document so that the clone can be used for printing.
> The main reason is that my users have always said they want to use their 
> regularly printing mechanisms, not some link that opens a new page, and 
> then lets them do it, it's simply much too slow for any of the users.
> There are also significant problems - the server implementation for your 
> above is extremely complicated and slow - any user modifications need to 
> be serialised into a suitable format for supplying to the server, then 
> it needs to develop the equivalent view in the static format suitable 
> for printing, then it needs to return it - increasing server load, 
> implementation complexity and bandwidth usage.
> As you note the DOM clone is not a method that exists - because of that, 
> lets forget it, web-applications need to work in IE, we can't implement 
> that in IE.
> > a separate "printable version" page is sufficient.
> Such a solution has been regularly rejected in various projects, I don't 
> think it's likely to happen in the future either, much more likely that 
> IE only will continue to be required for the applications, or at least 
> the high-quality printing components of them.
> >   ActiveX is commonly used in intranet applications.
> Not at all, there's lots over the public web, not to a general public 
> audience of course, and web technology is very important in the non-web 
> world, it's also the area that IE still utterly dominates.
> >   Now you've completely lost me, use-case-wise. On an intranet, why is 
> > a printable version of the document not an acceptable workaround?
> In a nutshell - Because you can't print it by pressing ctrl-p, and all 
> the reasons above.
> >   It really doesn't matter though, because manipulating and printing a 
> > copy of the document is more effective anyways without disabling or 
> > changing part of the browser functionality.
> What ways?
> >   Here's a question for you to chew on: What happens if you want to 
> > print and the webmaster screwed up something in the onbeforeprint or 
> > onafterprint event? Will it effectively disable printing?
> Of course not, you'd just disable script and print...
> > What if it's an AJAX application and the UI of the app is hidden for 
> > printing but never restored?
> Just the same as happens with things like gmail when the UI disappears 
> or locks up or whatever, the user presses refresh and starts again. Of 
> course in the real world we also have QA procedures with testers making 
> sure this sort of stuff doesn't happen, javascript breaks sites in 
> millions of ways every day, more events don't make it more likely - they 
> often make it less, as fewer hacks are needed to hit the required 
> functionality.
> In any case, protecting stupid developers is not a good approach to spec 
> authoring, all you do is harm your intelligent developers, and your 
> users who lose functionality they want.

The spec now enables both of these approaches.

On Mon, 29 Aug 2005, Jim Ley wrote:
> On 8/29/05, Hallvord Reiar Michaelsen Steen <hallvord at hallvord.com> 
> wrote:
> > On 24 Aug 2005 at 12:16, Ian Hickson wrote:
> > > contentEditable needs scripting anyway, to offer things like "insert 
> > > <em> element here", etc.
> > 
> > Why must contentEditable depend on scripting? What if we make sure the 
> > wording of the spec allows non-scripting implementations?
> Please, no, a lot of the use cases for contentEditable are not full wysiwyg 
> editing, a lot of the ones I create allow only a minimal subset of 
> editing, and they do this by scripting, if you can only strong/make 
> link/italic/colour/insert image, then you get a simple editor that 
> allows for easy editing, but doesn't run into much tag-soup that needs 
> elaborate cleaning up.
> Whilst I agree the concept of contentEditable is not good, I don't think 
> it should be solved by trying to modify the existing behaviour; a textarea 
> with accept="text/html" is a much better way of meeting your use case.


> > My question is whether we could make contentEditable more useful for 
> > HTML/CMS authors by removing scripting requirements.
> I would be extremely unhappy, and would need to find ways of blocking 
> browsers that implemented contentEditable in this manner from providing 
> the functionality, that's not a good thing, but the risk of letting any 
> user/browsers attempts at html into the CMS would be worse.
> So whilst I agree with the need, please separate the browser provided 
> from the script provided interfaces.

I believe this is more or less now done.

On Tue, 30 Aug 2005, Jim Ley wrote:
> On 8/30/05, Hallvord Reiar Michaelsen Steen <hallvord at hallvord.com> 
> wrote:
> > On 29 Aug 2005 at 17:25, Jim Ley wrote:
> > > Please, no, a lot of the use cases for contentEditable are not full 
> > > wysiwyg editing, a lot of the ones I create allow only a minimal 
> > > subset of editing, and they do this by scripting, if you can only 
> > > strong/make link/italic/colour/insert image, then you get a simple 
> > > editor that allows for easy editing, but doesn't run into much 
> > > tag-soup that needs elaborate cleaning up.
> > 
> > If the UA makes tag soup rather than valid code, that is a bug in the 
> > UA and should be reported in the appropriate bug system.
> WYSIWYG editing has to produce "tag-soup", it's free of semantics, as 
> the wysiwyg cannot know the semantics intended by the user, for that 
> reason the only way is to limit the elements to those with only strong 
> semantics - links, images etc. - Colour something red, use a list - how 
> does the UA know the semantics are correct?

As you say, for links and lists and the like it's not hard. There's no 
reason to expect WYSIWYG editors to use subtle elements like <cite>.

> > If security and content filtering is a concern - well, you have to 
> > filter anyway, remember to never trust user input.
> It's nothing to do with security, it's to do with the semantic viability 
> of the resulting mark-up, and yes of course it gets validated, but 
> rejecting it and returning the user to the same flawed interface is not 
> going to help them solve their problem.
> > Also, it would be trivial to specify what functionality a UA should 
> > support in non- scripting mode, and what should only be activated 
> > through scriptable interfaces.
> If you feel it's trivial, that's fine, I don't particularly see it as 
> trivial.
> > Saying that contentEditable elements can become part of a form will 
> > give us the best parts from each of the worlds IMO.
> No, it gives us unpredictability, and unpredictability on thousands of 
> existing pages, giving a rubber stamp to contentEditable as existing now 
> makes sense.

Done, more or less.

On Tue, 30 Aug 2005, Jim Ley wrote:
> On 8/30/05, Maniac <Maniac at softwaremaniacs.org> wrote:
> > Jim Ley wrote:
> > >WYSIWYG editing has to produce "tag-soup", it's free of semantics, as 
> > >the wysiwyg cannot know the semantics intended by the user, for that 
> > >reason the only way is to limit the elements to those with only 
> > >strong semantics - links, images etc.
> >
> > That won't work. People use <blockquote> for indentation as we know.
> with contentEditable as implemented in IE and mozilla currently (with a 
> paste rich text block in place), it works as the only way users can 
> enter mark-up is with their limited controls provided by the page 
> author.
> > I also think that WYSIWYG semantics is essentially very hard anyway...
> Absolutely, which is why you constrain the problem to something 
> manageable.

Is the spec acceptable now in this regard?

On Tue, 30 Aug 2005, Jim Ley wrote:
> On 8/29/05, Matthew Raymond <mattraymond at earthlink.net> wrote:
> > By the way, I think Hallvord's asking for giving the UA vendors 
> > flexibility in what tools are available for HTML editing in a 
> > document, not asking for anything to be mandatory. Vendors are likely 
> > to implement what they feel is useful for the users anyway.
> Sure, but I'm saying they're not useful for the use cases of 
> contentEditable as it is used today, we've seen lots of people asking 
> what are the use cases for contentEditable, and I'm not completely sure 
> everyone is actually looking at if their proposals meet them. 
> contentEditable is not used for authoring of full pages, it's used for 
> simple authoring - wiki like markup for people who can't do wiki-markup 
> basically.  I've seen very few outside email apps that don't limit what 
> can be achieved.
> >   Servers must validate all input, regardless of whether or not it 
> > came from a form control.
> Validating the semantic appropriateness of mark-up is not 
> computationally feasible, however it's not possible to have a link or an 
> image semantically invalid, they are what they are.  This is the 
> difference, limiting what the user can create in their contentEditable 
> is important to maintain semantic appropriateness.
> For ensuring only elements and tags you want are in the CMS, that's 
> fine, but rejecting it, it will not be possible to give a meaningful 
> answer of why the edit failed to user of a wysiwyg control - "you used a 
> blockquote, please resend without a blockquote" - If a user's done 
> nothing but gone, give me a left margin in his editor, he'll not have a 
> clue on how to fix that or WTF a blockquote is.
> >   It would be nice to be able to explicitly define what markup can be 
> > used in a |contenteditable| element. Any suggestions how that can be 
> > defined?
> It might be nice, but I can't see how a user agent could really achieve 
> such a thing, what's it going to do change its edit bar for every user, 
> that would lose any consistency that would be gained by providing it in 
> browser.
> I think a rich textarea is a good idea, I just see it as distinct from 
> contentEditable - something with existing implementations and uses.

On Tue, 30 Aug 2005, Jim Ley wrote:
> On 8/30/05, Matthew Raymond <mattraymond at earthlink.net> wrote:
> >   You're talking about defining behavior for a semantic element. 
> > You're essentially dictating parts of the implementation of 
> > |contenteditable| to user agent vendors.
> Not at all, I'm saying the current implementation in IE is appropriate 
> for the use case, and moving away from IE's implementation will bring 
> new potential use cases into what's possible, but at the expense of 
> current use cases.
> I've not actually seen many people wanting full on editing of entire 
> web-pages in a web-page, most of the use cases involve editing of parts 
> of webpages in constrained fashion.
> > especially if IE doesn't have those limitations.
> I'm not asking for any changes to IE's implementation of 
> contentEditable, whilst not being good, it does meet the use case I'm 
> raising here.  I'm arguing against changing it to try and meet other use 
> cases - I agree entirely with your other post with your 3 suggestions, 
> leave contentEditable as is but well defined, add other elements.
> >   (Wouldn't they have to have hit a "blockquote" button on their 
> > toolbar to get that?)
> Who knows what UA's might do, it was just an example, I'm sure you can
> see the general issue.
> > > It might be nice, but I can't see how a user agent could really 
> > > achieve such a thing, what's it going to do change its edit bar for 
> > > every user, that would lose any consistency that would be gained by 
> > > providing it in browser.
> > 
> >   Oh, I think I get it. You don't necessarily want there to be 
> > toolbars and the like,
> No, I want contentEditable left as is, because not all the use cases and 
> delivered products of contentEditable are applicable to full spectrum 
> HTML authoring, they're limited to elements, no CSS, they're limited in 
> what elements they use etc.  A UA toolbar in a textarea 
> accept="text/html" would be a great idea.
> > Is a simple, straight-forward rich editing control too much to ask 
> > for?
> Absolutely not, but it's not the same thing as contentEditable, it has 
> different use cases, that's all I'm saying, we need both, not just one.

I haven't added a rich textarea yet. We still haven't completely nailed 
down contentEditable's feature set, so a rich textarea is likely something 
for a future version. In fact, I haven't much extended contentEditable 
beyond IE's feature set.

On Sat, 3 Sep 2005, Jim Ley wrote:
> On 9/2/05, Matthew Raymond <mattraymond at earthlink.net> wrote:
> > 1) Why wouldn't you want the content in the element to be inserted by 
> > Javascript when the page loads when you can just include the content 
> > in markup and hide it using CSS?
> Not particularly wanting to support the OP's issue - I don't see a 
> problem with the change to the content model of the a element to require content, 
> it's a good thing.  However styling a link to print away is not a good 
> idea, as it means those without css get a link which does nothing, of 
> course it's still possible with the method in the OP's post that the 
> user gets a nothing link, but that doesn't mean the link existing in the 
> source is a good idea.
> > 3) How does your original example even prevent the content from being 
> > viewed when printing?
> I don't think that's the purpose, I think the purpose is to ensure 
> there's not content in the page which is purely behavioural and does 
> nothing when script is not available.
> > 4) What prevents you from inserting the entire <a> element into a 
> > <span>?
> That is of course a very, very good question.
> >   The bottom line is that you need a much better use case.
> Absolutely!

I didn't follow that e-mail, unfortunately.

On Sat, 3 Sep 2005, Jim Ley wrote:
> On 9/3/05, Matthew Raymond <mattraymond at earthlink.net> wrote:
> > Jim Ley wrote:
> > > Not particularly wanting to support the OP's issue - I don't see a 
> > > problem with the change to the content model of the a element to require 
> > > content, it's a good thing.  However styling a link to print away is 
> > > not a good idea, as it means those without css get a link which does 
> > > nothing,
> > 
> >   Nothing in a print out does anything.
> The relevance to the button doing nothing, is the button on the page 
> that if script is enabled and appropriate vendor API's are available 
> will print the document, so the OP only adds the link once he knows 
> script and a window.print method are available, not after printing.
> >   How many user agents support Javascript but not CSS1? Does Lynx or 
> > some other text-mode browser support Javascript? I'll have to look 
> > into that...
> Loads, IE, Mozilla Family, Opera and Safari perhaps being the commonest 
> - ie CSS can be disabled in all of them distinct from disabling script.
> >   Makes sense. Personally, I'm wondering why you want to print from a 
> > link at all unless you want to perform a special print operation.
> Oh absolutely, it's silly (without having things like ScriptX to provide 
> real printing support in restricted environments) but you can't hide 
> scripted things via CSS, CSS and Script can be disabled seperately in 
> all modern browsers.


On Sun, 4 Sep 2005, Jim Ley wrote:
> On 9/4/05, Matthew Raymond <mattraymond at earthlink.net> wrote:
> > Jim Ley wrote:
> > > Loads, IE, Mozilla Family, Opera and Safari perhaps being the 
> > > commonest - ie CSS can be disabled in all of them distinct from 
> > > disabling script.
> > 
> >   You're not entirely correct about how these browsers support turning 
> > off CSS. IE actually doesn't support it.
> There's lots of ways of disabling stylesheets in IE without a universal 
> user stylesheet, expressions, behaviours on style/link/element etc.
> > Mozilla Firefox allows you to turn off styling for the screen media, 
> > but not the print media.
> I never realised FF was so flawed, thanks for correcting me.
> >   Of course, I asked about browsers that don't SUPPORT both Javascript 
> > and CSS, not about browser that allow you to turn it off if you so 
> > desire.
> I couldn't see the relevance of browsers which didn't support both, as 
> disabled CSS is equivalent for the purposes at discussion.
> >   Wrong. CSS can't be disabled for Firefox.
> Yes, but this is obviously a horrible bug, and will undoubtedly be fixed 
> in future versions, CSS 1 strongly recommends that users be able to 
> disable stylesheets, why ignore a strong recommendation from the 
> specification?

It appears to have to some extent been fixed, though possibly not to the 
extent required to match what you are describing.

On Mon, 5 Sep 2005, Jim Ley wrote:
> On 9/5/05, Matthew Raymond <mattraymond at earthlink.net> wrote:
> >   None of which are obvious to the average user.
> but quite obvious to people who use the IEAK to customise IE's for 
> corporate roll outs...
> > > I never realised FF was so flawed, thanks for correcting me.
> > 
> >   Is it a flaw?
> Definitely, if you disable stylesheets, you should disable stylesheets, 
> just disabling particular media ones is flawed.
> > > I couldn't see the relevance of browsers which didn't support both, 
> > > as disabled CSS is equivalent for the purposes at discussion.
> > 
> >   If having a Javascript-capable browser effectively means that you 
> > have a CSS-capable browser,
> but it doesn't mean that... CSS is optional, as is javascript, there's 
> no relationship between the 2 things.  Don't fall in the trap of 
> thinking we're specifying things for the standard configurations of 
> current browsers - that's how you make the web more inaccessible for 
> people.
> > then you don't need additional Javascript in order to hide the button 
> > when printing.
> This has never been what I've been discussing - the hiding of the button 
> is when script is disabled or printing functionality not there - it's 
> got nothing to do with hiding the button when it is printed, it's purely 
> to do with not having a control on the screen which does nothing.
> >   I can't find that recommendation in CSS1.
> It's at the end of section 7 
> http://www.w3.org/TR/REC-CSS1#css1-conformance
> > As a matter of fact, I don't think CSS1 has media types or 
> > print-specific properties.
> no, it's purely talking about disabling css.

This conversation appears to be going in circles now.

On Sat, 3 Sep 2005, Jim Ley wrote:
> On 9/3/05, S. Mike Dierken <mdierken at hotmail.com> wrote:
> > Destination anchors in HTML documents may be specified either by the A 
> > element (naming it with the name attribute),
> Yes, but it still shouldn't be empty, how can you link to part of a page 
> that's nothing?  The same ability to link to an a can be done by putting 
> something inside the a.  Non-Empty is a good thing in the spec.
> Indeed there are even implementations about that:
> <a style="position:absolute;top:100px;" name=chicken></a>
> <a href="#chicken">see chicken</a> 
> currently does nothing...

name="" is gone, and id="" is expected to be used instead. But I don't see 
why you couldn't link to an empty element.
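A minimal sketch of the id-based equivalent of the quoted name=chicken example:

```html
<!-- Sketch: id="" replaces name="" as the fragment anchor, and the
     target no longer needs to be an a element at all. -->
<span id="chicken"></span>
<a href="#chicken">see chicken</a>
```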

On Sun, 4 Sep 2005, Jim Ley wrote:
> On 9/4/05, S. Mike Dierken <mdierken at hotmail.com> wrote:
> > > 
> > > Yes, but it still shouldn't be empty, how can you link to part of a 
> > > page that's nothing?
> >
> > You mean 'why' rather than how?
> No, I do indeed mean how?  What does it mean to link to an element in a 
> visual UA which has no visual representation, or in an aural UA that has 
> no Aural representation.  I gave an example which existing browsers are 
> unable to cope with - an empty positioned A element. What algorithm 
> would you recommend using for the visual UA in that situation?

Even elements that aren't rendered have a position relative to other 
content on the page; I don't think it's that much of a problem.

On Sat, 3 Sep 2005, Jim Ley wrote:
> On 9/3/05, Simon Pieters <zcorpan at hotmail.com> wrote:
> > |If the a element has no href attribute, then the element is a placeholder
> > |for where a link might otherwise have been placed, if it had been relevant.
> > 
> > Why must a placeholder have contents?
> because a link requires contents.  There's no need for empty a
> elements, they add nothing that an author cannot otherwise do.

I agree in principle, but I haven't made it non-conforming, for the same 
reason empty <ol> elements are now allowed: greater author flexibility, 
especially when dealing with script-generated content.

> > I merely want a blank <a/> to be allowed, if the href attribute is not 
> > set.
> You've only provided one use case, and it's not a good use case, as you 
> can just as easily create the a element as have it in the source in the 
> first place, it's much better to do that.

Sure, but that's not a reason to just disallow it altogether.

> > I don't want to hide it, I want it to be non existent when scripting is
> > disabled or non-supported.
> It's not non-existent though, it's there...

That is true.

> > > 4) What prevents you from inserting the entire <a> element into a 
> > > <span>?
> > 
> > It's more code, and an empty <a/> is IMHO equally harmful as an empty 
> > <span/>, so I can use the a element directly.
> There's no need for an empty SPAN, you don't need any empty elements at 
> all, there is almost no more code (a createElement and appendChild 
> instead of a gEBI) and it's much neater, that's certainly not enough of 
> a difference to make a difference.

I dunno about _that_, inserting an element in the middle of a text node is 
non-trivial, especially when making it resilient to text changes.
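To illustrate why, a sketch of the text-node surgery involved (the id and string offset here are illustrative): the text node has to be split at the right offset before the element can go in, and any edit that shifts that offset breaks it.

```html
<p id="sample">Sample code: select all below.</p>
<script>
  // Sketch: insert an element into the middle of an existing text node.
  var p = document.getElementById("sample");
  // Split "Sample code: select all below." after the leading label...
  var tail = p.firstChild.splitText("Sample code: ".length);
  // ...then insert the new element between the two halves.
  var a = document.createElement("a");
  a.appendChild(document.createTextNode("select all"));
  p.insertBefore(a, tail);
</script>
```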

> > |  <p>Sample code: <a id="sel"></a> <textarea>...</textarea></p>
> > |  <script>
> > |   var elm = document.getElementById("sel");
> > |   elm.appendChild(document.createTextNode("select all"));
> > |   elm.href = "javascript:selectall()";
> > |   function selectall(){...}
> > |  </script>
> No, please stop suggesting href="javascript:" - this shows your
> scripting experience is quite limited, the above fails in a number of
> browsers simply because the href will result in a navigation, even if
> a null navigation, which will unselect the text anyway.   The total
> length of the code without the a in the source is also shorter, so
> this is not a use case.

Yeah, this is the kind of thing better done using a button rather than a 
link.
On Mon, 5 Sep 2005, Jim Ley wrote:
> On 9/5/05, Lachlan Hunt <lachlan.hunt at lachy.id.au> wrote:
> > Aankhen wrote:
> > > I suggest #2, which implies consistently treating the first argument 
> > > passed to the function as a single class name to match (this means 
> > > "foo bar" would always return no elements,
> > 
> > No, as already demonstrated, #2 does return matches in some cases.
> Surely that's just an implementation bug, rather than indicative of any 
> underlying problem in the spec?
> The ElementClassName file :
> className = className.replace(/^\s*([^\s]*)\s*$/, "$1")
> doesn't enforce that the classnames have no spaces in them, and results
> in it continuing to test the className attributes with a regexp
> containing the space.
> a quick untested fix would I think be:
> className = className.match(/^\s*(\S+)\s*$/) ?
> className.replace(/^\s*(\S+)\s*$/,"$1") : "";
> (also using \S rather than [^\s], but that's purely style of course)
> > > Special-casing "foo bar" and other values seems to be adding 
> > > complexity without much return.
> > 
> > It's not about special casing, it's about defining error recovery 
> > consistently between implementations.  As it's currently defined, 
> > ("foo bar" is, I believe, erroneous since each parameter represents a 
> > single class name.
> I think it is defined in the spec, it's erroneous, and your 
> implementation is just broken as above, I'd quite like it to be defined 
> as 3, mainly because a DOM binding with optional parameters isn't 
> language independent, and if it's an ECMAScript-tied DOM, then the DOM 
> needs to be a lot more ECMAScript like.

I'm not sure what you are referring to here.

On Mon, 5 Sep 2005, Jim Ley wrote:
> On 9/5/05, Lachlan Hunt <lachlan.hunt at lachy.id.au> wrote:
> > Jim Ley wrote:
> > > mainly because a DOM binding with optional parameters isn't language 
> > > independent, and if it's an ECMAScript-tied DOM, then the DOM needs 
> > > to be a lot more ECMAScript like.
> > 
> > I may not be understanding what you mean, but if optional parameters 
> > aren't language independent, shouldn't it be defined in a more 
> > language independent way, so that any non-ECMAScript languages can 
> > still implement this?
> Yes, DOM currently is language agnostic, however the optional className 
> parameters aren't compatible with languages which can't do that.  So as 
> defined now getElementsByClassName would not manage to do that.
> However there's a good argument for making an ECMAScript specific DOM, 
> as it would be more natural for the majority of ECMAScript programmers 
> who use it, but that would mean redefining most of the HTML DOM into 
> something neat in ES eyes.

getElementsByClassName() no longer uses the array form. Even if it did, 
WebIDL now defines this in a language-neutral fashion.

On Mon, 5 Sep 2005, Jim Ley wrote:
> On 9/5/05, Ian Hickson <ian at hixie.ch> wrote:
> > > That's why I propose to make this function use exactly the syntax 
> > > that class attribute uses. getElementsByClassName("bar foo") should 
> > > match class="foo bar", class="bar baz foo", etc.
> > 
> > I fear that this would be rife with implementation bugs, as opposed to 
> > requiring the author to pre-split the search input, which guarantees 
> > that the UA does not have to process the search input in any way, only 
> > having to deal with the actual class attribute.
> Yet it's the exact same processing as the UAs already have for the 
> class attribute in HTML?  A UA already needs to tokenise a 
> whitespace-separated string into classnames, what's different about this 
> whitespace-separated string?

The current design was reached based on feedback from implementors, so 
hopefully it won't be as bug prone as you fear.
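The pre-split contract discussed in the quoted message can be sketched in plain script (the helper name is hypothetical, and this is a model rather than the spec algorithm): the author pre-splits the search input once, and the UA only has to tokenise each element's class attribute, which it already does for class matching.

```javascript
// Sketch of the matching model (hypothetical helper, not the spec
// algorithm): the author pre-splits the search input; the UA only
// tokenises the element's class attribute.
function matchesClasses(classAttr, wantedNames) {
  // Tokenise the class attribute on whitespace, dropping empty tokens.
  var tokens = classAttr.split(/\s+/).filter(function (t) {
    return t.length > 0;
  });
  // Every requested class name must be present on the element.
  return wantedNames.every(function (name) {
    return tokens.indexOf(name) !== -1;
  });
}

// Author-side pre-splitting: "forum post" becomes ["forum", "post"].
var wanted = "forum post".split(/\s+/);
```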

On Fri, 3 Feb 2006, Jim Ley wrote:
> On 2/3/06, Gervase Markham <gerv at mozilla.org> wrote:
> > This seems like a sensible change. Call it getElementsByClassNames() 
> > would make it obvious that if you supply multiple class names, you get 
> > only elements with all those names. And it would be a reasonably 
> > obvious reduction that if you just supply a single name, you would get 
> > all elements which had that one class name.
> Rather than talk about technical details, can we talk about the actual 
> use cases please, working groups keep on creating things which need 
> implementing, testing etc. without once giving the use case.  This 
> thread is now 21 messages old and there's not one use case which is 
> fulfilled by a getElementsByClassName.  All the ones suggested are 
> fulfilled by the ability to attach events to a particular class name.

A sample use case would be a forum's "mark all as read" feature, which 
might want to find all the forum posts on the page to mark them all as 
read, where the posts are elements with class="forum post".
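Under the design that ended up in the spec, where the argument is itself a set of space-separated tokens that must all match, a sketch of that feature might read (the markup and the "read" class are illustrative):

```html
<article class="forum post">…</article>
<script>
  // Sketch: getElementsByClassName("forum post") returns the elements
  // carrying *both* tokens; mark each one as read.
  var posts = document.getElementsByClassName("forum post");
  for (var i = 0; i < posts.length; i++) {
    posts[i].classList.add("read"); // illustrative class name
  }
</script>
```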

On Fri, 3 Feb 2006, Jim Ley wrote:
> On 2/3/06, Gervase Markham <gerv at mozilla.org> wrote:
> > Jim Ley wrote:
> > > Rather than talk about technical details, can we talk about the 
> > > actual use cases please, working groups keep on creating things 
> > > which need implementing, testing etc. without once giving the use 
> > > case.  This thread is now 21 messages old and there's not one use 
> > > case which is fulfilled by a getElementsByClassName.  All the ones 
> > > suggested are fulfilled by the ability to attach events to a 
> > > particular class name.
> >
> > I thought we were discussing it because it was in the spec? :-)
> Firstly it has to justify its inclusion in the spec.  Until we know what 
> it's _for_ how can we possibly design it?  Or comment on any individual 
> design?


> > I know nothing of this "attaching events to a class name" of which you 
> > speak. Can you give me a reference to a document or proposal 
> > describing it?
> It's the one use case described in this mailing list,
> See e.g. 
> http://listserver.dreamhost.com/pipermail/whatwg-whatwg.org/2006-January/005434.html
> the document of course shows no use cases at all.

Indeed, the spec is not the place to document the rationale for the spec.

On Fri, 3 Feb 2006, Jim Ley wrote:
> On 2/3/06, Gervase Markham <gerv at mozilla.org> wrote:
> > Jim Ley wrote:
> > >  the document of course shows no use cases at all.
> >
> > Is there some doubt that the ability to tag an arbitrary set of 
> > elements and later easily get an array of those elements is a useful 
> > feature for web development?
> I've yet to hear of an actual reason to do so, people keep saying it 
> seems useful...
> > If you would like use cases, I present all of the web pages currently 
> > using a JS implementation of getElementsByClassName based on 
> > getElementsByTagName("*") and some manual class name inspection logic.
> Yes, but they're all using it to attach events to every one of the 
> class, which is why you have to look at use cases, the reason they're 
> doing it is not because getElementsByClassName is missing, but because 
> addEventListenerToClass or -moz-binding etc. are missing.
> It's the classic mistake of looking at making the workarounds easier, 
> when you should be looking at making the underlying use easier.

I certainly agree that XBL would be a better way of adding event listeners 
to custom widgets.

On Fri, 3 Feb 2006, Jim Ley wrote:
> On 2/3/06, Gervase Markham <gerv at mozilla.org> wrote:
> > Jim Ley wrote:
> > > Yes, but they're all using it to attach events to every one of the 
> > > class, which is why you have to look at use cases, the reason 
> > > they're doing it is not because getElementsByClassName is missing, 
> > > but because addEventListenerToClass or -moz-binding etc. are 
> > > missing.
> >
> > But why implement addEventListenerToClass() when you could implement 
> > getElementsByClassName(), which has a far more general utility? As 
> > soon as a single non-event-listener-related application comes along, 
> > you find you've implemented the wrong thing.
> Er, no the use case people have is that they want everything that has 
> class X to respond to a particular event, if you model that with 
> getElementsByClassName then you cannot change a class on an element and 
> have it respond, without re-running the attachment, and manage the fact 
> you've already attached it to some classes etc.
> It does not simplify the situation at all.  It can also only happen once 
> the element with the class is available, that fails the consistency of 
> UI axiom, since your element will respond differently after the function 
> has run.

Indeed; XBL is the solution for that.

> > Here's a use case, then: the about:license document I just checked 
> > into the Mozilla codebase. When the page is called with the spelling 
> > "about:licence" instead of "about:license", I use 
> > getElementsByClassName() to search for elements with the class 
> > "correctme", and do a search and replace within them to correct the 
> > spelling. However, I can't correct it everywhere as I shouldn't be 
> > mutating legal documents. But I can do it in commentary, titles, 
> > contents and so on.
> What an extremely odd use case, but it is at least a use case, thank you.  
> I'm not sure it's really one significant enough to warrant implementing 
> it given the large number of other methods of achieving the same 
> spelling correction.  Especially as the majority of them can be done 
> without requiring javascript at all.

Could you elaborate on how you would do it without scripting?

On Fri, 3 Feb 2006, Jim Ley wrote:
> On 2/3/06, Gervase Markham <gerv at mozilla.org> wrote:
> > As an aside, I'd be interested in hearing about any JavaScript-less 
> > methods (that don't involve marking up every instance of the word; 
> > this doesn't work, as some are e.g. in href attributes.)
> I was imagining your build environment making similar sorts of changes to 
> your script and the two resulting copies existing.

Moving the script to the build system doesn't remove the need for scripting.

On Sat, 4 Feb 2006, Jim Ley wrote:
> On 2/4/06, Brad Fults <bfults at gmail.com> wrote:
> > I can't believe that you're so insistent upon this extremely narrow 
> > set of use cases and that there aren't any other popular use cases for 
> > getElementsByClassName().
> It's the only one that's ever been voiced without the extreme prompting 
> now generating some.  The WHAT specifications (like the W3 
> specifications, indeed most specifications) are creating features, and 
> never voicing why they're necessary, the use cases are simply not made - 
> there probably are use cases for them, but they _must_ be voiced, 
> otherwise you simply cannot review them.

Use cases are carefully examined for HTML5's development.

> > The requirement for a loaded document is to be expected when one 
> > wishes to manipulate the constructed DOM from that document.
> No such requirement exists in web-browsers today, why are you suddenly 
> inventing it?

Not sure what this means.

> > I want my designer to be able to specify an arbitrary set of elements 
> > in the markup for a web app that are to be "widgets". Now if the web 
> > app is sent out to a client that supports function X, I want to 
> > construct this X-widget around each of the elements in the set 
> > previously defined.
> This use case is fulfilled by the -moz-binding and similar proposals, it 
> does this more successfully (as it does not lead to the inconsistent UI 
> problem). I don't see the point in having both className selectors and 
> -moz-binding XBL approaches, but thanks for the details.

I agree that XBL would be good for this.

> > Now that we can get past "why" we're specifying such a function, I 
> > feel the need to reiterate the constraints on its specification,
> but we can't know the constraints until we know why we're specifying 
> it...


> > 2. getElementsByClassName() must be *binding language* agnostic. That 
> > is, we cannot assume that it will only be used in JS.
> I don't agree that it's necessary to do this, one of the historic 
> problems of the DOM in the ECMAScript context (and others) is that 
> individual language strengths are not gained.  There is nothing 
> obviously wrong with having language-specific APIs.


> > If getElementsByClassName() receives an array (synonymous with list), 
> > which all binding languages I am aware of are capable of supplying,
> Do some binding languages not require the type of the parameter to be 
> specified as either an array or a string?
> I do not personally see the use case for a class specific operator, 
> either a CSS Selector method, that selects on any CSS selector, or 
> nothing seems appropriate.

We have that too now, actually.

On Sat, 4 Feb 2006, Jim Ley wrote:
> On 2/4/06, Brad Fults <bfults at gmail.com> wrote:
> > I fully admit the possibility that this may be better accomplished 
> > with some other theoretical and/or vendor-specific technology, but you 
> > again missed the core point.
> the core point is we're inventing something new to meet a use case, you 
> invent the best thing to meet the use case, you don't invent things that 
> allow you to write loads more script to fulfil the use case.  Of course 
> if there were lots of use cases then the general is good.
> The problem is people on this list are continually inventing methods, 
> without considering the use cases, hopefully by forcing people to voice 
> the use case and defend it against superior, already implemented 
> technologies like -moz-binding will mean that people will give the 
> information first so we are actually able to evaluate their proposals.
> You cannot evaluate a proposal without knowing the use case.


> > If they are useful, then getElementsByClassName() is also useful 
> > because it gives an author *more control* over the DOM for solving the 
> > *same types of tasks*.
> That is not immediately apparent, and neither is it apparent that a 
> classname specific shortname is worthwhile when a CSSSelector one would 
> be more appropriate.  You don't continually add methods, methods are 
> complexity, they need writing, they need testing etc.  you have to have 
> a reason to add a method.


> > The reasons why XBL is not currently an acceptable substitute are 
> > numerous, including its extremely limited implementation,
> So something with no implementation should be taken over something with 
> an existing implementation, that's pretty odd.

That is indeed an argument I've often had trouble with.

> > its separate and higher learning curve, and the fact that it doesn't 
> > hurt anyone to have two methods to accomplish similar tasks,
> It absolutely does hurt to increase complexity of implementations, 
> specifications and tests!

That is true, though one need not get upset about it. :-)

> > > I do not personally see the use case for a class specific operator, 
> > > either a CSS Selector method, that selects on any CSS selector, or 
> > > nothing seems appropriate.
> >
> > With all due respect, whether you personally see the use case for a 
> > specific method to be defined for use by all web authors is largely 
> > irrelevant.
> Well everyone's opinions on the list are largely irrelevant, the WHAT 
> individual has sole discretion of what goes in.

Am I the "WHAT individual" you are referring to? What a strange title.

> > If other authors and designers do see use cases and have concrete 
> > examples of where this function would add a great deal of power and 
> > flexibility to their tool set, it is worth consideration and design.
> But a CSSSelector method has more power, not less, and adds little in 
> implementation complexity surely?

There are pros and cons to both. For various reasons, we've ended up with 
both, as it happens.

> > It is unfair to the rest of the contributors to this specification and 
> > web authors in general to hold up the design and/or implementation of 
> > a (generally-agreed) useful tool due to simple personal differences in 
> > taste or opinion.
> I'm not holding up anything?  This is not a democracy.  If you want a 
> quality specification though, you engage in debates with people who have 
> opposite ideas.


> People have shown enough use cases that indicate the ability to select 
> elements by a css selector is a useful function - they've not made the 
> case to me at all that a class name specific one is useful though.

getElementsByClassName() is similar to getElementById() and 
getElementsByTagName() in being special-cased variants of querySelector() 
optimised for specific subsets of the general functionality. It turns out 
that for all three, there are a number of cases where the special-cased 
version is more useful. For example, the HTML5 spec itself would use 
getElementsByClassName() to find all the elements with class=XXX if we 
were to provide some sort of UI to jump to them.
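
The special-casing relationship can be made concrete: a class lookup corresponds to a simple compound selector, so (ignoring selector escaping and the live-versus-static difference between the two return values) the calls in the trailing comment would select the same elements. The helper is illustrative only:

```javascript
// Build the compound selector equivalent to a getElementsByClassName()
// argument: "foo bar" matches elements carrying *both* classes, which
// corresponds to the selector ".foo.bar". (Selector escaping omitted.)
function classesToSelector(classList) {
  return classList.trim().split(/\s+/).map(function (c) {
    return "." + c;
  }).join("");
}

// document.getElementsByClassName("foo bar")  -- live collection
// document.querySelectorAll(".foo.bar")       -- static list
```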

> > Ian has already indicated that the specification of a method to 
> > collect DOM elements based on a CSS selector is best left to the CSS 
> > WG.
> Then why isn't className?  or why don't we just wait for that, having 
> both cssselector and classname is needless verbosity.  Whilst 
> implementation of ClassName is trivial if you have CSSSelector, increasing 
> the memory footprint of a DOM is not useful to anyone, and a severe 
> limitation on getting it on many devices.

The incremental increase here is minimal, but in general I agree. As it 
happens, though, user agent implementors have jumped at implementing this 
particular feature.

On Sun, 5 Feb 2006, Jim Ley wrote:
> On 2/5/06, James Graham <jg307 at cam.ac.uk> wrote:
> > Jim Ley wrote:
> > > So something with no implementation should be taken over something 
> > > with an existing implementation, that's pretty odd.
> >
> > Surely you can see that's an insane argument given the relative 
> > complexity of implementing the _entire_xbl_spec_ vs. implementing a 
> > single DOM method.
> Of course, I wasn't actually making the argument.

It certainly sounded like you were...

[snip off-topic e-mails about the Selectors and XPath APIs]

> > I do however know that arguing "we shouldn't implement feature x 
> > because more complex features y and z provide a superset of x's 
> > features" is wrong if a cost benefit analysis shows that x is better 
> > "value for complexity" than y or z.
> Of course it should!  but remember also the cost of not doing x is 
> relevant, and the likelihood of y or z being implemented anyway.  
> There's little point in requiring feature x be implemented if feature y 
> and z are being implemented anyway, that's just bloat.

That can be the case, yes.

On Fri, 3 Feb 2006, Jim Ley wrote:
> On 2/3/06, Michel Fortin <michel.fortin at michelf.com> wrote:
> > So to generalize the use case, when I want to attach an event to a 
> > child element or an element linked by any other mean to the element 
> > having that class, I can't use addEventListenerToClass.
> So this shows that addEventListenerToCSSSelector is really what you want 
> so you can attach it to A's that are children of the class doesn't it?

That sounds like a remarkably specific API. I think XBL or just a generic 
selection of elements would be better.

On Sat, 4 Feb 2006, Jim Ley wrote:
> On 2/4/06, Lachlan Hunt <lachlan.hunt at lachy.id.au> wrote:
> > For example, if an author marked up dates in their document like this 
> > (due to the lack of a date element)
> >
> > <span class="date">2006-02-03T01:30Z</span>
> A nice use case, and one well met by XBL including the currently 
> implemented -moz-binding, met much superiorly as that has quite 
> interesting effects for the screen reader user who is in the middle of 
> reading one of the dates...

I don't think we'd want to require authors to use XBL for such a simple case.

On Sat, 22 Oct 2005, Jim Ley wrote:
> On 10/22/05, Ian Hickson <ian at hixie.ch> wrote:
> > On Fri, 21 Oct 2005, S. Mike Dierken wrote:
> > > Oh, that really shouldn't be done via POST. Clicking a link should 
> > > be safe and sending a POST as a side-effect is not safe.
> >
> > GET means that you can do it again without affecting anything. In the 
> > case of tracking, you can't -- the very act of contacting that 
> > tracking URI can cost someone money. Hence POST. (This is another 
> > advantage of ping over redirects, come to think of it.)
> No!, because just because someone has done it again doesn't mean that no 
> money should change hands.

If the user follows the link again, the UA should repost, sure. My point 
is that the money shouldn't change hands e.g. just because an intermediary 
cache decides to rewarm its cache and refetch everything fetched in the 
last hour.

> If I provide a site that ends up linking to X, and people fail to 
> remember where X is but instead use the adverts on my site, I'm still 
> providing the service.

Indeed, and that should POST fine.

> For me this is simply not going to work, adaware, norton internet 
> privacy, and similar etc. will soon reconfigure the browser to stop the 
> ping's, as it's irrelevant to the user experience users won't care, 
> however major search engines which derive significant income from 
> tracking will have to revert to detection methods

Why do these tools not block the current tracking pings also?

> The previously proposed "is ping supported" script is definitely not 
> sufficient for even coping with today's browsers let alone future ones 
> which disable the tracking.

True, we'd probably end up using browser sniffing.

> Redirects work because the user cannot get the information they're 
> paying for without being tracked (except of course in the case of many 
> google adverts which leave the destination url in the tracking uri which 
> means google loses the money for those clicks.)  As you're now making it 
> trivial for users to get their information for free I can't see what 
> possible advantage this is to anyone trying to track reliably.

The advantage is two-fold; first of all, it allows users to opt-out of the 
tracking, which is an advantage to publishers who care about their users' 
preferences (this is the main reason Google is interested in this 
feature), and secondly, it reduces the latency impact of tracking 
dramatically, making for a faster user experience.
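
For concreteness, ping="" takes a set of space-separated URLs; when the link is followed, the UA POSTs to each of them alongside the navigation, which is what makes re-following the link re-post. A sketch of the attribute parsing (resolution against the document's base URL is omitted):

```javascript
// Split a ping attribute value into its URLs: the attribute is a set
// of space-separated tokens, so split on ASCII whitespace and drop
// empty strings.
function parsePingURLs(pingAttr) {
  return (pingAttr || "").split(/[\t\n\f\r ]+/).filter(Boolean);
}

// e.g. <a href="http://example.com/" ping="/track1 /track2"> yields
// two ping URLs, each fetched with a POST when the link is followed.
```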

> As I and the site know this is going to under-account for my clicks, I 
> fail to see how a 3rd party ad broker using this could survive any sort 
> of audit of their service, they would simply not be providing an 
> accurate service.

Advertising customers aren't going to complain if their publishers 
under-charge them.

On Thu, 19 Jan 2006, Jim Ley wrote:
> On 1/19/06, Tyler Close <tyler.close at gmail.com> wrote:
> > I think it would be fair to characterize current techniques for link 
> > click tracking as "opaque". In contrast, the proposed "ping" attribute 
> > explicitly declares in the HTML what is intended and how it will 
> > happen. Perhaps the right way to explain the "ping" attribute is as 
> > providing transparent, or explicit, feedback; shining a light on the 
> > dark corners of click tracking. If it is explained that the feature 
> > will make link click tracking explicit, controllable and more usable, 
> > I think the user base will react more positively.
> No, they'll just disable it, as it does them directly no benefit and has 
> a cost, so if you educate them enough to make a decision, they will not 
> decide to be tracked.

There are advantages to sending hyperlink instrumentation data as obtained 
by ping="". For example, users get a measurably better user experience 
when they opt-in to the Search History feature, which adjusts Google 
rankings on a per-user basis based on past clicks.

> Since the main use of tracking has a direct economic cost to many 
> parties the sites will then return to using the established successful 
> methods for tracking, no-one will gain and browsers would've wasted lots 
> of time that could've been spent on more productive features.


On Fri, 20 Jan 2006, Jim Ley wrote:
> On 1/19/06, Tyler Close <tyler.close at gmail.com> wrote:
> > On 1/19/06, Jim Ley <jim.ley at gmail.com> wrote:
> > > No, they'll just disable it, as it does them directly no benefit and 
> > > has a cost, so if you educate them enough to make a decision, they 
> > > will not decide to be tracked.
> >
> > Why hasn't this happened to the HTTP Referer header?
> Because no-one's ever attempted to educate people enough to make a 
> decision.

I don't think that's true at all.

> > I think an economic analysis of the scenario is a valid approach. 
> > Could you spell out your argument in more detail? For example, after 
> > I've submitted a search request to Google, what is the economic cost 
> > to me of letting Google know which result I selected? What is the 
> > economic benefit to me of providing this information to Google?
> You're now discussing a very minor use case

Analytics is not a minor use case, in practice.

> the main use case is in advert tracking, the economic case here is 
> clear, accurate information is required by the people paying for the ads 
> to be shown and those showing the adverts - if you're allowing an 
> ad-service to show adverts on your page, are you willing to accept that 
> ad-service using a disableable method of tracking what to pay you?

It depends whether your greed outweighs your respect for your readers, I 
guess. Or to put it another way, whether you care more about short-term 
gains or long-term gains, since not respecting your users tends to hurt 
long-term growth.

> The use case of tracking what you click to leave a site is that it has 
> no direct benefit to the user whatsoever, they gain nothing at all, and 
> there's the slowness cost - indeed the site may be slower still if they 
> use redirect methods, but that's the sort of cost that would make the 
> tracking uneconomic as it will annoy users.

Making the tracking faster is indeed a direct win for users.

> > I get more value in the future for revealing my search terms, in terms 
> > of better query results.
> People don't make the same search more than once

Without wanting to contradict you, I should point out that human behaviour 
in this field is somewhat counter-intuitive.

> google already knows what the most popular search result on a particular 
> term is and without knowing what it was you were actually looking for 
> (most search terms don't express this very well) and what happened when 
> you arrived at the site they cannot know how useful the link truly was.

Yet, this information has been used to materially improve search results.

> but mostly that's a minor use case compared to the main reason for 
> leaving site tracking, and that use case the ping proposals abjectly 
> fails to meet.

I disagree on both of these counts.

On Sun, 30 Oct 2005, Jim Ley wrote:
> On 10/30/05, Anne van Kesteren <fora at annevankesteren.nl> wrote:
> >  <http://annevankesteren.nl/test/xml/xhtml/style-element/005>
> >
> > (Mozilla seems to treat elements and comments differently as shown in 
> > 001, 002, 003 and 004. Both testcases all show green in Opera and 
> > Mozilla.)
> Is this not a bug? I can't see anywhere in any XHTML specification that 
> states that html:style children of html:style elements should be treated 
> as if they were stylesheets.

This is now defined in HTML5.

> > That should probably be reflected in the description of the html:style 
> > element (that elements with known semantics are parsed).
> "reflected in the description"  surely the bugs in the UA's should be 
> fixed, there's no-one relying on such documents, so it's not something 
> that needs codifying to make the web more consistent.

If it doesn't matter, then being consistent with implementations leads to 
the least amount of work spent by humans all around.

On Wed, 30 Nov 2005, Jim Ley wrote:
> On 11/30/05, Boris Zbarsky <bzbarsky at mit.edu> wrote:
> > What should XMLHttpRequest.status return on connection timeout?  Ian 
> > and I were talking about this, and it seems like "502" is a good 
> > response code here...
> >
> > See https://bugzilla.mozilla.org/show_bug.cgi?id=304980
> I understood the aim was to mimic IE's implementation? (Which will return 
> a 5-digit code in the 12xxx range from WinInet for errors not returned 
> by a server.)
> Of the 5xx 504 is more justifiable than 502, as then you can pretend the 
> browser is simply a proxy which has timed out, 502 which specifically 
> mentions an invalid response doesn't sound a good idea.
> I believe Safari now has a 1 year timeout, so that could be an 
> interesting test to run on a release build :-)

XMLHttpRequest is now dealt with by another spec.

On Tue, 20 Dec 2005, Jim Ley wrote:
> On 12/20/05, Maciej Stachowiak <mjs at apple.com> wrote:
> >> Um, they shouldn't be able to. Or at least, in many UAs they can't.
> >
> > Do you know of UAs that will prevent a file: URL document from loading 
> > another file: URL in a frame or iframe? Or apply any restrictions to 
> > scripting access to the resulting document. I don't know of any that 
> > will.
> Well other than Internet Explorer 6 on XP service pack 2 of course? 
> Although there are of course still ways of doing it.
> > I don't think reading /dev/mouse will specifically do anything bad, 
> > but I see your point. For file: in file: inclusion I think it would be 
> > wise to exclude certain system paths such as /dev and /etc. I think 
> > this may be done already.
> This shouldn't be specified in the specification, what is safe to be 
> included can only be known to the user agent as it's wholly specific to 
> the platform and configuration of the platform.

I've left the details of file: out of HTML5.

On Sun, 1 Jan 2006, Jim Ley wrote:
> On 1/1/06, Sander Tekelenburg <tekelenb at euronet.nl> wrote:
> > 
> > It could offer "shortcuts" (key combo's) to standard LINKs like next, 
> > previous, help, search, home. Etc.
> next/previous - most pages on the internet don't have a meaningful 
> next/previous state, and those that do are generally only navigated via 
> forms.  And the few places you do see them, people don't have a problem 
> navigating it and only want to do so after consuming the page (e.g. a 
> multipage article)
> help - how many sites does this apply to?
> so search - marginally relevant and home are the only ones that are 
> really used regularly - your Etc. is a pretty big etc. as there really 
> aren't any more.
> > (As to us "failing": 5 years ago only lynx and iCab offered LINK 
> > support.
> And IE of course, okay through an extension, but still, that's the same 
> as with other current UA's you're counting.
> LINK's have certainly failed, there is simply not enough consistency in 
> webpages to meaningfully derive labels that have meanings which cross 
> these types - take a few popular pages, say a mapping service, a 
> wikipedia article, a book on a bookshop page, an auction page, a 
> personal photo page, and create the LINKs here that would provide the 
> consistency to make it useful to users.

This changed in the spec recently to make the spec mostly neutral on this topic.

On Sun, 22 Jan 2006, Jim Ley wrote:
> On 1/22/06, Anne van Kesteren <fora at annevankesteren.nl> wrote:
> > However, I'm still not sure what problem is being solved here.
> Me either, I know it gets boring me saying it, but one of the problems 
> with working groups of all denominations, is the focus on the technical 
> features rather than the use cases.
> What I'm guessing the problem is
> "I want to produce a web-application that exposes certain features in a 
> way consistent with other web-applications and consistent with the 
> underlying window manager."
> I don't think that's solvable with simple measures like this, nor am I 
> actually convinced of the utility in solving it, if you want to create 
> Web Applications that go-outside the document, then your own mozilla 
> container, or a Mac desktop widget thing, or a Opera widget thing, or a 
> Zeepe.com widget thing is the approach to go down.

I've no idea what the topic of this e-mail was.

On Sat, 14 Jan 2006, Jim Ley wrote:
> On 1/14/06, Karoly Negyesi <karoly at negyesi.net> wrote:
> > A getElementsByCSSSelector IMO would be great and introduces minimal 
> > plus burden on browsers -- they have CSS selector code anyways.
> >
> > It's very unfriendly that I can do magic with CSS2/3 selectors and 
> > then I need to translate them myself if I want to change them 
> > on-the-fly.
> Why would you want to change the content of all elements that matched a 
> particular selector?
> Could you explain some use cases?

Gerv gave an example earlier.

On Sat, 14 Jan 2006, Jim Ley wrote:
> On 1/14/06, Julien Couvreur <julien.couvreur at gmail.com> wrote:
> [use cases for CSS selectors]
> > One of the main uses is to bind behaviors to elements. This allows for 
> > a clean markup with a well separated logic.
> Then it fails miserably at the job, HTML documents render progressively, 
> behaviour also needs to render progressively, getElementsByCSSSelector 
> fails at this.
> There could indeed be useful methods for achieving this functionality 
> sort of thing - for example something along the line of events bound at 
> the css selector level, something like - 
> cssSelectorAddEventListener(".chicken td","DOMActivate", ...) maybe? but 
> your proposed DOM method fails to meet your described use case - do you 
> have any others, some that actually succeed, rather than being a half 
> way measure based on current practices that are stuck due to the limited 
> nature.

On Sat, 14 Jan 2006, Jim Ley wrote:
> On 1/14/06, liorean <liorean at gmail.com> wrote:
> > On 14/01/06, Jim Ley <jim.ley at gmail.com> wrote:
> > > Could you explain some use cases?
> >
> > For the very same reason you might want DOM to provide an XPATH 
> > engine, TreeWalkers or NodeIterators - To get efficient host-native 
> > filtering of the node tree. In this case, filtering based on a scheme 
> > used in related technologies. Preferably returning a DOMCollection 
> > instead of a static array or matches.
> The use case for Regular Expressions are clear - I want to detect if a 
> string is something that is probably a date in a particular format etc.  
> The equivalent for a DOM is not clear - if your argument is purely 
> efficiency - which could be a good one certainly - then you still need 
> use cases that justify the underlying technique - I want a nodelist of 
> all things in a document which match a particular CSS class is not an 
> obvious one to me - every use case I can see for it is better to simply 
> change the CSS class itself.

Another example use case for selecting elements by class name would be 
something like Reddit offering a UI that expanded or collapsed a bunch of 
subthreads at once.
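
That collapse-all use case reduces to one pass over the class-selected elements. A sketch over plain objects standing in for what getElementsByClassName() would return; the `collapsed` property is invented for illustration, where a real page would toggle a class or style:

```javascript
// Set the collapsed state on every subthread element in one pass.
function setCollapsed(elements, collapsed) {
  elements.forEach(function (el) {
    el.collapsed = collapsed; // stand-in for toggling a CSS class
  });
}
```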

On Tue, 17 Jan 2006, Jim Ley wrote:
> On 1/17/06, Hallvord Reiar Michaelsen Steen <hallvord at hallvord.com> 
> wrote:
> > Kayak.com is in trouble because they've set a maxlength that is 
> > smaller than some of the data the script sets input value to. (I'm 
> > sending them some feedback about that). However, the site shows an 
> > interesting problem: the UA (testing in Opera 9) does not submit the 
> > form because of the validation problem, but the onsubmit event has 
> > been called, meaning the site has disabled its submit button. Hence, 
> > the user has no way to fix the data and resubmit (even if she actually 
> > understands what the error is).
> >
> > Should we really fire onsubmit if the UA prevents submitting the form? 
> > Button-disabling-on-submit scripting isn't exactly rare..
> I think you have to fire onsubmit, there are also lots of other things 
> people do onsubmit - copying information into hidden fields, calling 
> tracking scripts etc.  It's really an issue with the user agent.
> The problem here is actually a problem of backwards compatibility, 
> current user agents do not stop submission when maxlength is too long.
> This means valid content (HTML 4.01 doesn't say that having a value 
> longer than maxlength is an error) won't work in user agents.
> You should implement the behaviour only for documents identified as a 
> Web Forms 2.0 user agent.

That's the wrong solution, since new features will be used in old user 
agents. However, we've solved this problem in a more backwards-compatible 
way now.

On Wed, 18 Jan 2006, Jim Ley wrote:
> On 1/18/06, Hallvord Reiar Michaelsen Steen <hallvord at hallvord.com> wrote:
> > I'm not suggesting that we shouldn't fire onsubmit at all, only that 
> > perhaps it would be more backwards-compatible if onsubmit took place 
> > after the UA validation.
> But it still doesn't fire if the useragent prevents validation? That 
> would certainly be safer than as I read your previous proposal, I'm not 
> confident it wouldn't break some legacy pages.
> > I'm not sure if making that impossible would be a big limitation.
> Certainly not for future scripts, but the problem is the authors who've 
> never heard of Web Forms 2.0...

Indeed, we don't want to be changing the behaviour of old documents.

> > > You should implement the behaviour only for documents identified as 
> > > a Web Forms 2.0 user agent.
> >
> > I think we've been there, discussed that and voted against using any 
> > xmlns or DOCTYPE tweaks to distinguish a document as a WF2 one.
> Voted???

I think Hallvord was speaking more metaphorically.

> > The only thing I want to discuss in this thread, is: should firing the 
> > onsubmit event and UA validation happen in reversed order to ensure 
> > backwards compatibility with scripts that believe a form has been 
> > submitted when it hasn't due to a validation error?
> Couple of points to note off the top of my head:
>   WF2 aware scripts need to know that validation happened and failed.
>   Legacy scripts need to know if a form was submitted - you can only
> do this by not ever suggesting that it had been as far as I can see
> which means not firing onsubmit event.
> So I would certainly agree that firing onsubmit after form validation
> is the only way to ensure backwards compatibility, it may be that we
> need an onaftervalidation type thing which fires after validation is
> complete so WF2 aware UA's can do the same disabling/screen tidy up
> that we want to do.

I've now switched the order and added oninvalid="" events for hooking into 
the validation mechanism.

> Note I don't think this will still be compatible with all legacy
> clients, there's lots of scripts of the type:
> <a onclick="disableUI();document.forms[0].submit();">

This should now work also, since .submit() doesn't validate the form.
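
The resulting order can be sketched as a small simulation; the form object and function names below are invented stand-ins for the real form submission machinery:

```javascript
// User-initiated submission: validate first; on failure fire the
// invalid hook and never fire onsubmit; on success fire onsubmit.
function userSubmit(form) {
  if (!form.valid) {
    if (form.oninvalid) form.oninvalid();
    return false; // submission blocked, onsubmit never ran
  }
  if (form.onsubmit) form.onsubmit();
  return true;
}

// Scripted form.submit(): bypasses validation (and onsubmit), so the
// legacy disableUI(); document.forms[0].submit() pattern keeps working.
function scriptSubmit(form) {
  return true;
}
```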

> where non form controls are used for the submission, so I don't think 
> either of the proposals are going to be perfect, which is why I think 
> it's important to ensure that user agent validation can only occur with 
> the explicit awareness of the author - not as a byproduct of including 
> another attribute.

Not sure what you mean here.

On Thu, 19 Jan 2006, Jim Ley wrote:
> On 1/19/06, Anne van Kesteren <fora at annevankesteren.nl> wrote:
> > Quoting Alexey Feldgendler <alexey at feldgendler.ru>:
> > > I wonder why alt is a required attribute for IMG in HTML while an 
> > > empty value is allowed.
> >
> > Because an empty value means that there is no alternate text and no 
> > attribute at all means that alternate text is missing. (Which is 
> > clearly not what you want.)
> I think Alexey's point is that in a correctly authored page no alt 
> attribute could perfectly reasonably mean the attribute is empty, this 
> is a good argument, but one that falls down in reality because so few 
> pages are correctly authored so those groups needing good ALT are left 
> at a disservice unless authors co-operate by specifically giving ALT an 
> empty value.

There's definitely a distinction between decorative images and images that 
don't have alternative text (for whatever reason).

On Sat, 21 Jan 2006, Jim Ley wrote:
> On 1/21/06, Matthew Raymond <mattraymond at earthlink.net> wrote:
> > Alexey Feldgendler wrote:
> >   If an <img> element is being used in a "certainly presentational" 
> > way, should it not be done away with in favor of CSS?
> CSS only allows for background images not for other presentational 
> images, to cover for this "feature" of CSS, presentational images that 
> are not background images are used in HTML.

I don't think that's exactly how things came about, but I agree that it's 
the case now.

On Mon, 23 Jan 2006, Jim Ley wrote:
> On 1/23/06, Alexey Feldgendler <alexey at feldgendler.ru> wrote:
> > On Mon, 23 Jan 2006 17:15:39 +0600, Lachlan Hunt
> > <lachlan.hunt at lachy.id.au> wrote:
> > > http://www.w3.org/TR/2001/WD-DOM-Level-2-HTML-20011210/html.html#ID-75233634
> >
> > I'm surprised. document.write is defined but it's substantially 
> > different from what the browsers implement. DOM 2 says that write() 
> > can be called only between calls to open() and close(), and that a 
> > call to open() clears the existing content of the document.
> That's because a global object called document that points to the 
> current document doesn't exist in any standard.

It does now!

> > This is very different from the current practice of calling write() 
> > without open() to inject unparsed HTML into an already-parsed 
> > document.
> Er, no, no UA supports this. UAs support it in HTML documents _that are 
> being parsed_, but not in ones that are already parsed, where 
> document.write performs an implied document.open() so the content is 
> cleared.
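A small page sketching the two behaviours Jim describes (illustrative only):

```html
<!-- While the document is still being parsed, write() inserts
     its argument at the current parse position: -->
<p>Before</p>
<script>document.write('<p>Injected during parsing</p>');</script>
<p>After</p>

<!-- Once parsing has finished, write() performs an implied
     document.open(), which blows away the existing document: -->
<button onclick="document.write('<p>Everything else is gone</p>')">
  Replace the whole document
</button>
```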


On Mon, 23 Jan 2006, Jim Ley wrote:
> On 1/23/06, Anne van Kesteren <fora at annevankesteren.nl> wrote:
> > <http://webforms2.testsuite.org/>
> >
> > Most of the tests are inside the "controls/" section. There will 
> > always need to be more tests, obviously. Currently it hosts over two 
> > hundred test files, if I'm not mistaken, but not everything is covered 
> > yet. In "elements/" some elements are missing and some controls need 
> > to be covered as well.
> >
> > Feedback, new tests and other things are appreciated.
> It appears many of the tests require CSS 2; does this mean that Web 
> Forms 2.0 requires CSS 2? I'd missed that. I am unable to test my 
> implementation because it is not a CSS user agent.

It does not mean that, no. It does mean that those tests do, though.

On Mon, 23 Jan 2006, Jim Ley wrote:
> On 1/23/06, Anne van Kesteren <fora at annevankesteren.nl> wrote:
> > Quoting Jim Ley <jim.ley at gmail.com>:
> > > It appears many of the tests require CSS 2; does this mean that Web 
> > > Forms 2.0 requires CSS 2? I'd missed that. I am unable to test my 
> > > implementation because it is not a CSS user agent.
> >
> > There are quite a few tests that test the relationship of Web Forms 2 
> > with the CSS3 Basic User Interface Module, as mentioned in section 8.2 
> > of the Web Forms 2 specification. These tests also rely on some CSS 
> > 2.1 features, I assume, yes.
> Well testing those is of course fine, as it's testing CSS 3 !


> > Some other "visual UA orientated" tests might do the same. Given your 
> > comment I will try to make all forthcoming tests that don't really 
> > require CSS avoid using it, and use some other mechanism to show that 
> > the test has passed; for example, some scripting, as already done in 
> > quite a few tests.
> A combination of things would be good if possible.


On Sat, 11 Feb 2006, Jim Ley wrote:
> On 2/10/06, Anne van Kesteren <fora at annevankesteren.nl> wrote:
> > Browsers disagree on what should be selected in such cases. Simple 
> > testcase:
> >
> >  <http://webforms2.testsuite.org/controls/select/009.htm>
> >
> > Opera 9 passes that test and I heard Safari nightlies do too. Internet 
> > Explorer and Firefox fail the testcase. Personally I would be in favor 
> > of changing the specification to be compatible with Opera 9 and Safari 
> > given that what they do is sensible.
> Why can't this be left undefined?

The goal is to achieve interoperability with all content.

> what does it matter to have interoperable rendering on invalid DOM 
> changes?

These are common things to have happen.

> Surely forcing code changes on anyone is just a waste of implementation 
> time here; isn't not updating the page when the DOM is changed to an 
> invalid number a good optimisation?

The goal is to make sure authors get a consistent development experience 
whether they only walk the path of valid content or not.

> IE, for example, simply rejects the update (the size remains at 2); 
> that seems like a sensible approach, as does normalizing it to 1.
> I simply don't see the value in standardising the error behaviour here.

The value is the same as everywhere. Interoperability.

On Sat, 11 Feb 2006, Jim Ley wrote:
> Oh, but if you do, I don't believe the Opera method of having the 
> appearance of a size of 1, but a DOM value of 0 or -1, is correct.  If 
> corrections are made, the DOM should reflect the actual value used - 
> after all, that is the only thing useful to the user.
> Mozilla seems to correct -1 to 0 but nothing else.

On Sat, 11 Feb 2006, Jim Ley wrote:
> On 2/11/06, Anne van Kesteren <fora at annevankesteren.nl> wrote:
> > Quoting Jim Ley <jim.ley at gmail.com>:
> > > Do you mean the bug in Opera 9 that means changing the size of the 
> > > select selects an entry?  Surely that's just a bug in the Opera 9 
> > > preview (but not 8.5); changing the size should have no effect on 
> > > what's selected.
> >
> > I agree (with the part after the comma of the last sentence). Given 
> > that per 
> > <http://whatwg.org/specs/web-forms/current-work/#select-check-default> 
> > the option should be selected, it should remain selected when changing 
> > some factor that has nothing to do with that.
> but in your test case, they are not selected, and then become selected 
> in Opera 9.0 when you give them an invalid attribute - was Opera 9.0 not 
> the behaviour you were looking to standardise on?  (I assumed it was, as 
> the Opera 8.5 behaviour was absurd, with -1 making it about 40 high.)
> For me, ignoring the invalid DOM change makes more sense than the 
> inconsistent proposals you're advocating.  The cited part of the spec 
> linked to above certainly does not say what should happen when a DOM 
> change is made to a single select in any case; it is still undefined 
> even in the size=1 case.

I don't really know which error condition we're talking about here, but 
hopefully the spec is to your liking on this issue now.
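For what it's worth, the approach the spec eventually took was to define a single parsing rule with a single fallback, instead of per-browser error behaviour. A sketch of my reading of that rule follows (not normative text; the defaults of 4 and 1 are the ones the HTML spec gives for select):

```javascript
// Sketch of HTML's "rules for parsing non-negative integers":
// optional whitespace, an optional '+', then digits; trailing
// characters are ignored, and anything else is a parse error.
function parseNonNegativeInteger(input) {
  const m = /^[ \t\n\f\r]*\+?(\d+)/.exec(input);
  return m ? parseInt(m[1], 10) : null; // null = parse error
}

// Display size of a <select>: a parsed value greater than zero
// wins; anything else falls back to 4 when `multiple` is set,
// and 1 otherwise, rather than being rejected or clamped.
function displaySize(sizeAttr, multiple) {
  const n = sizeAttr === null ? null : parseNonNegativeInteger(sizeAttr);
  return (n !== null && n > 0) ? n : (multiple ? 4 : 1);
}

console.log(displaySize('2', false));  // 2
console.log(displaySize('-1', false)); // parse error, falls back to 1
console.log(displaySize('0', true));   // 0 is not > 0, falls back to 4
```

So the answer to "which browser wins" ended up being "none of them exactly": an invalid value is neither rejected nor normalised, it simply never takes effect as a display size.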

[snip threads on JSONRequest that aren't about any specs I edit]

On Wed, 10 May 2006, Jim Ley wrote:
> On 10/05/06, Dean Edwards <dean at edwards.name> wrote:
> > On 10/05/06, Jorgen Horstink <mail at jorgenhorstink.nl> wrote:
> > > I just had a little chat with Anne and he thinks a rendering change 
> > > event (e.g. before printing, generate a table of contents) will be 
> > > useful. I think he is right.
> > 
> > I suggested onbeforeprint/onafterprint events a while back. It got 
> > shot down. :-(
> How disappointing; let's hope the webapi WG look at it... there are 
> certainly existing implementations to just copy.  They're useful events.

I've since specced those events.
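They ended up as window events. A sketch of the usage this thread had in mind, the print-only table of contents (the class name is illustrative):

```html
<script>
  // Fired just before the page is laid out for printing;
  // e.g. generate a print-only table of contents here.
  window.addEventListener('beforeprint', function () {
    document.body.classList.add('printing');
  });

  // Fired once printing (or print preview) is finished.
  window.addEventListener('afterprint', function () {
    document.body.classList.remove('printing');
  });
</script>
```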

On Thu, 11 May 2006, Jim Ley wrote:
> On 11/05/06, Anne van Kesteren <fora at annevankesteren.nl> wrote:
> > My suggestion would be to have a renderingMode event (or something 
> > like that) which in some way exposes a mediaList of the current 
> > rendering modes (mostly just one). If you go to print preview mode, for 
> > example, the event is dispatched and the mediaList contains 'print'. If 
> > you go to projection mode it contains 'projection', etc.
> The issue with this is that we've struggled to find any situations where 
> there genuinely are multiple modes being rendered simultaneously; the 
> print preview in IE doesn't do this (the afterprint returns immediately 
> and there are no screen updates in the intermediate time).  So I think 
> we should avoid anything that treats different views as being available 
> simultaneously; it's a red herring without implementations, so we can 
> ignore it :-)

Opera actually has multiple modes (voice and handheld at once, for 
instance). But in general I agree.

On Wed, 20 Sep 2006, Jim Ley wrote:
> On 20/09/06, Hallvord R M Steen <hallvors at gmail.com> wrote:
> > <a href="" onfocus="this.blur()">
> > 
> > This coding is very common because IE adds a small outline border to 
> > focused links. Authors who do not like this will blur links when they 
> > are activated to avoid this cosmetic issue. (Mea culpa: I've done 
> > exactly this myself in sites I coded as a newbie, for that very 
> > reason.)
> The reason being you'd not heard of the hidefocus attribute :-)  or 
> onfocus="this.hideFocus=true" if you want to be free.

Seems like CSS would be a better solution.
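i.e. suppress the rendering of the focus ring rather than destroying the focus itself; something like the following (with the obvious caveat that hiding all focus indication hurts keyboard users, so replacing the outline is kinder than removing it):

```html
<style>
  /* Hides the default focus ring without calling blur(), so the
     link stays focused and keyboard navigation keeps working. */
  a:focus { outline: none; }
</style>
```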

> > In Opera, when keyboard navigation hits this link, focus is removed. 
> > Thus the link can not be activated from the keyboard and navigation 
> > may have to start from the top of the document again.
> Right so ignore it.

Ignore what?

> > We need some prose in a spec that allows a user agent to ignore blur() 
> > for accessibility reasons.
> Why do you?  There's no prose in any spec which says you have to support 
> any script etc., and if there is, I would encourage you to break it 
> anyway; obviously, anything that harms accessibility for your users is 
> something that it is your duty as a web-browser company not to do.

Why would scripts intrinsically harm accessibility? Surely disabled users 
have as much right to use applications as everyone else?

> I can appreciate you'd rather point to some other place and go "look, 
> look, they said it was okay", but in that case you already have it: UAAG 
> is fine for that.  I don't think it's good to spell things out and make 
> specifications even longer just to give you somewhere to point pretty 
> deluded authors.

Not really sure what you mean here.

> > 'scripts must not alter focus-related issues in a way that hinders 
> > keyboard operation, and user agents may override any such use of 
> > focus-related scripting operations.'
> I don't like this; it doesn't define "hinder" well enough for a MUST. 
> Can't you just take it as read that you're allowed to?
> I can't foresee any realistic collateral damage from actually blocking 
> the behaviour - but if that genuinely is the case, then removing blur 
> entirely would be a more appropriate solution.

The spec does take this issue into account now, if I am not mistaken.

Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
Received on Tuesday, 14 June 2011 00:03:55 UTC

This archive was generated by hypermail 2.4.0 : Wednesday, 22 January 2020 16:59:33 UTC