- From: Charles McCathie Nevile <chaals@yandex-team.ru>
- Date: Tue, 14 May 2013 12:24:21 +0100
- To: "Anne van Kesteren" <annevk@annevk.nl>
- Cc: "public-webapps WG" <public-webapps@w3.org>
On Mon, 13 May 2013 17:25:23 +0100, Anne van Kesteren <annevk@annevk.nl> wrote:

> On Sun, May 12, 2013 at 8:34 PM, Charles McCathie Nevile <chaals@yandex-team.ru> wrote:
>> So far I have done nothing at all about an API, and am waiting for some formal confirmation from people who implement stuff that they would like to standardise an API for dealing with URLs. It seems to be a common task, judging from the number of people who seem to have some scrap of code lying around for it, so I expect to hear people say "Yes, great idea" - although I have been surprised before.
>
> Given that you're questioning this,

I am not questioning whether it is a good idea. I am checking that it will actually get implemented, since a spec of "things we think people should do, but they don't and probably won't" is just an idea written down. Personally I don't think the Web is only "what browsers will implement", since there are things (like microdata) that don't really need the browser at all in order to be important to the Web. But in this case, as with most APIs, I think browser implementations are particularly important.

> maybe you want to study HTML's dependencies.

Sure, but making life easier for spec authors is not directed at the highest-priority group in the "hierarchy of audiences" (much as I want to do it).

> It seems that's a problem overall. This draft doesn't go into detail about any of the problems for which HTML started defining URLs by itself in the first place.

Sure. As I noted, it is extremely rough, and it is definitely missing more than it includes at this stage.

> What's wrong with the http://url.spec.whatwg.org/ URL standard?

1. It is apparently not intended to become a stable reference that can be used in situations where fixing every edge case is less important than fixing the content we agree we are looking at.

2. It provides extremely detailed algorithms that certain classes of tools require to work with URLs, at the expense of an easily-read explanation of what is and isn't a URL.

3. It does not provide any kind of license commitment from anyone likely to have patents on the technology described.

The first two of these are only problems from a specific set of perspectives, but those perspectives happen to match real existing needs. I believe the third is a non-issue in practical terms, given that most of what is specified has been around long enough to ensure the existence of prior art, but it doesn't hurt to be more certain about this.

> Not invented here?

Ironically (given the history of URLs as used on the Web today), that is indeed a defensible explanation of one issue. For reasons including the stability of document content, WHATWG "living standards" are not suitable as normative references for W3C Recommendations. As you note, HTML essentially depends on having a sound reference. (For other, mostly technical, reasons I believe that RFC 3986, and perhaps to a lesser extent RFC 3987, are not especially suitable either; the question of what is wrong with them is equally valid.)

It is quite possible that the whole problem will go away, and one or more of these specifications will simply disappear in deference to the rest. However, that is not currently the world we live in, and meeting W3C's interim need seemed a useful investment of some time.

cheers

Chaals

--
Charles McCathie Nevile - Consultant (web standards) CTO Office, Yandex
chaals@yandex-team.ru - Find more at http://yandex.com
Received on Tuesday, 14 May 2013 08:25:00 UTC