- From: Ian Hickson <ian@hixie.ch>
- Date: Tue, 25 Aug 2009 23:24:30 +0000 (UTC)
Drew Wilson wrote:
>
> Currently, SharedWorkers accept both a "url" parameter and a "name"
> parameter - the purpose is to let pages run multiple SharedWorkers
> using the same script resource without having to load separate
> resources from the server.
>
> Per section 4.8.3 of the SharedWorkers spec, if a page loads a shared
> worker with a url and name, it is illegal for any other page under the
> same origin to load a worker with the same name but a different URL --
> the SharedWorker name becomes essentially a shared global namespace
> across all pages in a single origin. This causes problems when you
> have multiple pages under the same domain (a la geocities.com) - the
> pages all need to coordinate in their use of "name". Additionally, a
> typo in one page (i.e. invoking SharedWorker("mypagescript?", "name")
> instead of SharedWorker("mypagescript", "name")) will keep all
> subsequent pages in that domain from loading a worker under that name
> so long as the original page resides in the page cache. I'd like to
> propose changing the spec so that the name is not associated with the
> origin, but instead with the URL itself.
>
> So if a page wanted to have multiple instances of a SharedWorker using
> the same URL, it could do this:
>
>    new SharedWorker("url.js", "name");
>    new SharedWorker("url.js", "name2");
>
> Nothing would prevent a page from also doing this, however:
>
>    new SharedWorker("other_url.js", "name");

The idea here is that if you have an app that does database
manipulation, you might want to ensure there is only ever one shared
worker doing the manipulation, so you might decide on a shared worker
name that is in charge of that, and then you can be sure that you don't
accidentally start two workers with that name using different copies of
a script (e.g. because you have two installations of WordPress and they
both use relative URLs to the same script in their respective
locations).

On Sat, 15 Aug 2009, Jim Jewett wrote:
>
> > Currently, SharedWorkers accept both a "url" parameter and a "name"
> > parameter - the purpose is to let pages run multiple SharedWorkers
> > using the same script resource without having to load separate
> > resources from the server.
> >
> > [ request that name be scoped to the URL, rather than the entire
> > origin, because not all parts of example.com can easily co-ordinate.]
>
> Would there be a problem with using URL fragments to distinguish the
> workers?
>
> Instead of:
>    new SharedWorker("url.js", "name");
>
> use:
>    new SharedWorker("url.js#name");
>
> and if you want a duplicate, call it:
>    new SharedWorker("url.js#name2");
>
> The normal semantics of fragments should prevent the repeated server
> fetch.

That seems like abuse of the fragment identifier syntax.

On Mon, 17 Aug 2009, Michael Nordman wrote:
>
> What purpose does the 'name' serve?

It's intended to prevent two scripts from being opened for the same
purpose by mistake.

> Can the 'name' be used independently of the 'url' in any way?

No.

> Is 'name' visible to the web developer any place besides those two?

No.

On Tue, 18 Aug 2009, Drew Wilson wrote:
>
> An alternative would be to make the "name" parameter optional, where
> omitting the name would create an unnamed worker that is
> identified/shared only by its url.
>
> So pages would only specify the name in cases where they actually want
> to have multiple instances of a shared worker.

Done.
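For illustration, a minimal sketch of what the resolved design looks
like in practice; the file name "db-worker.js" and the message strings
are hypothetical:

    // Page script. With the name omitted, sharing is keyed on the
    // resolved script URL, so both of these attach to the same worker:
    var a = new SharedWorker("db-worker.js");
    var b = new SharedWorker("db-worker.js");

    // Distinct names give distinct instances of the same script,
    // without any origin-wide coordination over the names:
    var c = new SharedWorker("db-worker.js", "logs");
    var d = new SharedWorker("db-worker.js", "prefs");

    a.port.onmessage = function (e) { /* reply from the worker */ };
    a.port.postMessage("hello");

    // db-worker.js: each connecting document gets its own MessagePort.
    onconnect = function (e) {
      var port = e.ports[0];
      port.onmessage = function (msg) {
        port.postMessage("got: " + msg.data);
      };
    };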
On Sun, 16 Aug 2009, Mike Wilson wrote:
> Drew Wilson wrote:
> > Per section 4.8.3 of the SharedWorkers spec, if a page loads a shared
> > worker with a url and name, it is illegal for any other page under
> > the same origin to load a worker with the same name but a different
> > URL -- the SharedWorker name becomes essentially a shared global
> > namespace across all pages in a single origin. This causes problems
> > when you have multiple pages under the same domain (a la
> > geocities.com) - the pages all need to coordinate in their use of
> > "name".
>
> I agree with you that this is a problem, and the same problem exists
> in WebStorage (storage areas are set up per origin). For example, the
> sites http://www.google.com/calendar and http://www.google.com/reader,
> and every other site based off www.google.com, will compete for the
> same keys in one big shared storage area.
>
> It seems lately everything is being based on having unique host names,
> and path is not being considered anymore, which I think it should be.

The reason it's not is that it would mislead people into thinking that
you could do things safely based just on the path, which you can't. A
script could, e.g., trivially poke into another path's databases or
cookies.

On Mon, 17 Aug 2009, Laurence Ph. wrote:
>
> | If worker global scope's location attribute represents an absolute
> | URL that is not *exactly equal* to the resulting absolute URL, then
> | throw a URL_MISMATCH_ERR exception and abort all these steps.
>
> Seems the #name part will break this line and throw a URL_MISMATCH_ERR
> with the duplicated #name2 one.
>
> Shall we ignore minor differences between URLs, e.g. # fragments?

I haven't changed this, so creating two SharedWorkers without a "name"
argument but using different fragIDs instead will create two workers.
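To make that concrete, a small hypothetical sketch (the file name
"worker.js" is illustrative only): with no "name" argument the full
resolved URL, fragment included, identifies the worker, so differing
fragments yield separate instances, while repeating an identical URL
presumably attaches to the existing one.

    // Different fragment identifiers, no name: two distinct workers.
    var w1 = new SharedWorker("worker.js#alpha");
    var w2 = new SharedWorker("worker.js#beta");

    // Same URL and fragment again: attaches to w1's worker instance.
    var w3 = new SharedWorker("worker.js#alpha");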
-- 
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

Received on Tuesday, 25 August 2009 16:24:30 UTC