- From: Jonas Sicking <jonas@sicking.cc>
- Date: Thu, 30 Dec 2010 12:44:42 -0800
- To: Axel Rauschmayer <axel@rauschma.de>
- Cc: public-webapps@w3.org
Even if we decide to make the environment in which we run webpage script multithreaded, the current API will work fine. Generally speaking, in multithreaded environments you deliver callbacks on the same thread from which the initial function was called.

Alternatively, you'd pass in the thread on which you want callbacks, along with the callbacks you want called. But in that case using EventTargets doesn't make sense, since you don't know whether a callback has already happened by the time you call addEventListener. Likewise, the readyState property would have to be removed, since by the time you check it it can already be out of date. In short, a complete revamping of the API would be needed; the small modification you are proposing would be nowhere near enough.

Most of all, though, I'm not terribly worried that we'll make the browser scripting environment multithreaded. Multithreading is extremely complicated. To this day, research is still happening on how to implement even the simplest data structures, such as queues and hash tables, efficiently in a multithreaded environment. See the discussions on the WHATWG list which took place when we designed the workers API.

I find it much more likely that we'll stick with the approach that workers introduced: separate environments which run on different threads and share no state. Communication between threads happens through message passing. This is similar to languages such as Google's Go and Mozilla's Rust.

/ Jonas

On Thu, Dec 30, 2010 at 12:45 AM, Axel Rauschmayer <axel@rauschma.de> wrote:
> Right. But is there anything one loses by not relying on it, by making the API more generic?
>
> On Dec 30, 2010, at 7:58, Jonas Sicking wrote:
>
>> On Wed, Dec 29, 2010 at 2:44 PM, Axel Rauschmayer <axel@rauschma.de> wrote:
>>> Can someone explain a bit more about the motivation behind the current design of the async API?
>>>
>>>> var request = window.indexedDB.open(...);
>>>> request.onsuccess = function(event) { ... };
>>>
>>> The pattern of assigning the success continuation after invoking the operation seems too closely tied to JavaScript’s current run-to-completion event handling. But what about future JavaScript environments, e.g. a multi-threaded Node.js with IndexedDB built in, or Rhino with IndexedDB running in parallel? Wouldn’t a reliance on run-to-completion unnecessarily limit future developments?
>>>
>>> Maybe it is just me, but I would like it better if the last argument were an object with the error and success continuations (they could also be individual arguments). That is also how current JavaScript RPC APIs are designed, resulting in a familiar look. Are there any arguments *against* this approach?
>>>
>>> Whatever the reasoning behind the design, I think it should be explained in the spec, because the current API is a bit tricky for newcomers to understand.
>>
>> Note that almost everyone relies on this anyway. I bet that almost all code out there depends on the fact that code in, for example, onload handlers for XHR requests runs only after the current thread of execution has fully finished.
>>
>> Asynchronous events aren't something specific to JavaScript.
>>
>> / Jonas
>
> --
> Dr. Axel Rauschmayer
> Axel.Rauschmayer@ifi.lmu.de
> http://hypergraphs.de/
> ### Hyena: organize your ideas, free at hypergraphs.de/hyena/
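To make the contrast in the thread concrete, here is a minimal sketch of the two styles being discussed. The first block is the real IndexedDB request pattern, which is safe precisely because of run-to-completion; the second, openWithCallbacks, is a hypothetical wrapper in the continuation-passing shape Axel suggests, not an actual API.

  // Current IndexedDB pattern: the handlers are attached *after* the call.
  // Run-to-completion guarantees no event can fire until this script finishes,
  // so the handlers are always registered in time.
  var request = window.indexedDB.open("library");
  request.onsuccess = function (event) {
    var db = event.target.result;
    // use db ...
  };
  request.onerror = function (event) {
    // handle failure ...
  };

  // Hypothetical shape along the lines Axel describes (NOT an actual API):
  // the continuations are passed up front, so registration cannot race with
  // the result even without a run-to-completion guarantee.
  function openWithCallbacks(name, callbacks) {
    var req = window.indexedDB.open(name);
    req.onsuccess = function (event) { callbacks.success(event.target.result); };
    req.onerror = function (event) { callbacks.error(event); };
    return req;
  }

  openWithCallbacks("library", {
    success: function (db) { /* use db ... */ },
    error: function (event) { /* handle failure ... */ }
  });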
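For the message-passing model Jonas points to, the existing Web Workers API already works this way: separate environments, no shared state, communication only through messages. A minimal sketch, assuming a worker script named worker.js and an illustrative message format:

  // Main page.
  var worker = new Worker("worker.js");
  worker.onmessage = function (event) {
    // Runs on the page's own event loop; no state is shared with the worker.
    console.log("result from worker:", event.data);
  };
  worker.postMessage({ op: "sum", values: [1, 2, 3] });

  // worker.js: runs on its own thread, in its own global environment.
  self.onmessage = function (event) {
    var msg = event.data;
    if (msg.op === "sum") {
      var total = 0;
      for (var i = 0; i < msg.values.length; i++) {
        total += msg.values[i];
      }
      self.postMessage(total);
    }
  };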
Received on Thursday, 30 December 2010 20:52:42 UTC