
Re: [whatwg] asynchronous JSON.parse

From: David Rajchenbach-Teller <dteller@mozilla.com>
Date: Fri, 08 Mar 2013 11:51:28 +0100
Message-ID: <5139C2B0.6010607@mozilla.com>
To: Tobie Langel <tobie.langel@gmail.com>
Cc: whatwg@whatwg.org
Let me answer your question about the scenario before getting into the
specifics of an API.

For the moment, the main use case I see for asynchronous serialization
of JSON is snapshotting the world without stopping it, for backup
purposes, e.g.:
a. saving the state of the current region in an open world RPG;
b. saving the state of an ongoing physics simulation;
c. saving the state of the browser itself in case of crash/power loss
(that's assuming a FirefoxOS-style browser implemented as a web
application);
d. backing up state and history of the browser itself to a server
(again, assuming that the browser is a web application).

Cases a., b. and d. are hypothetical but, I believe, realistic. Case c.
is very close to a scenario I am currently facing.

The natural course of action would be to do the following:
1. collect data to a JSON object (possibly a noop);
2. send the object to a worker;
3. apply some post-treatment to the object (possibly a noop);
4. write/upload the object.
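As a rough sketch of those four steps (the names `collectState`, `postTreat`, and the worker script are illustrative assumptions, not a real API):

```javascript
// Hypothetical snapshot pipeline. collectState() and postTreat() stand
// in for app-specific code; "snapshot-worker.js" is a made-up script.
const world = { regions: ["plains"], entities: [{ id: 1 }], renderer: null };

// Step 1: collect data into a plain JSON-compatible object (possibly a noop).
function collectState(w) {
  return { regions: w.regions, entities: w.entities };
}

// Step 3: post-treatment applied to the object (possibly a noop).
function postTreat(snapshot) {
  return Object.assign({ savedAt: Date.now() }, snapshot);
}

// Step 2 (and 4, on the worker side): hand the object to a worker that
// writes/uploads it. postMessage() structured-clones the whole object on
// the main thread, which is exactly where the jank appears for heavy data.
if (typeof Worker !== "undefined") {
  const worker = new Worker("snapshot-worker.js");
  worker.postMessage(collectState(world));
}
```

The point of the sketch is that step 2 is the only step that *must* touch the main thread today, and it is proportional to the size of the data.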

Having asynchronous JSON serialization to some Transferable form would
considerably simplify the task of implementing step 2 without janking if
the data ends up very heavy.
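No such API exists yet; the closest synchronous approximation today is to stringify and encode to an ArrayBuffer, which *is* Transferable and can be moved to a worker without copying. The stringify itself still janks, which is precisely the cost a hypothetical asynchronous `JSON.stringify` would move off the main thread:

```javascript
// Synchronous approximation of "JSON to a Transferable form".
function toTransferable(obj) {
  return new TextEncoder().encode(JSON.stringify(obj)).buffer;
}

function fromTransferable(buffer) {
  return JSON.parse(new TextDecoder().decode(buffer));
}

// Usage (worker is hypothetical):
//   const buf = toTransferable(snapshot);
//   worker.postMessage(buf, [buf]);  // transferred, not structured-cloned
```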

Note that, in all the scenarios I have mentioned, it is generally
difficult for the author of the application to know ahead of time which
part of the JSON object will be heavy and should be transmitted through
an ad hoc protocol. In scenario c., for instance, it is quite frequent
that just one or two pages contain 90%+ of the data that needs to be
saved, in the form of form fields, or iframes, or Session Storage.

So far, I have discussed serializing JSON, not deserializing it, but I
believe that the symmetric scenarios also hold.
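For instance, on the deserialization side the workaround today is to frame the payload so it can be parsed in slices, yielding to the event loop between slices. This sketch assumes record-per-line (NDJSON-style) framing, which a native asynchronous JSON.parse would make unnecessary:

```javascript
// Parse one record per task so no single turn blocks the main thread
// for long. The framing and the scheduler parameter are assumptions.
function parseIncrementally(text, onRecord, onDone,
                            schedule = fn => setTimeout(fn, 0)) {
  const lines = text.split("\n").filter(l => l.length > 0);
  let i = 0;
  (function step() {
    if (i >= lines.length) return onDone();
    onRecord(JSON.parse(lines[i++]));  // one record per slice
    schedule(step);                    // yield back to the event loop
  })();
}
```

The obvious drawback, as noted above, is that the author has to know ahead of time how to split the data into records of comparable weight.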

Best regards,
 David

On 3/7/13 11:34 PM, Tobie Langel wrote:
> I'd like to hear about the use cases a bit more. 
> 
> Generally, structured data gets bulky because it contains more items, not because items get bigger.
> 
> In which case, isn't part of the solution to paginate your data, and parse those pages separately?
> 
> Even if an async API for JSON existed, wouldn't the perf bottleneck then simply fall on whatever processing needs to be done afterwards?
> 
> Wouldn't some form of event-based API be more indicated? E.g.:
> 
> var parser = JSON.parser();
> parser.parse(src);
> parser.onparse = function(e) {
>   doSomething(e.data);
> };
> 
> And wouldn't this be highly dependent on how the data is structured, and thus very much app-specific?
> 
> --tobie 
> 


-- 
David Rajchenbach-Teller, PhD
 Performance Team, Mozilla
Received on Friday, 8 March 2013 10:51:51 GMT
