Re: The Structured Clone Wars

On Fri, Jul 15, 2011 at 1:30 PM, Allen Wirfs-Brock <allen@wirfs-brock.com> wrote:

>
> On Jul 15, 2011, at 10:00 AM, Jonas Sicking wrote:
> >
> > Except that you don't want to do that for host objects. Trying to
> > clone a File object by cloning its properties is going to give you an
> > object which is a whole lot less useful as it wouldn't contain any of
> > the file data. Once we define support for cloning ArrayBuffers the
> > same thing will apply to it.
> >
> > This might in fact be a big hurdle to implementing structured cloning
> > in javascript. How would a JS implementation of structured clone
> > determine if an object is a host object which would lose all its
> > useful semantics if cloned, vs. a "plain" JS object which can usefully
> > be cloned?
> >
> > / Jonas
> >
>
>
> And a cloned JS object is a lot less useful if it has lost its original
> [[Prototype]].



Didn't you just argue you could communicate this kind of information with a
schema? You couldn't share the actual [[Prototype]] anyway. So you'd have to
pass the expected behaviors along with the object (this is why a Function
serialization would be wonderful, but this could be done in a schema too).
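For instance, the sender could flatten an instance down to plain data plus a
type tag drawn from the schema both sides agree on. This is only a sketch; the
$type key, the Point type, and worker.js are names I'm making up for
illustration:

    // Sender side (main page): flatten an instance to plain data plus a type
    // tag from the agreed-upon schema.  "$type", Point, and worker.js are
    // illustrative names only, not part of any spec.
    var worker = new Worker("worker.js");

    function Point(x, y) { this.x = x; this.y = y; }
    Point.prototype.norm = function () {
      return Math.sqrt(this.x * this.x + this.y * this.y);
    };

    function tag(obj, typeName) {
      var data = { $type: typeName };
      for (var key in obj) {
        if (Object.prototype.hasOwnProperty.call(obj, key)) {
          data[key] = obj[key];
        }
      }
      return data;
    }

    worker.postMessage(tag(new Point(3, 4), "Point"));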

Sure, it won't be terribly efficient, since (without mutable __proto__ or a
<|-like mechanism in JSON) your worker would have to make another pass over
the keys to tack on the appropriate behaviors. There's no benefit to the
branding info (again, no shared memory), so I don't really see the problem.
Why would this JS object be substantially less useful? It just requires a
slightly different paradigm -- but this is to be expected. The only
alternatives I can imagine would require some kind of spec assistance (e.g. a
specified schema format or a JSON++), which I gather you were trying to avoid.
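Concretely, that extra pass on the worker side might look something like this
(again just a sketch, picking up the $type-tagged payload from above; the
behaviors registry is my own invention):

    // worker.js (continuing the sketch above): reattach behaviors without a
    // mutable __proto__, using an extra pass over the keys.  The "behaviors"
    // registry is illustrative, not anything specified.
    var behaviors = {
      Point: {
        norm: function () {
          return Math.sqrt(this.x * this.x + this.y * this.y);
        }
      }
    };

    onmessage = function (e) {
      var data = e.data;
      var proto = behaviors[data.$type] || Object.prototype;
      var obj = Object.create(proto);  // fresh object with the expected behaviors
      for (var key in data) {          // the extra pass over the keys
        if (key !== "$type") {
          obj[key] = data[key];
        }
      }
      postMessage(obj.norm());         // behaviors usable again on this side
    };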



> Generalizations about host objects are no more or less valid than
> generalizations about pure JS objects.
>
> This issue applies to pure JS object graphs or any serialization scheme.
> Sometimes language-specific physical clones won't capture the desired
> semantics. (Consider, for example, an object that references a resource by
> using a symbolic token to access a local resource registry.) That is why the
> ES5 JSON encoder/decoder includes extension points such as the toJSON
> method, to enable semantic encodings that are different from the physical
> object structure.
>
> The structured clone algorithm, as currently written, allows the passing of
> strings, so it is possible to use it to transmit anything that can be
> encoded within a string. All it takes is an application-specific
> encoder/decoder.  It seems to me the real complication is a desire for some
> structured clone use cases to avoid serialization and permit sharing via a
> copy-on-write of a real JS object graph.



There are alternatives to CoW (dherman alluded to safely transferring
ownership in his post, for instance).
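And to the point that structured clone already passes strings: an
application-specific encoder/decoder is a few lines of JSON.stringify /
JSON.parse plumbing. A rough sketch (the Temperature type and $celsius key
are made up for illustration):

    // App-specific encode/decode over the string channel postMessage already
    // provides.  Temperature and "$celsius" are made-up names.
    function Temperature(celsius) { this.celsius = celsius; }

    function encode(obj) {
      return JSON.stringify(obj, function (key, value) {
        return value instanceof Temperature ? { $celsius: value.celsius } : value;
      });
    }

    function decode(str) {
      return JSON.parse(str, function (key, value) {
        return (value && typeof value.$celsius === "number")
          ? new Temperature(value.$celsius)
          : value;
      });
    }

    // Usage, given a Worker instance `worker`:
    //   worker.postMessage(encode(new Temperature(21)));
    //   ...and in the worker's onmessage:  var t = decode(e.data);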


> If you define this sharing in terms of serialization then you probably
> eliminate some of the language-specific low-level sharing semantic issues.
> But you are still going to have higher-level semantic issues such as what
> does it mean to serialize a File. It isn't clear to me that there is a
> general solution to the latter.
>


Why does it matter what it means to serialize a File? For the use cases in
question (IndexedDB and Web Workers) there are various paths an app could
take; why would this have to be spec'ed? What does toJSON do? And does a
file handle really need to make it across this serialization boundary and
into your IDB store for later retrieval? I suspect not.
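To make the toJSON analogy concrete (purely a sketch; the metadata shape is
mine, not anything from a spec), an app can decide for itself what
"serializing a File" means and keep only what actually survives the boundary:

    // A wrapper that decides, for this app, what serializing a File means:
    // keep the metadata, drop the handle.  The shape below is illustrative only.
    function FileRef(file) {
      this.file = file;
    }
    FileRef.prototype.toJSON = function () {
      return {
        name: this.file.name,   // standard File properties
        size: this.file.size,
        type: this.file.type
      };
    };

    // e.g. JSON.stringify(new FileRef(someFile))
    //   -> '{"name":"notes.txt","size":1024,"type":"text/plain"}'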

Received on Friday, 15 July 2011 17:57:14 UTC