Re: Stream-based processing!?

On Oct 2, 2011, at 22:33 , Markus Lanthaler wrote:
> 
> 
>>> We could also require serializations ensure that @context is listed
>>> first. If it isn't listed first, the processor has to save each
>>> key-value pair until the @context is processed. This creates a memory
>>> and complexity burden for one-pass processors.
> 
> Agree. I think that would make a lot of sense since you can see the context
> as a kind of header anyway.

I must admit I do not really understand that, but that probably shows my ignorance of the wider JSON world.

However, the standard JSON parser in Python parses a JSON object into a dictionary, and, at least in Python, you cannot rely on the order of the keys within that dictionary (it is determined by a hashing algorithm, if I am not mistaken, but that is internal to the interpreter anyway). I.e., whether @context appears first or last in the source text makes no difference once the object has been parsed.
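To show what I mean, here is a small, purely illustrative sketch (the context URL is just a placeholder, and the key order one observes is an interpreter detail, not something the standard library promises):

    import json

    doc = '{"name": "Ivan", "@context": "http://example.org/context.jsonld"}'
    data = json.loads(doc)   # standard library parser -> a plain dict
    # The order in which the keys come back is whatever the dict's internal
    # hashing gives us; whether @context came first or last in the source
    # text is simply not recoverable from 'data'.
    print(list(data.keys()))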

Worse: if you then use such a structure to generate JSON again, via the 'dump' feature of the standard Python library, there is no way to control the order of those keys. In other words, if we impose such an order in JSON-LD, a Python programmer would have to bypass the standard JSON library module and do the dump by hand. I do not think that would be acceptable...
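Continuing the same sketch on the way out (again, only to illustrate; same placeholder document as above):

    import json

    data = json.loads('{"name": "Ivan", "@context": "http://example.org/context.jsonld"}')
    # dumps() serializes the dict in whatever order the interpreter iterates it;
    # there is no parameter to pin a particular key (such as @context) to the
    # front, so guaranteeing that would mean building the output string by hand.
    print(json.dumps(data))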

Ivan



----
Ivan Herman, W3C Semantic Web Activity Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
PGP Key: http://www.ivan-herman.net/pgpkey.html
FOAF: http://www.ivan-herman.net/foaf.rdf
