Re: Stream-based processing!?

On 09/28/2011 07:57 AM, Markus Lanthaler wrote:
> The current spec states that "[JSON-LD] is intended to be easy to parse,
> efficient to generate, stream-based and document-based processing
> compatible, and require a very small memory footprint in order to operate."
>
> What is meant with stream-based processing?

I wanted to ensure that conversion to RDF could be performed in a single 
pass by a SAX-like JSON processor, without access to the full data 
structure. This is important for embedded and low-memory environments. 
It also keeps processors lean and simple to implement via a recursive 
processing algorithm.
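To make the idea concrete, here is a minimal sketch of that kind of one-pass processing. The event names, the helper, and the `_:b0` subject are all hypothetical illustrations, not anything from the spec; it just shows that when @context arrives first, each key can be expanded and emitted the moment it is seen:

```python
# Hypothetical sketch: a SAX-like, one-pass JSON-to-RDF conversion.
# Events are (kind, data) tuples, as a streaming JSON parser might emit
# them for {"@context": {...}, "name": "Manu"}.

events = [
    ("map_key", "@context"),
    ("context_term", ("name", "http://schema.org/name")),
    ("map_key", "name"),
    ("string", "Manu"),
]

def one_pass(events, subject="_:b0"):
    """Emit triples as events arrive; only the context and the current
    key are held in memory -- never the whole document."""
    context = {}        # term -> IRI mappings seen so far
    current_key = None  # the key whose value we are waiting for
    triples = []
    for kind, data in events:
        if kind == "context_term":
            term, iri = data
            context[term] = iri
        elif kind == "map_key" and data != "@context":
            current_key = data
        elif kind == "string" and current_key is not None:
            # Expand the key via the context and emit immediately.
            predicate = context.get(current_key, current_key)
            triples.append((subject, predicate, data))
            current_key = None
    return triples

print(one_pass(events))
# -> [('_:b0', 'http://schema.org/name', 'Manu')]
```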

One pass is not possible for some of the other algorithms, such as 
normalization and framing, but conversion to RDF can still be done in 
one pass. I called this stream-based processing, but perhaps we should 
rename it to one-pass processing. What word captures the requirement 
that conversion to RDF requires only one pass and a very small memory 
footprint?

We could also require that serializations list @context first. If it 
isn't listed first, the processor has to buffer every key-value pair 
until the @context has been processed, which creates a memory and 
complexity burden for one-pass processors.
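A rough sketch of that burden (again hypothetical names, not spec text): if @context comes last, every earlier pair sits in a buffer until the context arrives and can be replayed, so memory grows with document size instead of staying constant:

```python
# Hypothetical sketch: when @context arrives last, a one-pass processor
# must buffer all key-value pairs and replay them once the context is
# known.

def one_pass_buffered(pairs, subject="_:b0"):
    """pairs: (key, value) tuples in document order."""
    context = None
    buffered = []   # pairs seen before @context -- the memory burden
    triples = []

    def emit(key, value):
        triples.append((subject, context.get(key, key), value))

    for key, value in pairs:
        if key == "@context":
            context = value
            for k, v in buffered:   # replay everything we had to hold
                emit(k, v)
            buffered.clear()
        elif context is None:
            buffered.append((key, value))
        else:
            emit(key, value)
    return triples

# @context last: both pairs are buffered until the very end.
pairs = [
    ("name", "Manu"),
    ("homepage", "http://manu.sporny.org/"),
    ("@context", {"name": "http://schema.org/name",
                  "homepage": "http://schema.org/url"}),
]
print(one_pass_buffered(pairs))
```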

> An object has no implied order
> in JSON and so the @context might be the last element to be parsed. This
> makes it impossible to do anything with all the other elements parsed
> before. So how exactly JSON-LD supports stream-based processing and how is
> it intended to work?

Does the above answer your question?

-- manu

-- 
Manu Sporny (skype: msporny, twitter: manusporny)
Founder/CEO - Digital Bazaar, Inc.
blog: Standardizing Payment Links - Why Online Tipping has Failed
http://manu.sporny.org/2011/payment-links/

Received on Saturday, 1 October 2011 19:41:14 UTC