RE: Stream-based processing!?

> > I wanted to ensure that conversion to RDF was able to be performed as a
> > one-pass process [...]
> > one-pass. I called this stream-based processing, but perhaps we should
> > rename it to one-pass processing. What word captures the requirement
> > that conversion to RDF requires only one pass and a very small memory
> > footprint?

I think "one-pass conversion to RDF" would be a much better name, yes.


> > We could also require that serializations ensure @context is listed
> > first. If it isn't listed first, the processor has to save each
> > key-value pair until the @context is processed. This creates a memory
> > and complexity burden for one-pass processors.

Agreed. I think that would make a lot of sense, since the context can be seen
as a kind of header anyway.
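
To make the memory cost concrete, here is a minimal, hypothetical sketch (in
Python, not taken from any actual processor) of a one-pass consumer of an
object's key/value pairs: when @context comes first, every later key can be
expanded and emitted immediately; when it comes later, every preceding pair
has to be buffered until the context finally shows up.

# Hypothetical sketch of a one-pass consumer of a single JSON-LD object's
# key/value pairs; all names are illustrative.
def process_object(pairs):
    context = None
    buffered = []  # pairs seen before @context; pure memory overhead

    for key, value in pairs:
        if key == "@context":
            context = value
            for k, v in buffered:              # flush what we were forced to hold
                emit(expand(k, v, context))
            buffered.clear()
        elif context is None:
            buffered.append((key, value))      # cannot expand this yet
        else:
            emit(expand(key, value, context))  # true one-pass behaviour

def expand(key, value, context):
    # Placeholder term expansion: map a key through the context dictionary.
    return (context.get(key, key), value)

def emit(pair):
    print(pair)

If @context is guaranteed to come first, the buffered list never grows, which
is exactly the property the original mail is after.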


> I can go along with requiring @context to be listed first in a
> serialization of JSON. But if we're going to say that, we should also
> say that @subject (were we going to change it to just @iri?) MUST also
> precede other key/value pairs. @subject is also required to generate
> triples and should therefore precede any other uses of it. We could
> then infer that if a key is found which is not @context or @subject,
> it represents an unlabeled node.

I think that goes too far. I see the context as a header, as said above, but I
wouldn't want to have to worry about the order of the other elements.
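
For the trade-off being discussed here, a hypothetical continuation of the
sketch above: if @subject may appear after other keys, a one-pass processor
holds the node's predicate/object pairs until the subject IRI is known, and
falls back to an unlabeled (blank) node if no @subject ever appears, as the
quoted mail suggests.

# Hypothetical continuation of the earlier sketch: handling a late @subject.
def process_node(pairs, context):
    subject = None
    pending = []  # (predicate, object) pairs waiting for a subject

    for key, value in pairs:
        if key == "@subject":
            subject = value
            for p, o in pending:
                emit_triple(subject, p, o)
            pending.clear()
        else:
            predicate = context.get(key, key)
            if subject is None:
                pending.append((predicate, value))
            else:
                emit_triple(subject, predicate, value)

    if subject is None:                # no @subject at all: unlabeled node
        subject = new_blank_node()
        for p, o in pending:
            emit_triple(subject, p, o)

def emit_triple(s, p, o):
    print(s, p, o)

_blank = 0
def new_blank_node():
    global _blank
    _blank += 1
    return "_:b%d" % _blank

The pending list is bounded by the keys of a single node, which is the kind of
overhead I would rather accept than constrain the order of every element.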


> > Does the above answer your question?

Yes.

+1 to changing the name from stream-based processing to one-pass conversion to RDF.

Received on Sunday, 2 October 2011 20:33:47 UTC