- From: Dave Longley <dlongley@digitalbazaar.com>
- Date: Wed, 09 Nov 2011 13:12:58 -0500
- To: public-linked-json@w3.org
- CC: Joshua.Mandel@childrens.harvard.edu
I added an optimization to the normalization implementation that speeds up
Josh's test case and others like it. On the machine I tested it on, the
normalization time for the test case dropped from ~12 secs to ~2 secs.

I don't think we necessarily need to update the normalization spec's
algorithm prose, however, as this seems to be an implementation detail; it
does not (or at least should not) change the normalization output. The
optimization avoids re-sorting unlabeled nodes, during the canonical naming
phase, when it is known that the sort order will not change.

I also updated the json-ld.org playground with the latest jsonld.js
implementation.

-Dave

On 11/08/2011 10:29 PM, Josh Mandel wrote:
> I mentioned to Dave on IRC that I was seeing some not-huge graphs that
> are very slow to frame. He pointed out that this is likely because
> they're slow to normalize, and the current framing algorithm
> pre-normalizes. I don't know if this is a common use case, but for me
> framing is important, while normalization is less so. I promised to
> send a complete example of this kind of graph, so here goes:
>
> ---
> https://github.com/jmandel/jsonld.js/tree/slowness/smart
>
> Here's a JSON document representing ~1800 triples that takes 7 seconds
> to normalize. Lots of blank nodes :-)
>
> $ node smart-test.js
> 8 Nov 19:15:51 - Normalization time: 7021
> 8 Nov 19:15:51 - Framing time: 7189
> 8 Nov 19:15:51 - 1876 Triples time: 6585
> ---
>
> Incidentally, the sample data come from an RDF ontology I'm building
> for medical data (lab results in this case). All the data in this
> example are synthetic and/or anonymized. (More info at
> http://smartplatforms.org.)
>
> -Josh

-- 
Dave Longley
CTO
Digital Bazaar, Inc.
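[Editor's note: a minimal sketch of the kind of optimization Dave describes, i.e. skipping the re-sort of unlabeled blank nodes during the canonical naming phase when naming a node cannot have changed the remaining sort order. All names here (`canonicallyName`, `hashNode`, the node shape with `id`/`neighbors`) are hypothetical illustrations, not the actual jsonld.js internals.]

```javascript
// Hypothetical sketch: assign canonical names to unlabeled (blank) nodes
// in sorted hash order, re-sorting only when a just-named node could have
// affected the hashes of the nodes that remain.
function canonicallyName(unlabeled, hashNode) {
  const named = [];
  let sorted = false; // true while the current sort order is known valid
  let counter = 0;
  while (unlabeled.length > 0) {
    if (!sorted) {
      // Re-sort by each node's current hash; this is the cost we avoid
      // on iterations where the order cannot have changed.
      unlabeled.sort((a, b) => hashNode(a).localeCompare(hashNode(b)));
      sorted = true;
    }
    const node = unlabeled.shift();
    node.name = '_:c14n' + counter++;
    // Only a remaining node whose hash depends on the newly named node
    // can invalidate the order; otherwise skip the next re-sort.
    if (unlabeled.some((n) => n.neighbors.includes(node.id))) {
      sorted = false;
    }
    named.push(node);
  }
  return named;
}
```

The win comes from graphs like Josh's, with many blank nodes that are not neighbors of one another: most naming steps then leave the order intact, so the O(n log n) sort runs far fewer times.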
Received on Wednesday, 9 November 2011 18:13:27 UTC