
RE: Multi-pass @context (ISSUE #41)

From: Markus Lanthaler <markus.lanthaler@gmx.net>
Date: Sun, 4 Dec 2011 15:23:32 +0800
To: "'Linked JSON'" <public-linked-json@w3.org>
Cc: "'Gregg Kellogg'" <gregg@kellogg-assoc.com>
Message-ID: <005201ccb255$a3fc7fa0$ebf57ee0$@lanthaler@gmx.net>
Gregg Kellogg wrote:

> Well, you could have a dependency chain of term definitions, so it
> could be confusing as to how to do this, but that's a real corner-case. 

As long as there are no cycles, everything is fine. But as you've said,
that's a real corner case which we should document but not worry too
much about.
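For illustration (my own sketch, not from the thread): the first context
below contains a harmless dependency chain -- "name" depends on "foaf",
which is defined in the same context -- while the second one is cyclic
and could never be resolved:

```json
{
  "@context": {
    "foaf": "http://xmlns.com/foaf/0.1/",
    "name": "foaf:name"
  }
}
```

```json
{
  "@context": {
    "a": "b:x",
    "b": "a:y"
  }
}
```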


> We'd need to say, first look for term definitions. Then, resolve any
> IRIs that don't depend on these term definitions using the active
> context. Subsequently, resolve term definitions that do have these
> dependencies (ignoring, for the moment, the pathological cascading
> dependency issue). Then resolve @datatype IRIs.

An alternative would be to describe the algorithm in a recursive way; it
just follows the "links" (i.e., prefixes). That would also explain the
issues with cycles. But I agree, describing it the way you suggested is
probably much easier to understand.
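A minimal sketch of such a recursive resolver (my own illustration, not
the spec algorithm; the function name is made up). It simply follows the
prefix "links" and raises an error when it revisits a term:

```python
# Sketch only: recursively expand a term against a JSON-LD-style
# context, following prefix "links" and detecting cycles.

def expand(term, context, seen=None):
    """Expand term to a full IRI; raise ValueError on cyclic definitions."""
    seen = set() if seen is None else seen
    if term in seen:
        raise ValueError("cyclic term definition: " + term)
    if ":" in term:
        prefix, suffix = term.split(":", 1)
        if prefix in context:
            seen.add(term)
            return expand(context[prefix], context, seen) + suffix
        return term  # scheme is not a defined prefix: treat as absolute IRI
    if term in context:
        seen.add(term)
        return expand(context[term], context, seen)
    return term  # plain term without a mapping

context = {"foaf": "http://xmlns.com/foaf/0.1/", "name": "foaf:name"}
print(expand("name", context))  # http://xmlns.com/foaf/0.1/name
```

On the cyclic context from above (`{"a": "b:x", "b": "a:y"}`) this
raises a ValueError instead of looping forever.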


> This was why I suggested not covering this case, but it's really an
> issue for processor implementers to worry about, not authors. Not too
> clean, though.

Why do you think it's not clean? You have the same issues in almost all
technologies that allow references.


> > I would prefer the second form as it really eases the life of users
> > while the algorithms aren't getting much more complex. I quickly
> > hacked a demo in PHP [1] that can process prefix:suffixes in @iri
> > and in @datatype.
> 
> This would be the most consistent, if the added complexity to the
> algorithm is acceptable.

Based on my little experiments with the feature, I think the added
complexity shouldn't be an issue at all.
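For reference, the feature under discussion -- allowing prefix:suffix
both in @iri and in @datatype -- would let authors write contexts like
this (my own example):

```json
{
  "@context": {
    "xsd": "http://www.w3.org/2001/XMLSchema#",
    "foaf": "http://xmlns.com/foaf/0.1/",
    "birthday": {
      "@iri": "foaf:birthday",
      "@datatype": "xsd:date"
    }
  }
}
```

Both "foaf:birthday" and "xsd:date" are resolved against prefixes
defined in the same context.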


> >> As an alternative, we could define a set of terms that are defined
> >> in a global "default" context, similar to RDFa's default context.
> >> There we could define mappings for xsd, rdf, dc, foaf, schema and
> >> others. This would eliminate most need for doing two-pass
> >> processing of contexts.
> >
> > I have a really strong opinion of not doing this. It just makes
> > things more complicated than they need to be. We can have a
> > reference context defined somewhere at w3.org or json-ld.org that
> > can be imported with those defaults but not hardcoded in the
> > implementation.
> 
> Yes, of course, as RDFa does it. That was what I meant.

No, I think you misunderstood me. In RDFa you don't have to include
anything; foaf, for example, is available out of the box. I wouldn't
like to see something similar in JSON-LD. What I proposed was to create
a "default" context and host it somewhere, so that authors can either
download it and host it themselves or just reference it in their
JSON-LD documents.
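In other words, instead of mappings built into processors, a document
would opt in explicitly by referencing the hosted context (the URL
below is hypothetical):

```json
{
  "@context": "http://json-ld.org/contexts/default.jsonld",
  "foaf:name": "Markus Lanthaler"
}
```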



--
Markus Lanthaler
@markuslanthaler
Received on Sunday, 4 December 2011 07:24:09 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 16:18:32 UTC