- From: Rob Atkinson <ratkinson@ogc.org>
- Date: Mon, 10 Nov 2025 22:46:13 +0000
- To: Benjamin Young <byoung@bigbluehat.com>, Alastair Parker <alastair@jargon.sh>, "public-json-ld@w3.org" <public-json-ld@w3.org>
- Message-ID: <SJ0PR01MB744883704A15BBA798008DB9BCCEA@SJ0PR01MB7448.prod.exchangelabs.com>
Hi,

I hope to look at this in the new year, when a few other project pressures allow. However, a couple of things:

1. The idea of bringing JSON-LD and semantic models to the existing JSON ecosystem mirrors closely the challenges faced in the OGC and other adopters of OpenAPI.
2. We have OWL, UML, XML and other types of artefacts in a technology-agnostic standards ecosystem.
3. Our focus is on CI/CT for such models.
4. We have some (and intend more) support for FAIR transforms between alternative representations.
5. We need to support communities profiling base standards in their domain.
6. We see layers of abstraction - addressing interoperability of different aspects of a data supply chain: APIs, infrastructure, software, libraries, developer skills, etc.

To this end we are building and testing libraries of "Building Blocks" based on standards, with an emphasis on JSON-LD to join schemas, semantic models, controlled vocabulary registers and rules (e.g. SHACL).

https://ogcincubator.github.io/bblocks-docs/

One of the big strengths is inheritance of SHACL rules from dependencies - these can handle things that turn JSON Schema into spaghetti when multiple options can co-exist. They can also provide ways to handle content without unmaintainable enumerations in schemas.

I'm interested in exploring how such normative content could be exploited by tools such as Jargon to support scalable approaches to modelling the diverse communities of practice that use standards, but need to specialise with specific content. Also, the extent to which Jargon could be used to support the development of such libraries, by offering profiles based on existing components to solve specific domain requirements.

Note - for the *-LD community there remain open questions regarding how to handle the impedance mismatch between JSON structures and semantic models - for example GeoJSON and GeoSPARQL.
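To make the SHACL point concrete, here is a minimal hypothetical shape (not taken from the OGC Building Blocks; all names are invented). A choice like "geometry may be inline or by reference" multiplies through every JSON Schema that embeds it, but as a SHACL constraint it is stated once and inherited by any building block that depends on it:

```turtle
# Hypothetical sketch: an either/or constraint expressed once in SHACL,
# rather than as combinatorial oneOf branches in every dependent schema.
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix geo: <http://www.opengis.net/ont/geosparql#> .
@prefix ex:  <https://example.org/shapes#> .   # hypothetical namespace

ex:FeatureShape a sh:NodeShape ;
    sh:targetClass geo:Feature ;
    sh:or (
        [ sh:path geo:hasGeometry ; sh:minCount 1 ]    # inline geometry
        [ sh:path ex:geometryRef  ; sh:minCount 1 ]    # hypothetical by-reference property
    ) .
```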
We have some experiments here: https://ogcincubator.github.io/bblocks-examples/bblock/ogc.bbr.examples.feature.geosparqlFeature/examples

The problem of where processing takes place is one of great interest - but I think the current approach of "someone else's problem" for such a common use case is suboptimal. I think strong guidance about options, and a mechanism to define extended processing with examples, would help.

I emphasise this here, because this is the fundamental problem for something like Jargon, and JSON-LD in general - how to encapsulate and re-use the complex bits so people can do extensions to meet their needs simply - and in this I agree with Alastair's comment that "developers don’t even need to be aware they’re working with JSON-LD unless they choose to". I would extend that principle to:

* Specification of any additional processing
* Mechanisms to profile using application-specific content
* Setting up CI/CT testing frameworks
* SHACL or other rules for logical consistency

Rob Atkinson
Senior Research Engineer | Open Geospatial Consortium (OGC)
Mobile: +61 419 202973
ratkinson@ogc.org | ogc.org | @opengeospatial

________________________________
From: Benjamin Young <byoung@bigbluehat.com>
Sent: Tuesday, November 11, 2025 8:31 AM
To: Alastair Parker <alastair@jargon.sh>; public-json-ld@w3.org <public-json-ld@w3.org>
Subject: Re: Sharing some real-world use of JSON-LD

Thanks for sharing this, Alastair!
I've been able to put some more significant time into trying Jargon out, and I'm enjoying it! Having the central visual of what I'm describing is helping shape the JSON-LD contexts I'm trying to describe - which is very handy!

I do wonder if anyone else here has given it a try. My primary use right now is exporting JSON-LD contexts for Verifiable Credentials, but I'm certain there would be other interesting modeling use cases.

Anyone else kick the tires yet?

Cheers!
Benjamin
--
https://bigbluehat.com/
https://linkedin.com/in/benjaminyoung

________________________________
From: Alastair Parker <alastair@jargon.sh>
Sent: Monday, October 6, 2025 10:10 PM
To: public-json-ld@w3.org <public-json-ld@w3.org>
Subject: Sharing some real-world use of JSON-LD

Hello all,

Benjamin suggested I introduce myself. I’m Al, founder of Jargon, a modelling tool that we use to generate JSON Schema, JSON-LD contexts, and related artefacts from composable domain models. Here are two examples of how we’re using JSON-LD in practice:

United Nations Transparency Protocol (UNTP): While I can’t speak on behalf of the UNTP team, I can share that the team use Jargon to model trade and supply-chain domains - a mix of UN-specific properties and references to established vocabularies like schema.org. From these models, the team generate both JSON Schema and JSON-LD contexts, and Jargon ensures they work together: the schema enforces mechanical @type properties that the context file then relies on for expansion.

Jargon follows Domain-Driven Design, and those principles flow through to the @context file - with entities (things with business identity) represented as top-level named items that resolve to @types, and value objects (things without business identity) declared in nested @context entries beneath their owning entities.
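For readers less familiar with that pattern, a hypothetical JSON-LD 1.1 context fragment along those lines might look like this (the names and namespace are invented for illustration; real UNTP contexts are generated by Jargon). The entity "Shipment" is a top-level term resolving to a type IRI, and the value object's terms live in a scoped @context beneath it:

```json
{
  "@context": {
    "@version": 1.1,
    "ex": "https://vocab.example.org/untp#",
    "Shipment": {
      "@id": "ex:Shipment",
      "@context": {
        "origin": {
          "@id": "ex:origin",
          "@context": {
            "street": "ex:street",
            "country": "ex:country"
          }
        }
      }
    }
  }
}
```

Because the scoped context only applies where the "Shipment" type is used, the value object's terms stay local to their owning entity rather than polluting the top level.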
The goal is to let ordinary web developers keep working with JSON and their existing tooling, while still participating in a semantic ecosystem - but keeping their JSON feeling familiar to how it’s normally structured. In practice, this means things like correct @type values “just happen” when they generate code from the schemas - developers don’t even need to be aware they’re working with JSON-LD unless they choose to.

Enterprise data provenance: We have enterprise customers who aren’t interested in JSON-LD or semantics at all, but care deeply about identifying data provenance in their JSON. Jargon uses Domain-Driven Design to model data that draws from multiple domains into developer artefacts like JSON Schema that tend to be monolithic, without borders resembling the input domains. As a result, similarly named concepts aren’t easily distinguishable in the JSON alone - for example, “customer” in billing vs. “customer” in support lose their provenance once serialised.

By expanding into JSON-LD, each usage is grounded with a unique identifier, allowing teams to extract the provenance back out again. Teams rarely care what the IRIs resolve to - if anything - and tend only to care that they are unique enough to namespace them. Some teams then consume the expanded JSON-LD directly, while others simply check provenance in the expanded graph before discarding it and processing the unexpanded JSON.

For us - and most of our clients, who come from more JSON, API, and object-oriented exchange backgrounds - JSON-LD has proven to be the simplest and most effective way to carry “just enough” semantics alongside JSON for consumers who want it, even if that’s where semantics ends, and never touches RDF, triples, or graphs. This also joins up well with how these customers are designing and governing the individual domains in Jargon - giving them strong alignment between design and implementation that smooths over many bumps in shared understanding.
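A toy sketch of that provenance idea (this is not the full JSON-LD expansion algorithm, and the contexts and IRIs are hypothetical): two domains both use the term "customer", but each context grounds it in a distinct IRI, so the two usages remain distinguishable after expansion.

```python
# Hypothetical per-domain contexts; real ones would be generated artefacts.
BILLING_CTX = {"customer": "https://example.com/billing#customer"}
SUPPORT_CTX = {"customer": "https://example.com/support#customer"}

def expand_terms(doc: dict, ctx: dict) -> dict:
    """Replace known terms with their full IRIs (greatly simplified expansion)."""
    return {ctx.get(key, key): value for key, value in doc.items()}

billing = expand_terms({"customer": "ACME-001"}, BILLING_CTX)
support = expand_terms({"customer": "ACME-001"}, SUPPORT_CTX)

# Identical JSON, but after expansion the provenance of each "customer"
# is recoverable from its IRI:
assert set(billing) != set(support)
```

A real pipeline would use a JSON-LD processor's expansion instead of this hand-rolled mapping, but the effect is the same: the IRI, not the short term, carries the domain of origin.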
We’ve also found that these approaches haven’t ruffled too many feathers among JSON purists, with the artefacts working seamlessly in typical JSON pipelines. Al
Received on Monday, 10 November 2025 22:46:21 UTC