- From: Alastair Parker <alastair@jargon.sh>
- Date: Tue, 7 Oct 2025 13:10:47 +1100
- To: public-json-ld@w3.org
- Message-ID: <CANoMpB4scbQUuB3LGOqLYa1Kg+V3PtCckSemP1yEEZu-FuLXMg@mail.gmail.com>
Hello all,

Benjamin suggested I introduce myself. I’m Al, founder of Jargon, a modelling tool that we use to generate JSON Schema, JSON-LD contexts, and related artefacts from composable domain models. Here are two examples of how we’re using JSON-LD in practice:

United Nations Transparency Protocol (UNTP): While I can’t speak on behalf of the UNTP team, I can share that they use Jargon to model trade and supply-chain domains - a mix of UN-specific properties and references to established vocabularies like schema.org. From these models the team generate both JSON Schema and JSON-LD contexts, and Jargon ensures the two work together: the schema enforces mechanical @type properties that the context file then relies on for expansion. Jargon follows Domain-Driven Design, and those principles flow through to the @context file: entities (things with business identity) are represented as top-level named terms that resolve to @types, while value objects (things without business identity) are declared in nested @context entries beneath their owning entities. The goal is to let ordinary web developers keep working with JSON and their existing tooling - with their JSON still structured the way they expect - while participating in a semantic ecosystem. In practice, this means correct @type values “just happen” when developers generate code from the schemas; they don’t even need to be aware they’re working with JSON-LD unless they choose to.

Enterprise data provenance: We have enterprise customers who aren’t interested in JSON-LD or semantics at all, but care deeply about identifying data provenance in their JSON. Jargon uses Domain-Driven Design to model data drawn from multiple domains, but the resulting developer artefacts, such as JSON Schema, tend to be monolithic, without borders resembling the input domains. As a result, similarly named concepts aren’t easily distinguishable in the JSON alone - for example, “customer” in billing vs. “customer” in support lose their provenance once serialised. By expanding into JSON-LD, each usage is grounded with a unique IRI, allowing teams to extract the provenance back out again. Teams rarely care what the IRIs resolve to - if anything - only that they are unique enough to act as namespaces. Some teams consume the expanded JSON-LD directly; others simply check provenance in the expanded graph before discarding it and processing the unexpanded JSON.

For us - and for most of our clients, who come from JSON, API, and object-oriented exchange backgrounds - JSON-LD has proven the simplest and most effective way to carry “just enough” semantics alongside JSON for consumers who want it, even if that’s where semantics ends and RDF, triples, and graphs are never touched. This also joins up well with how these customers design and govern the individual domains in Jargon - giving them strong alignment between design and implementation that smooths over many bumps in shared understanding. We’ve also found that these approaches haven’t ruffled too many feathers among JSON purists, with the artefacts working seamlessly in typical JSON pipelines.

Al
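P.S. In case a concrete shape helps: the entity/value-object split described above might look roughly like the context below. The IRIs and term names here are hypothetical placeholders, not the actual UNTP vocabulary - just a sketch of the general pattern using a JSON-LD 1.1 type-scoped context with nested terms.

```json
{
  "@context": {
    "ex": "https://vocab.example.com/trade#",
    "Shipment": {
      "@id": "ex:Shipment",
      "@context": {
        "origin": "ex:origin",
        "Dimensions": {
          "@id": "ex:Dimensions",
          "@context": {
            "unit": "ex:unit",
            "value": "ex:value"
          }
        }
      }
    }
  }
}
```

When a document carries "@type": "Shipment", the scoped context activates, so plain-looking JSON keys expand to the intended IRIs without developers ever writing JSON-LD by hand.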
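To make the provenance point concrete (again with hypothetical IRIs): a context can map two same-named “customer” terms into different namespaces, so a compacted document like this...

```json
{
  "@context": {
    "billingCustomer": "https://vocab.example.com/billing#customer",
    "supportCustomer": "https://vocab.example.com/support#customer"
  },
  "billingCustomer": "ACME Pty Ltd",
  "supportCustomer": "ACME Pty Ltd"
}
```

...yields the following when run through a JSON-LD processor’s expand operation:

```json
[
  {
    "https://vocab.example.com/billing#customer": [{ "@value": "ACME Pty Ltd" }],
    "https://vocab.example.com/support#customer": [{ "@value": "ACME Pty Ltd" }]
  }
]
```

The two “customer” usages are now distinguishable by namespace alone, which is the provenance signal those teams extract.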
Received on Tuesday, 7 October 2025 11:30:24 UTC