- From: Aaron Coburn <acoburn@apache.org>
- Date: Sun, 23 Jan 2022 12:10:14 -0500
- To: Sebastian Hellmann <hellmann@informatik.uni-leipzig.de>
- Cc: public-webid <public-webid@w3.org>
- Message-ID: <CAD4uyLe1mKntcrcaEbhmrY1TDL2HZeMK6NayhVj4qSbBRyKdhg@mail.gmail.com>
I spent 20 minutes this morning writing a JSON-LD 1.1 context document for WebID resources. This is not meant as an actual proposal but merely as an example of what a well-defined context could achieve for a future JSON-LD-based WebID specification document. The context itself is at https://home.apache.org/~acoburn/context/webid.json

With this, a JSON-LD WebID document might look like this:

    {
      "@context": ["https://home.apache.org/~acoburn/context/webid.json"],
      "id": "https://id.example/acoburn#i",
      "type": ["Person"],
      "name": "Aaron Coburn",
      "primaryTopicOf": {
        "id": "https://id.example/acoburn",
        "type": ["PersonalProfileDocument"]
      }
    }

One could also reverse the Information Resource -- WebID relationship:

    {
      "@context": ["https://home.apache.org/~acoburn/context/webid.json"],
      "id": "https://id.example/acoburn",
      "type": ["PersonalProfileDocument"],
      "primaryTopic": {
        "id": "https://id.example/acoburn#i",
        "type": ["Person"],
        "name": "Aaron Coburn"
      }
    }

If you want to layer another system -- say, for example, Solid -- on top of this, it is easy with a separate context:

    {
      "@context": [
        "https://home.apache.org/~acoburn/context/webid.json",
        "https://home.apache.org/~acoburn/context/solid.json"
      ],
      "id": "https://id.example/acoburn#i",
      "type": ["Person"],
      "name": "Aaron Coburn",
      "storage": ["https://solid.example/1", "https://solid.example/2"],
      "oidcIssuer": ["https://idp.example"],
      "primaryTopicOf": {
        "id": "https://id.example/acoburn",
        "type": ["PersonalProfileDocument"]
      }
    }

As such, this sort of structured representation has several advantages: it is easy to extend (as shown with the Solid case), it is easy to deploy as a static resource, and (arguably) this format would be easy for constrained devices to consume, as was suggested in the IoT case. In fact, a client application would have no real need for an RDF parser at all -- it merely needs to read the document as JSON and verify the value(s) in the @context array; no full JSON-LD parser is necessary. But if a client wants all the semantic richness of the full RDF graph, that option is still open by using a JSON-LD 1.1 parser.

-Aaron
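A minimal sketch of the kind of constrained client described above might look like the following. It assumes the context IRI from the example; the function name, field selection, and error handling are illustrative only and not part of any specification. The client reads the profile as ordinary JSON and only checks the @context values; no RDF or JSON-LD tooling is involved.

    import json
    import urllib.request

    # Context IRI taken from the example above; a future spec would define
    # the set of context IRIs a conforming document may use.
    KNOWN_CONTEXTS = {"https://home.apache.org/~acoburn/context/webid.json"}

    def read_webid_profile(url: str) -> dict:
        """Fetch a JSON-LD WebID document and read it as plain JSON."""
        with urllib.request.urlopen(url) as resp:
            doc = json.load(resp)

        # Verify the @context value(s) against the known context IRI(s).
        contexts = doc.get("@context", [])
        if isinstance(contexts, str):
            contexts = [contexts]
        if not KNOWN_CONTEXTS.intersection(contexts):
            raise ValueError("document does not use a known WebID context")

        # Plain key access, as with any other JSON API response.
        return {
            "id": doc.get("id"),
            "name": doc.get("name"),
            "oidcIssuer": doc.get("oidcIssuer", []),
            "storage": doc.get("storage", []),
        }

The same check works unchanged for the Solid-extended example, since the additional context IRI simply appears alongside the WebID one in the @context array.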
On Sun, Jan 23, 2022 at 11:06 AM Sebastian Hellmann <hellmann@informatik.uni-leipzig.de> wrote:

> Hi Martynas,
>
> On 23.01.22 10:48, Martynas Jusevičius wrote:
>
> > If one specific RDF serialization would be mandated, I can say already now that we would not support such a WebID spec. Our servers can produce any format Jena supports, plus HTML, for every RDF resource, so that would not be possible even if we wanted to.
>
> I already suggested that we pick a definite list of 3-6 formats and fixate that:
>
> 1. Publishers MUST pick one format and follow all the MUSTs of the chosen format, including its modalities.
>
> 2. Requesting agents/consumers/parser implementations MUST implement all formats/modalities to be called conformant.
>
> > Top Linked Data researchers pretending not to understand content negotiation raises my eyebrows. It has been a feature of HTTP since forever.
>
> I didn't write that I don't understand it. I said not to assume that it is common knowledge. Also, there is still lots of liberty. A colleague implemented it like this: return 200 plus the payload and include a Location header. With some prototypes I answered "Accept: text/html" with "Content-Type: text/plain" although the payload was Turtle/N-Triples, because it renders better in the browser (text/turtle makes the browser offer the file as a download). Is this ok? The .ttl# approach is more robust, so in the end we would receive either good-quality .ttl# or poor-quality, heterogeneous implementations of content negotiation (unless we describe it in detail and build a validator).
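The liberty described in the paragraph above is easy to reproduce. Here is a toy sketch, not taken from the thread, of a handler that exhibits exactly the two behaviours mentioned: a 200 response carrying a Location header instead of a redirect, and Turtle served as text/plain whenever text/html is requested. The class name, port, profile triple, and Location value are illustrative assumptions.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    PROFILE_TTL = b"<#i> a <http://xmlns.com/foaf/0.1/Person> .\n"

    class WebIDHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            accept = self.headers.get("Accept", "")
            self.send_response(200)
            if "text/html" in accept:
                # Liberty 1: answer an HTML request with Turtle labelled as
                # text/plain, so browsers render it inline instead of
                # offering a download.
                self.send_header("Content-Type", "text/plain; charset=utf-8")
            else:
                self.send_header("Content-Type", "text/turtle")
            # Liberty 2: a 200 with payload plus a Location header,
            # rather than an actual redirect.
            self.send_header("Location", "https://id.example/acoburn")
            self.send_header("Content-Length", str(len(PROFILE_TTL)))
            self.end_headers()
            self.wfile.write(PROFILE_TTL)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), WebIDHandler).serve_forever()

Both behaviours look fine to a human testing in a browser, yet a client cannot rely on the status-code semantics or the Content-Type, which is the heterogeneity a detailed description or validator would have to catch.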
> > The effort to dumb down RDF Linked Data to make it more accessible to some mythical "developers" continues to amaze me. Those developers most likely do not even need Linked Data, as they don't have the sort of problems it addresses.
>
> > We shouldn't be looking at easy solutions, we should be looking at first principles and the *right* solutions.
>
> Dumbing down != engineering down.
>
> Engineering down requires taking all the hard decisions while considering many factors, i.e. the publisher and receiver side and security, doing a proper requirement analysis, and then, for the standard, also compromising so that 80% of cases are covered well.
>
> Compare going from a WebID to https://w3id.org/atomgraph/linkeddatahub/admin/#Agent, then finding the rdfs:subClassOf foaf:Agent triple and doing inference, versus mandating that foaf:Agent has to be present in the serialization.
>
> The former requires a full-fledged Semantic Web client.
>
> The second option also annoys me, and normally I do not materialize the schematic inference in the instance data, e.g. that dataid:Dataset is a subclass of dcat:Dataset. But it is more robust and easier for clients to handle.
>
> > first principles and the *right* solutions.
>
> The example above (i.e. type redundancy) goes against principles. But you can gain something here, i.e. easier parsing and understanding, and pre-computed interoperability.
>
> -- Sebastian
>
> > Martynas
> >
> > On Sun, Jan 23, 2022 at 2:23 AM Sebastian Hellmann <hellmann@informatik.uni-leipzig.de> wrote:
> >> Hi Jonas,
> >>
> >> On 22.01.22 01:09, Jonas Smedegaard wrote:
> >>
> >> Quoting Sebastian Hellmann (2022-01-22 00:21:49)
> >>
> >> Hi Jonas,
> >>
> >> a question: I am having trouble finding the current spec. Also I cannot find anything about NetID. See more inline.
> >>
> >> Current draft of the WebID spec is this: https://www.w3.org/2005/Incubator/webid/spec/identity/
> >>
> >> Are you sure that this is a spec? I see it as an inspirational document on how a spec could look, if one spent the effort to work on it.
> >>
> >> I saw that you forked the spec into GitHub, but I would actually propose to start from scratch and just cherry-pick from this document. When we implemented it, we had to rely mostly on personal experience and things we remembered from Henry Story's presentations, when he was on his WebID tour over a decade ago, from AKSW people, and from OpenLink documentation.
> >>
> >> See e.g.:
> >>
> >> "3. The WebID HTTP URI" -> Is HTTPS not mandatory? Will we be able to move forward by including HTTP in any form?
> >>
> >> "There are two solutions that meet our requirements for identifying real-world objects: 303 redirects and hash URIs." -> How do 303 redirects identify real-world objects? URIs that resolve to 303? Hash URIs might also resolve to 303.
> >>
> >> "Personal details are the most common requirement when registering an account with a website. Some of these pieces of information include an e-mail address, a name and perhaps an avatar image, expressed using the FOAF [FOAF] vocabulary. This section includes properties that SHOULD be used when conveying key pieces of personal information but are NOT REQUIRED to be present in a WebID Profile:"
> >>
> >> <#me> a owl:Thing.
> >>
> >> 1. Hash URI ✅
> >> 2. Turtle ✅
> >>
> >> These are all the MUST requirements I could find. It doesn't even need the foaf:PersonalProfileDocument declaration, so ✅ valid WebID.
> >>
> >> "5.4 Privacy" -> Is this in scope of "how to publish WebIDs"?
> >>
> >> 6. Processing the WebID Profile: The Requesting Agent needs to fetch the document, if it does not have a valid one in cache.
> >>
> >> It is recommended that the Requesting Agent sets a qvalue for text/turtle in the HTTP Accept-Header with a higher priority than in the case of application/xhtml+xml or text/html, as sites may produce HTML without RDFa markup but with a link to a graph encoded in a pure RDF format such as Turtle. For an agent that can parse Turtle, RDF/XML and RDFa, the following would be a reasonable Accept header:
> >>
> >> Accept: text/turtle,application/rdf+xml,application/xhtml+xml;q=0.8,text/html;q=0.7
> >>
> >> <rhetorical>What?</rhetorical>
> >>
> >> -- Sebastian
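To make the type-redundancy point above concrete, here is a small Turtle sketch contrasting the two publication styles it describes. The class IRI is the one mentioned in the thread; the ldh: prefix label is an illustrative assumption, and foaf: is the standard FOAF namespace.

    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    @prefix ldh:  <https://w3id.org/atomgraph/linkeddatahub/admin/#> .

    # Variant A: application-specific type only. A consumer must dereference
    # the ontology and follow rdfs:subClassOf to conclude that <#i> is a
    # foaf:Agent, i.e. it needs a full-fledged Semantic Web client.
    <#i> a ldh:Agent .

    # Variant B: the superclass is materialized in the instance data. This is
    # redundant under RDFS inference, but a simple client can check for
    # foaf:Agent directly, without fetching any ontology.
    <#i> a ldh:Agent, foaf:Agent .

The second form trades a little redundancy for the "pre-computed interoperability" mentioned above, which is what mandating foaf:Agent in the serialization would buy a WebID consumer.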
Received on Sunday, 23 January 2022 17:10:44 UTC