Re: Open Source Tools + Workflow for Publishing Linked Data

Hello Jason,
It is good to hear that you have been investigating Fedora 4.

Regarding #3 from your workflow description, and specifically the "dynamic
generation of alternate linked data serializations": Fedora 4 uses content
negotiation to produce RDF responses dynamically. Any RDF resource managed
by a Fedora 4 repository can be returned in any of the following
serializations, given the appropriate "Accept" header (a small request
sketch follows the list):
* Turtle
* JSON-LD
* N3
* RDF/XML
* N-Triples
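
For illustration, here is a minimal sketch (Python with the "requests"
library) of what such content-negotiated requests might look like. The
repository base URL and resource path are hypothetical placeholders, and
the media types are the standard ones for each serialization:

import requests

FEDORA_BASE = "http://localhost:8080/rest"  # hypothetical repository root
resource = FEDORA_BASE + "/vocabularies/verbs/completed"  # hypothetical resource path

# Standard media types for the serializations listed above
accept_types = {
    "Turtle":    "text/turtle",
    "JSON-LD":   "application/ld+json",
    "N3":        "text/n3",
    "RDF/XML":   "application/rdf+xml",
    "N-Triples": "application/n-triples",
}

for name, media_type in accept_types.items():
    resp = requests.get(resource, headers={"Accept": media_type})
    print(name, resp.status_code, resp.headers.get("Content-Type"))

The same resource URI is dereferenced each time; only the "Accept" header
changes.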

Conversely, in order to create a Fedora 4 RDF resource, one of the above
serializations must be provided. Although RDFa is not on that list, other
user-friendly mechanisms or template-driven serializations could be
employed to help users produce the initial RDF documents; a rough sketch
of one such approach follows this paragraph. If, however, RDFa is a firm
requirement, it would be useful to understand more specifically what you
are trying to achieve. There are some potential avenues within the context
of Fedora 4 that may be worth exploring.
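
As an example of such a mechanism, the sketch below (Python, using the
rdflib and requests libraries) builds a small SKOS description of a
vocabulary term and hands it to the repository as Turtle. The term IRI,
labels, and target path are hypothetical, and it assumes the repository
accepts an HTTP PUT with a Turtle body at that path:

import requests
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import RDF, SKOS

# Hypothetical IRI for a vocabulary term
term = URIRef("http://example.org/vocab/verbs/completed")

# Build a minimal SKOS description of the term
g = Graph()
g.add((term, RDF.type, SKOS.Concept))
g.add((term, SKOS.prefLabel, Literal("completed", lang="en")))
g.add((term, SKOS.definition, Literal("The actor finished the activity.", lang="en")))

# PUT the Turtle serialization to a (hypothetical) repository path
resp = requests.put(
    "http://localhost:8080/rest/vocabularies/verbs/completed",
    data=g.serialize(format="turtle"),
    headers={"Content-Type": "text/turtle"},
)
print(resp.status_code)

A template or simple web form could collect the labels and definitions
from your CoPs and feed this kind of script, so that nobody has to
hand-author RDF.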

As you may know, the project's developer [1] and user [2] communities are
quite active. Please feel free to send any technical or user-related
questions to the respective lists directly.

Regards,
Andrew Woods
Fedora 4 Technical Lead
[1] https://groups.google.com/forum/#!forum/fedora-tech
[2] https://groups.google.com/forum/#!forum/fedora-community

> On 2015-07-24 14:47, Haag, Jason wrote:
>
>> Hello Semantic Web Community,
>>
>> I'm from the learning technology space and we have been investigating
>> the use of semantic web technology as part of our workflow for
>> publishing controlled vocabulary terms. These terms help provide the
>> specific meaning of verbs and activities supporting various learning
>> experiences. We've mostly been trying to leverage SKOS and PROV
>> ontologies for this effort.
>>
>> I'm interested in leveraging open source tools that might help our
>> Communities of Practice (CoPs) more easily publish these terms as linked
>> data. I envision a publishing tool or repository interface that would
>> bring the process together rather nicely, and also help complement our
>> governance and maintenance concerns. We can't expect our
>> disparate CoPs to each have the resources or knowledge to configure
>> servers on their own to support content negotiation for the level of
>> granularity we are interested in for publishing our linked data.
>>
>> I envision a workflow that would support the following:
>>
>> 1) allow CoPs to utilize HTML/RDFa templates and simply populate those
>> with persistent URIs and the suggested metadata from SKOS and PROV.
>> 2) publish the RDFa to a web server or repository tool
>> 3) a service would dynamically generate alternate linked data
>> serializations (e.g., JSON-LD) of the RDFa/HTML based on the
>> dereferenced HTTP request
>> 4) any application could then consume linked data in any format in real
>> time based on the single source HTML/RDFa provided at each IRI
>>
>> RDFa seems to be the most user-friendly option for those who are not
>> RDF savvy. Also, rather than putting the responsibility on CoPs to embed
>> JSON-LD in HTML or configure / establish various rewrite rules it seems
>> a publishing server or service might handle this more efficiently. Does
>> this seem like a practical workflow for publishing linked data? Are
>> there any flaws with this proposed workflow process?
>>
>> Finally, is anyone from this community aware of any open source
>> applications that would support this type of workflow? Thank you in
>> advance for your responses and support.
>>
>> Warm Regards,
>>
>> J Haag

Received on Monday, 10 August 2015 20:30:26 UTC