Re: Safe manipulation of RDF data (from semantic-web)

Well, that complicates the setup a bit. A proper solution would be to run
your Fuseki within Docker as well - just as the example does.

If that is not an option, I need to look into how to address this.
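
For reference, a rough compose sketch of that setup (untested; the fuseki service name, image and dataset paths below are placeholders - any Fuseki image and dataset layout of your choice would do):

version: "2"
services:
  fuseki:
    image: stain/jena-fuseki # placeholder - use whatever Fuseki image/version you prefer
    volumes:
      - ./fuseki-data:/fuseki # persist the dataset on the host
  processor:
    image: atomgraph/processor
    depends_on:
      - fuseki
    ports:
      - 8090:8080
    environment:
      - ENDPOINT=http://fuseki:3030/ds/ # hostname equals service name
      - GRAPH_STORE=http://fuseki:3030/ds/ # hostname equals service name
      - ONTOLOGY=https://resource.lingsoft.fi/aabb#

With both containers on the same compose network, Processor reaches Fuseki by its service name, and nothing needs network_mode: host.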

On Wed, 2 Oct 2019 at 15.05, Mikael Pesonen <mikael.pesonen@lingsoft.fi>
wrote:

>
> Everything is working on our intranet, so no public addresses.
>
> On 02/10/2019 16:04, Martynas Jusevičius wrote:
> > Is semantic-dev.lingsoft.fi some kind of internal hostname? It does
> > not seem to be publicly accessible.
> >
> > On Wed, Oct 2, 2019 at 2:56 PM Mikael Pesonen
> > <mikael.pesonen@lingsoft.fi> wrote:
> >>
> >> That works, but then Fuseki at https://semantic-dev.lingsoft.fi is not
> >> found. Maybe there is another solution for that?
> >>
> >> On 02/10/2019 15:54, Martynas Jusevičius wrote:
> >>> Can you please remove the network_mode: host and see if it helps?
> >>>
> >>> On Wed, Oct 2, 2019 at 2:52 PM Mikael Pesonen
> >>> <mikael.pesonen@lingsoft.fi> wrote:
> >>>> That one I tried, and it still results in the error
> >>>>
> >>>> processor_1  | 02-Oct-2019 12:50:18.057 SEVERE [main]
> >>>> org.apache.coyote.AbstractProtocol.init Failed to initialize end point
> >>>> associated with ProtocolHandler ["http-apr-8080"]
> >>>> processor_1  |  java.lang.Exception: Socket bind failed: [98] Address
> >>>> already in use
> >>>>
> >>>>
> >>>> yml:
> >>>>
> >>>> version: "2"
> >>>> services:
> >>>>      processor:
> >>>>        image: atomgraph/processor
> >>>>        ports:
> >>>>          - 8090:8080
> >>>>          - 8010:8000 # debugger
> >>>>        environment:
> >>>>          - JPDA_ADDRESS=8000 # debugger port
> >>>>          - ENDPOINT="https://semantic-dev.lingsoft.fi/fuseki/ds" # hostname equals service name
> >>>>          - GRAPH_STORE="https://semantic-dev.lingsoft.fi/fuseki/ds" # hostname equals service name
> >>>>          - ONTOLOGY="https://resource.lingsoft.fi/aabb#"
> >>>>        volumes:
> >>>>          - /home/text/cases/nimisampo/proxy/location-mapping.n3:/usr/local/tomcat/webapps/ROOT/WEB-INF/classes/custom-mapping.n3
> >>>>          - /home/text/cases/nimisampo/proxy/person.ttl:/usr/local/tomcat/webapps/ROOT/WEB-INF/classes/org/wikidata/ldt.ttl
> >>>>          - /home/text/cases/nimisampo/proxy/log4j.properties:/usr/local/tomcat/webapps/ROOT/WEB-INF/classes/log4j.properties
> >>>>        network_mode: host
> >>>>      nginx:
> >>>>        image: nginx
> >>>>        depends_on:
> >>>>          - processor
> >>>>        ports:
> >>>>          - 90:80
> >>>>        environment:
> >>>>          - PROXY_PASS=http://localhost:8080 # internal Processor URL (hostname equals docker-compose service name)
> >>>>          - PROXY_SET_HOST=https://resource.lingsoft.fi # the hostname set on the request URI before it's passed to Processor
> >>>>        volumes:
> >>>>          - ./nginx.conf.template:/etc/nginx/nginx.conf.template:ro
> >>>>        command: /bin/bash -c "envsubst '$$PROXY_PASS $$PROXY_SET_HOST' < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf && nginx -g 'daemon off;'"
> >>>>
> >>>>
> >>>> On 02/10/2019 15:38, Martynas Jusevičius wrote:
> >>>>> Then you can try mapping for example 8090:8080 for processor and
> >>>>> 90:80 for nginx.
> >>>>>
> >>>>> nginx will be available on http://localhost:90.
> >>>>>
> >>>>> If you only use the nginx address, then the processor port
> >>>>> mapping is not necessary and can be removed (nginx will still
> >>>>> communicate with processor inside the container network, but processor
> >>>>> will not be exposed to the host anymore).
> >>>>>
> >>>>> On Wed, Oct 2, 2019 at 2:29 PM Mikael Pesonen
> >>>>> <mikael.pesonen@lingsoft.fi> wrote:
> >>>>>> Unfortunately we have other services running on ports 8080 and 80...
> >>>>>>
> >>>>>> On 02/10/2019 14:17, Martynas Jusevičius wrote:
> >>>>>>> Mikael,
> >>>>>>>
> >>>>>>> you shouldn't have changed the ports. The left-hand side is the host
> >>>>>>> port, so now you have both processor and nginx trying to get exposed
> >>>>>>> on port 8090 of your host, which fails.
> >>>>>>> 8080:8080 for processor and 80:80 for nginx should be adequate,
> >>>>>>> unless you already have something running on port 80 on your host?
> >>>>>>>
> >>>>>>> Also, network_mode: host should not be necessary.
> >>>>>>>
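
In other words, with the example's defaults the relevant mapping would simply be the following (host port on the left, container port on the right), with no network_mode at all:

  processor:
    ports:
      - 8080:8080
  nginx:
    ports:
      - 80:80

If 8080/80 are taken on the host, only the left-hand side needs to change (e.g. 8090:8080 and 90:80).
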
> >>>>>>> On Wed, Oct 2, 2019 at 12:18 PM Mikael Pesonen
> >>>>>>> <mikael.pesonen@lingsoft.fi> wrote:
> >>>>>>>> Compose worked when started with sudo. Now I'm lost with the ports.
> >>>>>>>>
> >>>>>>>>>> sudo docker-compose up
> >>>>>>>> Removing proxy_nginx_1
> >>>>>>>> proxy_processor_1 is up-to-date
> >>>>>>>> Recreating 17117cafe8ef_proxy_nginx_1
> >>>>>>>> Attaching to proxy_processor_1, proxy_nginx_1
> >>>>>>>> nginx_1      | envsubst: error while reading "standard input": Is a directory
> >>>>>>>> processor_1  | @prefix lm: <http://jena.hpl.hp.com/2004/08/location-mapping#> .
> >>>>>>>> processor_1  |
> >>>>>>>> processor_1  | [] lm:mapping
> >>>>>>>> processor_1  |
> >>>>>>>> processor_1  |    [ lm:name "https://www.w3.org/ns/ldt#" ; lm:altName "com/atomgraph/processor/ldt.ttl" ] ,
> >>>>>>>> processor_1  |    [ lm:name "https://www.w3.org/ns/ldt/core/domain#" ; lm:altName "com/atomgraph/processor/c.ttl" ] ,
> >>>>>>>> processor_1  |    [ lm:name "https://www.w3.org/ns/ldt/core/templates#" ; lm:altName "com/atomgraph/processor/ct.ttl" ] ,
> >>>>>>>> processor_1  |    [ lm:name "https://www.w3.org/ns/ldt/named-graphs/templates#" ; lm:altName "com/atomgraph/processor/ngt.ttl" ] ,
> >>>>>>>> processor_1  |    [ lm:name "https://www.w3.org/ns/ldt/document-hierarchy/domain#" ; lm:altName "com/atomgraph/processor/dh.ttl" ] ,
> >>>>>>>> processor_1  |    [ lm:name "https://www.w3.org/ns/ldt/topic-hierarchy/templates#" ; lm:altName "com/atomgraph/processor/tht.ttl" ] ,
> >>>>>>>> processor_1  |    [ lm:name "http://rdfs.org/sioc/ns#" ; lm:altName "com/atomgraph/processor/sioc.owl" ] ,
> >>>>>>>> processor_1  |    [ lm:name "http://rdfs.org/ns/void#" ; lm:altName "com/atomgraph/processor/void.owl" ] ,
> >>>>>>>> processor_1  |    [ lm:name "http://www.w3.org/2011/http#" ; lm:altName "com/atomgraph/processor/http.owl" ] ,
> >>>>>>>> processor_1  |    [ lm:name "http://www.w3.org/2011/http" ; lm:altName "com/atomgraph/processor/http.owl" ] ,
> >>>>>>>> processor_1  |    [ lm:name "http://www.w3.org/2011/http-statusCodes#" ; lm:altName "com/atomgraph/processor/http-statusCodes.rdf" ] ,
> >>>>>>>> processor_1  |    [ lm:name "http://www.w3.org/2011/http-statusCodes" ; lm:altName "com/atomgraph/processor/http-statusCodes.rdf" ] ,
> >>>>>>>> processor_1  |    [ lm:name "http://www.w3.org/ns/sparql-service-description#" ; lm:altName "com/atomgraph/processor/sparql-service.owl" ] ,
> >>>>>>>> processor_1  |    [ lm:name "http://xmlns.com/foaf/0.1/" ; lm:altName "com/atomgraph/processor/foaf.owl" ] ,
> >>>>>>>> processor_1  |    [ lm:name "http://spinrdf.org/sp#" ; lm:altName "etc/sp.ttl" ] ,
> >>>>>>>> processor_1  |    [ lm:name "http://spinrdf.org/sp" ; lm:altName "etc/sp.ttl" ] ,
> >>>>>>>> processor_1  |    [ lm:name "http://spinrdf.org/spin#" ; lm:altName "etc/spin.ttl" ] ,
> >>>>>>>> processor_1  |    [ lm:name "http://spinrdf.org/spin" ; lm:altName "etc/spin.ttl" ] ,
> >>>>>>>> processor_1  |    [ lm:name "http://spinrdf.org/spl#" ; lm:altName "etc/spl.spin.ttl" ] ,
> >>>>>>>> processor_1  |    [ lm:name "http://spinrdf.org/spl" ; lm:altName "etc/spl.spin.ttl" ]
> >>>>>>>> processor_1  | .@prefix lm: <http://jena.hpl.hp.com/2004/08/location-mapping#> .
> >>>>>>>> processor_1  |
> >>>>>>>> processor_1  | [] lm:mapping
> >>>>>>>> processor_1  |
> >>>>>>>> processor_1  |    [ lm:name "https://github.com/AtomGraph/Processor/blob/develop/examples/wikidata#" ; lm:altName "org/wikidata/ldt.ttl" ] ,
> >>>>>>>> processor_1  |    [ lm:name "https://resource.lingsoft.fi/aabb#" ; lm:altName "org/wikidata/ldt.ttl" ]
> >>>>>>>> processor_1  | .Listening for transport dt_socket at address: 8000
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.204 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version: Apache Tomcat/8.0.52
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.206 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built:   Apr 28 2018 16:24:29 UTC
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.206 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server number:  8.0.52.0
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.207 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name:        Linux
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.207 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version:     4.4.0-148-generic
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.207 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture:   amd64
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.207 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Java Home:      /usr/lib/jvm/java-8-openjdk-amd64/jre
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.208 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Version:    1.8.0_171-8u171-b11-1~deb9u1-b11
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.208 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Vendor:     Oracle Corporation
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.208 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_BASE:  /usr/local/tomcat
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.208 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_HOME:  /usr/local/tomcat
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.209 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.209 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.210 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djdk.tls.ephemeralDHKeySize=2048
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.210 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.protocol.handler.pkgs=org.apache.catalina.webresources
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.210 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -agentlib:jdwp=transport=dt_socket,address=8000,server=y,suspend=n
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.210 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dignore.endorsed.dirs=
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.211 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.base=/usr/local/tomcat
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.211 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.home=/usr/local/tomcat
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.212 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.io.tmpdir=/usr/local/tomcat/temp
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.212 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent Loaded APR based Apache Tomcat Native library 1.2.16 using APR version 1.5.2.
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.212 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.217 INFO [main] org.apache.catalina.core.AprLifecycleListener.initializeSSL OpenSSL successfully initialized (OpenSSL 1.1.0f  25 May 2017)
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.295 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-apr-8080"]
> >>>>>>>> processor_1  | 02-Oct-2019 10:15:33.303 SEVERE [main] org.apache.coyote.AbstractProtocol.init Failed to initialize end point associated with ProtocolHandler ["http-apr-8080"]
> >>>>>>>> processor_1  |  java.lang.Exception: Socket bind failed: [98] Address already in use
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> yml:
> >>>>>>>>
> >>>>>>>> version: "2"
> >>>>>>>> services:
> >>>>>>>>       processor:
> >>>>>>>>         image: atomgraph/processor
> >>>>>>>>         ports:
> >>>>>>>>           - 8090:8080
> >>>>>>>>           - 8010:8000 # debugger
> >>>>>>>>         environment:
> >>>>>>>>           - JPDA_ADDRESS=8000 # debugger port
> >>>>>>>>           - ENDPOINT="https://semantic-dev.lingsoft.fi/fuseki/ds" # hostname equals service name
> >>>>>>>>           - GRAPH_STORE="https://semantic-dev.lingsoft.fi/fuseki/ds" # hostname equals service name
> >>>>>>>>           - ONTOLOGY="https://resource.lingsoft.fi/aabb#"
> >>>>>>>>         volumes:
> >>>>>>>>           - /home/text/cases/nimisampo/proxy/location-mapping.n3:/usr/local/tomcat/webapps/ROOT/WEB-INF/classes/custom-mapping.n3
> >>>>>>>>           - /home/text/cases/nimisampo/proxy/person.ttl:/usr/local/tomcat/webapps/ROOT/WEB-INF/classes/org/wikidata/ldt.ttl
> >>>>>>>>           - /home/text/cases/nimisampo/proxy/log4j.properties:/usr/local/tomcat/webapps/ROOT/WEB-INF/classes/log4j.properties
> >>>>>>>>         network_mode: host
> >>>>>>>>       nginx:
> >>>>>>>>         image: nginx
> >>>>>>>>         depends_on:
> >>>>>>>>           - processor
> >>>>>>>>         ports:
> >>>>>>>>           - 8090:80
> >>>>>>>>         environment:
> >>>>>>>>           - PROXY_PASS=http://localhost:8080 # internal Processor URL (hostname equals docker-compose service name)
> >>>>>>>>           - PROXY_SET_HOST=https://resource.lingsoft.fi # the hostname set on the request URI before it's passed to Processor
> >>>>>>>>         volumes:
> >>>>>>>>           - ./nginx.conf.template:/etc/nginx/nginx.conf.template:ro
> >>>>>>>>         command: /bin/bash -c "envsubst '$$PROXY_PASS $$PROXY_SET_HOST' < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf && nginx -g 'daemon off;'"
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> On 02/10/2019 12:47, Martynas Jusevičius wrote:
> >>>>>>>>
> >>>>>>>> Mikael,
> >>>>>>>>
> >>>>>>>> I’ll try to help, but this is getting out of the realm of Processor.
> >>>>>>>>
> >>>>>>>> Have you completed the Docker post-installation steps for Linux?
> >>>>>>>> https://docs.docker.com/install/linux/linux-postinstall/
> >>>>>>>>
> >>>>>>>> Also check the suggestions here:
> >>>>>>>> https://github.com/docker/compose/issues/4181
> >>>>>>>>
> >>>>>>>> Regarding the nginx conf, just use the one from the example - it is
> >>>>>>>> controlled using the PROXY_PASS/PROXY_SET_HOST variables.
> >>>>>>>>
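
For orientation, that template is essentially a plain reverse-proxy server block with the two variables substituted in by envsubst. A stripped-down sketch (not necessarily the exact file shipped with the example) could look like:

events {}

http {
    server {
        listen 80;

        location / {
            # $PROXY_PASS - internal Processor URL, e.g. http://processor:8080
            proxy_pass $PROXY_PASS;
            # $PROXY_SET_HOST - hostname Processor should see on the request
            proxy_set_header Host $PROXY_SET_HOST;
        }
    }
}

Note that the envsubst call in the compose command restricts substitution to exactly '$$PROXY_PASS $$PROXY_SET_HOST', so nginx's own $-variables in the template are left untouched.
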
> >>>>>>>> On Wed, 2 Oct 2019 at 11.14, Mikael Pesonen <
> mikael.pesonen@lingsoft.fi> wrote:
> >>>>>>>>> I have, see the error messages at the end of my previous message.
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> On 01/10/2019 18:19, Martynas Jusevičius wrote:
> >>>>>>>>>
> >>>>>>>>> Hi Mikael,
> >>>>>>>>>
> >>>>>>>>> have you installed docker-compose? It’s a separate runtime:
> >>>>>>>>> https://docs.docker.com/compose/install/
> >>>>>>>>>
> >>>>>>>>> Also, are you running docker-compose from
> >>>>>>>>> the folder where docker-compose.yml file is located?
> >>>>>>>>>
> >>>>>>>>> On Tue, 1 Oct 2019 at 11.47, Mikael Pesonen <
> mikael.pesonen@lingsoft.fi> wrote:
> >>>>>>>>>> Hi Martynas,
> >>>>>>>>>>
> >>>>>>>>>> I have a compose file now:
> >>>>>>>>>>
> >>>>>>>>>> version: "2"
> >>>>>>>>>> services:
> >>>>>>>>>>        processor:
> >>>>>>>>>>          image: atomgraph/processor
> >>>>>>>>>>          ports:
> >>>>>>>>>>            - 8090:8080
> >>>>>>>>>>            - 8010:8000 # debugger
> >>>>>>>>>>          environment:
> >>>>>>>>>>            - JPDA_ADDRESS=8000 # debugger port
> >>>>>>>>>>            - ENDPOINT="https://semantic-dev.lingsoft.fi/fuseki/ds" # hostname equals service name
> >>>>>>>>>>            - GRAPH_STORE="https://semantic-dev.lingsoft.fi/fuseki/ds" # hostname equals service name
> >>>>>>>>>>            - ONTOLOGY="https://resource.lingsoft.fi/aabb#"
> >>>>>>>>>>          volumes:
> >>>>>>>>>>            - /home/text/cases/nimisampo/proxy/location-mapping.n3:/usr/local/tomcat/webapps/ROOT/WEB-INF/classes/custom-mapping.n3
> >>>>>>>>>>            - /home/text/cases/nimisampo/proxy/person.ttl:/usr/local/tomcat/webapps/ROOT/WEB-INF/classes/org/wikidata/ldt.ttl
> >>>>>>>>>>            - /home/text/cases/nimisampo/proxy/log4j.properties:/usr/local/tomcat/webapps/ROOT/WEB-INF/classes/log4j.properties
> >>>>>>>>>>        nginx:
> >>>>>>>>>>          image: nginx
> >>>>>>>>>>          depends_on:
> >>>>>>>>>>            - processor
> >>>>>>>>>>          ports:
> >>>>>>>>>>            - 80:80
> >>>>>>>>>>          environment:
> >>>>>>>>>>            - PROXY_PASS=http://localhost:8080 # internal Processor URL (hostname equals docker-compose service name)
> >>>>>>>>>>            - PROXY_SET_HOST=https://resource.lingsoft.fi # the hostname set on the request URI before it's passed to Processor
> >>>>>>>>>>          volumes:
> >>>>>>>>>>            - ./nginx.conf.template:/etc/nginx/nginx.conf.template:ro
> >>>>>>>>>>          command: /bin/bash -c "envsubst '$$PROXY_PASS $$PROXY_SET_HOST' < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf && nginx -g 'daemon off;'"
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> Not sure how the nginx env should be set, but running this results in an error
> >>>>>>>>>>
> >>>>>>>>>>      >> docker-compose up
> >>>>>>>>>> ERROR: Couldn't connect to Docker daemon at
> >>>>>>>>>> http+docker://localunixsocket - is it running?
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>>      >> sudo service docker status
> >>>>>>>>>> ● docker.service - Docker Application Container Engine
> >>>>>>>>>>         Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
> >>>>>>>>>>         Active: active (running) since Tue 2019-09-17 16:06:30 EEST; 1 weeks 6 days ago
> >>>>>>>>>>           Docs: https://docs.docker.com
> >>>>>>>>>>       Main PID: 15816 (dockerd)
> >>>>>>>>>>          Tasks: 13
> >>>>>>>>>>         Memory: 610.3M
> >>>>>>>>>>            CPU: 5min 15.038s
> >>>>>>>>>>         CGroup: /system.slice/docker.service
> >>>>>>>>>>                 └─15816 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
> >>>>>>>>>>
> >>>>>>>>>> This is probably related to setting up Docker, but it looks like we have
> >>>>>>>>>> limited knowledge on that here. Do you have an idea?
> >>>>>>>>>>
> >>>>>>>>>> Mikael
> >>>>>>>>>>
> >>>>>>>>>> On 27/09/2019 10:46, Martynas Jusevičius wrote:
> >>>>>>>>>>> Hi Mikael,
> >>>>>>>>>>>
> >>>>>>>>>>> On Tue, Sep 24, 2019 at 11:21 AM Mikael Pesonen
> >>>>>>>>>>> <mikael.pesonen@lingsoft.fi> wrote:
> >>>>>>>>>>>
> >>>>>>>>>>>> On 23/09/2019 23:23, Martynas Jusevičius wrote:
> >>>>>>>>>>>>> No, not exactly. Let me picture the basic setup:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> HTTP client -> [ nginx/localhost:8090 -> Processor/resource.lingsoft.fi:80 ] -> SPARQL endpoint
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> You can continue using port 8090 if that is what you prefer, or you
> >>>>>>>>>>>>> can choose any different port. But what you have now is nginx fronting
> >>>>>>>>>>>>> Processor, just for the purpose of rewriting the URL base to
> >>>>>>>>>>>>> resource.lingsoft.fi before Processor receives the request, and that
> >>>>>>>>>>>>> way making sure that the queries from the LDT ontology will select
> >>>>>>>>>>>>> something from your dataset.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Nothing changes for the outside consumer -- since nginx and Processor
> >>>>>>>>>>>>> are both running as Docker containers, they communicate via the
> >>>>>>>>>>>>> internal Docker network and the host network is not affected.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> I'm pretty sure nginx can do this, will try tomorrow.
> >>>>>>>>>>>> Okay thanks for this, forwarded to our tech support.
> >>>>>>>>>>> I've set up an example as promised. nginx is now a reverse proxy in
> >>>>>>>>>>> front of Processor in the Fuseki example:
> >>>>>>>>>>> https://github.com/AtomGraph/Processor#default-ontology-and-a-local-sparql-service
> >>>>>>>>>>>
> >>>>>>>>>>> Processor is now available on two different hostnames. In the second
> >>>>>>>>>>> case the request goes through nginx and the hostname is rewritten to
> >>>>>>>>>>> example.org before the Processor, and becomes BASE
> >>>>>>>>>>> <http://example.org/> in queries.
> >>>>>>>>>>>
> >>>>>>>>>>> $ curl http://localhost:8080/
> >>>>>>>>>>> <http://localhost:8080/> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Document> .
> >>>>>>>>>>> <http://localhost:8080/> <http://purl.org/dc/terms/title> "localhost:8080" .
> >>>>>>>>>>> <http://localhost:8080/> <http://purl.org/dc/terms/description> "This is an RDF document served by AtomGraph Processor" .
> >>>>>>>>>>> <http://localhost:8080/> <http://www.w3.org/2000/01/rdf-schema#seeAlso> <http://localhost:8080/sparql> .
> >>>>>>>>>>>
> >>>>>>>>>>> $ curl http://localhost/
> >>>>>>>>>>> <http://example.org/> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Document> .
> >>>>>>>>>>> <http://example.org/> <http://purl.org/dc/terms/title> "example.org" .
> >>>>>>>>>>> <http://example.org/> <http://purl.org/dc/terms/description> "This is an RDF document served by AtomGraph Processor" .
> >>>>>>>>>>> <http://example.org/> <http://www.w3.org/2000/01/rdf-schema#seeAlso> <http://example.org/sparql> .
> >>>>>>>>>>>
> >>>>>>>>>>> The key configuration is PROXY_SET_HOST=example.org in
> >>>>>>>>>>> docker-compose.yml. In your case it would be
> >>>>>>>>>>> PROXY_SET_HOST=resource.lingsoft.fi.
> >>>>>>>>>>>
> >>>>>>>>>>>>> Re. parameters, you cannot supply the template URI itself as a
> >>>>>>>>>>>>> parameter -- only parameters for whichever template matches the path
> >>>>>>>>>>>>> of the request URI by ldt:match. In other words, the URL is a template
> >>>>>>>>>>>>> call, with ?this being the default argument, and then any parameters
> >>>>>>>>>>>>> from the URL query string.
> >>>>>>>>>>>>> An example of an agent param (basically the same as in Wikidata's example):
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> :PersonItem a ldt:Template ;
> >>>>>>>>>>>>>           rdfs:label "Person template" ;
> >>>>>>>>>>>>>           ldt:match "/{uuid}" ;
> >>>>>>>>>>>>>           ldt:param :AgentParam ;
> >>>>>>>>>>>>>           ldt:query [ a :PersonQueryTemplate ] ;
> >>>>>>>>>>>>>           rdfs:isDefinedBy : .
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> :AgentParam a ldt:Parameter ;
> >>>>>>>>>>>>>           rdfs:label "Agent parameter" ;
> >>>>>>>>>>>>>           spl:predicate :agent ; # parameter name in the URL
> query string
> >>>>>>>>>>>>>           spl:valueType rdfs:Resource ;
> >>>>>>>>>>>>>           spl:optional true ;
> >>>>>>>>>>>>>           rdfs:isDefinedBy : .
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> :PersonQueryTemplate a spin:Template ;
> >>>>>>>>>>>>>           rdfs:label "Person query template" ;
> >>>>>>>>>>>>>           spin:constraint :AgentParam ;
> >>>>>>>>>>>>>           spin:body :PersonQuery ;
> >>>>>>>>>>>>>           rdfs:isDefinedBy : .
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> :PersonQuery a ldt:Query, sp:Construct ;
> >>>>>>>>>>>>>           rdfs:label "Person query" ;
> >>>>>>>>>>>>>           sp:text """
> >>>>>>>>>>>>> CONSTRUCT
> >>>>>>>>>>>>> {
> >>>>>>>>>>>>>           ...
> >>>>>>>>>>>>> }
> >>>>>>>>>>>>>           """ ;
> >>>>>>>>>>>>>           rdfs:isDefinedBy : .
> >>>>>>>>>>>> So in this case, what does the query URL look like?
> >>>>>>>>>>>> https://resource.lingsoft.fi/<uuid> plus something.
> >>>>>>>>>>> For example:
> >>>>>>>>>>> https://resource.lingsoft.fi/c5401732-c75d-4f44-b9d1-4e9e95297d9d?agent=https%3A%2F%2Fresource.lingsoft.fi%2F1ecec683-8df3-4e39-9ecd-64ed5617767a
> >>>>>>>>>>>
> >>>>>>>>>>> URL parameter values have to be percent-encoded:
> >>>>>>>>>>> https://en.wikipedia.org/wiki/Percent-encoding
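
If you test with curl, it can do the encoding for you: -G appends the data as the query string, and --data-urlencode percent-encodes the value part, for example

curl -G 'https://resource.lingsoft.fi/c5401732-c75d-4f44-b9d1-4e9e95297d9d' \
     --data-urlencode 'agent=https://resource.lingsoft.fi/1ecec683-8df3-4e39-9ecd-64ed5617767a'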
> >>>>>>>>>>>
> >>>>>>>>>>>>> More info on parameters:
> >>>>>>>>>>>>>
> https://github.com/AtomGraph/Processor/wiki/Linked-Data-Templates#parameters
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> On Mon, Sep 23, 2019 at 1:56 PM Mikael Pesonen
> >>>>>>>>>>>>> <mikael.pesonen@lingsoft.fi> wrote:
> >>>>>>>>>>>>>> Okay, so it's possible to have two servers on the same name? We have a
> >>>>>>>>>>>>>> public server resource.lingsoft.fi serving content, and
> >>>>>>>>>>>>>> another server, say ldt.lingsoft.fi, where a reverse proxy redirects
> >>>>>>>>>>>>>> calls to AtomGraph at localhost:8090 so that AtomGraph thinks it's
> >>>>>>>>>>>>>> running at resource.lingsoft.fi? I have to test that with our tech support.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Then there is still the question of passing more than one parameter.
> >>>>>>>>>>>>>> So when we make a query
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> ldt.lingsoft.fi/<uuid>?template=entire_person&agent=some_agent
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> what kind of ontology is needed for the mapping? Could we
> >>>>>>>>>>>>>> please have an example of that too?
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Mikael
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> On 23/09/2019 13:37, Martynas Jusevičius wrote:
> >>>>>>>>>>>>>>> Mikael,
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> I agree this is a common use case that needs a solution,
> but I don't
> >>>>>>>>>>>>>>> think the LDT specification is the right place to address
> it. More
> >>>>>>>>>>>>>>> like the Processor documentation.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> I think the core issue here is information hiding:
> >>>>>>>>>>>>>>> https://en.wikipedia.org/wiki/Information_hiding
> >>>>>>>>>>>>>>> You want the Processor to work as if the request is coming
> from
> >>>>>>>>>>>>>>> https://resource.lingsoft.fi/, yet it is really coming
> from
> >>>>>>>>>>>>>>> http://localhost:8090/. You want to introduce an
> indirection that is
> >>>>>>>>>>>>>>> hidden from the data consumer.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> This is very much like having a webapp running on
> >>>>>>>>>>>>>>> http://localhost:8090/ but wanting to expose it as
> http://localhost/,
> >>>>>>>>>>>>>>> i.e. hiding the port number.
> >>>>>>>>>>>>>>> What you would normally do is put a reverse proxy server
> such as
> >>>>>>>>>>>>>>> Apache or nginx in front of the webapp that would rewrite
> the URL and
> >>>>>>>>>>>>>>> hide the port from the outside.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> The same solution applies here. You could have nginx in
> front of
> >>>>>>>>>>>>>>> Processor that rewrites the request URL from
> http://localhost:8090/ to
> >>>>>>>>>>>>>>> https://resource.lingsoft.fi/.
> >>>>>>>>>>>>>>> I think this is the cleanest approach - one simple
> component is
> >>>>>>>>>>>>>>> introduced and neither Processor nor LDT spec require
> changes.
> >>>>>>>>>>>>>>> More info:
> https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> If you want to try this, I can put together a
> docker-compose.yml with
> >>>>>>>>>>>>>>> nginx in front of Processor, using their Docker image:
> >>>>>>>>>>>>>>> https://hub.docker.com/_/nginx
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Martynas
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> On Mon, Sep 23, 2019 at 11:35 AM Mikael Pesonen
> >>>>>>>>>>>>>>> <mikael.pesonen@lingsoft.fi> wrote:
> >>>>>>>>>>>>>>>> Hi Martynas,
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> On 21/09/2019 12:20, Martynas Jusevičius wrote:
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Hi Mikael,
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> OK there is quite a bit to unpack here :)
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> In relation to your first question, I added such a
> paragraph to documentation:
> >>>>>>>>>>>>>>>>
> https://github.com/AtomGraph/Processor/wiki/Linked-Data-Templates#execution
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> "Note that the base URI of the RDF dataset in the SPARQL
> service needs
> >>>>>>>>>>>>>>>> to be aligned with the base URI of the Processor
> instance. Considering
> >>>>>>>>>>>>>>>> the example above, the dataset should contain some
> >>>>>>>>>>>>>>>> http://localhost:8080/-based URIs, otherwise ?this will
> never match
> >>>>>>>>>>>>>>>> any resources and the query results will be empty,
> leading to a 404
> >>>>>>>>>>>>>>>> Not Found response."
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> So in your case the Processor base URI is
> http://localhost:8090/, but
> >>>>>>>>>>>>>>>> the issue is the same: the base URI of your dataset is
> totally
> >>>>>>>>>>>>>>>> different: http://resource.lingsoft.fi/.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> ?this URI has to have a direct match in your dataset. And
> its value is
> >>>>>>>>>>>>>>>> the *full* request URI (though without query string), not
> a URI
> >>>>>>>>>>>>>>>> provided in the path as you are attempting.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> In other words, if you want this LDT example to work on
> your
> >>>>>>>>>>>>>>>> http://resource.lingsoft.fi/-based dataset, the
> Processor should be
> >>>>>>>>>>>>>>>> deployed on http://resource.lingsoft.fi/, and then
> requests to
> >>>>>>>>>>>>>>>>
> https://resource.lingsoft.fi/286c384d-cd5c-4887-9b85-94c0c147f709
> >>>>>>>>>>>>>>>> would work.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> This is easier said than done, because you most likely
> already have a
> >>>>>>>>>>>>>>>> system running on that domain.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> You are right, we have this server https://resource.lingsoft.fi/
> >>>>>>>>>>>>>>>> running for serving content. This is probably quite a common scenario.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> So what we usually do is "rebase" the
> >>>>>>>>>>>>>>>> RDF dataset to the base URI of the Processor that we are
> testing on.
> >>>>>>>>>>>>>>>> You could do that by exporting your RDF dataset as
> N-Triples or
> >>>>>>>>>>>>>>>> N-Quads and simply replacing http://resource.lingsoft.fi/
> with
> >>>>>>>>>>>>>>>> http://localhost:8090/. Put the rebased dataset in a
> separate test
> >>>>>>>>>>>>>>>> triplestore.
> >>>>>>>>>>>>>>>> Then a request to
> >>>>>>>>>>>>>>>>
> http://localhost:8090/286c384d-cd5c-4887-9b85-94c0c147f709 should
> >>>>>>>>>>>>>>>> work.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> This seems like quite a heavy solution, keeping two datasets in sync.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> There are workarounds without doing URI rebasing, for example you
> >>>>>>>>>>>>>>>> could still request
> >>>>>>>>>>>>>>>> http://localhost:8090/286c384d-cd5c-4887-9b85-94c0c147f709 and in the
> >>>>>>>>>>>>>>>> query do something like (not tested)
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>            BIND (STRAFTER(STR(?this), STR(<>)) AS ?id)
> >>>>>>>>>>>>>>>>            BIND (URI(CONCAT("https://resource.lingsoft.fi/", ?id)) AS ?realThis)
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> and then use ?realThis in the query instead of ?this. What this code
> >>>>>>>>>>>>>>>> does is extract the ID from the request URI by stripping the
> >>>>>>>>>>>>>>>> http://localhost:8090/ base URI (which comes from BASE
> >>>>>>>>>>>>>>>> <http://localhost:8090/>) and concatenating it with the real base URI
> >>>>>>>>>>>>>>>> of your dataset, which is <https://resource.lingsoft.fi/>.
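
Spelled out against the ConstructAdminPerson query from earlier in the thread, the (still untested) workaround would look roughly like this:

CONSTRUCT
{
    ?realThis ?p ?o
}
FROM <http://www.lingsoft.fi/graph/common_insight/>
WHERE
{
    # strip the local base URI (BASE <http://localhost:8090/>) to get the ID part
    BIND (STRAFTER(STR(?this), STR(<>)) AS ?id)
    # rebuild the resource URI in the dataset's real namespace
    BIND (URI(CONCAT("https://resource.lingsoft.fi/", ?id)) AS ?realThis)
    ?realThis ?p ?o
}
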
> >>>>>>>>>>>>>>>> This approach is not recommended however, because URIs
> are opaque
> >>>>>>>>>>>>>>>> identifiers, and their contents should not be parsed in
> order to
> >>>>>>>>>>>>>>>> extract information (such as the ID in this case):
> >>>>>>>>>>>>>>>> https://www.w3.org/DesignIssues/Axioms.html#opaque
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Alternatively, you could have a JAX-RS filter that
> changes the base
> >>>>>>>>>>>>>>>> URI in the UriInfo object. That way the LDT processor
> could use a
> >>>>>>>>>>>>>>>> ?this URI in queries which is different from the real
> request URI. But
> >>>>>>>>>>>>>>>> again, this is more of a hack.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Since the resource URI is most often used for serving content, it would
> >>>>>>>>>>>>>>>> seem that the LDT spec needs some "official", non-hack way to handle
> >>>>>>>>>>>>>>>> these calls? One way that comes to mind is to use URL parameters, so
> >>>>>>>>>>>>>>>> that for example a call to
> >>>>>>>>>>>>>>>> https://resource.lingsoft.fi/<uuid>?ldt=person&agent=...
> >>>>>>>>>>>>>>>> would be parsed at our content server and forwarded to AtomGraph when
> >>>>>>>>>>>>>>>> the ldt parameter is seen. But LDT would still need to support this in
> >>>>>>>>>>>>>>>> some official way. What do you think?
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Br,
> >>>>>>>>>>>>>>>> Mikael
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Let's address the rest of your questions when we have this figured out.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> On Fri, Sep 20, 2019 at 12:13 PM Mikael Pesonen
> >>>>>>>>>>>>>>>> <mikael.pesonen@lingsoft.fi> wrote:
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Thanks, makes more sense now.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> We needed to add the parameter --network=host to get connections to our
> >>>>>>>>>>>>>>>> local network, but that results in a bind error, since port 8080 is
> >>>>>>>>>>>>>>>> already in use on our servers. We can figure this out here first...
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> But I'm getting the SPARQL query now.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> So my ontology is now:
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> @base         <https://resource.lingsoft.fi/aabb> . # just for testing
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> @prefix :     <#> .
> >>>>>>>>>>>>>>>> @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
> >>>>>>>>>>>>>>>> @prefix owl:  <http://www.w3.org/2002/07/owl#> .
> >>>>>>>>>>>>>>>> @prefix ldt:  <https://www.w3.org/ns/ldt#> .
> >>>>>>>>>>>>>>>> @prefix sp:   <http://spinrdf.org/sp#> .
> >>>>>>>>>>>>>>>> @prefix spl:  <http://spinrdf.org/spl#> .
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> : a ldt:Ontology ;
> >>>>>>>>>>>>>>>>             owl:imports ldt:, sp: ;
> >>>>>>>>>>>>>>>>             rdfs:label "LDT ontology" .
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> :AdminPersonItem a ldt:Template ;
> >>>>>>>>>>>>>>>>               ldt:match "/{id}" ;
> >>>>>>>>>>>>>>>>               ldt:query :ConstructAdminPerson ;
> >>>>>>>>>>>>>>>>               rdfs:isDefinedBy : .
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> :ConstructAdminPerson a sp:Construct ;
> >>>>>>>>>>>>>>>>               sp:text """
> >>>>>>>>>>>>>>>>               CONSTRUCT
> >>>>>>>>>>>>>>>>               FROM <http://www.lingsoft.fi/graph/common_insight/>
> >>>>>>>>>>>>>>>>               WHERE
> >>>>>>>>>>>>>>>>               {
> >>>>>>>>>>>>>>>>                   ?this ?p ?o
> >>>>>>>>>>>>>>>>               }
> >>>>>>>>>>>>>>>>               """ ;
> >>>>>>>>>>>>>>>>               rdfs:isDefinedBy : .
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Query URL is
> >>>>>>>>>>>>>>>>
> http://localhost:8090/https%3A%2F%2Fresource.lingsoft.fi%2F286c384d-cd5c-4887-9b85-94c0c147f709
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> The resulting SPARQL is
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> BASE    <http://localhost:8090/>
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> CONSTRUCT
> >>>>>>>>>>>>>>>>           {
> >>>>>>>>>>>>>>>> <https%253A%252F%252Fresource.lingsoft.fi%252F286c384d-cd5c-4887-9b85-94c0c147f709>
> >>>>>>>>>>>>>>>> ?p ?o .
> >>>>>>>>>>>>>>>>           }
> >>>>>>>>>>>>>>>> FROM <http://www.lingsoft.fi/graph/common_insight/>
> >>>>>>>>>>>>>>>> WHERE
> >>>>>>>>>>>>>>>>           {
> >>>>>>>>>>>>>>>> <https%253A%252F%252Fresource.lingsoft.fi%252F286c384d-cd5c-4887-9b85-94c0c147f709>
> >>>>>>>>>>>>>>>>                       ?p  ?o
> >>>>>>>>>>>>>>>>           }
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> So the resource id is still double-encoded. Perhaps I'm still missing
> >>>>>>>>>>>>>>>> something about the parameter mapping.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> So now I'm a bit further and know how to ask the right questions :) So we
> >>>>>>>>>>>>>>>> need to be able to send 3 parameters:
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> 1) Resource URL = in this case the person's id
> >>>>>>>>>>>>>>>> (https://resource.lingsoft.fi/286c384d-cd5c-4887-9b85-94c0c147f709)
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> 2) Resource type, for selecting the correct template/SPARQL query; in this
> >>>>>>>>>>>>>>>> case a person.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> 3) Access level: how much detail you are allowed to query about the person.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> So how are these 3 parameters mapped to the ontology and the generated
> >>>>>>>>>>>>>>>> SPARQL - what kind of modifications are needed to the request URL and
> >>>>>>>>>>>>>>>> the template ontology? I'm trying to read the examples but I'm still
> >>>>>>>>>>>>>>>> not getting this...
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Br,
> >>>>>>>>>>>>>>>> Mikael
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> On 19/09/2019 16:18, Martynas Jusevičius wrote:
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Hi Mikael,
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> the -v and -e are docker run options:
> >>>>>>>>>>>>>>>> https://docs.docker.com/engine/reference/run/
> >>>>>>>>>>>>>>>> -v specifically mounts a file or folder from the host to the container
> >>>>>>>>>>>>>>>> (which is internally Ubuntu in this case):
> >>>>>>>>>>>>>>>> https://docs.docker.com/storage/bind-mounts/
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> ENDPOINT, ONTOLOGY etc. are defined in the Processor's Dockerfile
> >>>>>>>>>>>>>>>> and/or entrypoint:
> >>>>>>>>>>>>>>>> https://github.com/AtomGraph/Processor/blob/master/Dockerfile
> >>>>>>>>>>>>>>>> https://github.com/AtomGraph/Processor/blob/master/entrypoint.sh
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Different Docker images can use ENV variables and mounts in different
> >>>>>>>>>>>>>>>> ways and for different purposes. But if you see a container as a large
> >>>>>>>>>>>>>>>> function, they usually serve as user inputs.
> >>>>>>>>>>>>>>>> But Docker and Dockerfiles are large topics on their own :)
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> You could use curl to query an ontology from Fuseki (most likely a
> >>>>>>>>>>>>>>>> different instance than ENDPOINT), store it into a file and then mount
> >>>>>>>>>>>>>>>> it to Processor. Easy to script something like this in bash.
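
A rough bash sketch of that idea (the endpoint URL, graph URI and target path are placeholders for your own setup):

#!/bin/bash
# fetch an ontology graph from Fuseki's Graph Store Protocol endpoint as Turtle
curl -s -H 'Accept: text/turtle' \
     'http://localhost:3030/ds/data?graph=https%3A%2F%2Fresource.lingsoft.fi%2Faabb%23' \
     -o ontology.ttl

# then mount the file into the Processor container, e.g. with
# -v "$(pwd)/ontology.ttl:/usr/local/tomcat/webapps/ROOT/WEB-INF/classes/org/wikidata/ldt.ttl"
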
> >>>>>>>>>>>>>>>> We also have a Knowledge Graph management system that builds on top of
> >>>>>>>>>>>>>>>> Processor and provides a UI, for general RDF as well as ontology
> >>>>>>>>>>>>>>>> editing. But it is not open-source so far -- let's take it off-list if
> >>>>>>>>>>>>>>>> it sounds interesting.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> I need to check how GRAPH_STORE is used and whether it can be made
> >>>>>>>>>>>>>>>> optional. If you don't have one, just provide a bogus (but valid) URL
> >>>>>>>>>>>>>>>> for now, it shouldn't be a problem.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> On Thu, Sep 19, 2019 at 12:08 PM Mikael Pesonen
> >>>>>>>>>>>>>>>> <mikael.pesonen@lingsoft.fi> wrote:
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Hi,
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> already some more questions:
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> About the docker command line parameters, who defines the -e and -v
> >>>>>>>>>>>>>>>> command line parameters? Looking at the documentation, -v displays the
> >>>>>>>>>>>>>>>> docker version. -e, I'm guessing, sets an environment variable, but what
> >>>>>>>>>>>>>>>> does -v do? Sets some input file locations? Couldn't find the
> >>>>>>>>>>>>>>>> documentation for those.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> How do I read the template ontologies (any RDF content) from a Fuseki
> >>>>>>>>>>>>>>>> endpoint instead of from a file(s)?
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> There are separate environment variables for the SPARQL and GSP endpoints.
> >>>>>>>>>>>>>>>> Are both required?
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Mikael
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> On 18/09/2019 16:33, Martynas Jusevičius wrote:
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Hurray! Thanks a lot for going through this. If you have
> any
> >>>>>>>>>>>>>>>> suggestions on how to improve the documentation or the
> setup, please
> >>>>>>>>>>>>>>>> let me know.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Now you have these basic options:
> >>>>>>>>>>>>>>>> - change ENDPOINT/GRAPH_STORE values to your own endpoint
> URLs
> >>>>>>>>>>>>>>>> - edit wikidata.ttl to change LDT templates (or their URI
> templates,
> >>>>>>>>>>>>>>>> or their queries)
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> If you have a public SPARQL endpoint, we can try the
> config here.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Remember that ?this is a "magic" variable in the queries, which is
> >>>>>>>>>>>>>>>> bound to the request URI. So in the case of the example, ?this is bound to
> >>>>>>>>>>>>>>>> <http://localhost:8080/birthdays>, although the Wikidata query does
> >>>>>>>>>>>>>>>> not use the ?this variable.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> On Wed, Sep 18, 2019 at 3:25 PM Mikael Pesonen
> >>>>>>>>>>>>>>>> <mikael.pesonen@lingsoft.fi> wrote:
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Okay, now it seems to work, except for the connection to Wikidata, as you
> >>>>>>>>>>>>>>>> mentioned earlier:
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> 18-Sep-2019 13:21:43.730 WARNING [localhost-startStop-1]
> >>>>>>>>>>>>>>>>
> org.apache.catalina.util.SessionIdGeneratorBase.createSecureRandom
> >>>>>>>>>>>>>>>> Creation of SecureRandom instance for session ID
> generation using
> >>>>>>>>>>>>>>>> [SHA1PRNG] took [162,901] milliseconds.
> >>>>>>>>>>>>>>>> 18-Sep-2019 13:21:43.742 INFO [localhost-startStop-1]
> >>>>>>>>>>>>>>>> org.apache.catalina.startup.HostConfig.deployDescriptor
> Deployment of
> >>>>>>>>>>>>>>>> configuration descriptor
> >>>>>>>>>>>>>>>> /usr/local/tomcat/conf/Catalina/localhost/ROOT.xml has
> finished in
> >>>>>>>>>>>>>>>> 164,533 ms
> >>>>>>>>>>>>>>>> 18-Sep-2019 13:21:43.745 INFO [main]
> >>>>>>>>>>>>>>>> org.apache.coyote.AbstractProtocol.start Starting
> ProtocolHandler
> >>>>>>>>>>>>>>>> ["http-apr-8080"]
> >>>>>>>>>>>>>>>> 18-Sep-2019 13:21:43.753 INFO [main]
> >>>>>>>>>>>>>>>> org.apache.coyote.AbstractProtocol.start Starting
> ProtocolHandler
> >>>>>>>>>>>>>>>> ["ajp-apr-8009"]
> >>>>>>>>>>>>>>>> 18-Sep-2019 13:21:43.756 INFO [main]
> >>>>>>>>>>>>>>>> org.apache.catalina.startup.Catalina.start Server startup
> in 164576 ms
> >>>>>>>>>>>>>>>> 18-Sep-2019 13:21:43.918 INFO [http-apr-8080-exec-1]
> >>>>>>>>>>>>>>>>
> com.sun.jersey.server.impl.application.WebApplicationImpl._initiate
> >>>>>>>>>>>>>>>> Initiating Jersey application, version 'Jersey: 1.19
> 02/11/2015 03:25 AM'
> >>>>>>>>>>>>>>>> 13:21:44,148 DEBUG Jena:189 - Jena initialization
> >>>>>>>>>>>>>>>> 13:21:44,341 DEBUG FileManager:157 - Add location:
> LocatorFile
> >>>>>>>>>>>>>>>> 13:21:44,342 DEBUG FileManager:157 - Add location:
> ClassLoaderLocator
> >>>>>>>>>>>>>>>> 13:21:44,346 DEBUG LocationMapper:152 - Failed to find
> configuration:
> >>>>>>>>>>>>>>>>
> file:location-mapping.rdf;file:location-mapping.n3;file:location-mapping.ttl;file:etc/location-mapping.rdf;file:etc/location-mapping.n3;file:etc/location-mapping.ttl
> >>>>>>>>>>>>>>>> 13:21:44,346 DEBUG FileManager:157 - Add location:
> LocatorFile
> >>>>>>>>>>>>>>>> 13:21:44,348 DEBUG FileManager:157 - Add location:
> LocatorURL
> >>>>>>>>>>>>>>>> 13:21:44,348 DEBUG FileManager:157 - Add location:
> ClassLoaderLocator
> >>>>>>>>>>>>>>>> 13:21:44,354 DEBUG StreamManager:142 - Found:
> location-mapping.n3
> >>>>>>>>>>>>>>>> (ClassLoaderLocator)
> >>>>>>>>>>>>>>>> 13:21:44,731 DEBUG JenaIOEnvironment:119 - Mapping:
> >>>>>>>>>>>>>>>> http://www.w3.org/2011/http-statusCodes =>
> >>>>>>>>>>>>>>>> com/atomgraph/processor/http-statusCodes.rdf
> >>>>>>>>>>>>>>>> 13:21:44,732 DEBUG JenaIOEnvironment:119 - Mapping:
> >>>>>>>>>>>>>>>> https://www.w3.org/ns/ldt/named-graphs/templates# =>
> >>>>>>>>>>>>>>>> com/atomgraph/processor/ngt.ttl
> >>>>>>>>>>>>>>>> 13:21:44,732 DEBUG JenaIOEnvironment:119 - Mapping:
> >>>>>>>>>>>>>>>> http://www.w3.org/2011/http# =>
> com/atomgraph/processor/http.owl
> >>>>>>>>>>>>>>>> 13:21:44,733 DEBUG JenaIOEnvironment:119 - Mapping:
> >>>>>>>>>>>>>>>> http://spinrdf.org/sp => etc/sp.ttl
> >>>>>>>>>>>>>>>> 13:21:44,733 DEBUG JenaIOEnvironment:119 - Mapping:
> >>>>>>>>>>>>>>>> https://www.w3.org/ns/ldt# =>
> com/atomgraph/processor/ldt.ttl
> >>>>>>>>>>>>>>>> 13:21:44,733 DEBUG JenaIOEnvironment:119 - Mapping:
> >>>>>>>>>>>>>>>> http://www.w3.org/2011/http-statusCodes# =>
> >>>>>>>>>>>>>>>> com/atomgraph/processor/http-statusCodes.rdf
> >>>>>>>>>>>>>>>> 13:21:44,734 DEBUG JenaIOEnvironment:119 - Mapping:
> >>>>>>>>>>>>>>>> http://spinrdf.org/sp# => etc/sp.ttl
> >>>>>>>>>>>>>>>> 13:21:44,734 DEBUG JenaIOEnvironment:119 - Mapping:
> >>>>>>>>>>>>>>>> https://www.w3.org/ns/ldt/topic-hierarchy/templates# =>
> >>>>>>>>>>>>>>>> com/atomgraph/processor/tht.ttl
> >>>>>>>>>>>>>>>> 13:21:44,736 DEBUG JenaIOEnvironment:119 - Mapping:
> >>>>>>>>>>>>>>>> http://spinrdf.org/spin => etc/spin.ttl
> >>>>>>>>>>>>>>>> 13:21:44,737 DEBUG JenaIOEnvironment:119 - Mapping:
> >>>>>>>>>>>>>>>> https://www.w3.org/ns/ldt/core/templates# =>
> com/atomgraph/processor/ct.ttl
> >>>>>>>>>>>>>>>> 13:21:44,737 DEBUG JenaIOEnvironment:119 - Mapping:
> >>>>>>>>>>>>>>>>
> https://github.com/AtomGraph/Processor/blob/develop/examples/wikidata#
> >>>>>>>>>>>>>>>> => org/wikidata/ldt.ttl
> >>>>>>>>>>>>>>>> 13:21:44,738 DEBUG JenaIOEnvironment:119 - Mapping:
> >>>>>>>>>>>>>>>> http://www.w3.org/ns/sparql-service-description# =>
> >>>>>>>>>>>>>>>> com/atomgraph/processor/sparql-service.owl
> >>>>>>>>>>>>>>>> 13:21:44,738 DEBUG JenaIOEnvironment:119 - Mapping:
> >>>>>>>>>>>>>>>> http://rdfs.org/ns/void# =>
> com/atomgraph/processor/void.owl
> >>>>>>>>>>>>>>>> 13:21:44,741 DEBUG JenaIOEnvironment:119 - Mapping:
> >>>>>>>>>>>>>>>> http://spinrdf.org/spl => etc/spl.spin.ttl
> >>>>>>>>>>>>>>>> 13:21:44,741 DEBUG JenaIOEnvironment:119 - Mapping:
> >>>>>>>>>>>>>>>> https://www.w3.org/ns/ldt/document-hierarchy/domain# =>
> >>>>>>>>>>>>>>>> com/atomgraph/processor/dh.ttl
> >>>>>>>>>>>>>>>> 13:21:44,741 DEBUG JenaIOEnvironment:119 - Mapping:
> >>>>>>>>>>>>>>>> http://spinrdf.org/spin# => etc/spin.ttl
> >>>>>>>>>>>>>>>> 13:21:44,742 DEBUG JenaIOEnvironment:119 - Mapping:
> >>>>>>>>>>>>>>>> https://www.w3.org/ns/ldt/core/domain# =>
> com/atomgraph/processor/c.ttl
> >>>>>>>>>>>>>>>> 13:21:44,742 DEBUG JenaIOEnvironment:119 - Mapping:
> >>>>>>>>>>>>>>>> http://www.w3.org/2011/http =>
> com/atomgraph/processor/http.owl
> >>>>>>>>>>>>>>>> 13:21:44,742 DEBUG JenaIOEnvironment:119 - Mapping:
> >>>>>>>>>>>>>>>> http://xmlns.com/foaf/0.1/ =>
> com/atomgraph/processor/foaf.owl
> >>>>>>>>>>>>>>>> 13:21:44,743 DEBUG JenaIOEnvironment:119 - Mapping:
> >>>>>>>>>>>>>>>> http://rdfs.org/sioc/ns# =>
> com/atomgraph/processor/sioc.owl
> >>>>>>>>>>>>>>>> 13:21:44,

Received on Wednesday, 2 October 2019 13:21:14 UTC