
Re: Introductions

From: Guenther Neher <g.neher@fh-potsdam.de>
Date: Wed, 09 Jun 2010 10:57:37 +0200
Message-ID: <4C0F5781.6080700@fh-potsdam.de>
To: public-xg-lld@w3.org
Hello Antoine and all others,

On 08.06.2010 09:53, Antoine Isaac wrote:
> If you could, give your name, your organization and some background. One
> word on your motivation or hope for joining the group is also much welcome!

I am Guenther Neher from the University of Applied Sciences in Potsdam,
Germany [1]. I work as a professor for web technologies and semantic
web applications in the faculty of information sciences, where we
educate students for later work as librarians, archivists, and
information specialists [2].

The focus of our department is mainly non-technical, and the IT skills
and ambitions of our students are quite limited.
I strongly feel that sound hands-on knowledge of semantic web
technologies and linked data infrastructures will become an increasingly
important and necessary part of our alumni's future qualification -
precisely in combination with their core skills: the construction and
use of controlled vocabularies and classification systems, knowledge
management and knowledge organization, etc.

My main interest in working within this incubator group is to learn
(and hopefully to contribute a cent or two) about educational and
didactic aspects - specifically with librarians and archivists (more
generally: "Cultural Heritage (CH) people") in mind. Further questions
of interest to me are, for example (always in the sense of best
practices and educational aspects):
How to find and evaluate useful information items within the linked data
cloud to link to?
How to best model connections to the cloud (sameAs, subClassOf, ...)?
How to find simple but useful inference patterns, especially in the CH
domain?

To make the basic concepts of the semantic web and linked data clear and
practical to my students, and to give them hands-on experience, I use
the following exercise: making a given bibliographic dataset, indexed
with a controlled vocabulary (a thesaurus), "linked-data-ready" -
step by step.

In a first step, the bibliographic data are transformed into a
DC-like RDF vocabulary and given an individual namespace. Then the
thesaurus is transformed into SKOS, and the descriptors of the RDFized
dataset are connected with the respective SKOS concepts.
Finally, for testing purposes, some example queries are performed on
this data.

In a second step, the students have to analyze datasets from the linked
data cloud and try to find out which datasets potentially contain
reliable and useful information items to link to ...

My current experience is that many of our students have difficulty
making this step - RDF(S) and OWL modelling, inference, even the concept
of namespaces seem too technical and too far away from their current
practice ...

So much for my first introduction.



[1] http://iw.fh-potsdam.de/personen0.html
[2] http://iw.fh-potsdam.de
Received on Wednesday, 9 June 2010 12:57:37 UTC
