'area profiles' - use case for back links

From: Bill Roberts <bill@swirrl.com>
Date: Wed, 5 Aug 2015 13:32:27 +0100
Message-Id: <E327B3B2-40C5-4FBC-AA06-1CE1D75E7C4C@swirrl.com>
To: public-sdw-wg@w3.org
Hi all

In last week's call I mentioned a use case for 'back links' to places - the question of what resources are linked to my location of interest, or in RDF terminology, which triples exist with my location as the object.  Something that comes up frequently in our work for local government is 'area profiles' - selecting and presenting data about a place.  The data typically covers topics like demographics, health, economy, environment etc. and in our work is usually represented as statistical data in linked data form, using the RDF Data Cube vocabulary.  The RDF links generally go from an 'observation' to the place.
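The observation-to-place direction described above can be sketched as a SPARQL query over Data Cube data. This is a minimal illustration only: the place URI is made up, and while sdmx-dimension:refArea is a common choice for the area dimension, real datasets may define their own dimension property.

```sparql
PREFIX qb:             <http://purl.org/linked-data/cube#>
PREFIX sdmx-dimension: <http://purl.org/linked-data/sdmx/2009/dimension#>

# Fetch observations whose area dimension points at our place of interest.
# <http://example.org/id/place/E06000045> is a hypothetical place URI.
SELECT ?obs ?dataset WHERE {
  ?obs a qb:Observation ;
       qb:dataSet ?dataset ;
       sdmx-dimension:refArea <http://example.org/id/place/E06000045> .
}
```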

The area profile usually incorporates some kind of simple map of the place, plus simple charts of selected data.  See http://profiles.hampshirehub.net/profiles/E06000045 for an example.

This is straightforward in principle if all the available data is in a single database - you can retrieve the things you want by SPARQL query.  A more general and challenging problem is to answer a user question along the lines of 'what data is available about location X?', drawing on distributed data sources.  A practical solution to that would generally involve some manual discovery and integration: becoming aware of a relevant data collection through various means (web search, personal recommendation, social media or whatever), deciding whether it holds information about the place, then adding it to a list of services that could be queried to pull back the data.
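In the single-database case, the 'back links' question reduces to a query with the place as the object. A sketch, again with an illustrative place URI; the SERVICE clause shows how the same pattern could be sent to one known remote endpoint (the endpoint address here is hypothetical):

```sparql
# All triples that point at the place: which subjects and properties
# reference <http://example.org/id/place/E06000045>?
SELECT ?s ?p WHERE {
  # Local store:
  ?s ?p <http://example.org/id/place/E06000045> .
}

# Federated variant, querying a known remote endpoint (SPARQL 1.1):
# SELECT ?s ?p WHERE {
#   SERVICE <http://example.org/sparql> {
#     ?s ?p <http://example.org/id/place/E06000045> .
#   }
# }
```

In practice the list of endpoints inside SERVICE clauses is exactly the manually curated list of discovered services mentioned above.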

Sometimes this could be more complicated if we are interested not only in data that links directly to our place identifier, but also in data that links to related identifiers (other names for the same thing, a sub-area or super-area of the place in question, etc.).
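SPARQL 1.1 property paths can expand the match from one identifier to its aliases and sub-areas. The containment property below is purely illustrative - different vocabularies use different 'within'/'contains' relations - and the place URI is again made up:

```sparql
PREFIX owl: <http://www.w3.org/2002/07/owl#>

# Find triples pointing at the place itself, at any owl:sameAs alias
# of it, or at any area nested within it (hypothetical 'within' property).
SELECT ?s ?p ?area WHERE {
  ?area ( owl:sameAs
        | ^owl:sameAs
        | <http://example.org/def/within> )* <http://example.org/id/place/E06000045> .
  ?s ?p ?area .
}
```

Note that sameAs chains and deep containment hierarchies can make such paths expensive, so in practice one might bound the expansion to a fixed number of hops.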

The challenge in question is one of discovery.  The most practical solution might be 'just google it' (having allowed search engines to crawl the data collections).  Perhaps more targeted indexes for specific domains of interest could meet the same need with less noise.  Querying metadata of data catalogues might be another option.

Best regards

Bill
Received on Wednesday, 5 August 2015 12:32:57 UTC