Call for Participation: LHD+SemQuant at ISWC 2012

Apologies for cross-posting

Call for Participation for the LHD+SemQuant joint workshop on 12 November
2012 (morning), held in conjunction with the International Semantic Web
Conference (ISWC 2012) in Boston, USA:

The 2nd Workshop on Discovering Meaning On the Go in Large & Heterogeneous Data
The 1st International Workshop on Quantitative Formalization in the Semantic Web


The workshop will feature a panel session with members from industry,
government, the military and academia, each discussing the workshop's
themes from their own perspective.

One part of this workshop (LHD) is designed to bring together people
from different fields working in the area of dynamic matching,
interpretation, and integration of heterogeneous data, so that ideas,
techniques and problems can be shared and discussed in a broad
context. A key part of this workshop is bringing together those from
industry and government as well as those from academia. Interacting
successfully in an open, heterogeneous environment requires the
ability to dynamically and adaptively integrate data from other
systems “on the go”. This may not be a precise process but rather a
matter of finding a good-enough understanding that allows interaction
to proceed. With the advent of the Web, there are massive
amounts of information available online that can assist in this task,
but this information is often chaotically organised, stored in a wide
variety of data-formats, and difficult to interpret.

The other part of this workshop (SemQuant) aims to open a new
paradigm in Semantic Web research, and in KR research in general, by
adding a quantitative research paradigm to the traditionally
predominant qualitative, logic-based one. This is motivated in
part by the significant growth in Semantic Web data, including
ontologies and Linked Data, over recent years. To efficiently manage
the vast quantities of knowledge and data on the Semantic Web, we need
theories and tools to address questions like:
	•	How can we measure knowledge?
	•	How are these measurements different from measurements of information?
	•	How can we efficiently store knowledge?
	•	How can we efficiently and accurately transform knowledge over noisy channels like the Web?
	•	How can we measure the quality of ontologies and other forms of knowledge?
	•	How can we determine the quality of approximate methods for inference, similarity, soundness, completeness, etc.?
	•	How can such quantitative formalization help the engineering and realization of the Semantic Web?

Fiona McNeill (University of Edinburgh)
Harry Halpin (W3C)
Andriana Gkaniatsou (University of Edinburgh)
Mike Dean (Raytheon BBN Technologies)
James Hendler (Rensselaer Polytechnic Institute)
Frank van Harmelen (Vrije Universiteit Amsterdam)
Jie Bao (Samsung Information Systems America)

Received on Wednesday, 5 September 2012 15:12:19 UTC