Applying the lens metaphor to semantic data (Was: tFacet)

Adam / list,

we also considered lenses to be a powerful yet intuitive interaction 
metaphor, which led to the creation of SemLens. SemLens is, of course, 
not the first approach to apply this metaphor to the Semantic Web - 
other popular examples include Fresnel and Haystack. But we think we 
found a very generic and intuitive solution with SemLens. The "Semantic 
Lenses" in SemLens are based on the Magic Lens idea (that you mentioned, 
Adam) and apply it to the Semantic Web. In contrast to other approaches, 
the lenses are binary: an object either meets the filter criterion or 
it does not, which is indicated by a change of its color in the scatter 
plot visualization. This allows for an easy and exploratory construction 
of Boolean expressions on semantic data, simply by combining lenses and 
setting the logical operators accordingly.
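To make the binary-lens idea concrete, here is a minimal sketch in 
Python (purely illustrative - not SemLens's actual implementation): 
each lens is a yes/no predicate over data objects, and overlapping 
lenses combine via logical operators into a Boolean expression.

```python
class Lens:
    """A binary lens: each object either passes its filter or it does not."""

    def __init__(self, predicate):
        self.predicate = predicate

    def covers(self, obj):
        return bool(self.predicate(obj))

    # Combining two lenses yields a new lens, mirroring how overlapping
    # lenses with a chosen logical operator build up a Boolean expression.
    def __and__(self, other):
        return Lens(lambda o: self.covers(o) and other.covers(o))

    def __or__(self, other):
        return Lens(lambda o: self.covers(o) or other.covers(o))

    def __invert__(self):
        return Lens(lambda o: not self.covers(o))


# Illustrative data only: objects with RDF-like property/value pairs.
movies = [
    {"title": "A", "year": 1999, "genre": "drama"},
    {"title": "B", "year": 2005, "genre": "comedy"},
    {"title": "C", "year": 2010, "genre": "drama"},
]

recent = Lens(lambda m: m["year"] >= 2000)
drama = Lens(lambda m: m["genre"] == "drama")

# "recent AND drama" - in the UI, matching points would change color.
combined = recent & drama
print([m["title"] for m in movies if combined.covers(m)])  # ['C']
```

In the actual tool the combination happens visually, by dragging lenses 
so that they overlap and selecting the operator; the sketch only shows 
the underlying Boolean semantics.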

We are currently writing a paper on this approach that will explain it 
in more detail. We also motivate the approach and contrast it with 
faceted browsing, since - as you mentioned - the lenses are a nice 
alternative to static lists and global check boxes, as are the scatter 
plots. We will link the paper on the SemLens website as soon as it is 
published. For now, we can only invite everyone to try the live demo at 
http://semlens.visualdataweb.org and experience the ease of interaction.

Cheers,
Steffen

--
On 13.05.2011 12:01, adasal wrote:
> List, I asked if SemLens and tFacet will be available on google code.
> Philip replied "tFacet will be on google code in the next weeks.
> SemLens is not sure yet."
>
> Thanks, Philipp - I thought I'd share this with the list.
> Funnily enough, it is SemLens that attracts me.
>
> I have always thought that a lens is a very powerful and underused 
> visual metaphor. (Because a lens implies the passage of something 
> through it, it is used most powerfully when capturing movement or 
> transition.)
> My thinking is to use something like SemLens to browse faceted results, 
> where I understand faceted results to be objects returned with their 
> properties or types enumerated. This seems more intuitive than check 
> box selectors, and leads to discovery in a way that check boxes do 
> not. It would only be a beginning use, though, as it would not capture 
> movement in the way I suggest. Perhaps that depends on the collection 
> of properties?
>
> It reminds me of the work at http://cpntools.org/
> discussed in many papers, including this one [1];
> the work on Toolglass and Magic Lens by Xerox PARC [2] (Xerox 
> Corporation owns the trademarks for these names); and I think also 
> the work on e.g. Piccolo from the University of Maryland [3], which 
> covers issues associated with lenses.
>
> Adam
>
> ---------------
> [1]:Redesigning Design/CPN: Integrating Interaction and Petri Nets In Use
> [2]:http://www2.parc.com/istl/projects/MagicLenses/93Siggraph.html
> [3]:http://www.piccolo2d.org/
>
> On 13 May 2011 08:50, Philipp Heim <heim.philipp@googlemail.com> wrote:
>
>     Hi,
>
>     tFacet will be on google code in the next weeks.
>     SemLens is not sure yet.
>
>     Regards
>     Philipp
>
>     On 12.05.2011 23:52, adasal wrote:
>>     Hi,
>>     tFacet and SemLens are not on google code. Are they planned to be?
>>
>>     Best,
>>
>>     Adam Saltiel
>>
>>     On 11 May 2011 10:43, Philipp Heim
>>     <philipp.heim@vis.uni-stuttgart.de> wrote:
>>
>>         Hi all,
>>
>>         we are happy to announce the release of tFacet, a new tool
>>         that applies known interaction concepts to allow
>>         hierarchical-faceted exploration of RDF data.
>>
>>         More information can be found on:
>>         http://tfacet.visualdataweb.org
>>
>>         Best
>>         Sören & Philipp
>>
>>         -- 
>>         Philipp Heim . Visualization and Interactive Systems Group (VIS)
>>         University of Stuttgart . Universitaetstrasse 38 . D-70569
>>         Stuttgart
>>         Room 1.061 (Computer Science Building, 1st floor)
>>         Tel.: +49 (711) 685-88364  .  Fax: +49 (711) 685-88340
>>         Email: philipp.heim@vis.uni-stuttgart.de
>>         Web: http://www.vis.uni-stuttgart.de
>>         Current research: http://www.vis.uni-stuttgart.de/~heimpp/
>>
>>
>>
>>
>
>
>     -- 
>     ______________________________________________________
>     Philipp Heim . Visualization and Interactive Systems Group (VIS)
>     University of Stuttgart . Universitaetstrasse 38 . D-70569 Stuttgart
>     1.061 (Computer Science Building, 1st floor)
>     Tel.: +49 (711) 7816-364  .  Fax: +49 (711) 7816-340
>     Email: philipp.heim@vis.uni-stuttgart.de . http://www.vis.uni-stuttgart.de
>
>


-- 
Steffen Lohmann - DEI Lab
Computer Science Department, Universidad Carlos III de Madrid
Avda de la Universidad 30, 28911 Leganés, Madrid (Spain), Office: 22A20
Phone: +34 916 24-9419, http://www.dei.inf.uc3m.es/slohmann/

Received on Monday, 16 May 2011 11:31:43 UTC