Re: GUI evaluation practices?

Hi Alvaro, all,

My recommendation would be to first clearly identify the intended users and
tasks. With that in place you can run tests with users, asking them to
perform representative tasks while you record the interaction. You can then
analyse the set of user tests (10 or even slightly fewer might suffice) to
produce measures for quantitative evaluation, plus the many things you will
notice from a purely qualitative standpoint, for instance where users get
lost...

For the quantitative part, you can use common Quality in Use metrics, like
time to complete the task, success rate, etc., plus others specific to the
tool being evaluated (a rough sketch of how such measures can be computed is
included below). We have proposed a Quality in Use framework for Semantic
Web exploration tools that might be useful. It is not specific to
graph-based visualization, but it might be a starting point:

Using SWET-QUM to Compare the Quality in Use of Semantic Web Exploration
Tools
http://www.jucs.org/jucs_19_8/using_SWET_QUM_to
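
To give a concrete idea of the quantitative part, here is a minimal Python
sketch for two of the usual measures (success rate and time on task). It
assumes you have logged each (participant, task) attempt with start/end
timestamps and a success flag; the field names and values are purely
illustrative, not part of SWET-QUM itself.

from statistics import mean

# Hypothetical records from ~10 user tests on one representative task.
sessions = [
    {"participant": "P01", "task": "find-class-hierarchy", "start": 0.0, "end": 95.2, "success": True},
    {"participant": "P02", "task": "find-class-hierarchy", "start": 0.0, "end": 141.7, "success": False},
    {"participant": "P03", "task": "find-class-hierarchy", "start": 0.0, "end": 78.4, "success": True},
    # ... remaining participants
]

def time_on_task(record):
    """Task completion time in seconds."""
    return record["end"] - record["start"]

completed = [s for s in sessions if s["success"]]

success_rate = len(completed) / len(sessions)          # effectiveness
mean_time = mean(time_on_task(s) for s in completed)   # efficiency, over successful attempts only

print(f"Success rate: {success_rate:.0%}")
print(f"Mean time on task (successful attempts): {mean_time:.1f} s")

You would typically report these per task and per tool variant, alongside
the qualitative observations mentioned above.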

Feel free to contact me if you want to discuss the proposed metrics, new
ones, etc. We are also interested in metrics for RDF visualization, by the way.

Best,


Roberto



On Fri, Feb 7, 2014 at 3:47 AM, Alvaro Graves <alvaro@graves.cl> wrote:

> Hi there,
>
> I'm trying to evaluate a system for studying RDF documents, mainly
> vocabularies and ontologies. I'm not sure if there are best practices,
> guidelines or any other information that may be related to visualizing RDF
> graphs (or graphs in general). Any pointer?
>
> Thanks in advance!
>
> Alvaro Graves-Fuenzalida, PhD
> Web: http://graves.cl - Twitter: @alvarograves
>

Received on Friday, 7 February 2014 08:31:07 UTC