Re: Next call

Dear Felix, all,
> Hi all,
>
> just a reminder that our next call will be Tuesday, October 14 at
> 21:00:00 UTC time, see
> http://www.w3.org/2008/WebVideo/Annotations/admin
> http://www.timeanddate.com/worldclock/fixedtime.html?month=10&day=14&year=2008&hour=21&min=00&sec=0&p1=0
>   
Thanks a lot for the reminder.

Over the last week I have been thinking about the various 'use scenarios'
we have designed so far. Reading them carefully I see quite some overlap
between them ... and I am also reminded of how the metadata problem was
addressed by MPEG-7, which proceeded very similarly (use cases =>
requirements => development). In the end that turned out not to be very
useful, as the overall picture was lost from sight. The use cases ended
up in flat lists, the overall view was no longer visible, and the result
was very complicated structures that tried to support every application
under the sun in great detail.

The variety of use cases, combined with their internal overlap (e.g.
large parts of the 'mobile' use case can also be covered by the
'adaptation' and 'presentation' use cases), suggests that a different
approach might be more useful, namely an analysis along different
trajectories:

* the media trajectory: which particularities of the media do we have to
describe so that humans can be supported in their working processes? The
media differ in their expressive strength (e.g. visuals are strong in
their denotative power, whereas audio or haptics are better at
stimulating feelings, and text is strong in paradigmatic processes).
Taking into consideration what the cognitive power of a medium is might
help us to distil the basics that need to be described to achieve the
widest coverage.

* the context trajectory: which information elements are necessary to
establish the correct context? In the 'mobile' scenario this means: we
think about what is essential about location, and once that is clear we
determine how it can be minimally described so that a larger variety of
processes/actions can be performed (I assume we do not model the
processes themselves but rather design metadata that allow them, i.e.
the applications, to access the appropriate material).
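As a purely illustrative sketch (all class and field names below are my own invention, not a proposal), a minimal location description of this kind might look like:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Location:
    """Hypothetical minimal location descriptor: just enough structure
    for a variety of processes (search, adaptation, presentation) to
    select appropriate material. Field names are for illustration only.
    """
    latitude: float                    # WGS84 decimal degrees
    longitude: float
    place_name: Optional[str] = None   # human-readable label
    region: Optional[str] = None       # coarser granularity, for fallback

# The application (e.g. the 'mobile' scenario) decides what to do with
# the descriptor; the metadata itself does not model the process.
here = Location(latitude=43.55, longitude=7.02, place_name="Cannes")
```

The point of the sketch is only that the description stays minimal and process-neutral: the same four fields could serve search, adaptation, or presentation without encoding any one of those processes.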

* the task trajectory: how should whatever we design support the
processes users perform on and with media? Here the questions are:
- which processes (e.g. search, manipulation, generation, ...) would
we like to support?
- do we make a distinction between general and specific tasks (general
tasks being those that can be found in a number of task processes, such
as search)?
- do we have to model the process itself, or is it enough to provide
structures so that the process can be performed?
- which are the essential terms/tags/description structures we have to
come up with?

Based on the above we might be able to establish a 'content trajectory'
with the aim of arriving at a basic semantic core set of 'tags'.

Finally, during our discussions about the various use cases we already
saw that there are more general concepts/processes to be described
(search is one of them) as well as quite specific ones. The question we
would have to answer is: do we actually wish to go into the details, or
rather leave that to the domains? In the latter case we would define a
basic semantic layer that can be used by everybody, enabling the
definition of detailed substructures underneath (aimed at particular
applications).
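To make the two-layer idea concrete, here is a hypothetical sketch (all term names are invented for illustration, not a proposed vocabulary) of a basic core set that domains refine without modifying:

```python
# Hypothetical two-layer vocabulary: a basic semantic core usable by
# everybody, with domain-specific refinements registered underneath.
CORE_TERMS = {"title", "creator", "location", "date", "description"}

# A domain (say, broadcasting) refines core terms without changing them;
# each refinement lists the core term it specialises.
domain_refinements = {
    "creator": ["director", "camera_operator"],
    "description": ["shot_type", "scene_summary"],
}

def validate(record: dict) -> bool:
    """A record is valid if every key is a core term or a registered
    domain refinement of a core term."""
    refined = {r for refs in domain_refinements.values() for r in refs}
    return all(key in CORE_TERMS or key in refined for key in record)

record = {"title": "Interview", "director": "F. Nack"}
print(validate(record))  # True: 'director' refines the core term 'creator'
```

The design choice the sketch tries to show: the core layer stays small and stable, while the detail lives in domain substructures that always map back to a core term, so a generic application can fall back to the core view of any record.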

I am not sure what you think about this, but I look forward to hearing
your opinions. I can try to work these ideas out in a bit more detail
for the face-to-face in Cannes if the group thinks that is worthwhile.

Talk to you tomorrow.

Best wishes

frank

PS: At the moment the use cases (mine included, I admit) still look like
a bundle of use case, solution, and wished-for functionality. This
should be improved.

-- 
Dr. Frank Nack					
Human-Computer Studies Laboratory (HCS)
Institute for Informatics
University of Amsterdam	
Kruislaan 419
1098 VA Amsterdam
The Netherlands
Tel:   +31 (0)20 525 6377
Fax:   +31 (0)20 525 6896
Mobil: +31 (0)6 1810 8902
Url: http://fnack.wordpress.com/


 

Received on Monday, 13 October 2008 17:24:49 UTC