use case: content adaptation

Dear all,

Please find below the use case on content adaptation that I mentioned in the
last telecon but had not yet sent around by email.


Use case: automated adaptation of content (media) presentation
to users and the context.

We assume that sensors exist which can determine which objects (which
people, for example) are present in a certain room or space; a 'simple'
way to achieve this could involve tagging the objects.  The objects are
described in an ontology, and metadata for content is described in an
ontology as well.
It is also relevant to include information about people's likes and
dislikes, concerning media content for example.
Sensor detections and these descriptions together form the basis for
drawing conclusions about the context (living room, office, mobile
situation) and for adapting the presentation of content to that context.
This may also include the specification of actions, used for example
to personalize certain equipment, possibly in a context-dependent
way.  A natural connection can be made to the subject of collaborative
filtering.
Ultimately, it is desirable to allow individual modes of expression
for user profiles, while still being able to make comparisons between
different user profiles.
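
To make the pipeline concrete, here is a minimal sketch in Python (rather
than in an ontology language) of the steps described above: sensed tags
identify objects, the object descriptions and content metadata stand in
for the two ontologies, profiles record likes and dislikes, and simple
rules infer the context and adapt the content selection.  All names and
data (OBJECTS, PROFILES, infer_context, and so on) are hypothetical
illustrations, not part of any existing system.

```python
# "Ontology" of objects: tag id -> description of the detected object.
OBJECTS = {
    "tag-01": {"type": "person", "name": "Alice"},
    "tag-02": {"type": "person", "name": "Bob"},
    "tag-03": {"type": "furniture", "name": "sofa"},
}

# User profiles: likes and dislikes concerning media content (by genre).
PROFILES = {
    "Alice": {"likes": {"jazz"}, "dislikes": {"news"}},
    "Bob": {"likes": {"jazz", "news"}, "dislikes": set()},
}

# Content metadata "ontology": title -> genre.
CONTENT = {
    "Evening Standards": "jazz",
    "World Report": "news",
}

def infer_context(detected_tags):
    """Conclude a coarse context from the objects sensed in the space."""
    people = [OBJECTS[t]["name"] for t in detected_tags
              if OBJECTS[t]["type"] == "person"]
    furniture = {OBJECTS[t]["name"] for t in detected_tags
                 if OBJECTS[t]["type"] == "furniture"}
    context = "living room" if "sofa" in furniture else "unknown"
    return context, people

def adapt_presentation(people):
    """Select content liked by someone present and disliked by no one."""
    liked = set().union(*(PROFILES[p]["likes"] for p in people))
    disliked = set().union(*(PROFILES[p]["dislikes"] for p in people))
    return [title for title, genre in CONTENT.items()
            if genre in liked and genre not in disliked]

context, people = infer_context(["tag-01", "tag-02", "tag-03"])
print(context, adapt_presentation(people))
```

In this toy run the sofa tag yields the "living room" context, and since
Alice dislikes news, only the jazz item survives the adaptation step for
the pair of detected people.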

Regards,
Herman
========================================
Dr. H.J. ter Horst
Philips Research Laboratories
Prof. Holstlaan 4, 5656 AA Eindhoven, The Netherlands

E-mail:    herman.ter.horst@philips.com
Tel:    + 31 40 27 42026

Received on Wednesday, 5 December 2001 09:19:08 UTC