Re: Next steps

On 21/08/2010 22:08, Thomas Wrobel wrote:
> 2010/8/20 Hermodsson, Klas <Klas.Hermodsson@sonyericsson.com>:
>> On Aug 20, 2010, at 9:14, Jens de Smit wrote:
>>
>> On 19/08/2010 09:49, Hermodsson, Klas wrote:
>> I think two levels (i.e. [criteria]<>[data]) is too simplistic. I would like to see a three level approach:
>> [criteria]<>[representation]<>[actual data]
>>
>> I'm not really seeing this (yet). The way you put it, isn't the
>> representation implicit in the type of data that's being linked? As in,
>> if the [actual data] is X3D we're dealing with a "visual" representation
>> (of the subtype "3D model") and if it's an OGG container with a Vorbis
>> stream inside it's an "aural" representation, etc.
>>
>> I may be using some terms that are not really suitable above. Let's take a concrete example to illustrate:
>>
>> - Company A has a sign with their logo on outside their stores
>> - When this logo is detected the company wants a spinning sphere with the logo on to be displayed while a music piece is playing
>> - If you select/activate/click this spinning sphere the latest ad is played back as a video
>>
>> Criteria: if computer vision detection of the logo occurs (criteria expressed in a suitable markup language)
>> Representation: a spinning sphere + music (layout and resources of this representation expressed in a suitable markup language)
>> Actual data: the video ad (content in a specific format, stored somewhere and reached through some URI over a suitable protocol)
>> ...Maybe what I call representation is what other people call data? Note that both representation and data may need "layout" markup to explain how they should appear in our real world.
> 
> Yes, I see both of them as data.  The "representation" is, presumably,
> a 3d file set spinning and a wave file of some sort.
> When I say [data]<>[criteria] I mean [data] to include those formats.
> 
> The fact that that data can, in turn, act as a trigger to further data
> (when the user clicks it) imho doesn't change the fact that the original
> information popping up is also itself data formats that have been
> linked to the trigger. So to me your scenario is two separate
> [data]<>[criteria] associations.
> 
> I think the difference here is that the first (the automatic appearing
> of the sphere+music) is a passive/auto-triggered event, and the "user
> selects" is a manual one.
> You could just as easily have a video file as the "representation" in
> this scenario, that then pops up a big 3d file for the "data". So I'm
> not really seeing the separation myself.

I fully agree with Thomas here that these are two separate
[data]<>[criteria] associations. In this use case they happen to be
related in some way, but they could just as well occur separately from
each other.
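
Just to make that concrete, here is a rough sketch of the two
associations in some made-up markup (every element and attribute name
below is invented purely for illustration, not a proposal):

  <association>
    <criteria type="vision-detect" target="company-a-logo.pat"/>
    <data>
      <resource src="logo-sphere.x3d" behaviour="spin"/>
      <resource src="jingle.ogg"/>
    </data>
  </association>

  <association>
    <criteria type="user-select" target="logo-sphere.x3d"/>
    <data>
      <resource src="latest-ad.ogv"/>
    </data>
  </association>

The second association simply uses the result of the first as its
trigger, which keeps the two cases independent of each other.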

> I do think there are variable presentation issues, but I see this as
> akin to feeding devices different data based on their specifications,
> akin to @media in CSS. So, as part of the criteria, we should also be
> able to specify that it's intended for a certain device.

Having multiple representations associated with a single entity, where
the user agent selects the most appropriate one to display, occurs in a
lot of places. An ancient example is the "alt" text attribute on the
<img> tag; more recent ones are the built-in support for multiple
formats in the <audio> and <video> tags and the use of full-detail and
reduced-detail 3D models in Layar. With the amount of (current) variety
in the input and output capabilities of networked devices this seems
like something we should consider as well (but I think we already had
some consensus on that :)
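
To illustrate with existing HTML5 (as a reference point, not a proposal
for our format): the <video> element already lets an author list
several encodings and leaves the choice to the user agent, e.g.

  <video controls>
    <source src="ad.webm" type="video/webm">
    <source src="ad.ogv" type="video/ogg">
    <source src="ad.mp4" type="video/mp4">
    Fallback text for user agents without video support.
  </video>

Something along those lines could apply equally well to visual vs.
aural vs. full/reduced-detail 3D representations in our case.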

Regards,

Jens

Received on Monday, 23 August 2010 09:30:52 UTC