Re: [a11y-metadata-project] Re: Schema.org accessibility proposal Review...

Thanks Madeleine,
my comments are edited in below.

> (Gerardo's reply, which arrived when I was almost done writing this,
> covers some of the same issues, but I will send this anyway in case a
> differently worded explanation is helpful to anyone.)
>
> The solution to knowing which of the accessModes listed for a given
> resource are required for understanding the resource and which are not has
> traditionally (in Access for All usage) been that the matching system
> analyzes several pieces of metadata together to draw a conclusion. Here's
> the example of a video with captions and audio description, which Charles
> McN correctly marks up as:
>
> <div itemscope="" itemtype="http://schema.org/Movie">
> <meta itemprop="accessMode" content="visual"/>
> <meta itemprop="accessMode" content="auditory"/>
> <meta itemprop="mediaFeature" content="audioDescription"/>
> <meta itemprop="mediaFeature" content="captions"/>
> </div>
>
>
> We have resources like this in Teachers' Domain. The solution there for
> deciding which resources are well suited to a particular user's
> requirements is to analyze the whole of the metadata in comparison with
> the user's preferences. If a user cannot see and a resource contains
> "visual" media, we look for a mediaFeature that can substitute for the
> visual content. "audioDescription" is an auditory substitute for visual
> material, so this resource taken as a whole meets this user's needs.
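
To make the matching reasoning above concrete, here is a minimal sketch in
Python. The substitution table is illustrative only - the accessMode and
mediaFeature values come from the proposal, but the table contents and the
function and variable names are hypothetical, not part of any spec:

SUBSTITUTIONS = {
    # mediaFeature -> which accessMode it adapts, and via which sense
    "audioDescription": {"adapts": "visual", "via": "auditory"},
    "captions": {"adapts": "auditory", "via": "visual"},
}

def suits_user(access_modes, media_features, unusable_modes):
    """True if every accessMode the user cannot use has a usable substitute."""
    for mode in access_modes:
        if mode not in unusable_modes:
            continue  # the user can perceive this accessMode directly
        # otherwise look for a mediaFeature adapting it via a usable sense
        if not any(SUBSTITUTIONS.get(f, {}).get("adapts") == mode
                   and SUBSTITUTIONS[f]["via"] not in unusable_modes
                   for f in media_features):
            return False
    return True

# The movie above: a user who cannot see is still served, via audio description
print(suits_user(["visual", "auditory"],
                 ["audioDescription", "captions"],
                 unusable_modes={"visual"}))  # True
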
>
> There is knowledge encoded in these terms that is not explicit, and Andy
> has been reminding us for years that it would be useful to encode that
> knowledge in a machine-readable way. For right now, the only solution I am

Or technical best practices in one place that everyone uses, for now. I'm 
not sure the world is ready for that knowledge to be in machine-readable 
form, though (there is a precedent in the IEEE RAMLET work in a slightly 
different domain, but that wouldn't be appropriate here imho, as this 
needs to be simple). The difficulty is that this information is very hard 
to express in a data model without making the model very complex - and 
hence difficult to understand and not adaptable to changing technologies. 
Similar issues apply to some of the mediaFeature types.
My view is that each needs carefully worked-out, detailed (and technical) 
practices appropriate to it - in *one* accepted place. And if we want 
systems to be interoperable (we do), this kind of knowledge needs to be 
visible, not buried in business rules.
This is a good place for it - as is happening.

andy heath
axelafa.com
DiversityNet
> aware of is to have business logic in the code for matching resources to
> users that KNOWS what audioDescription is. Some of the adaptation
> relationships are straightforward and could be described in logic. Others
> are not as obvious and have multiple possibilities.
>
> This is discussed in the IMS AfA v3 Best Practices and Implementation
> Guide. See section 7.2 for relationships that are well defined (note that
> the name of the property in IMS is "adaptationType" which the Schema.org
> proposal has renamed "mediaFeature"):
> <http://imsglobal.org/accessibility/afav3p0pd/AfA3p0_BestPractice_v1p0pd.html#_Toc324315321>
>
> The more difficult relationships are discussed in Appendix B of that
> document, in section B2:
> <http://imsglobal.org/accessibility/afav3p0pd/AfA3p0_BestPractice_v1p0pd.html#_Toc324315337>
>
> For the Schema.org simplest use case, where individual users see a group
> of search results and wish to filter them for features they know they can
> use, I expect that a user who cannot see and who is searching through a
> lot of videos would say "Oh, these ones have audio description. I'll look
> at only these." So they would select that filter from the list because
> they have in their heads the knowledge of what audioDescription is.
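
In code terms that simple filter is just a membership test on the
mediaFeature list - a hypothetical sketch (the resource records here are
made up for illustration):

resources = [
    {"name": "Video A", "accessMode": ["visual", "auditory"],
     "mediaFeature": ["audioDescription", "captions"]},
    {"name": "Video B", "accessMode": ["visual", "auditory"],
     "mediaFeature": ["captions"]},
]

# keep only the results that advertise audio description
with_ad = [r for r in resources if "audioDescription" in r["mediaFeature"]]
print([r["name"] for r in with_ad])  # ['Video A']
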
>
> The more advanced use case, where a system does complete matching to a
> user's profile and finds all resources good for them (in this example of a
> person who can't see, it would be those that don't have any visuals, as
> well as images that have text description and videos that have audio
> description) requires a smart system that can analyze all the metadata on
> each resource and sort out which of the accessModes have been adapted by a
> mediaFeature and which have not. AccessModes that are present and do not
> have any adaptation are required for full understanding of the resource.
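
That analysis can also be sketched: the accessModes left over after removing
every mode that some mediaFeature on the resource adapts are the ones the
user must be able to use. Again the names are hypothetical, and the
adaptation table is deliberately tiny:

ADAPTS = {"audioDescription": "visual", "captions": "auditory"}

def unadapted_modes(access_modes, media_features):
    """accessModes with no adaptation present - required for understanding."""
    adapted = {ADAPTS[f] for f in media_features if f in ADAPTS}
    return [m for m in access_modes if m not in adapted]

# For the captioned, audio-described movie nothing is strictly required:
# both accessModes are adapted, so a user needs *either* sight or hearing.
print(unadapted_modes(["visual", "auditory"],
                      ["audioDescription", "captions"]))  # []
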
>
> I can't think of a simple way to enable that advanced use case by only
> looking at accessModes. You could have a complex term that says "visuals
> present but only required if you can't hear the mediaFeature
> audioDescription" but then we would need a huge number of permutations of
> terms to cover all possible use cases. Is it necessary to encode that into
> a single property instead of taking the same info from the combination of
> different properties as currently structured? Or have I misunderstood, and
> that is not what this thread has been getting at?
>
> -Madeleine
>
> On 9/8/13 9:45 AM, "Charles McCathie Nevile" <chaals@yandex-team.ru> wrote:
>
>> On Sat, 07 Sep 2013 12:49:32 -0000, Andy Heath
>> <andyheath@axelrod.plus.com> wrote:
>>
>>> Chaals quoted (and wrote a little bit of):
>>>>>> = accessMode =
>>>>>
>>>>>> It should be possible for a "single resource" to be available with
>>>>>> more than one *set* of accessModes.
>>>>>
>>>>> I agree and this is the design.  A single resource can require one or
>>>>> more accessMode(s).
>>>>
>>>> Yes, but...
>>>>
>>>>> … the accessMode property describes "Human sensory perceptual system
>>>>> or cognitive faculty through which a person may process or perceive
>>>>> information." […]
>>>>> We have also published a best practices and an implementation guide on
>>>>> the use of accessMode at:
>>>>>
>>>>> <http://www.a11ymetadata.org/wp-content/uploads/2013/04/A11yMetadataProjectBestPracticesGuide_V.6.pdf>
>>>>>
>>>>>
>>>>> <https://wiki.benetech.org/display/a11ymetadata/Practical+Properties+Guide>
>>>>>
>>>
>>> Chaals wrote:
>>>
>>>> Yep. But that has an example which I'll use:
>>>>
>>>> A movie with captions and extended audio description would be encoded
>>>> as
>>>> follows
>>>> <div itemscope="" itemtype="http://schema.org/Movie">
>>>> <meta itemprop="accessMode" content="visual"/>
>>>> <meta itemprop="accessMode" content="auditory"/>
>>>> <meta itemprop="mediaFeature" content="audioDescription"/>
>>>> <meta itemprop="mediaFeature" content="captions"/>
>>>> </div>
>>>>
>>>> My first impression is that if the video has good audio description,
>>>> then claiming it has accessMode "visual" seems wrong, since you don't
>>>> need to see it. Likewise, since it is captioned, it seems you don't
>>>> need
>>>> to hear it.
>>>>
>>>> So it doesn't have a single *required* accessMode. On the other hand,
>>>> you need to *either* see (clearly enough) or hear, in order to get the
>>>> content.
>>>
>>> The interpretation we had in AfA 3.0 of each property like this was
>>> that
>>> each specified not "accessMode required" but instead "accessMode
>>> available". Did this project take a different interpretation?
>>
>> I got that impression by reading the best practices guide Gerardo pointed
>> to above:
>> <http://www.a11ymetadata.org/wp-content/uploads/2013/04/A11yMetadataProjectBestPracticesGuide_V.6.pdf>
>>
>>> I haven't yet read the rest of this because I'm trying to focus on the
>>> same ISO meeting Chaals is in (but will do so) - this interpretation is
>>> so crucial as to change the whole emphasis
>>
>> Indeed.
>>
>> On the other hand, if we don't take the view that an accessMode is
>> required, I don't understand the logic that lets us match a resource to a
>> user.
>>
>>> so I wanted to reply quickly on this one point.
>>>
>>> andy
>>> axelrod access for all
>>> DiversityNet
>>> http://axelafa.com
>>
>>
>> --
>> Charles McCathie Nevile - Consultant (web standards) CTO Office, Yandex
>>        chaals@yandex-team.ru         Find more at http://yandex.com
>>

andy
andyheath@axelrod.plus.com
-- 
__________________
Andy Heath
http://axelafa.com

Received on Sunday, 8 September 2013 15:31:06 UTC