
Re: [a11y-metadata-project] Reminder of accessibility metadata call coming up in an hour (9:00 AM PDT)

From: Liddy Nevile <liddy@sunriseresearch.org>
Date: Fri, 1 Nov 2013 07:48:50 +1100
Cc: Madeleine Rothberg <madeleine_rothberg@wgbh.org>, Charles Myers <charlesm@benetech.org>, "a11y-metadata-project@googlegroups.com" <a11y-metadata-project@googlegroups.com>, "public-vocabs@w3.org" <public-vocabs@w3.org>
Message-Id: <46228F3A-D1F8-4BFB-A183-50D0CA65CBFA@sunriseresearch.org>
To: Andy Heath <andyheath@axelrod.plus.com>
Andy
at no point am I saying what is for you - I am providing an example of
a resource with its description and the stated needs of a sample user,
and showing how they match.

Liddy
On 31/10/2013, at 9:40 PM, Andy Heath wrote:

> Liddy - this is "Access for All" not "Access for Disabled People".
> You are NOT entitled to say what is accessible to me (and I get
> angry when people try to do so). It's *my* choice as to what I can
> use - that has always been a tenet of our work.
>
> andy
>> mmm... now I think you have misunderstood or misread me, Andy.
>>
>> The particular video being described is only accessible with the
>> declared combinations - so audio alone will simply not cut it - it
>> does not provide what that requires... so?
>>
>>
>> On 30/10/2013, at 9:05 PM, Andy Heath wrote:
>>
>>> Liddy,
>>>
>>> I think your example is a good one to explain exactly why it *won't*
>>> work like that.  The problem is that it gives too much weight to the
>>> author and not to the context.  For example, for a video with captions
>>> your example gives the metadata
>>>
>>> visual + auditory
>>> visual
>>> visual + text
>>>
>>> as describing the modalities that are "required to be able to
>>> comprehend and use a resource."
>>>
>>> This is *as the author sees it*.
>>> So what other ways are there to see it?
>>>
>>> Well, what about using the auditory mode alone? (I do this very often
>>> with the kind of videos that are just talking heads - the BBC don't
>>> think of that usage but I still do it - I even turn the brightness
>>> down to black to save battery while doing that.)  Similarly for text.
>>> So the full set of accessModes required to understand it here would
>>> need to include
>>>
>>> auditory
>>> text
>>>
>>> But authors don't think of these things - only users do. And in
>>> general we won't think of all the ways people might want to use the
>>> content. Expanding all the accessModes exhaustively would be
>>> pointless, as an algorithm could do that trivially.  And even now, I
>>> just went back and re-read it and realised I didn't think of
>>> "auditory + text".
>>> This seems to me to have been a central point of our work over the
>>> years - to NOT project onto users how they should use things but
>>> instead to give users control.  Authors' ideas of how to use stuff
>>> are not giving users control, in my view.
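>>>
>>> (Just to illustrate how trivial that expansion would be - a rough
>>> Python sketch, purely illustrative and not any agreed algorithm,
>>> assuming we simply enumerate every non-empty combination of the
>>> physically present modes:)
>>>
>>> from itertools import combinations
>>>
>>> # Physically present access modes of the example video resource.
>>> present_modes = ["visual", "auditory", "text"]
>>>
>>> # Enumerate every non-empty combination - the exhaustive expansion
>>> # a machine can produce trivially and an author never needs to.
>>> for r in range(1, len(present_modes) + 1):
>>>     for combo in combinations(present_modes, r):
>>>         print(" + ".join(combo))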
>>>
>>> Charles (Myers) - the point ascribed to me as the need for a common
>>> data model in the other email - I'm afraid I haven't expressed myself
>>> clearly enough - my point was subtly different from what it is
>>> reported as. My point was that we need a common data model, yes, but
>>> we should use different fields for the physical access modes present
>>> and the author's view of how that resource "should be used".  For
>>> example, if we *do* decide to provide author-determined-usage info
>>> (which I don't support, but ..) then using this same example of
>>> Liddy's the metadata might be something like
>>>
>>> accessMode = visual
>>> accessMode = auditory
>>> accessMode = text
>>>
>>> accessModeUsage = visual + auditory
>>> accessModeUsage = visual
>>> accessModeUsage = visual + text
>>>
>>> This is repetitious, has redundant information and doesn't look
>>> good - there may be more economical ways to express it - but mixing
>>> the accessMode usage and the physical accessModes in the same fields
>>> will lead people towards the mixed model - i.e. we will have to
>>> explain the "+" calculus of values relating to accessMode, and this
>>> will overcomplicate the simple description. So my point was, even
>>> though the two different ways to use accessMode *could* use the same
>>> fields, i.e. they could just be alternative ways to use those fields,
>>> we should still separate them.  The fact is that the meaning of, say,
>>> "visual" is different in each case - in one case it means "physically
>>> present" and in the other it means "how I think you might use it".
>>> There is no case in my mind for using the same fields for these very
>>> different uses.
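>>>
>>> (For concreteness only - a rough sketch of that separation written as
>>> plain Python data rather than any agreed serialisation;
>>> "accessModeUsage" is my placeholder name here, not an existing
>>> property:)
>>>
>>> # Physical modes present in the resource: a flat set of values.
>>> resource = {
>>>     "accessMode": {"visual", "auditory", "text"},
>>>
>>>     # Author's view of usable combinations: one set per combination,
>>>     # kept in a *separate* field so the two meanings of "visual"
>>>     # never share a property.
>>>     "accessModeUsage": [
>>>         {"visual", "auditory"},
>>>         {"visual"},
>>>         {"visual", "text"},
>>>     ],
>>> }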
>>>
>>> andy
>>>
>>>> Madeleine,
>>>> you seem to have misunderstood me.
>>>>
>>>> I am saying, as Charles Nevile also understands it, I believe, that
>>>> when stating the accessMode, one states what is required to be able
>>>> to comprehend and use a resource.
>>>>
>>>> If there is a range of things available, say video (incl. audio) and
>>>> captions, some users will use the audio and some the captions -
>>>> correct? In this case, the video could have accessModes:
>>>>
>>>> visual + auditory
>>>> visual
>>>> visual + text
>>>>
>>>> A user who wants captions would probably have visual + captions in
>>>> their profile. It is easy to infer that they want the video with the
>>>> captions on the screen (however they get there) - they might also get
>>>> the sound, but as they have not included it, that is not an
>>>> accessMode they are asking for. Clearly they will want this
>>>> resource - no?
>>>>
>>>> A person who does not have vision might also be interested in this
>>>> resource. They will probably say their accessModes are text and
>>>> auditory, and so they are not likely to want this resource - they
>>>> have not included visual and the resource is, apparently, incomplete
>>>> without it.
>>>>
>>>> What is different about this?  I think I was just adding, in my
>>>> email, that this can be done in such a way that the resource
>>>> description and the user-needs statements of accessModes do not get
>>>> concatenated (which would make them useless), and that preventing
>>>> this is possible - contrary to what normally happens with metadata.
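>>>>
>>>> (To show what I mean by the matching - a very rough Python sketch,
>>>> purely illustrative and not any agreed algorithm, in which the
>>>> resource offers several complete accessMode combinations and the
>>>> user's stated needs stay in a separate statement, never concatenated
>>>> with the resource's:)
>>>>
>>>> # Combinations that each, on their own, convey all of the content.
>>>> resource_access_modes = [
>>>>     {"visual", "auditory"},
>>>>     {"visual"},
>>>>     {"visual", "text"},
>>>> ]
>>>>
>>>> def suits(user_modes, combinations):
>>>>     """True if at least one complete combination fits within
>>>>     the modes the user says they can use."""
>>>>     return any(combo <= user_modes for combo in combinations)
>>>>
>>>> # Captions user (captions represented here as the text accessMode):
>>>> # the visual + text combination is covered, so the resource suits.
>>>> print(suits({"visual", "text"}, resource_access_modes))    # True
>>>>
>>>> # User without vision: text and auditory only - no declared
>>>> # combination is covered, so the resource is not offered.
>>>> print(suits({"text", "auditory"}, resource_access_modes))  # False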
>>>>
>>>> Liddy
>>>>
>>>> On 30/10/2013, at 3:32 AM, Madeleine Rothberg wrote:
>>>>
>>>>> Liddy,
>>>>>
>>>>> I can't write a full response because I am in another meeting,  
>>>>> but I
>>>>> want to stress that the idea you have raised of a minimum  
>>>>> complete set
>>>>> of accessModes is useful but should not replace access mode as
>>>>> previously defined. I believe we must retain the access mode field
>>>>> that lists the access modes a resource uses to communicate. When
>>>>> alternatives are added or linked then more access mode combos  
>>>>> become
>>>>> viable and that can feed into the list of various minimum complete
>>>>> sets of accessModes.
>>>>>
>>>>> Madeleine
>>>>>
>>>>> On 2013-10-29, at 12:04 PM, "Liddy Nevile" <liddy@sunriseresearch.org>
>>>>> wrote:
>>>>>
>>>>>> My comments...
>>>>>>
>>>>>> Charles Nevile ...
>>>>>> Charles raised the question of whether these attributes are a
>>>>>> declaration of conformance (as in alternativeText means that "all
>>>>>> of the photographs and other media have alternate text") or just
>>>>>> whether the author of the content (or adapted version of the
>>>>>> content) used alternate text on the significant parts of the
>>>>>> content to the best of their abilities. The intent of these is the
>>>>>> latter. Since this metadata is being added by people who care about
>>>>>> accessibility, we have to trust that they will apply their best
>>>>>> efforts before they'd add the attribute.
>>>>>>
>>>>>> It has long been a tradition in the DC world of metadata to  
>>>>>> assume
>>>>>> that people have good intentions - they don't always, but those  
>>>>>> who
>>>>>> do make it worthwhile trusting...
>>>>>>
>>>>>> then there is a discussion about mediaFeature.... I am developing
>>>>>> some fairly strong feelings about this. First, I don't think
>>>>>> 'mediaFeature' is anything like as good a name as 'accessFeature',
>>>>>> given that we are mostly describing things that are done to
>>>>>> increase accessibility - and we have accessMode...  Then Jutta
>>>>>> wanted us to add in 'adaptation' or the equivalent. I think that a
>>>>>> feature implies something special, but taking Jutta's position it
>>>>>> might be better to have them called accessAdaptation - i.e. for
>>>>>> things like captions etc.? Certainly I would not want both feature
>>>>>> and adaptation in a single name - that would be introducing
>>>>>> redundancy, I think...
>>>>>>
>>>>>> Next, I think the idea that we should label things because someone
>>>>>> tried to fix it is absurd - to be honest. We are asking people to
>>>>>> make assertions about the resource, or their needs, not to tell us
>>>>>> how nice they are. An assertion, made in good faith, should mean
>>>>>> that something has been achieved - e.g. alt tags for all images,
>>>>>> etc ....
>>>>>>
>>>>>> Next, I want us to be clear about accessMode. As Charles Nevile
>>>>>> and I understand it, this will be a set of assertions that tell us
>>>>>> what is the minimum complete set of accessModes that will convey
>>>>>> all the content of a resource. So we might get visual + text,
>>>>>> visual + audio, text, etc ... i.e. more than one statement. This
>>>>>> can be done, and it involves a trick - generally the value of RDF
>>>>>> means that if I make an assertion and then you add another, both
>>>>>> bits of info can be put together to make a richer statement. In
>>>>>> this case, we certainly do not want that to happen! In RDF the
>>>>>> merging of statements can be avoided by using what is known as a
>>>>>> 'blank node'.
>>>>>> I am writing all this because I think both being clear about the
>>>>>> use of accessMode and knowing that it will work is really
>>>>>> important :-)
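>>>>>>
>>>>>> (Roughly what I mean, sketched with Python's rdflib purely for
>>>>>> illustration - the property names and URIs are placeholders, not
>>>>>> the agreed vocabulary. Each complete set of modes hangs off its
>>>>>> own blank node, so the sets never merge into one statement:)
>>>>>>
>>>>>> from rdflib import BNode, Graph, Literal, Namespace, URIRef
>>>>>>
>>>>>> EX = Namespace("http://example.org/terms/")   # placeholder
>>>>>> g = Graph()
>>>>>> video = URIRef("http://example.org/video1")   # placeholder
>>>>>>
>>>>>> # One blank node per minimum complete set of accessModes.
>>>>>> for modes in [("visual", "auditory"), ("visual", "text"), ("visual",)]:
>>>>>>     node = BNode()
>>>>>>     g.add((video, EX.accessModeSet, node))
>>>>>>     for m in modes:
>>>>>>         g.add((node, EX.accessMode, Literal(m)))
>>>>>>
>>>>>> print(g.serialize(format="turtle"))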
>>>>>>
>>>>>>
>>>>>> On 23/10/2013, at 1:53 AM, Charles Myers wrote:
>>>>>>
>>>>>>> I'm back and caught up on accessibility metadata from the calls of
>>>>>>> two weeks ago.  The agenda for today's meeting can be seen below
>>>>>>> and at
>>>>>>> https://wiki.benetech.org/display/a11ymetadata/Next+Accessibility+Metadata+Meeting+Agenda
>>>>>>>
>>>>>>> I also wrote up our minutes from the last two meetings at
>>>>>>> https://wiki.benetech.org/pages/viewpage.action?pageId=58853548 and
>>>>>>> the issue tracker has been updated on the mediaFeature issue:
>>>>>>> http://www.w3.org/wiki/WebSchemas/Accessibility/Issues_Tracker#What_is_the_goal_of_mediaFeature.3F_.28conforming_or_informational.29_Do_we_have_this_right.3F
>>>>>>>
>>>>>>> Note that we have a new conference call number this week.  And we
>>>>>>> will be back on a regular weekly schedule from this point on.
>>>>>>> October 22, 2013 Accessibility Metadata working group call
>>>>>>> Weekly Meeting
>>>>>>> Schedule: The next call will be Tuesday, October 22, 9:00 am PDT
>>>>>>> (California), 12:00 pm EDT (Ontario, New York), 5:00 pm in London
>>>>>>> and 6:00 pm on the continent, 3:00 am in Australia
>>>>>>> Conference call: +1-866-906-9888 (US toll free), +1-857-288-2555
>>>>>>> (international), Participant Code: 1850396#
>>>>>>> Etherpad: (10/22/2013)
>>>>>>> IRC: Freenode.net #a11ymetadata (although more of the collab seems
>>>>>>> to happen in the etherpad)
>>>>>>> The goal of the call will be to review the open issues on the W3C
>>>>>>> wiki, get to closure on those issues, and work them through with
>>>>>>> schema.org representatives.  See the issues and the
>>>>>>> accessMode/mediaFeature matrix. There will also be a discussion of
>>>>>>> the use of these attributes for search, as shown in the blog
>>>>>>> article.
>>>>>>>
>>>>>>> The next call will be October 22 and then will settle into  
>>>>>>> weekly
>>>>>>> meetings as required.
>>>>>>>
>>>>>>> The public site is http://www.a11ymetadata.org/ and our twitter
>>>>>>> hashtag is #a11ymetadata.
>>>>>>>
>>>>>>> Overall Agenda
>>>>>>> New Business - We will start discussing this promptly at the  
>>>>>>> top of
>>>>>>> the hour.
>>>>>>>
>>>>>>>  mediaFeature - our goal is to get agreement on the mediaFeature
>>>>>>> properties, as noted in the issue list.  As noted in the last
>>>>>>> call's minutes, we did a deep dive into visual and textual
>>>>>>> transform features last time. I've edited the list down to reflect
>>>>>>> both new properties that we decided on last time and some of the
>>>>>>> simplifications that come with the extension mechanism. I'd like
>>>>>>> to reach a conclusion on those, both for the specific names but
>>>>>>> also for the general framework, so that one can see the extension
>>>>>>> mechanism.  I'd like to propose even that we segment this
>>>>>>> discussion into two parts... agreement on the current properties
>>>>>>> and then consideration of new properties (I want to see the
>>>>>>> discussion make progress)
>>>>>>>      transformFeature - do we like that name (as against the
>>>>>>> "content feature")
>>>>>>>          Finish discussion on visualTransformFeature and
>>>>>>> textualTransformFeature
>>>>>>>          Consider auditoryTransformFeature (structural navigation
>>>>>>> will be covered in textualTransform) and tactileTransform
>>>>>>>      Review contentFeature side of the mediaFeatures starting
>>>>>>> from the proposed table in the issues list
>>>>>>>          textual (note the removal of describedMath) -
>>>>>>> alternativeText, captions, chemML, laTex, longDescription, mathML,
>>>>>>> transcript
>>>>>>>          tactile (note the simplification of braille to be the
>>>>>>> extended form) - braille, tactileGraphic, tactileObject
>>>>>>>          auditory - audioDescription
>>>>>>>          visual - signLanguage, captions/open
>>>>>>>  ATCompatible
>>>>>>>  ControlFlexibility and accessAPI (we'll be lucky if we get to
>>>>>>> this point)
>>>>>>>  accessMode and the three proposals for the available access
>>>>>>> modes (this is a topic for a future call)
>>>>>>>  is/hasAdaptation
>>>>>>
>>>>>
>>>>
>>>
>>>
>>>
>>>
>>> andy
>>> andyheath@axelrod.plus.com
>>> --
>>> __________________
>>> Andy Heath
>>> http://axelafa.com
>>>
>>
>>
>
>
>
>
> andy
> andyheath@axelrod.plus.com
> -- 
> __________________
> Andy Heath
> http://axelafa.com
>
Received on Thursday, 31 October 2013 20:49:36 UTC
