Re: [a11y-metadata-project] Reminder of accessibility metadata call coming up in an hour (9:00 AM PDT)

Charles (Myers), just a quick question (a fuller answer later).

 > Trying to explain whether an access mode conveys useful or necessary
 > information is a difficult task. I don't believe that we can take this
 > on and have success in a finite period.

I completely agree. I'm not sure how an argument in favour of doing so 
has been ascribed to me; I believe I have argued against doing that. 
That, in my view, is what the "+" calculus Liddy is proposing actually 
does.  What I was saying was "if we MUST do that, then keep it separate", 
and my reason for suggesting that is precisely that I think "how a 
user chooses to use something" or "what information content is a 
replacement for other content" should be well beyond our scope.

Have I been misattributed? Is my point that "if we're doing that, do it 
separately" clear?

andy
> Andy,
>    Your email raises two points.  I agree with one and disagree with
> the other.
>
> I agree with the need to express the source access modes, the
> mediaFeatures, and then the access modes by which the content is made
> available.   We have two views of this expressed in the issue tracker at
> http://www.w3.org/wiki/WebSchemas/Accessibility/Issues_Tracker#accessMode_and_accessibilityMode_subtype.2C_proposal_1
>
> and
> http://www.w3.org/wiki/WebSchemas/Accessibility/Issues_Tracker#accessMode_and_mediafeature_use_cases
>
> and I think that we'll end up with some way to express both the starting
> and the augmented access modes.  Assuming that the mediaFeature item is
> resolved (or close to it), this is the next logical part to discuss. One
> of the outcomes of the call on Tuesday was that we need to tackle the
> description of the ways to do this, from simple access modes that are
> implied by media type to the more complex sets that should be possible
> to encode.  I think I heard myself volunteering for that, and for putting
> the proposal on the wiki for collaboration.  This email thread gives me
> one more driver for that.
>
> The point that I disagree with is the application of this metadata to
> describe degrees of utility of the access modes.  I think that this is a
> slippery slope, as it requires the metadata to express a judgement about
> importance. If "user context" means judging which delivery method is
> "best" for the user, as opposed to what is possible, I think that we
> have a scope problem.  Not that it's not an interesting problem; I just
> don't think that we can tackle this in the current effort to get this
> into schema.org.
>
> And I'd like to take this out of the theoretical into an example of two
> videos... from TED talks.
> So, here are two videos (and I picked ones for their attributes, not the
> content).
> I'll note that these videos have both video and audio as their basic
> access modes, and then have closed captions available (text, selectable
> as the language in the lower right of the video pane) and a transcript,
> which appears under the video on the web page if selected.  Both of
> these videos make the same information available.
>
> The way that I see this metadata is that it starts as
> Visual + Auditory
> and then has captions (auditory available as text, synchronized with the
> video) and a transcript.
>
> So I have now added:
> Visual + Textual
> Textual
>
> So we have three ways that the content can be used:
> Visual + Auditory
> Visual + Textual
> Textual
> (Note that closed captions are textual and depend on the player to
> become visual; open captions or sign language, which are "burned into"
> the visual plane, are visual.)
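
(To make that concrete, here is a rough sketch in Python - purely
illustrative, none of the names below are settled vocabulary - of how the
combinations Charles describes build up:)

    # Purely illustrative: names and values are placeholders, not agreed
    # vocabulary.
    source_modes = {"visual", "auditory"}      # what the raw video provides

    # With the captions and the transcript added, three complete ways of
    # taking in the content become available.
    usable_sets = [
        source_modes,                # watch and listen
        {"visual", "textual"},       # watch with the captions switched on
        {"textual"},                 # read the transcript alone
    ]

    for modes in usable_sets:
        print(" + ".join(sorted(modes)))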
>
> The first is a typical TED video that is very dependent on the images.
> Carolyn Porco: This is Saturn
> http://www.ted.com/talks/carolyn_porco_flies_us_to_saturn.html
> This is very visually dependent, as it is full of pictures from
> spacecraft of Saturn.  As a sighted person, I would choose to watch this
> video... it would lose too much meaning for me otherwise.
>
> The second video comes from Amanda Bennett, where she describes her
> husband's experience with death. The talk is just her talking, moving
> around the stage and gesturing.
> Amanda Bennett: We need a heroic narrative for death
> http://www.ted.com/talks/amanda_bennett_a_heroic_narrative_for_letting_go.html
>
>
> The line I'm afraid you're crossing is that of judging the utility of one
> set of access modes over another for the user.  If I had to give grades
> (on the scale of A - F, with F being a fail) for the usefulness of
> these talks, I'd rate them like this:
> Carolyn Porco:
> Visual + Auditory (A)
> Visual + Textual (A)
> Textual (D)
> And, if I chose to just do the video as auditory, it'd be a (D)
>
> Amanda Bennett:
> Visual + Auditory (A)
> Visual + Textual (A, but you lose some of the passion)
> Textual (A, but you lose even a bit more of the passion)
> And, if I chose to just do the video as auditory, it'd be an (A)
>
> Trying to explain whether an access mode conveys useful or necessary
> information is a difficult task. I don't believe that we can take this
> on and have success in a finite period.
>
> On a personal note, I DO wish that TED talks had flags to tell me which
> ones depend on visuals, so I could listen to the ones that are not
> visually dependent as I drive. But I'll accept an internet and podcasts
> that don't tell me that, for now.
>
> On 10/30/2013 3:05 AM, Andy Heath wrote:
>> Liddy,
>>
>> I think your example is a good one to explain exactly why it *won't*
>> work like that.  The problem is that it gives too much weight to the
>> author and not enough to the context.  For example, for a video with
>> captions your example gives the metadata
>>
>> visual + auditory
>> visual
>> visual + text
>>
>> as describing the modalities "required to be able to
>> comprehend and use a resource."
>>
>> This is *as the author sees it*.
>> So what other ways are there to see it ?
>>
>> Well, what about using the auditory mode alone? (I do this very often
>> with the kind of videos that are just talking heads - the BBC don't
>> think of that usage but I still do it - I even turn the brightness
>> down to black to save battery while doing so.) Similarly for text.
>> So the full set of accessModes required to understand it here would
>> need to include
>>
>> auditory
>> text
>>
>> But authors don't think of these things - only users do. And in
>> general we won't think of all the ways people might want to use the
>> content. Expanding all the accessModes exhaustively would be pointless,
>> as an algorithm could do that trivially.  And even now, I just went
>> back, re-read it and realised I hadn't thought of "auditory + text".
>> This, it seems to me, has been a central point of our work over the
>> years - NOT to project onto users how they should use things but
>> instead to give users control.  Authors' ideas of how to use stuff are
>> not giving users control, in my view.
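
(The "trivially" point sketched in Python, just to show there is nothing
for an author to add here - whether a given combination actually conveys
the full content is exactly the judgement I don't want us encoding:)

    from itertools import combinations

    # Given only the access modes physically present, a user agent can
    # enumerate every non-empty combination by itself; nothing here needs
    # authoring.
    def expand(modes):
        modes = sorted(modes)
        return [set(c) for r in range(1, len(modes) + 1)
                       for c in combinations(modes, r)]

    for combo in expand({"visual", "auditory", "textual"}):
        print(" + ".join(sorted(combo)))   # prints 7 combinations in all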
>>
>> Charles (Myers) - regarding the point ascribed to me in the other email
>> as the need for a common data model - I'm afraid I haven't expressed
>> myself clearly enough; my point was subtly different from how it was
>> reported. My point was that yes, we need a common data model, but we
>> should use different fields for the physical access modes present and
>> the author's view of how that resource "should be used".  For example,
>> if we *do* decide to provide author-determined-usage info (which I
>> don't support, but ..) then, using this same example of Liddy's, the
>> metadata might be something like
>>
>> accessMode = visual
>> accessMode = auditory
>> accessMode = text
>>
>> accessModeUsage = visual + auditory
>> accessModeUsage = visual
>> accessModeUsage = visual + text
>>
>> This is repetitious, carries redundant information and doesn't look
>> good - there may be more economical ways to express it - but mixing the
>> accessMode usage and the physical accessModes in the same fields will
>> lead people towards the mixed model, i.e. we will have to explain the
>> "+" calculus of values relating to accessMode and this will
>> overcomplicate the simple description. So my point was: even though
>> the two different ways to use accessMode *could* use the same fields,
>> i.e. they could just be alternative ways to use those fields, we
>> should still separate them.  The fact is that the meaning of, say,
>> "visual" is different in each case - in one it means "physically
>> present" and in the other it means "how I think you might use it".
>> There is no case in my mind for using the same fields for these very
>> different purposes.
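
(A rough sketch of what keeping them apart might look like - accessMode
and accessModeUsage are only illustrative names here, nothing settled:)

    # Illustrative only: accessMode / accessModeUsage are placeholder names.
    resource = {
        # what is physically present in the resource
        "accessMode": ["visual", "auditory", "text"],
        # the author's view of how the resource "should be used"
        "accessModeUsage": [
            ["visual", "auditory"],
            ["visual"],
            ["visual", "text"],
        ],
    }

    # A consumer that only cares about what is physically present can ignore
    # accessModeUsage entirely; if both meanings shared one field, every
    # consumer would first need the "+" calculus just to tell them apart.
    print(resource["accessMode"])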
>>
>> andy
>>
>>> Madeleine,
>>> you seem to have misunderstood me.
>>>
>>> I am saying, as Charles Nevile also understands it, I believe, that when
>>> stating the accessMode, one states what is required to be able to
>>> comprehend and use a resource.
>>>
>>> If there is a range of things available, say video (incl. audio) and
>>> captions, some users will use the audio and some the captions - correct?
>>> In this case, the video could have accessModes:
>>>
>>> visual + auditory
>>> visual
>>> visual + text
>>>
>>> A user who wants captions would probably have visual + captions in their
>>> profile. It is easy to infer that they want the video with the captions
>>> on the screen (however they get there) - they might also get the sound
>>> but as they have not included it, that is not an accessMode they are
>>> asking for. Clearly they will want this resource - no?
>>>
>>> A person who does not have vision might also be interested in this
>>> resource. They will probably say their accessModes are text and auditory
>>> and so they are not likely to want this resource - they have not
>>> included visual and the resource is, apparently, incomplete without it.
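
(The inference Liddy describes, sketched in Python: a resource suits a
user if at least one of its complete accessMode sets is covered by the
modes in the user's profile. Names are illustrative only.)

    # Illustrative sketch of the matching described above.
    def usable(resource_mode_sets, user_modes):
        return any(mode_set <= set(user_modes) for mode_set in resource_mode_sets)

    # Liddy's video-with-captions example: every complete set includes visual.
    video_sets = [{"visual", "auditory"}, {"visual"}, {"visual", "text"}]

    print(usable(video_sets, {"visual", "text"}))    # True  - the captions user
    print(usable(video_sets, {"text", "auditory"}))  # False - no set works without visual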
>>>
>>> What is different about this?  I think I was just adding, in my email,
>>> that this can be done in a way that ensures the resource description
>>> and the user-needs statements of accessModes do not get concatenated
>>> (which would make them useless), and that preventing this is possible -
>>> contrary to what normally happens with metadata.
>>>
>>> Liddy
>>>
>>> On 30/10/2013, at 3:32 AM, Madeleine Rothberg wrote:
>>>
>>>> Liddy,
>>>>
>>>> I can't write a full response because I am in another meeting, but I
>>>> want to stress that the idea you have raised of a minimum complete set
>>>> of accessModes is useful but should not replace access mode as
>>>> previously defined. I believe we must retain the access mode field
>>>> that lists the access modes a resource uses to communicate. When
>>>> alternatives are added or linked then more access mode combos become
>>>> viable and that can feed into the list of various minimum complete
>>>> sets of accessModes.
>>>>
>>>> Madeleine
>>>>
>>>> On 2013-10-29, at 12:04 PM, "Liddy Nevile" <liddy@sunriseresearch.org>
>>>> wrote:
>>>>
>>>>> My comments...
>>>>>
>>>>> Charles Nevile ...
>>>>> Charles raised the question of whether these attributes are a
>>>>> declaration of conformance (as in alternativeText means that "all of
>>>>> the photographs and other media have alternate text") or just whether
>>>>> the author of the content (or adapted version of the content) used
>>>>> alternate text on the significant parts of the content to the best of
>>>>> their abilities. The intent of these is the latter. Since this
>>>>> metadata is being added by people who care about accessibility, we
>>>>> have to trust that they will apply their best efforts before they'd
>>>>> add the attribute.
>>>>>
>>>>> It has long been a tradition in the DC world of metadata to assume
>>>>> that people have good intentions - they don't always, but those who
>>>>> do make the trust worthwhile...
>>>>>
>>>>> Then there is a discussion about mediaFeature.... I am developing
>>>>> some fairly strong feelings about this. First, I don't think
>>>>> 'mediaFeature' is anything like as good a name as 'accessFeature',
>>>>> given that we are mostly describing things that are done to increase
>>>>> accessibility - and we have accessMode...  Then Jutta wanted us to
>>>>> add in 'adaptation' or the equivalent. I think that a feature implies
>>>>> something special, but taking Jutta's position it might be better to
>>>>> have them called accessAdaptation - i.e. for things like captions
>>>>> etc.? Certainly I would not want both feature and adaptation in a
>>>>> single name - that would be introducing redundancy, I think...
>>>>>
>>>>> Next, I think the idea that we should label things because someone
>>>>> tried to fix them is, to be honest, absurd. We are asking people to
>>>>> make assertions about the resource, or their needs, not to tell us
>>>>> how nice they are. An assertion, made in good faith, should mean that
>>>>> something has been achieved - e.g. alt tags for all images, etc.
>>>>>
>>>>> Next, I want us to be clear about accessMode. As Charles Nevile and I
>>>>> understand it, this will be a set of assertions that tell us the
>>>>> minimum complete sets of accessModes that will convey all the
>>>>> content of a resource. So we might get visual + text, visual + audio,
>>>>> text, etc. ... i.e. more than one statement. This can be done, and it
>>>>> involves a trick - generally, the value of RDF is that if I make an
>>>>> assertion and then you add another, both bits of info can be put
>>>>> together to make a richer statement. In this case, we certainly do
>>>>> not want that to happen! In RDF the merging of statements can be
>>>>> avoided by using what is known as a 'blank node'.
>>>>> I am writing all this because I think both being clear about the use
>>>>> of accessMode and knowing that it will work is really important :-)
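
(One way the blank-node trick could look, sketched with Python's rdflib -
the exact modelling below is only an illustration, not something the group
has agreed:)

    from rdflib import BNode, Graph, Literal, Namespace, URIRef

    SCHEMA = Namespace("http://schema.org/")
    g = Graph()
    talk = URIRef("http://www.ted.com/talks/carolyn_porco_flies_us_to_saturn.html")

    # Group each minimum complete set of access modes on its own blank node,
    # so the individual values cannot be merged into one flat list when
    # statements from different sources are combined.
    for mode_set in (("visual", "auditory"), ("visual", "textual"), ("textual",)):
        group = BNode()
        g.add((talk, SCHEMA.accessMode, group))   # property use is illustrative
        for mode in mode_set:
            g.add((group, SCHEMA.accessMode, Literal(mode)))

    print(g.serialize(format="turtle"))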
>>>>>
>>>>>
>>>>> On 23/10/2013, at 1:53 AM, Charles Myers wrote:
>>>>>
>>>>>> I'm back and caught up on accessibility metadata from the calls of
>>>>>> two weeks ago.  The agenda for today's meeting can be seen below and
>>>>>> at
>>>>>> https://wiki.benetech.org/display/a11ymetadata/Next+Accessibility+Metadata+Meeting+Agenda
>>>>>>
>>>>>>
>>>>>>
>>>>>> I also wrote up our minutes from the last two meetings at
>>>>>> https://wiki.benetech.org/pages/viewpage.action?pageId=58853548 and
>>>>>> the issue tracker has been updated on the mediaFeature issue:
>>>>>> http://www.w3.org/wiki/WebSchemas/Accessibility/Issues_Tracker#What_is_the_goal_of_mediaFeature.3F_.28conforming_or_informational.29_Do_we_have_this_right.3F
>>>>>>
>>>>>>
>>>>>>
>>>>>> Note that we have a new conference call number this week. And we
>>>>>> will be back on a regular weekly schedule from this point on.
>>>>>> October 22, 2013 Accessibility Metadata working group call
>>>>>> Weekly Meeting
>>>>>> Schedule: The next call will be Tuesday, October 22, 9:00 AM PDT
>>>>>> (California), 12:00 PM EDT (Ontario, New York), 5:00 PM in London,
>>>>>> 6:00 PM on the continent, and 3:00 AM in Australia
>>>>>> Conference call: +1-866-906-9888 (US toll free), +1-857-288-2555
>>>>>> (international), Participant Code: 1850396#
>>>>>> Etherpad: (10/22/2013)
>>>>>> IRC: Freenode.net #a11ymetadata (although more of the collab seems
>>>>>> to happen in the etherpad)
>>>>>> The goal of the call will be to review the open issues on the W3C
>>>>>> wiki, get to closure on these issues, and work them through with
>>>>>> schema.org representatives.  See the issues and accessMode/mediaFeature
>>>>>> matrix. There will also be a discussion of the use of these
>>>>>> attributes for search, as shown in the blog article.
>>>>>>
>>>>>> The next call will be October 22 and then will settle into weekly
>>>>>> meetings as required.
>>>>>>
>>>>>> The public site is http://www.a11ymetadata.org/ and our twitter
>>>>>> hashtag is #a11ymetadata.
>>>>>>
>>>>>> Overall Agenda
>>>>>> New Business - We will start discussing this promptly at the top of
>>>>>> the hour.
>>>>>>
>>>>>>   • mediaFeature - our goal is to get agreement on the mediaFeature
>>>>>> properties, as noted in the issue list.  As noted in the last call's
>>>>>> minutes, we did a deep dive into visual and textual transform
>>>>>> features last time. I've edited the list down to reflect both the new
>>>>>> properties that we decided on last time and some of the
>>>>>> simplifications that come with the extension mechanism. I'd like to
>>>>>> reach a conclusion on those, both for the specific names and also
>>>>>> for the general framework, so that one can see the extension
>>>>>> mechanism.  I'd even like to propose that we segment this discussion
>>>>>> into two parts... agreement on the current properties and then
>>>>>> consideration of new properties (I want to see the discussion make
>>>>>> progress).
>>>>>>       • transformFeature - do we like that name (as against
>>>>>> "contentFeature")?
>>>>>>           • Finish discussion on visualTransformFeature and
>>>>>> textualTransformFeature
>>>>>>           • Consider auditoryTransformFeature (structural navigation
>>>>>> will be covered in textualTransform) and tactileTransform
>>>>>>       • Review contentFeature side of the mediaFeatures starting
>>>>>> from the proposed table in the issues list
>>>>>>           • textual (note the removal of describedMath) -
>>>>>> alternativeText, captions, chemML, laTeX, longDescription, mathML,
>>>>>> transcript
>>>>>>           • tactile (note the simplification of braille to be the
>>>>>> extended form) - braille, tactileGraphic, tactileObject
>>>>>>           • auditory - audioDescription
>>>>>>           • visual - signLanguage, captions/open
>>>>>>   • ATCompatible
>>>>>>   • ControlFlexibility and accessAPI (we'll be lucky if we get to
>>>>>> this point)
>>>>>>   • accessMode and the three proposals for the available access
>>>>>> modes (this is a topic for a future call)
>>>>>>   • is/hasAdaptation
>>>>>
>>>>
>>>
>>
>>
>>
>>
>> andy
>> andyheath@axelrod.plus.com
>




andy
andyheath@axelrod.plus.com
-- 
__________________
Andy Heath
http://axelafa.com

Received on Wednesday, 30 October 2013 13:26:12 UTC