Re: longdesc quality statistics

On Fri, 21 Sep 2012 18:03:39 +0200, David Singer <singer@apple.com> wrote:

>
> On Sep 21, 2012, at 7:03 , Charles McCathie Nevile  
> <chaals@yandex-team.ru> wrote:
>
>> TL;DR: I believe the "longdesc lottery" conclusion that a lot of  
>> longdescs were hopelessly bad, and that longdesc is often terrible
>> in top-X sites.
>> It roughly matches my research (and my expectations). I expect serious
>> careful research to show things getting better. I do not believe that  
>> data justifies the conclusion "so longdesc is broken and should be
>> removed".
>
> But…
>
> a) why would anyone now implement longdesc knowing that the descriptions  
> that they'd expose to users were, for the vast majority, 'hopelessly  
> bad'?

1. It's really quite easy.
2. I believe the situation was *worse* when JAWS implemented it, and I  
presume their rationale was user demand. They're not noted for randomly  
implementing things on a proactive basis...
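
(Regarding point 1 - for anyone who hasn't looked closely at the attribute,
here is a rough sketch of the markup a user agent has to deal with. The file
names are invented for illustration. longdesc just points to a URL holding a
long textual description, and the UA only needs to notice the attribute and
expose that URL, e.g. as a link or a context-menu item:)

   <!-- file names invented for this example -->
   <img src="sales-2012.png"
        alt="2012 sales chart"
        longdesc="sales-2012-description.html">

   <!-- sales-2012-description.html would contain the full prose
        description of the chart, e.g. the figures for each month -->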

> b) why would any end user needing more information bother to look at the  
> longdesc, knowing that the overwhelming majority of the time, they'd be  
> wasting their time getting something hopelessly bad?

Because the price of discovering something pointless and skipping it is  
low, and the value of finding something really helpful is extremely high.  
It's the same rationale I apply to email - despite a lot of spam, the  
useful content makes it worth having an email service.

>> And most developers are apparently not true believers, and
>> don't *test* the long descriptions they make.
>
> Right, we've had long discussions on having features that are hidden  
> from ordinary users -- and hence, from most web developers.

These are separate issues.

The hidden metadata problem is real, and increases the chance that a given  
longdesc (or alt, for that matter) will not be very good.

But not having an implementation to test with makes this problem  
significantly worse, because it makes it difficult to develop a workflow  
that incorporates a useful test. Such a test is something many professional  
developers can run, and it resolves the "I didn't see the problem in casual  
observation, so I won't worry about it" attitude that makes hidden metadata  
unreliable.

I *believe* (without careful research) that developers' use of iCab is  
roughly in line with general usage, that testing with Opera is a relatively  
small multiple of general usage, and that testing with screen readers is a  
fraction of normal usage.

>> I note 15 years ago when I began working seriously in accessibility, the
>> alt attribute was something people generally thought was unreasonable,
>> couldn't be done, was almost always missing, and when it was present it
>> was almost always done so badly as to be a waste of time. I would
>> characterise the situation now as about 10 times as good - maybe a
>> majority of people accept it as a good idea, it is often present, and in
>> many cases it isn't useless (although I honestly doubt that good use of
>> alt has become the statistical norm, the almost 2 decades since it was
>> introduced have seen significant improvement).
>
>
> Is that perhaps because of ordinary browsers exposing it during hover,  
> do you think?

Partially. When that used to happen it was certainly a cause of all sorts  
of rubbish being put into alt. But the improvements came not only from  
getting rid of that behaviour, but also from a general improvement in  
developers' understanding and knowledge.

cheers

Chaals

-- 
Charles McCathie Nevile - Consultant (web standards) CTO Office, Yandex
       chaals@yandex-team.ru         Find more at http://yandex.com

Received on Friday, 21 September 2012 16:53:36 UTC