Re: ALT as required attribute

As near as I can tell the question is not about what alt is good for, 
but whether there is any reason to worry about ensuring there is always 
an alt - for example, why use alt="" instead of just leaving out the 
alt attribute?

I give two answers here - one about the alt attribute in particular, 
and the other about the idea of validation in general.

alt attribute:

Always including it is a sign that you cared about getting it right. If 
you have thousands of images on a site, and the first dozen I look at 
have reliable alt text, then I will assume (pro tempore - I have seen 
too many sites that are overall badly designed or maintained to give 
total credence based on a couple of good pages) that where you have 
alt="" it is because that is appropriate. If you just leave it off, I 
assume you aren't sure how to code HTML properly, and suspect (pro 
tempore again - you might just be using bad tools) that your site 
overall reflects this lack of fundamental knowledge.

As an author, this validation of alt attributes is your friend. It 
gives you an easy tool to find out which things on your site haven't 
been checked yet: the process error of not determining the appropriate 
alternative for some image is matched by the validation error of there 
being no alt attribute at all, which is easy to test for.
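The "easy to test for" point can be sketched in a few lines using 
Python's standard html.parser module. This is only an illustration of 
the idea, not a real validator, and the image filenames are invented - 
it distinguishes a genuinely missing alt from a deliberate alt="":

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects img elements that omit the alt attribute entirely.

    An image with alt="" is treated as deliberately marked decorative,
    so it is not reported - only a truly absent alt is flagged.
    """

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            # alt="" still appears as a key in attr_map, so only a
            # completely absent alt attribute reaches this branch.
            if "alt" not in attr_map:
                self.missing.append(attr_map.get("src", "(no src)"))

checker = MissingAltChecker()
checker.feed('<p><img src="logo.png" alt="Company logo">'
             '<img src="spacer.gif" alt="">'
             '<img src="photo.jpg"></p>')
print(checker.missing)  # only photo.jpg lacks an alt attribute
```

A real validator does the same test against the DTD, but the principle 
is identical: absence of the attribute is mechanically detectable, 
while a wrong or empty value is not.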

This principle is picked up by the Authoring Tool Accessibility 
Guidelines, which prohibit inserting default alt text for this reason. 
See http://www.w3.org/TR/ATAG10/atag10.html#check-no-default-alt

The reason to be concerned about validation in general is one of 
principle for HTML, and less so for XHTML. Browsers were historically 
built to handle whatever assorted tags authors might write in something 
that looked like HTML. This left authors thinking it didn't really 
matter what they wrote, and authoring tool producers making software 
that produced any old rubbish.

For browser makers like Netscape and Microsoft, building massive pieces 
of software to run on desktop machines and oriented primarily towards 
visual rendering, this just meant adding more error-correction code. 
The story goes that mainstream browsers consist of a relatively small 
engine for rendering code that is valid according to their internal 
rules, and that most of the program exists to massage content found on 
the web into code the engine can process.

As people build other browsers, they want to avoid spending their lives 
writing error corrections, and actually put in some useful features 
instead. On small platforms such as mobile devices (outside the US this 
is a very large market, and it is growing everywhere, if not at the 
spectacular rates promised by its most hyped early promoters), or for 
special-purpose tools such as a braille note-taker or a communicator 
for someone who cannot speak easily, this is driven by machine 
constraints as well as programmer preferences. But to do this in the 
real world they have to rely on reasonably widespread adoption of 
standards.

In the XML world, processors are meant to stop if they run into an 
error. This provides the robustness required for things like automatic 
transactions - the old HTML rule of "ignore what you don't understand" 
is a bad way to tell a robot to work through instructions for heart 
surgery: if it doesn't understand everything, it shouldn't start the 
process. (A less extreme example is comparing privacy agreements to 
decide whether to submit your name and phone number...) Bringing HTML 
to the XML world through the development of XHTML means that XML 
parsers and systems can interpret a lot of existing Web content with 
only minimal change required, and that change can be automated in many 
cases. At the same time XHTML is close enough to HTML to be understood 
by legacy browsers with minimal changes. For a content management 
system it should be very easy to serve an application/xhtml+xml version 
of any document, and an automatically generated text/html version for 
old-fashioned browsers.
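The difference in strictness is easy to see with any conforming XML 
parser; here is a sketch using Python's standard library, with the 
fragments invented for illustration. The unclosed img is the kind of 
thing tag-soup browsers silently repair, but an XML processor must 
report and stop on:

```python
import xml.etree.ElementTree as ET

# Well-formed XHTML fragment: the empty img element is explicitly closed.
good = '<p><img src="logo.png" alt="Company logo"/></p>'

# Old-style HTML: the unclosed img is tolerated by tag-soup browsers,
# but it makes the </p> a mismatched close tag to an XML processor,
# which is required to stop rather than guess at a repair.
bad = '<p><img src="logo.png" alt="Company logo"></p>'

ET.fromstring(good)  # parses without complaint

try:
    ET.fromstring(bad)
except ET.ParseError as err:
    print("parser stopped:", err)
```

This is exactly the trade the XML rule makes: authors carry a small 
extra burden of well-formedness, and in exchange every processor, 
however small, can skip the error-correction code entirely.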

An important feature of standardisation is that the standard might not 
be the best possible approach to a particular problem, but there is 
value in the fact that we all use the same rules. Having an ongoing 
development process means that if we find we made bad decisions we can 
change them (I claim the switch in the CSS cascade order between CSS1 
and CSS2 is an example of doing this), but if we don't have a standard 
to begin with then we are left trying to guess what other people will 
do and follow that. For very simple stuff (like visually rendering 
HTML) that more or less works, but for using the Web at anything like 
its full potential it doesn't - the same companies that claim it isn't 
that important to use standardised HTML or XML are often the ones 
working very hard to standardise ways of providing and describing 
services over the Web, in recognition of the fact that interoperability 
is crucial.

my 2 cents worth

Chaals

On Tuesday, Feb 4, 2003, at 04:15 Australia/Melbourne, Terry Thompson 
wrote:

> I'm trying to grasp the implications of ALT being a required attribute
> in various if not all DTD's of both HTML and XHTML.

It is required in all XHTML DTDs, and in HTML 4 and up. I am curious 
about why it was not required in DTDs prior to HTML 4...

> If I load the following code in Netscape 7, IE 6, Opera 6.01, or Amaya
> 7.2, each of these browsers is forgiving of my having omitted an ALT
> attribute from my IMG element. This same code doesn't validate, but why
> should I be concerned about validation, if so many user agents 
> seemingly
> aren't enforcing the requirement?
>
--
Charles McCathieNevile           charles@sidar.org
Fundación SIDAR                       http://www.sidar.org

Received on Monday, 3 February 2003 21:45:07 UTC