Re: Accessibility - A perfect solution?

> 
> Now, admittedly, this is an imperfect solution.  Please refrain from any
> nitpicking of it to death.  It is presented as a theoretical solution.  As
> of today, I do not believe that the user agents 'send' any kind of
> identifying data with each file request.  Doesn't mean they won't in the
> future.  It is just an idea.

User Agents have sent:
- a string identifying themselves;
- a list of acceptable media types;
- a prioritised list of natural languages,

since the beginning of HTTP 1.0.
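
Put together, a minimal HTTP/1.0 request carrying all three might look
something like this (the browser name and header values are purely
illustrative, not those of any particular browser):

   GET /index.html HTTP/1.0
   User-Agent: ExampleBrowser/1.0 (X11; Linux)
   Accept: text/html, text/plain, image/gif, image/jpeg
   Accept-Language: en-gb, en;q=0.5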

The following, for example, is the user agent string for the version of
Lynx I have on my Linux machine:

Lynx/2.8.1rel.2 libwww-FM/2.14

This identifies two components (roughly the user interface and core
libraries).

Unfortunately, this feature was abused from very early on, so, for
example, Internet Explorer identifies itself as Netscape and sends its
true identity as a comment, because, when Netscape held the lead, people
started serving a fallback page to IE.  This is the great problem with
user agent information: it tends to be used to exclude rather than
include.  For example, because minority browser descriptions tend not to
be updated in the browsercap files used to indicate browser capabilities
to scripts generating dynamic pages, and these scripts sometimes refuse
to talk to a browser at all unless it meets certain requirements, many
people run Lynx claiming to be Netscape or IE.  (Also, browsercap files
can take a pessimistic view of capabilities - Lynx has some ability to
cope with frames, but browsercap will say no frames - and people may be
prepared to deal with a mangled table rather than have their request
rejected outright.)
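
To make the IE spoofing above concrete, an Internet Explorer 5 request
typically announces itself along these lines (exact version numbers
vary):

   User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)

The leading product token still claims to be Mozilla (Netscape); the
real identity appears only in the parenthesised comment, so naive
scripts that look only at the product token treat IE as Netscape.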

The Accept header that indicates valid media types has been more or less
completely ignored by site designers, so most browsers don't try to send
a sensible one.  There is also the problem that the header can get very
long if it lists every type the machine can handle, and the end user
generally has little control over it.
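
For instance, some graphical browsers just send a blanket

   Accept: */*

or a short list of image types followed by */*, which tells the server
essentially nothing about what the user can actually cope with.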

I haven't got a recent HTTP spec here, but I think you can prioritise
the media types, which in particular means that you can reject some by
giving them a score of zero.  I think it is also possible to use
wildcards, e.g.

   Accept: audio/*;q=0.0

might indicate that there is no point in sending you any sound material.
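
A fuller header along these lines (an illustrative value, not one any
current browser actually sends) would express an ordered preference:

   Accept: text/html, text/plain;q=0.5, audio/*;q=0

i.e. prefer HTML, take plain text at half the weight, and refuse audio
outright - a missing q value counts as 1.0.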

Accept-Language is possibly the best implemented of the three, although
even it is rarely acted on by servers.  Windows Update uses it, and so
does the Google search engine.  The protocol allows one to give floating
point scores to particular languages, but the big two browsers only allow
an order to be defined and assign the scores themselves.

(Because I noticed a USENET posting from Norway with a header that
indicated the use of Netscape configured, by default, for English only,
I tried Google with the Accept-Language header "no,en;q=0.5" - Norwegian
at the default weight of 1, English at half that - and got the Norwegian
page, confirming that the header can be used as a configuration test.
I must remember to reset mine to "en-gb,en;q=0.5", to prefer "real"
English :-).)

> A prediction I'll make is that any company producing this kind of in-process
> application would flat make tons of money.  And, if you need someone to be
> the developmental lead on it...well, you know how to reach me.  :)

Most web sites are designed by non-techies who only know HTML, or ASP
generation of HTML.  Doing this sort of content negotiation requires
knowledge of HTTP and of how to configure a web server properly, and
people running cheap web sites normally don't have access to the server
configuration at all.

Apache does have significant support for this, but the latest IIS
documentation I've seen suggests it would require relatively low level
ASP programming, rather than the declarative form used by Apache.
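
As a rough sketch of that declarative style (assuming hypothetical
per-language files named page.html.en and page.html.no sitting next to
each other), Apache's mod_negotiation only needs something like:

   # httpd.conf or .htaccess
   Options +MultiViews
   AddLanguage en .en
   AddLanguage no .no

A request for /page.html is then answered with whichever variant best
matches the Accept-Language header, with no scripting involved.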

When this came up before, it was said that the W3C was working on a
protocol to allow browsers to list individual capabilities in requests,
but I'd be surprised if the big two allowed users to customise them, and
it will take a long time before there is enough market penetration to
make a difference.

Received on Tuesday, 23 January 2001 19:01:48 UTC