unique titles / headings Re: 3.2

On Thu, 20 Nov 2003, Joe Clark wrote:

>> For example, I still do not agree that unique page titles can not be
>> level one, and I think there is a fair amount of agreement on this
>
>Let's see. You run a very large database-driven site whose pages do not
>exist until they are requested by the visitor. Please explain how every
>single page <title> can be unique, especially if the pages are search
>results.

CMN:
If titles for pages are stored (as they are on many large sites), then they can
simply be checked for duplicates - the unix command uniq does this (as one example).
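
For instance, a one-liner along these lines would list any duplicates (a
minimal sketch only, assuming the stored titles can be dumped one per line
into a file - titles.txt is just an illustrative name):

  # uniq needs sorted input; -d prints only the lines that repeat
  sort titles.txt | uniq -d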

However, as Joe says, the interesting case is when they are dynamically
generated.

Search-result titles are simple enough that a small child could work them out:

 "site foo search results for term1 term2 term3 phrase term4 term5, term6"

If the order of the terms is significant to the results, they should appear in
the order in which they were searched. If it is not, the terms/phrases can be
sorted with a tool such as the unix command sort.
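
As a sketch (assuming the terms arrive one per line - terms.txt is again just
an illustrative name, not part of any particular system):

  # sort the terms, then join them onto one line for the title
  sort terms.txt | tr '\n' ' '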

Generating this title is not difficult in many cases. For Google - a
well-known example - the required mechanism could be implemented with a short
set of sed scripts based on the URI of the document.
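
Something in this direction, for instance (a rough sketch only - the URI
format shown and the wording of the title are my assumptions, not Google's
actual mechanism):

  echo 'http://www.google.com/search?q=term1+term2+term3' |
    sed -e 's/^.*[?&]q=//' -e 's/&.*$//' -e 's/+/ /g' \
        -e 's/^/Google search results for /'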

I am not about to discuss the relative priority of the requirement. I just
want to point out that it is feasible in the cases I can imagine. (Asking for
uniqueness across the Web means adding information identifying the site -
easy, if it is useful to require uniqueness at that level.)

My experience in the Semantic Web (which deals with a related problem:
providing a human-readable label for a potentially infinite, and certainly
very large, number of objects produced in a totally decentralised process
where there is likely to be a lot of overlap) suggests that some contextual
information can be used if people really need things that are completely
unique, so long as the information is distinct enough within real use
contexts to avoid confusion.

cheers

Chaals

>> also important maybe level 1:  provide headings and linked text that are
>> unique and clear when read out of context
>
>Still an appallingly misguided and irrelevant concept. As has been
>demonstrated already, page authors cannot be expected to simultaneously
>write valid HTML and also write HTML that can be spontaneously remixed by
>some user agent or other. How many times do I have to tell you this before
>you believe it?

Once, with an example that is convincing and memorable. Perhaps I missed a
message that had such an example - do you happen to have a reference?

>Skill-testing question: If Freedom Scientific comes out with another
>whiz-bang feature, as they did with browsing by headings and links, will
>WAI WG obediently turn around and flirt with the idea of forcing authors
>to write their sites to facilitate this company-specific peccadillo?

Let me see. WCAG included a requirement for structured headings in 1999. JAWS
implemented the feature that uses it in 2001. Freedom Scientific invented a
"context-help" attribute (or some such) a couple of years ago. I don't see it
in W3C documents (although I understand the motivation, and the XForms
specification introduced the functionality around 1999 or 2000). WCAG
required marking up language changes in 1999. I forget when Freedom Scientific
introduced the matching functionality, but it was later, and they were not
the first to do so.

So I guess the answer is "it doesn't seem likely, unless there are good
reasons for it and it gets general standardisation support". Which means it
is hard to interpret as obediently "turning around".

Don't mind me, I'm just a historian who worked on this stuff for a while.
