RE: Multiple versions of a web page

I agree that the quote: "Can a disabled person use a piece of technology
with as much efficiency, ease of use and accuracy as that experienced by a
non-disabled person using the technology?" is idealistic and a goal to
strive for. But the key word here is strive. 

Most developers would run screaming in the other direction if faced with
this statement as a requirement. A statement more along the lines of:

    A person with a disability should be able to use the technology,
    preferably with the same ease of use, efficiency and accuracy as a
    non-disabled person.

(I'm not happy with the word "should"; it doesn't sound enough like a
requirement, but you get the idea.) This sets up the goal, but doesn't
confine a developer to it.

Clients want websites that are visually interesting as well as usable, and
they aren't willing to give up one for the other. So solutions that provide
both are necessary.

I think there may be some easier ways to maintain multiple versions of a
web page without knowing Perl or other programming languages, and without
making it a maintenance nightmare. Server-side includes may not be an
option for everyone, but they are very powerful. Consider a page from the
graphics version of the website and a page from the text-only version as
"templates". All the navigation for the templates is done via includes, so
there are only a couple of files to maintain. The content itself is an
include file. To change the content you update the content include, and
both the graphics version and the text-only version are updated. Granted,
it requires a little extra effort to set up two templates, but the
text-only version is so basic that it isn't excessive; a rough sketch of
the two templates follows below. I also think that the third, XML-based
level that Scott mentioned is where this will all eventually end up: a
complete separation of content from presentation.
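
For example, the two templates might look something like this (assuming
Apache-style server-side includes; the file and directory names are just
made up):

    graphics.shtml (the visually rich template):

        <TABLE WIDTH="100%">
          <TR>
            <TD><!--#include virtual="/includes/nav-graphics.html" --></TD>
            <TD><!--#include virtual="/content/article.html" --></TD>
          </TR>
        </TABLE>

    text.shtml (the text-only template):

        <!--#include virtual="/includes/nav-text.html" -->
        <!--#include virtual="/content/article.html" -->

Updating /content/article.html then updates both versions at once.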

Eugenia


Eugenia Slaydon
Lead Content Developer
Beacon Technologies, Inc.
336-931-1295 ext. 225


-----Original Message-----
From: gian@stanleymilford.com.au [mailto:gian@stanleymilford.com.au]
Sent: Wednesday, December 19, 2001 9:44 PM
To: w3c-wai-gl@w3.org
Subject: RE: Multiple versions of a web page


Hi,

Not all users are benefited by text-only or text-rich sites. It is
difficult to please everyone, and thus the quote:

 Can a disabled person use a piece of technology with as much
    efficiency, ease of use and accuracy as that experienced by a
    non-disabled person using the technology?

is, I believe, idealistic and is also our ideology (and rightly so), but
we will need to settle for people just being able to access the
information.

-----Original Message-----
From: phoenixl [mailto:phoenixl@sonic.net]
Sent: Thursday, 20 December 2001 1:39 PM
To: w3c-wai-gl
Subject: Multiple versions of a web page


Hi,

FYI  -  my recent posting to CHI-WEB.

Scott


---------------------------------------------------------------------------

Hi,

I'll try to respond to several messages in this one note.

Someone brought up the issue of "Separate is Unequal".  There is no
consistent view of this within the disabled community.  Some people take
a very strong philosophical position on it.  Other people are more
pragmatic.  For example, stairs and ramps are of equal value.  Stairs are
more efficient for people who can walk, while ramps are more efficient
for people who use wheelchairs.  One is not necessarily any better than
the other.  Nor would it be appropriate to require that all people use
only one, e.g. ramps.

A very, very major concern about multiple versions of a web page is
being sure that all versions have the same updated information.  I
believe that this problem could be addressed by using various tools or
technologies and I've been looking at three levels of approaches for
keeping multiple versions of the same web page in sync.

The simplest level is where a standard web page is created, but the
developer includes information in the web page which guides an
'extractor' that generates an alternate version of the web page.

The second level is where the server works with self-configuring web
pages which use conditionals, macros, symbols, etc., to create different
versions of themselves.

The third level is where XML and XSLT are used to create the various
versions of a web page.

The simplest level has the advantage that it can probably be easily
learned by web designers without their needing to be programmers.  The
third level takes advantage of the power of XML/XSLT, but its drawback
is that some programming background is almost required.  The
self-configuring web pages have some of the simplicity of the first
level, but avoid the complexity of implementing XML/XSLT at the third
level.
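
Just to give a flavour of the second level, a self-configuring page
could use something like Apache-style SSI conditionals (the variable
name and file names here are purely illustrative):

    <!--#set var="version" value="text" -->

    <!--#if expr="$version = 'text'" -->
      <!--#include virtual="/includes/nav-text.html" -->
    <!--#else -->
      <!--#include virtual="/includes/nav-graphics.html" -->
    <!--#endif -->

The same source file can then produce either version, depending on how
the variable is set.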



Here's a little more detail on the first level of the approach.


First, I'm going to be looking at accessibility from the point of:

    Can a disabled person use a piece of technology with as much
    efficiency, ease of use and accuracy as that experienced by a
    non-disabled person using the technology?

rather than:

    Can the person with a disability use a piece of technology?

Traditionally, people have looked at accessibility from the second point
of view.  I believe the more appropriate way to look at accessibility
is from the first point of view mentioned.


I'm also going to propose that a universal web page be a basic web page
without a lot of frills that can be used by a variety of people.  (I'm
not wedded to the name "universal web page", but am open to other
names.)

A universal web page would have a pretty standard linear format like:

    TITLE

    INTRODUCTION

    INDEX OF LINKS TO SECTIONS OF WEB PAGE

    SECTION

    SECTION

    SECTION

      .
      .
      .


Each section of a universal web page would have the same basic format:

    Section name

    Link to previous section, link to index of sections, link to next
    section

    Section body


In addition, the section body would primarily contain text.
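
Just to make the format concrete, a bare-bones skeleton of a universal
web page might look something like this (the particular tags and anchor
names are only illustrative):

    <HTML>
    <HEAD><TITLE>Page title</TITLE></HEAD>
    <BODY>
    <H1>Page title</H1>
    <P>Introduction ...</P>

    <H2><A NAME="index">Sections</A></H2>
    <UL>
    <LI><A HREF="#sec1">First section</A>
    <LI><A HREF="#sec2">Second section</A>
    </UL>

    <H2><A NAME="sec1">First section</A></H2>
    <P><A HREF="#index">Index of sections</A>,
       <A HREF="#sec2">next section</A></P>
    <P>Section body, mostly text ...</P>

    <H2><A NAME="sec2">Second section</A></H2>
    <P><A HREF="#sec1">Previous section</A>,
       <A HREF="#index">index of sections</A></P>
    <P>Section body ...</P>
    </BODY>
    </HTML>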

This structure would have several advantages.  Little use of frills
means that the universal web page can be used with a variety of
browsers.  Also, the bandwidth needed to deliver it wouldn't be very
high.

The general structure would benefit blind users.  The index of
sections at the beginning would present an overview of the web page,
which would help in understanding the web page.  The linear format
would also help with understanding the organization of the web page.
The linear format, the index of links to sections and the ability
to link from section to section provide simple, but fairly complete,
navigation of the web page.


The idea of the 'extractor' is that it is a tool used to extract a
universal web page from a standard web page.  The tool could be used by
the web page designer or could also be used by a server to automatically
create the 'universal' version of a web page.  (The extractor does NOT
live at the user's computer.)

The key issue is that the web page designer includes some directives
to the extractor in the web page in addition to the HTML.


This is going to be getting a little theoretical now.  Suppose that
there is a universal web page (UWP) whose design is pretty
accessible.  Now suppose that the web page designer decides that
he/she wants to make it more visually appealing by using tables
to put the various sections of the UWP into a more interesting format.

By using tables, the web page designer has introduced accessibility
problems and created more work for himself/herself.  The web page
designer could use CSS, but that can be a lot more work than
using tables.  The problem with tables and accessibility is that the
web page designer would then have to figure out what linearization
is and then run various tests.

However, suppose that when the web page designer creates the web page,
additional information is stored in the web page to direct the extractor
on how to create a universal web page.  For example, the web page
designer could include a special comment to the extractor like:

    <TABLE>
      <TR>
        <TD>
        .
        .
        .
        </TD>
        <TD>
          <!--  universal-extractor:  begin-extract
                sectionname="Main article"  PRIORITY=4  -->
          <P>
          This article discusses
                .
                .
                .
          <!--  universal-extractor:  end-extract -->
        </TD>
        
The comment tells the extractor to extract the text contents of the cell
and make it a section on the UWP.

The combination of the special comments and the extractor avoids the
problem of the web page designer having to figure out how tables will
be linearized.
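
For the cell above, the extractor might then emit a section along these
lines into the universal page (purely an illustration; the navigation
links are shown as placeholders):

    <H2>Main article</H2>
    <P>Previous section | Index of sections | Next section</P>
    <P>This article discusses ... </P>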

A number of extractor directives could be specified.  For example:

    <!--  universal-extractor:  include

    -->

    <!--  universal-extractor:  begin-ignore  -->
    <!--  universal-extractor:  end-ignore  -->

(The extractor directives could also be TAGS, but then that complicates
the process.)

I think that the extractor approach could be implemented for some web
sites by using something like a Perl script.
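
As a very rough sketch only (it handles just the begin-extract /
end-extract directive from the example above, and a real tool would also
need the include/ignore directives and the navigation links), such a
script might start out like:

    #!/usr/bin/perl
    # Rough sketch of the 'extractor'.  It only handles the
    # begin-extract/end-extract directive shown above; a real tool
    # would also handle include/ignore directives and build the
    # index and previous/next section links.
    use strict;

    local $/;                   # slurp the whole standard web page
    my $html = <>;

    my @sections;
    while ($html =~ m{<!--\s*universal-extractor:\s*begin-extract\s+
                      sectionname="([^"]*)"[^>]*-->
                      (.*?)
                      <!--\s*universal-extractor:\s*end-extract\s*-->
                     }gsx) {
        my ($name, $body) = ($1, $2);
        $body =~ s/<[^>]+>//gs;        # keep the text, drop the markup
        push @sections, [ $name, $body ];
    }

    print "<HTML><BODY>\n";
    for my $s (@sections) {
        print "<H2>$s->[0]</H2>\n<P>$s->[1]</P>\n";
    }
    print "</BODY></HTML>\n";

and might be run as something like:

    perl extractor.pl standard.html > universal.html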

Scott
