Multiple versions of a web page

Hi,

FYI  -  my recent posting to CHI-WEB.  It doesn't address the problems
that some people have with text-only/text-rich web sites.  The
understanding of these needs is still evolving, and I was reluctant to
bring it up at CHI-WEB.

Scott


---------------------------------------------------------------------------

Hi,

I'll try to respond to several messages in this one note.

Someone brought up the issue of "Separate is Unequal".  There is no
consistent view of this within the disabled community.  Some people
have very strong philosophical positions on the issue.  Other people
are more pragmatic.  For example, stairs and ramps are of equal value:
stairs are more efficient for people who can walk, while ramps are more
efficient for people who use wheelchairs.  One is not necessarily
any better than the other.  Nor would it be appropriate to require that
all people use only one, e.g. ramps.

A very major concern about multiple versions of a web page is making
sure that all versions carry the same up-to-date information.  I
believe that this problem could be addressed by using various tools or
technologies, and I've been looking at three levels of approaches for
keeping multiple versions of the same web page in sync.

The simplest level is where a standard web page is created, but the
developer embeds directives in the web page that guide an 'extractor',
a tool which derives an alternate version of the page.

The second level is where the server works with self-configuring web
pages, which use conditionals, macros, symbols, etc., to generate
different versions of themselves (sketched below).

The third level is where XML and XSLT are used to create the various
versions of a web page (also sketched below).

The simplest level has the advantage that it can probably be easily
learned by web designers without their needing to be programmers.  The
third level takes advantage of the power of XML/XSLT, but its drawback
is that some programming background is almost required.  The
self-configuring web pages have some of the simplicity of the first
level, but avoid the complexity of implementing XML/XSLT at the third
level.
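
Since levels two and three don't get any more detail in this note, here
are quick sketches of both.  For the second level, a small Perl script
on the server could expand conditional blocks embedded in the page.
The <!-- #if version="..." --> directive syntax is made up purely for
illustration:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Sketch of a preprocessor for self-configuring pages.  The page
    # marks version-specific regions with (made-up) directives:
    #
    #     <!-- #if version="universal" -->
    #     ... markup used only in the universal version ...
    #     <!-- #endif -->
    #
    # Usage:  preprocess.pl universal < page.src > universal.html
    my $version = shift @ARGV || 'graphical';
    my $emit = 1;

    while (my $line = <STDIN>) {
        if ($line =~ /<!--\s*#if\s+version="([^"]*)"\s*-->/) {
            # Keep the block only if it matches the requested version.
            $emit = ($1 eq $version);
        } elsif ($line =~ /<!--\s*#endif\s*-->/) {
            $emit = 1;
        } elsif ($emit) {
            print $line;
        }
    }

The server would run the script once per requested version, so every
version comes from the one source file and they can't drift apart.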
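
For the third level, the content would live in a single XML file, and
the server would apply a different XSLT stylesheet for each version.
Assuming the CPAN modules XML::LibXML and XML::LibXSLT, and placeholder
file names, the transform step might look like:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use XML::LibXML;
    use XML::LibXSLT;

    # One XML source, several stylesheets (universal.xsl, graphical.xsl,
    # etc.):  every version is a transform of the same content.
    my $parser = XML::LibXML->new();
    my $xslt   = XML::LibXSLT->new();

    my $source     = $parser->parse_file('page.xml');
    my $style_doc  = $parser->parse_file('universal.xsl');
    my $stylesheet = $xslt->parse_stylesheet($style_doc);

    my $result = $stylesheet->transform($source);
    print $stylesheet->output_string($result);

Writing the stylesheets themselves is where the programming background
mentioned above comes in.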



Here's a little more detail on the first-level approach.


First, I'm going to be looking at accessibility from the point of view of:

    Can a disabled person use a piece of technology with as much
    efficiency, ease of use and accuracy as that experienced by a
    non-disabled person using the technology?

rather than:

    Can the person with a disability use a piece of technology?

Traditionally, people have looked at accessibility from the second
point of view.  I believe the more appropriate way to look at
accessibility is from the first.


I'm also going to propose that a universal web page be a basic web page
without a lot of frills that can be used by a variety of people.  (I'm
not wedded to the name "universal web page", but am open to other
names.)

A universal web page would have a pretty standard linear format like:

    TITLE

    INTRODUCTION

    INDEX OF LINKS TO SECTIONS OF WEB PAGE

    SECTION

    SECTION

    SECTION

      .
      .
      .


Each section of a universal web page would have the same basic format:

    Section name

    Link to previous section, link to index of sections, link to next section

    Section body


In addition, the section body would primarily contain text.

This structure would have several advantages.  Because it makes little
use of frills, a universal web page can be used with a variety of
browsers.  Also, the bandwidth required would be low.

The general structure would especially benefit blind users.  The index
of sections at the beginning would present an overview of the web page,
which would help the user understand it.  The linear format would also
help with understanding the organization of the web page.  Together,
the linear format, the index of links to sections, and the ability to
link from section to section provide simple but fairly complete
navigation of the web page.


The idea of the 'extractor' is that it is a tool used to extract a
universal web page from a standard web page.  The tool could be run by
the web page designer, or by a server to automatically create the
'universal' version of a web page.  (The extractor does NOT live on the
user's computer.)

The key idea is that the web page designer includes some directives
for the extractor in the web page, in addition to the HTML.


This is going to get a little theoretical now.  Suppose that there is
a universal web page (UWP) whose design is pretty accessible.  Now
suppose that the web page designer decides that he/she wants to make
it more visually appealing by using tables to put the various sections
of the UWP into a more interesting layout.

By using tables, the web page designer has introduced accessibility
problems and created more work for himself/herself.  The web page
designer could use CSS instead, but that can be a lot more work than
using tables.  The problem with tables and accessibility is that the
web page designer would then have to learn how tables are linearized
and then run various tests.

However, suppose that when the web page designer creates the web page,
additional information is stored in the web page to direct the extractor
on how to create a universal web page.  For example, the web page designer
could include a special comment to the extractor like:

    <TABLE>
      <TR>
        <TD>
          .
          .
          .
        </TD>
        <TD>
          <!--  universal-extractor:  begin-extract  sectionname="Main article"  priority=4  -->
          <P>
          This article discusses
            .
            .
            .
          <!--  universal-extractor:  end-extract  -->
        </TD>
      </TR>
    </TABLE>

This pair of comments tells the extractor to extract the text contents
of the cell and make them a section of the UWP.

The combination of the special comments and the extractor avoids the
problem of the web page designer having to figure out how tables will
be linearized.

A number of extractor directives could be specified.  For example:

    <!--  universal-extractor:  include

    -->

    <!--  universal-extractor:  begin-ignore  -->
    <!--  universal-extractor:  end-ignore  -->

(The extractor directives could also be expressed as tags rather than
comments, but that complicates the process.)

I think that the extractor approach could be implemented for some web
sites by using something like a Perl script.
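
As a rough sketch of what such a script might look like, here is one
that handles only the extract and ignore directives, strips markup
crudely with a regular expression, and is meant only to show the idea:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Sketch of an extractor:  reads a standard web page on stdin,
    # collects the regions between begin-extract/end-extract directives
    # (skipping begin-ignore/end-ignore regions), and writes a universal
    # web page in the linear format described earlier.
    # Usage:  extractor.pl < standard.html > universal.html
    my (@sections, $name, $body, $extracting, $ignoring);

    while (my $line = <STDIN>) {
        if ($line =~ /universal-extractor:\s*begin-extract\s+sectionname="([^"]*)"/) {
            ($extracting, $name, $body) = (1, $1, '');
        } elsif ($line =~ /universal-extractor:\s*end-extract/) {
            push @sections, [ $name, $body ];
            $extracting = 0;
        } elsif ($line =~ /universal-extractor:\s*begin-ignore/) {
            $ignoring = 1;
        } elsif ($line =~ /universal-extractor:\s*end-ignore/) {
            $ignoring = 0;
        } elsif ($extracting and not $ignoring) {
            $line =~ s/<[^>]*>//g;    # crude markup stripping
            $body .= $line;
        }
    }

    # Emit the universal web page:  index of links, then the sections,
    # each with previous/index/next navigation links.
    print qq{<H1><A NAME="index">Index of sections</A></H1>\n<UL>\n};
    for my $i (0 .. $#sections) {
        print qq{<LI><A HREF="#sec$i">$sections[$i][0]</A></LI>\n};
    }
    print "</UL>\n";
    for my $i (0 .. $#sections) {
        print qq{<H2><A NAME="sec$i">$sections[$i][0]</A></H2>\n};
        print qq{<A HREF="#sec@{[$i - 1]}">Previous</A>\n} if $i > 0;
        print qq{<A HREF="#index">Index</A>\n};
        print qq{<A HREF="#sec@{[$i + 1]}">Next</A>\n} if $i < $#sections;
        print "<P>\n", $sections[$i][1];
    }

A real extractor would want a proper HTML parser and support for the
other directives (include, the priority attribute, etc.), but even this
much spares the designer from worrying about how tables linearize.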

Scott
