
Proposal for HTML5 drag and drop keyboard accessibility workflow

From: E.J. Zufelt <everett@zufelt.ca>
Date: Fri, 17 Sep 2010 05:33:11 -0400
Message-Id: <62E74394-86CC-4C82-89AD-A2A0FA07C7F6@zufelt.ca>
To: public-html-a11y@w3.org
Good morning,

Thanks to Gez for pointing me to the necessary documents to get started with this proposal.

Proposal for HTML5 drag and drop keyboard accessibility workflow

> My initial thoughts are that, since there appears to be a reasonably robust
> event model, it should not be too difficult to map it over to a
> device-agnostic implementation.  That being said, some of the language in
> the spec could be clarified to make it clear that a pointing device is not
> required.

> Looking at a copy and paste model, which I agree makes the most sense: copy
> is explicitly incorporated; however, move (which would traditionally be
> considered cut and paste) is perhaps a little unclear.  Nevertheless, I
> think it is reasonable to assume that a "move" action is equivalent to a
> copy, paste, delete sequence (dragstart, drop, dragend, respectively).
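The copy/paste/delete decomposition above can be sketched as follows. This is an illustrative outline only, not spec or UA code: plain objects stand in for the source element, drop target, and dataTransfer, and the handler names are my own.

```javascript
// Sketch (assumption, not from the spec): a "move" decomposed into the
// copy/paste/delete sequence described above. Data is captured at dragstart
// (copy), written at drop (paste), and the source is removed at dragend
// when the final dropEffect was "move" (delete).

function onDragStart(source, dataTransfer) {
  // "copy": capture the dragged data.
  dataTransfer.data = source.content;
  dataTransfer.effectAllowed = "copyMove";
}

function onDrop(target, dataTransfer) {
  // "paste": insert the data at the drop target.
  target.content = dataTransfer.data;
}

function onDragEnd(source, dataTransfer) {
  // "delete": a move is a copy whose source is removed afterwards.
  if (dataTransfer.dropEffect === "move") {
    source.content = null;
  }
}

// Walk through a full "move", with plain objects standing in for elements:
const src = { content: "item" };
const dst = { content: null };
const dt = { data: null, effectAllowed: "", dropEffect: "move" };
onDragStart(src, dt);
onDrop(dst, dt);
onDragEnd(src, dt);
// dst.content is now "item" and src.content is null.
```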

> 7.9.4 Drag-and-drop processing model
> When the user attempts to begin a drag operation, the user agent must first
> determine what is being dragged. If the drag operation was invoked on a selection, then it is the selection that is being dragged. Otherwise, it is the first
> element, going up the ancestor chain, starting at the node that the user
> tried to drag, that has the IDL attribute draggable set to true. If there is no
> such element, then nothing is being dragged, the drag-and-drop operation is
> never started, and the user agent must not continue
> with this algorithm.

> * User agents must, upon user-initiated action, provide a list of all
> elements with draggable="true".

> 1. The action for requesting a list of draggable elements must be
> device-agnostic.

> 2. The list must be navigable by a device-agnostic method.

> 2.1 Where there is a parent-child relationship between draggable elements,
> this should be reflected in the list (perhaps using a tree list).

> 3. UAs should provide some visual indication on the page for elements in the
> draggable list, when the list has focus.

> 4. UAs should provide a unique visual identification for the element in the
> draggable list that currently has focus in the list.

> 5. How do we communicate which element has focus to visually impaired users?
> For simple elements (files, anchors, images) this could be implicit
> (filename, anchor text, image alt attribute).  For complicated draggable
> elements this may need to be set explicitly using the title attribute
> (or possibly aria-label or aria-labelledby).

> 6. Anchors and images are implicitly draggable.  To reduce confusion for
> users, UAs may choose to implement faceted lists of draggable elements:
> one facet for explicitly draggable elements, one facet for anchors,
> and one facet for images.
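The faceted list in point 6 can be sketched as follows. This is an illustrative outline, not UA code; plain objects stand in for DOM elements (in a browser the candidates might be gathered with something like document.querySelectorAll('[draggable="true"], a[href], img')).

```javascript
// Sketch of point 6 above: facet the draggable candidates into explicitly
// draggable elements, implicitly draggable anchors, and implicitly
// draggable images. Plain objects stand in for DOM elements here.

function facetDraggables(elements) {
  const facets = { explicit: [], anchors: [], images: [] };
  for (const el of elements) {
    if (el.draggable === true) {
      facets.explicit.push(el);   // draggable="true" set by the author
    } else if (el.tagName === "A") {
      facets.anchors.push(el);    // anchors are implicitly draggable
    } else if (el.tagName === "IMG") {
      facets.images.push(el);     // images are implicitly draggable
    }
  }
  return facets;
}

const facets = facetDraggables([
  { tagName: "DIV", draggable: true },
  { tagName: "A", draggable: false },
  { tagName: "IMG", draggable: false },
]);
// facets.explicit, facets.anchors and facets.images each hold one element.
```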

> If the user agent determines that something can be dragged,
> a dragstart event must then be fired at the source node.
> * When a user selects a draggable element from the list provided by the UA,
> fire a dragstart event at the source node.

> * 1. Activating the dragstart event should require an explicit action from
> the user; navigating the list should not fire a dragstart event at each
> element that receives focus. The user should be required to press Enter, or
> to perform some other action, to select a draggable element, which in turn
> will fire the dragstart event.
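Point 1 above can be sketched as a small decision function: navigation keys only move focus, and only an explicit activation key starts the drag. The key choices and the fire callback are my own illustrations; the callback stands in for dispatching a dragstart event at the source node.

```javascript
// Sketch of point 1 above (illustrative, not UA code): only an explicit
// activation key fires dragstart; navigation keys never do.

function handleListKey(key, focusedElement, fireDragStart) {
  // Assumption: Enter or Space counts as the explicit "select" action.
  if (key === "Enter" || key === " ") {
    fireDragStart(focusedElement);  // explicit selection starts the drag
    return true;
  }
  // Arrow keys etc. only move focus; no dragstart is fired.
  return false;
}

const fired = [];
handleListKey("ArrowDown", "a", el => fired.push(el)); // navigation: nothing fired
handleListKey("Enter", "a", el => fired.push(el));     // activation fires dragstart
// fired is now ["a"].
```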

> During the drag operation, the element directly indicated by the user as the
> drop target is called the immediate user selection. (Only elements can be
> selected by the user; other nodes must not be made available as drop
> targets.) However, the immediate user selection is not necessarily the
> current target element, which is the element currently selected for the drop
> part of the drag-and-drop operation. The immediate user selection changes as
> the user selects different elements (either by pointing at them with a
> pointing device, or by selecting them in some other way). The current target
> element changes when the immediate user selection changes, based on the
> results of event listeners in the document, as described below.

> * User agents must, upon a user selecting a draggable element, provide a
> list of all elements with dragenter, dragover, and drop events defined.
> Note: this would be easier to implement if any target element could have a
> droppable="true" or droptarget="true" attribute set.
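The note above points at a real difficulty: the DOM provides no way to enumerate the event listeners attached to elements, whereas a declarative attribute can be queried directly. A sketch under that assumption (droppable is the hypothetical attribute proposed above, not part of HTML5; plain objects stand in for elements):

```javascript
// Sketch of the suggestion above: with a hypothetical droppable="true"
// attribute, drop targets could be enumerated with a simple query instead
// of introspecting event listeners (which the DOM does not allow).
// In a browser this might be document.querySelectorAll('[droppable="true"]').

function collectDropTargets(elements) {
  return elements.filter(el => el.droppable === true);
}

const targets = collectDropTargets([
  { id: "trash", droppable: true },
  { id: "header", droppable: false },
]);
// targets contains only the "trash" element.
```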

> 1. When the UA presents the list of drop targets the first target in the
> list will be selected by default.

> 2. When any drop target in the list receives focus (including the default
> drop target), the UA will:

> 2.1 Fire a dragenter event at the target.

> 2.2 Fire a dragleave event at the previous target (if one exists).

> 2.3 Fire a dragover event at the current target.

> 2.4 Inform the user of the default dropEffect (visually, by modifying the
> mouse cursor, and by notifying assistive technology).

> 2.5 If more than one possible dropEffect is available:

> 2.5.1 Provide the user with a mechanism (context menu or list) through
> which they can select an alternative dropEffect.
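Steps 2.1 through 2.3 above can be sketched as a pure function that, given the previous and the newly focused drop target, returns the events a UA would fire, in the order given (this is an illustration of the proposed workflow, not spec text).

```javascript
// Sketch of steps 2.1–2.3 above: the event sequence fired when focus moves
// from one drop target to another in the UA-provided list.

function eventsForFocusChange(previousTarget, newTarget) {
  const events = [["dragenter", newTarget]];    // 2.1
  if (previousTarget !== null) {
    events.push(["dragleave", previousTarget]); // 2.2 (only if one exists)
  }
  events.push(["dragover", newTarget]);         // 2.3
  return events;
}

// Moving focus from target "a" to target "b":
const seq = eventsForFocusChange("a", "b");
// seq: [["dragenter","b"], ["dragleave","a"], ["dragover","b"]]
```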

> 3. We face the same challenge for communicating how the drop targets map
> visually to the page, and how to communicate this non-visually, as we face
> with the list of draggable elements.

> Then, regardless of whether the dragover event was canceled or not, the drag
> feedback (e.g. the mouse cursor) must be updated to match the current drag
> operation, as follows:
>
> Drag operation | Feedback
> "copy"         | Data will be copied if dropped here.
> "link"         | Data will be linked if dropped here.
> "move"         | Data will be moved if dropped here.
> "none"         | No operation allowed, dropping here will cancel the drag-and-drop operation.
> * Obviously UAs need a way to communicate the change in drag feedback to
> assistive technology.
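One way a UA (or script emulating this workflow) could surface that feedback to assistive technology is a simple lookup from the current dropEffect to the feedback text in the table above. A minimal sketch; the function name and the fallback behaviour are my own assumptions:

```javascript
// Sketch: map the current dropEffect to the feedback text from the table
// above, e.g. for announcing it through assistive technology.

const DROP_FEEDBACK = {
  copy: "Data will be copied if dropped here.",
  link: "Data will be linked if dropped here.",
  move: "Data will be moved if dropped here.",
  none: "No operation allowed, dropping here will cancel the drag-and-drop operation.",
};

function feedbackFor(dropEffect) {
  // Assumption: treat any unrecognised value like "none".
  return DROP_FEEDBACK[dropEffect] ?? DROP_FEEDBACK.none;
}

// feedbackFor("copy") → "Data will be copied if dropped here."
```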

I hope this helps us move this issue along.  I am, as always, happy to hear any comments and suggestions for improving this workflow,
Everett Zufelt


Received on Friday, 17 September 2010 09:33:51 UTC
