Re: SIMILE Research Drivers

   Date: Tue, 8 Apr 2003 13:49:27 -0700
   From: Kevin Smathers <ks@micky.hpl.hp.com>
   Cc: john.erickson@hp.com, www-rdf-dspace@w3.org

   Hi David,

   On Tue, Apr 08, 2003 at 02:50:50PM -0400, David R. Karger wrote:
   > 
   > Well, as a particular example, consider the drag and drop metaphor.
   > In haystack, huge amounts of information are input to the system this
   > way (e.g., user drags "person" object onto "author" region of a
   > document; system records that person as author of that document).
   > This may well be possible to set up in javascript, but I suspect it is
   > only the tip of the iceberg.

   So I don't ever use IE and have no idea what it is capable of, but
   in Mozilla drag and drop works fine between anchored objects in the
   html text and form entry widgets, without even resorting to Javascript.
   But drag and drop isn't a normal metaphor for web applications; there 
   are better ways to represent this type of interaction on the web.

I'll have to disagree here.  Drag and drop is a wonderful interface
metaphor because it really reduces the cognitive load on a user.  It
is the closest thing we have to physical manipulation of objects.

   For example, to link two existing resources, the author could be entered
   by name,

If we liked typing names, we wouldn't need graphical desktops to move
our files around: we would just type directory paths where we wanted
things to be.

   or selected, using an edit form, from the list of authors already in the
   system

That won't work so well when there are several hundred of them.

   or define a new one.  Or a workflow object similar to a shopping cart
   could be used to collect authors in a bundle (while browsing through
   authors) to be applied to a book at a later time.  

In some situations this would be the right metaphor, and in fact
haystack supports this approach.  It depends on the context in which I
stumble across the paper and the author I want to connect.

   > 
   > If we do feel forced to use a web browser paradigm, we may be able to
   > get some way just by generating pictures of the haystack UI and using
   > imagemaps to catch user clicks, but this is really just a poor man's
   > X-server.
   > 

   That would be extremely bad web UI design.  

Agreed---I raised it only to indicate that I don't want to do it.

   I think that the high points of web UI design are in its ability to
   map across device and people capabilities.  The abstraction of markup
   separates display from data in a way that allows the same data to be
   reused in many different ways, potentially by blind users, by users of
   capability limited devices such as PDAs, by text terminals and other
   mouseless devices, by WebTVs, by web crawlers, and by computer displays
   of varying resolution.

Indeed, haystack aims to take this one step further.  Right now, html
is a markup in which data and display are incompletely separated.  We
want to improve this separation.

I only brought up drag and drop as a single example.  The haystack UI
offers a broader palette than that.  In outline, the haystack
framework goes something like the following.  Given an object to
display, haystack examines the rdf:class(es) of the object.  Then it
looks for appropriate "views"---resources in the rdf repository that
explain how to describe that object to a user.  "how to describe" is
actually subdivided; there is a layer of "which attributes are
important/meaningful" and a layer of "how are these different
attributes laid out within the space used to display the object".
These attributes may refer to other resources that need to be
recursively displayed.  "How to describe" may be defined in rdf or may
be represented as blobs of bytecode.

The haystack UI is a simple
framework that walks the RDF graph looking for rules on how to display
things and executing them.  In this sense it is a lot like a web
browser that reads html and invokes rules for rendering that html, but
it is more powerful because the rules themselves are represented in
rdf.  This generality offers a lot of power, I think---for example, it
makes it very easy to deal with the kind of multiple modalities you
mentioned (PDA, blind-user audio tools, text terminals, etc).  One
could, if desired, try to achieve this power by enhancing a web
browser, creating for example super-ultra-meta-cascading-style-sheets.
But I suspect that in the end one would end up with something a lot
like the haystack UI.  

Going out on a limb, I might say "I agree with you that a web browser
is the right interface framework, and the haystack UI is what we think
a web browser will look like in 5 years".

d

Received on Wednesday, 9 April 2003 00:38:32 UTC