
Re: [web-annotation] Justify each Motivation with a behavior

From: Doug Schepers via GitHub <sysbot+gh@w3.org>
Date: Wed, 25 Nov 2015 23:40:31 +0000
To: public-annotation@w3.org
Message-ID: <issue_comment.created-159755952-1448494830-sysbot+gh@w3.org>
> :-1: to requiring explicit UA functionality for every motivation.
> Behaviors of different clients could be very very different with 
> the same data, and we don't want to inhibit innovation and progress.

I didn't suggest that we create normative requirements on UAs to 
exhibit specific behaviors.

What I'm suggesting is that, as a best practice for adding a 
motivation value to the list, we create a concrete use case (or set 
of use cases) showing how a UA behavior for each motivation value 
might be distinct from any of the other motivation values.

If a motivation value isn't behaviorally distinct, that wouldn't mean 
it's not valuable or useful, just that it should be left up to each 
user community to define which behavior is the best match for its 
intended use. 

Using this methodology, most custom motivation values would probably 
map to the simple "display" behavior of a `commenting` motivation. 
For example, @jjett's preference of `remark` could be a subset of 
`commenting`… distinct to the terminology of his user community and 
his toolchain, but still defined in a way that any generic annotation 
UA would know what to do with it (e.g. treat it like a `commenting` 
annotation).

You could look on the "core" as a set of exemplars, with extensions as
 a loose typing (isKindOf) mechanism.
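To illustrate what such a loose typing might look like in practice, a 
community could publish its custom motivation as a concept whose 
broader concept is one of the core motivations, so a generic client 
that doesn't recognize the custom term can fall back to the core 
behavior. The sketch below is purely illustrative: the example.org 
URIs, the `remark` term, and the use of `skos:broader` as the 
isKindOf mechanism are assumptions for the sake of the example, not 
anything the WG has decided.

```json
{
  "@context": {
    "skos": "http://www.w3.org/2004/02/skos/core#",
    "oa": "http://www.w3.org/ns/oa#"
  },
  "@id": "http://example.org/ns/remark",
  "@type": "oa:Motivation",
  "skos:prefLabel": "remark",
  "skos:broader": { "@id": "oa:commenting" }
}
```

An annotation using `http://example.org/ns/remark` as its motivation 
would then carry enough information for a generic UA to treat it as a 
kind of `commenting`, while tools from @jjett's community could give 
it more specialized handling.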

This doesn't inhibit innovation and progress, it enables it in a 
responsible and interoperable way.

> Also :-1: to spending time debating in the WG which motivation 
> is "core" and which is somehow not. 

I also don't want us to debate terms… I want us to establish a 
methodology for determining the motivation value list.

At TPAC, @azaroth42 suggested adding another motivation value, but we 
had no basis for deciding whether to add it other than the wholly 
subjective question, "Does it seem common enough to add it?", a point 
on which the participants were divided.

Having a methodology would put less guesswork and subjective judgment 
into the process of adding new values, and having the behavior-backed 
core would provide a clear extension mechanism that empowers each 
community to use its own terms.

> This set is based on research done over several years, by 
> participants in the OA CG, the WG and beyond. That research 
> was done by looking at existing clients and their use, 
> observing people's interactions, and looking at existing 
> models. Unless someone is willing to re-do that work?

There's no need to redo the work… we could look at the existing 
research and apply a new set of criteria on each one. Where is the 
research collected? I couldn't find it in the Open Annotation CG wiki.

I hope this additional context helps explain my reasoning.

GitHub Notification of comment by shepazu
Please view or discuss this issue at 
 using your GitHub account
Received on Wednesday, 25 November 2015 23:43:37 UTC
