Re: [model] Proposal: Allow motivatedBy on SpecificResource

Hi, Jacob–

On 6/23/15 3:24 PM, Jacob Jett wrote:
> Hi Doug,
>
> Let me see if I can offer an intelligible counter-point.
>
> On Tue, Jun 23, 2015 at 12:55 PM, Doug Schepers <schepers@w3.org
> <mailto:schepers@w3.org>> wrote:
>
>     Hi, folks–
>
>     Forgive me for (still) not understanding some of the subtleties of
>     the issues here; I'll try to make a cogent argument anyway.
>
>     I'm strongly against the notion of restricting the number of bodies
>     (or targets) in an annotation.
>
>     I look at it from the perspective of an annotator (that is, the
>     end-user):
>
>     Abby selects some text (the word "Julie"); she selects the
>     "annotate" option from some menu (e.g. context-menu, sidebar, popup,
>     menu-bar, keyboard shortcut, whatever). A dialog pops up, giving her
>     the option of leaving a comment, offering a suggested change, adding
>     tags, and so on. She types the comment, "Julie should be Julia, as
>     mentioned in paragraph 2"; she types the suggested change, "Julia";
>     she adds the tags, "#typo", and "#personalname", and "#sigh".
>
>     The resulting annotation has a single target (the word "Julie"), and
>     3 bodies (the comment, the replacement text, and the tags).

I realized afterward that this is probably not correct; I guess each tag has 
its own body, so that would be 5 bodies (the comment, the replacement 
text, and one for each of the tags)… unless you can combine multiple 
tags in a single body?
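
For concreteness, here's a rough sketch of how Abby's single annotation 
might serialize (OA-style JSON-LD; the context, property names, and URIs 
are illustrative, not normative). Note that nothing in it tells a machine 
which body is the comment and which is the replacement text:

{
  "@context": "http://www.w3.org/ns/oa-context-20130208.json",
  "@id": "http://example.org/anno/abby-1",
  "@type": "oa:Annotation",
  "motivatedBy": ["oa:commenting", "oa:editing", "oa:tagging"],
  "hasTarget": {
    "@type": "oa:SpecificResource",
    "hasSource": "http://example.org/doc1",
    "hasSelector": { "@type": "oa:TextQuoteSelector", "exact": "Julie" }
  },
  "hasBody": [
    { "@type": "cnt:ContentAsText",
      "chars": "Julie should be Julia, as mentioned in paragraph 2" },
    { "@type": "cnt:ContentAsText", "chars": "Julia" },
    { "@type": ["cnt:ContentAsText", "oa:Tag"], "chars": "#typo" },
    { "@type": ["cnt:ContentAsText", "oa:Tag"], "chars": "#personalname" },
    { "@type": ["cnt:ContentAsText", "oa:Tag"], "chars": "#sigh" }
  ]
}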


> The thing is, what happens behind the scenes, in the depths of the
> annotation tool, should be completely opaque to the end user.

That's not correct. An annotation is an entity; it has a unique ID and a 
URL. It can be distributed, shared, and annotated as an entity.

(After reading and commenting on the rest of your email, I think this is 
the fundamental disagreement that you and I have.)


>     A machine thinks that all these bodies apply to the target; it knows
>     that the replacement text is meant to substitute for the selection
>     text (the target); it knows that each of the tags should somehow be
>     indexed for search with this target and body. But it doesn't know
>     what any of the content /means/.
>
> I'm not sure I understand how the machine knows that the replacement
> text is meant to be substituted for the selection text, especially if it
> doesn't know what the content /means/. That content could be anything (I
> think that is the point you were going for) but somehow the replacement
> text is an exception. I don't really follow how this can be...

I'm suggesting that we specify a particular behavior for an "edit" (or 
"replace" or whatever") body, for copy-edit clients. This wouldn't 
necessarily need to be in the data model spec, but the data model spec 
should enable the use case.
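
As a sketch of what I mean (and per the subject line of this thread), 
the replacement-text body would carry its own label; the exact property 
names, and whether the label sits on the body itself or on a 
SpecificResource wrapper, are open questions:

  "hasBody": [
    { "@type": "oa:SpecificResource",
      "motivatedBy": "oa:editing",
      "hasSource": { "@type": "cnt:ContentAsText", "chars": "Julia" } },
    { "@type": "cnt:ContentAsText",
      "chars": "Julie should be Julia, as mentioned in paragraph 2" }
  ]

A copy-edit client that recognizes "oa:editing" can offer the 
accept/reject affordance for the first body, and simply display the rest.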


>     The machine doesn't know that Abby referred both to the target and
>     to the instance of "Julia" in paragraph 2; it only knows about the
>     explicit link to the target, "Julie"; a human can use the
>     information in the content body, but the machine can't (unless it's
>     a smarter machine than we're talking about today).
>
> It could (and arguably should) know this. The entire concept of specific
> resource and selectors is predicated on making this kind of
> functionality possible. It seems a bit odd that somehow the tool in the
> example handles a use case the model isn't designed for (recognizing
> that some content is replacement text) but then doesn't exercise the
> specific resource + selector structure that is relatively basic to the
> model...

The data model does enable the case where there are multiple targets 
("Julia" and "Julie"). I simply don't think that a common annotation 
client, meant for the average user, is likely to have that as its 
primary workflow (that is, the UI is unlikely to offer or enforce that 
affordance).
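
For what it's worth, a client that did elicit both references from Abby 
could already express them as two targets, something like this (selector 
values illustrative):

  "hasTarget": [
    { "@type": "oa:SpecificResource",
      "hasSource": "http://example.org/doc1",
      "hasSelector": { "@type": "oa:TextQuoteSelector", "exact": "Julie" } },
    { "@type": "oa:SpecificResource",
      "hasSource": "http://example.org/doc1",
      "hasSelector": { "@type": "oa:TextQuoteSelector", "exact": "Julia" } }
  ]

My point is only that a typical copy-edit UI won't walk the user through 
making that second selection.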


>     The machine doesn't know that Abby added the tag "#typo" as a signal
>     for the kind of correction she was offering, or that she added the
>     tag "#personalname" as a note for herself for a different project
>     she's working on, or why she added the tag "#sigh"; in fact, another
>     human probably wouldn't know what the tag "#sigh" means… was she
>     bored? is she irritated at all the typos, in which case the tag
>     "#sigh" is actually kind of an annotation on the tag "#typo"? was it
>     a wistful sigh because she loves Julia?
>
> I get that it doesn't know what to make of these hashtags but I'm still
> stuck on how it does know what replacement text is.

Because it was labeled as the replacement text, by the "motivation" (or 
"role", as Paolo suggests).


> It seems
> unreasonable that our engineers have figured out how to identify and
> make machine-actionable some random content meant to be replacement text
> but then can't figure out what to do with hashtags...
>
>     None of this matters to the machine, which only needs to perform a
>     set of tasks:
>     1) present the human reader/editor with the information, and let the
>     human decide if they want to accept the change;
>     2) provide an affordance (say, a button) to change the selection
>     text with the replacement text;
>     3) if the human decides to make the change, perform the change
>     operation.
>
>     That's it. There are other ancillary tasks, like letting users do
>     whole-text searches or tagged-index searches, and so on, but for the
>     core task we're talking about, the machine doesn't need to be any
>     smarter than that.
>
> Right, the existing model can do this when applied correctly

I question the word "correctly". I would say, "when applied in a 
specific way according to a methodology or best practice".


> (i.e., by
> separating the superfluous annotations from the editorial annotation). If
> I'm understanding correctly, you basically want to construct an
> annotation system that identifies body content that conforms to the
> pattern of "replacement text" but does nothing with any other kind of
> body content. This begs the question of what purpose the other body
> content serves.

(I'm going to take that to mean "raises the question", and that you 
aren't accusing me of using circular reasoning? I know the vernacular 
use of the phrase, but your use is ambiguous here. :P)

There might indeed be other specific actions or presentations 
associated with particular kinds of body content.

Off the top of my head, here are a few:
* "bookmarking": automatically imports into the user's bookmarks or 
bookmarking service

* "classifying": UA limits the content to a choice from a particular set 
of controlled vocabularies

* "highlighting": doesn't show a body (and might even hide the 
annotation), but only colors the selection in a particular color

* "linking": inlines the content of target1 when target2 is in the 
viewport, and vice versa

* "replying" (to another annotation): positions the annotation under the 
annotation that it's a reply to, indents the annotation, and hides it in 
an expander

* "tagging": auto-links the tags to search terms on the annotation server

Not all of these make total sense in the context of multiple bodies, but 
many of them do, and all of them seem plausible to me. There might be 
other behaviors that we mandate or suggest in some spec, though 
ultimately it would be up to the client to decide how to treat them.
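
To tie that list back to multiple bodies (again just a sketch, with 
illustrative terms and URIs): if each body carried its own label, a 
single annotation could combine, say, a classifying body and a 
bookmarking body, and a client could index the former against the 
controlled vocabulary and import the latter into the user's bookmarks, 
without guessing which is which.

  "hasBody": [
    { "@type": "oa:SpecificResource",
      "motivatedBy": "oa:classifying",
      "hasSource": "http://example.org/vocab/terms/personal-name" },
    { "@type": "oa:SpecificResource",
      "motivatedBy": "oa:bookmarking",
      "hasSource": { "@type": "cnt:ContentAsText",
                     "chars": "revisit for the names project" } }
  ]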



> Now, in the example you've illustrated, the data entry system has a way
> to separate the different content types into separate bodies. However, if
> we only care about the replacement text content type, then why not just
> pitch the other bodies into the big circular file in the sky? They have
> no role in the devised annotation system, so why waste any system
> resources on them?
>
> Presumably, we might want to actually preserve this other annotation
> content and (re)serialize it for consumption by exterior contexts. The
> best, most interoperable way to do this would be to represent the
> hashtags as distinct annotations targeting the original editorial
> annotation. Nothing about this use case actually requires support for
> annotations with multiple bodies (possessing multiple motivations).

I don't understand this part. You seem to be reacting to your own 
strawman about the utility or futility of multiple bodies with (or 
without) their own motivations, and not really addressing anything I said.


>     The idea of separating out this annotation into its constituent
>     parts seems like overkill. I think it would surprise Abby to find
>     that once she's published what she saw as a single annotation, that
>     it's broken up into multiple annotations that have to be shared or
>     used separately, and she can't find her suggested change because the
>     tag body wasn't indexed with the replacement-text body or the
>     comment body, and so on. To her, it was a single act of creation,
>     and it should be modeled that way; the only thing we know about her
>     intent was that she made a single annotation, and that should be
>     preserved.
>
>
> This isn't overkill at all. This is simply the dichotomy between (user)
> perceptions and system design. Let's take your digestive tract as an
> analogy. You don't typically think about it but, it breaks your food up
> into the different kinds of things and pumps them through to the various
> places best suited for their digestion.
>
> The user never really thinks about this. They simply consume the food
> and sometime later they poop. They don't think about what happens in
> between, the entire process is opaque to them.
>
> We engineers don't have this luxury. For us, we have to figure out how
> the food is going to be broken down and processed so that nutrients can
> be absorbed. First it will need to be ripped up and crushed so we design
> the jaws and teeth a certain way. Then it needs some chemical bathing to
> break down carbohydrates into sugars. Before it can get to the chemical
> bath though, it has to be transferred to the place where bathing is
> going to happen -- the stomach -- so we design a muscular tube, the
> esophagus, for this task.
>
> By now you see where I'm going with this analogy.

That's giving me too much credit.


> The user is going to
> deal with the interface that we design for them to cope with. How the
> annotation data is processed inside is a completely separate matter.
> Whether their annotation is functionally treated as four different
> annotations or a single annotation makes no difference to them, they
> don't care what is being digested where. We must be careful not to
> conflate presentational issues with data / process / workflow issues.

I completely disagree. I don't see the process as opaque to the user, 
and I think there are a variety of contexts, outside any particular 
client or server, where they might encounter their annotation.

There will be (we hope) a wide variety of different annotation clients, 
each with different functionality. Some will have copy-edit 
functionality (I think this is a major use case), so we should help 
guide those user agents toward interoperability.


> The model doesn't constrain how the front end should look or how it
> should present information to the end user. It constrains how the data
> is to be interpreted internally. If an edit is accompanied by some
> hashtags that provide context for human users then there are two ways of
> dealing with the situation. Either the entire blob is a single document
> - an annotation with one body, or we have a way to distinguish between
> content types. If we can distinguish between content types, then we
> should represent the user's annotation as multiple annotations
> internally because it will let us get the most mileage out of the
> information gained from being able to distinguish between the content
> types. At no time would (or should) the end user have to cope with this
> directly.

I don't see how you draw this conclusion.


>     Maybe another annotation interface might offer different, discrete
>     options that elicit a different act of creation from the user, but
>     the data model shouldn't constrain that.
>
> The data model doesn't.
>
>     As argued before, there is ambiguity in this kind of annotation…
>
>     The ambiguity arises in part because we have made a data structure
>     that is easy to generate and manipulate, so it is "lossy" with
>     regards to all the expressiveness and inter- and intra-linkages it
>     could have, but those would come at the price of complexity of
>     format and stringent requirements on the user to disambiguate intent
>     via the UI.
>
>     The ambiguity mainly arises because of the nature of humans, who
>     generate and detect complex patterns of behavior, and who have
>     limited means to express their thoughts or intents.
>
>     We can't solve either of these issues. We can only decrease the
>     ambiguity a bit.
>
>
> I'm not really following. Sure the meaning of the hashtags may be
> ambiguous, but you already found a way to separate the replacement text
> from the rest of the document. The replacement text is unambiguous.
> There's nothing to gain and everything to lose by mushing the
> replacement text back into a single annotation with the ambiguous
> hashtags. Far better to annotate the replacement text with the ambiguous
> hashtags.

First, you don't propose a way for the UA to actually elicit from the 
user how each hashtag does or doesn't relate to the replacement text vs 
the target.

Second, you are ascribing some magic to the "edit" body; Occam's razor 
suggests the simpler explanation is that the motivation is just a label.


> One could even aggregate all of the hashtags into one tagging
> annotation. Then this becomes a two-annotation solution that
> preserves the unambiguous replacement text. We have an existing pattern
> for this in the model. Surely this would be the better way to proceed
> than to warp the model to accommodate this one implementation dependent
> use case. Shifting motivation to the individual bodies is going to
> seriously complicate the implementation of a large number of other use
> cases.

Which other use cases does it complicate?

I'm not suggesting an implementation-dependent use case; I'm suggesting 
that this is a common use case that we should specify, so we have 
interoperable behavior for a class of implementations (copy-edit-aware UAs).


>     Maybe another annotator, Beth, is far more precise in her
>     annotations, such that there is almost no ambiguity; she separates
>     out her annotations and is always exactly on point, she replies to
>     her own annotations if there is any potential ambiguity; that's even
>     easier for machines to "understand". But maybe another annotator,
>     Chuck, is far more ambiguous in his annotations, suggesting
>     irrelevant and irreverent changes, and adding comments and tags that
>     are unhelpful or even contradictory.
>
>     Web Annotations should allow for this full range of expression, even
>     at the expense of machine comprehension.
>
>     Please, let's try to keep the model simple by default, and slightly
>     more complex for more complicated scenarios, and limit the
>     concessions we make for machines when humans are the real end-users.
>
> I agree but the humans you're describing are developers and not the real
> end-users. Real end-users don't actually care how we make the sausage.
> They only care that it tastes delicious. The alternate pattern is only
> marginally more verbose, already exists in the model, and doesn't
> needlessly complicate the other existing use cases.

I don't understand why you think the annotations are opaque to 
end-users, and not just developers.

I've already offered several examples where the end-user would encounter 
their annotation-as-an-entity (e.g. in a search operation, in another 
client, sending a link to their annotation to a friend, sharing an 
annotation across services). Can you offer examples where it's truly 
opaque in an open (not closed), multi-client, multi-server system?


>     To Paolo's points about motivations vs roles, or how we structure
>     the annotations, or having different serializations for JSON and
>     JSON-LD, I'm open to any of these suggestions; I suggested
>     "motivation" because it seemed like it met a need, but if it has to
>     be modeled a different way, that's okay, too.
>
>
>     Finally, I want to suggest that if we go down a path of
>     architectural purity and complexity, the data model is far less
>     likely to be adopted by authoring tools, so let's keep that in mind.
>
>     Regards–
>     –Doug
>
>
> Again I'm not sure I follow. Sure we could reinvent things like Twitter
> using the annotation model but at the end of the day the real purpose of
> the model is to link resource A to resource B.

Perhaps that's your purpose. I'd be surprised if the consensus of the 
working group is to have that simple a model.

My purpose is to make something that satisfies some very common use 
cases (such as commenting, replying, fact-checking, tagging, and 
copy-editing), in a reasonably elegant way, such that developers adopt it.


> It's as simple as that.
> Everything else, like determining the content types in the resources, is
> extra. The editorial workflow use case is extremely complicated because
> it has more than the average number of extras.

Actually, it's not really complicated at all.


> Not only do I have to
> determine the content in A and B but I also have to figure out what to do
> with the content in A (and once I've done that, then what?).
>
> That's a lot more functionality than an annotation should be expected to
> have. It requires a lot more effort and more robust architecture to
> accommodate that. There's no easy way to accommodate the editorial use
> case.
>
> Ultimately this proposal is a -1 from me.
>
> Adopting it will completely throw away the core feature of the model --
> that we may _reliably expect bodies to be "about" targets_.

To be frank, I don't think this is a realistic goal. You can't force 
humans to use the system that way.

Your best hope toward that goal is to establish some conventions and best 
practices within your user community, reinforced by affordances in your 
annotation client.

Regards–
–Doug

Received on Tuesday, 23 June 2015 22:36:57 UTC