- From: Close, Tyler J. <tyler.close@hp.com>
- Date: Mon, 25 Jun 2007 22:23:02 -0000
- To: "W3C WSC W3C WSC Public" <public-wsc-wg@w3.org>
Johnathan Nightingale wrote:

> But if your question is aimed to get at the impression that
> use cases are not driving our recommendations so much as
> following them, then I agree with that impression, and am
> sort of stuck on what a better approach might be.

For the use-cases, and other material in the Note, my expectation was that we were simultaneously exploring and documenting the problem space, while setting up yardsticks against which to measure proposals. I expected some proposals would come into the workgroup pre-baked and so only benefit from the use-cases as an explanatory device and evaluation tool. I think it's important that we ensure we at least keep this evaluation role as a goal.

> When I was writing up IdentitySignal, I had to resist my
> initial impulse to say "Applies to all use cases," and
> instead go through to identify the cases where it was
> particularly relevant. I don't object to doing that, but it
> was an after-thought, more than a motivating, explanatory
> phase of recommendation development.

Since you mention it, we could pick on the IdentitySignal proposal a bit. ;)

Delving into the relevant use cases in more detail would help us explore questions like:

- When exactly is the user expected to interact with the IdentitySignal?
- What motivates this interaction to take place?
- What exactly is the interaction?
- What interpretations of the presented information may a user have?
- How should the user react to different presentations?
- In what ways is this interaction, or motivation, similar to other things that have been tried?

As I understand it, the IdentitySignal proposal is similar enough to Firefox's hostname display that working through the use-cases might show that the interactions and motivations are not significantly different, and so user study results may be similar. Just listing the relevant use-cases skips over this analysis, and the comparison and evaluation that would follow.
So perhaps it's a question of perspective: seeing the use-cases as the obstacle course that is our first proving ground, rather than a boilerplate proposal section to be filled in as an after-thought. Did other authors have a perspective similar to Johnathan's? Is the limited exploration of the use-cases simply a matter of not seeing them as fundamental, rather than them providing a poor foundation?

> For some recommendations (particularly robustness
> recommendations) it is still more difficult to pretend they
> are motivated by particular use cases, when in reality they
> are motivated as responses to known threats (e.g. sites which
> attempt to spoof chrome by positioning legitimate chrome off-screen.)

Yes, I also think this is a problem with our use-cases section. As another example, the use-cases don't provide me with a reference for talking about spoofed GUI elements. In the PII bar proposal, I had to make my own subsections for talking about different attack variations on our vanilla use-case. The Note doesn't even provide me with a list of all the attack variations I should address.

Tyler
Received on Monday, 25 June 2007 22:24:26 UTC