- From: Thomas Roessler <tlr@w3.org>
- Date: Wed, 21 Nov 2007 17:12:43 +0100
- To: public-wsc-wg@w3.org
Minutes from our meeting on 2007-11-14 were approved and are
available online here:
http://www.w3.org/2007/11/14-wsc-minutes.html
A text version is included below the .signature.
--
Thomas Roessler, W3C <tlr@w3.org>
[1]W3C
Web Security Context Working Group Teleconference
14 Nov 2007
See also: [2]IRC log
Attendees
Present
Mary Ellen Zurko, Rachna Dhamija, Thomas Roessler, Ian Fette,
Jan Vidar Krey, Johnathan Nightingale, Tim Hahn, Bill Doyle,
Luis Barriga, Tyler Close, Yngve Pettersen, Anil Saldhana, Hal
Lockhart, Philip Hallam-Baker, Serge Egelman, Dan Schutzer, Mike
McCormick
Regrets
None
Chair
Mez
Scribe
johnath
Contents
* [3]Topics
1. [4]Newly completed actions
2. [5]agenda bashing
3. [6]Process for addressing comments
4. [7]ISSUE-117 - Evaluating proposals
* [8]Summary of Action Items
__________________________________________________________________
<trackbot-ng> Date: 14 November 2007
<tlr> ScribeNick: johnath
Mez: next item on the agenda, approving minutes from 10-31
... not hearing any problems with that
... minutes approved
Newly completed actions
Mez: next item is newly completed action items
... haven't reviewed these for several weeks
... reminder to folks that I set aside time for this work Friday
mornings
... if an item isn't ready to close at that point, it won't get caught
till next week
... large number of items this week, thank you for that
agenda bashing
Mez: we really have to nail down the comment response and tracking
process
... doesn't seem to be getting resolved via email, so I'd like to set
aside time here
... particularly now that we have a rec-track document
... and after that, Issue-117
... I think we've had some good discussion on the list, though not all
explicitly associated with that issue
... any comments on agenda?
... let's move on to Agenda Item 7
Process for addressing comments
Mez: I am going to take notes on this section as well
Mez: bill and thomas have been discussing tools and what I want to do
here is a blow-by-blow treatment of comment processing and disposition
<Mez> 1) somebody says something in our public comment list, directed
at wsc-xit (or wsc-usecase)
Mez: let's mostly focus on wsc-xit here
<Mez> 2) somebody in the wg, and it's been Bill lately, takes that, and
turns it into ISSUES in tracker
Mez: is step 2 right here?
... specifically asking Bill and Thomas, here
tlr: the sequence I would suggest is first adding the comment to the
last call comment tracker for that document
<Mez> 1.5) add a comment to the last call tracker for "that document"
<tlr> [9]http://www.w3.org/2006/02/lc-comments-tracker/39814/
Mez: I don't understand what that is and how it gets done
<tlr>
[10]http://lists.w3.org/Archives/Public/public-usable-authentication/2007Nov/0001.html
Mez: I see in that URL that there is a document pointer for each of our
documents
tlr: follow the comments url
... From here you can add a new comment, copy paste the url of the
message, leave basically everything empty
... click on "import the whole message text"
... I wonder if Bill would like to walk through the process right now
with a demo comment?
Mez: without a shared conference, we can't really watch him do that
... I'm not sure I want to slow things down that much
... if we get enough comments, making bill the bottleneck will start to
be a bad idea
... I appreciate bill's volunteering, but if we get substantial comment
traffic, it will be too much of a bottleneck, and I'll start assigning
them to other members in good standing
tlr: I will fill out the test comment
bill-d: to which document?
<Mez> wsc-xit
tlr: Adding a test comment to wsc-xit
<tlr>
[11]http://www.w3.org/2006/02/lc-comments-tracker/39814/WD-wsc-usecases-20071101/
tlr: the effect that you will see is that there will be a link in the
comments tracker to "View comments" in wsc-xit
Mez: I don't see the comment
tlr: I see it - it might be stuck in some proxy
... at any rate, a comment will appear (LC-1915, in this case) which
contains the content of the comment, and other metadata
<Mez> tlr, put in the url where you see the comment. tx
tlr: the next step is a manual one to open an issue in our issue
tracker so that we can deal with it
Mez: my experience is that a single comment could generate multiple
issues
tlr: that could be
Mez: so to be clear, a single comment can be broken up into multiple
issues
tlr: another approach would be to break up a message into multiple
comments
bill-d: does the related issues field help here?
tlr: no - that field is useless for linking to tracker
Mez: so the choice between putting the message in intact as one comment
and splitting up the issues, or splitting the message into comments
that map 1:1 with issues, comes down to whichever makes it easier
tlr: I suspect that we will find it easier to add multiple comments
from a message, to 1:1 map to issues
... however, I would like to briefly check if that will cause problems
with the comment tracker -- it seems not to
... so I recommend splitting the message up into multiple comments
<Mez> 1.5) break the comment into logical issues, putting each in as a
new comment against wsc-xit
<Mez> 2) link each to an ISSUE in tracker
Mez: if I remember correctly, the point of this process is that the
comment tracker is public
<Mez> 3) same somebody tells the comment person what their ISSUEs are
so they can watch the resolution unfold, as we all do
<ifette> I am loving the fact that the process is being typed into IRC,
can I assume that it will be cleaned up and put into a wiki how-to?
<Mez> yes
<ifette> :-)
<Mez> I don't trust wikis to type this stuff in; they go down
<Mez> need a copy here
bill-d: there was some discussion about the combination of lc-tracker
and tracker, about there being a way to do this in one step
... is the reason for doing both that tracker is public?
<tlr>
[12]http://www.w3.org/2000/06/webdata/xslt?inclusion=1&xslfile=http%3A%2F%2Fwww.w3.org%2F2003%2F12%2Fannotea-proxy&xmlfile=http%3A%2F%2Fcgi.w3.org%2Fcgi-bin%2Ftidy-if%3FdocAddr%3Dhttp%3A%2F%2Fwww.w3.org%2FTR%2F2007%2FWD-wsc-usecases-20071101%2F&annoteaServer=http%3A%2F%2Fwww.w3.org%2F2006%2F02%2Flc-comments-tracker%2F39814%2FWD-wsc-usecases-20071101%2Fannotations
<Mez> sweet
tlr: no - tracker is public and associated with the working group, but
one of the advantages to LC-tracker is that it allows generation of
annotated documents with comments
Mez: my question about that splendiferous URL is how things get placed
within this annotation?
... is there something in comments that puts them at specific points?
<Mez> good catch ian
tlr: there is a section that lets you choose a section to associate
comments with
<Mez>
[13]http://www.w3.org/2006/02/lc-comments-tracker/39814/WD-wsc-xit-20071101/
Mez: I still am not seeing the comment
tlr: I see it on that URL
Mez: I don't
yngve: the comment seems to be on the usecases
tlr: good catch - I made a mistake when entering it
<bill-d>
[14]http://www.w3.org/2006/02/lc-comments-tracker/39814/WD-wsc-usecases-20071101/
Mez: people will obviously need to make a call on the comment "type"
unless we want them all to be marked "substantive"
... is there anything about type that matters downstream?
tlr: nothing occurs to me immediately
... we will obviously want to spend more time on substantive comments
than editorial ones
<tlr> that sounds like 140% comments were received. Scary.
Mez: if we should expect any metrics being applied later about types of
comment, it would be good to know now
tlr: if anything comes up, I'll let you know, nothing that I know of
Mez: and where, again, is the ability to specify where it goes in the
document?
tlr: "Section of the document concerned"
... might be under "view individual comment" - yep
<Mez> 4) issues get resolved, as they have been in the past, in the wg
<Mez> 4.1) gets recorded in the ISSUE in tracker
<Mez> 4.2) gets recorded in lc tracker as well
Mez: do we get reports out of lc tracker?
tlr: yes
Mez: how do we do that?
... that wasn't nearly as painful as I thought it would be.
(mez - did you want me to give you the action? )
<scribe> ACTION: mez to write up "comment disposition process" in wiki
[recorded in
[15]http://www.w3.org/2007/11/14-wsc-minutes.html#action01]
<trackbot-ng> Created ACTION-342 - Write up "comment disposition
process" in wiki [on Mary Ellen Zurko - due 2007-11-21].
ISSUE-117 - Evaluating proposals
<Mez> [16]http://www.w3.org/2006/WSC/track/issues/117
Mez: looking around for related conversation in other threads
serge: there's been discussion on this related to specific proposals
... but it would be nice to come up with some set of steps that all
proposals go through
... I would offer that for a recommendation to be made, we should look
through the shared bookmarks at the very least, and the author of each
rec should have to justify why the current literature shows that this
solution may be effective
... or make the claim that no current literature examines the
underlying approach in their proposal
... after that, if the current literature doesn't address the topic,
then it should be subjected to user testing
... and only after that should we consider making the recommendation
PHB2: I would be happy to accept a modified form of that, in that I'm
not that much of a fan of the academic literature
... I think that we are only just starting with usability testing, that
we don't know how to test this stuff yet
... at this point, I really would not say that the body of literature
we have is useful
... I don't think we've got good analysis here on any of the work to
remedy existing attacks
serge: can we have some examples about current literature that isn't
useful?
Mez: going back to serge's proposal, I see two process problems. One is
the notion of assigning authors to recommendations. I don't think that
aligns with the standards/wg process
... at this point, there are a number of things scattered, there's a
bunch of normative text in different sections, I'm not sure we'd get
coverage, tracing them back to original authors
... I think that grouping them conceptually, tracking those with
issues, might do a better job of coverage
... second issue is that generically pointing to "shared bookmarks" is
untenable, since it's a very long list, not realistic to expect
everyone to have to have read all of it, and some of it is genuinely
not-public
... just like we don't expect everyone to be accessibility experts,
that's why we have experts
serge: I agree with most of what Mez is saying about tracing back to
authors, but am more interested in the general process of evaluating
proposals
... I understand that not everyone can read the literature, but if
someone points to literature as a reason to reject a proposal, it
should be considered, not dismissed as "I'm not familiar with the
literature, that's not my job"
PHB2: I didn't make a platitude, I made an assertion - I disputed the
value of the academic literature
... I do not recognize academic literature as representing empirical
evidence in this field - it can exhibit conflicts of interest or
otherwise not represent objective reporting of real data
... I am really not aware of many studies in the field that are
genuinely empirical
<serge> So because of the concern for conflicts of interest in academic
studies, we should just take VeriSign's assertions about the value of
EV certificates at face value!
PHB2: rachna's paper on sitekey for example, was interesting, but
concluding that sitekey was useless overreaches
... I don't think you're going to find a statement out there that any
of our proposals will work or not work
... I'm quite happy to accept empirical evidence, but the process
statement should be phrased in those terms, not in terms of the
academic literature
serge: phil, you seem to be conflating perceived security with actual
security
... I agree that a site with sitekey instills trust in users, but that
is different than real security - we should only be recommending
improvements to real security
<serge> We've seen those attacks in the wild!
PHB2: I'm not conflating the two. The attacks that study examined are
mitigated by some technologies. It is known that sitekey is vulnerable
to certain attacks, and you don't expect otherwise, but there are
mitigation techniques in place.
<serge> What is the mitigation technique for the attacks that were
studied?
PHB2: I would take that particular study and present it to my customers
as evidence for or against particular competing technologies, but I
wouldn't take it as conclusive that a technology is a failure
<ifette> +1 phb
<serge> If you're not going to provide real evidence for your claims,
this argument is futile
Mez: the notion that specific authors must respond on behalf of certain
recommendations is out of date
... what we have now is a document with normative pieces. What I
propose instead is to focus on logical units
... the structure of those discussions would be the same as other issue
discussions - people can argue pro/con, make tweaks, straw poll changes
... towards the end of this discussion, I'd love it if we could find an
exemplar item to apply this process to
ifette: want to go back to something johnathan said back in Austin. We
are designing a recommendation to present security context, not a
recommendation to stop phishing. I think we risk getting narrow-minded
and focusing on attacks, on phishing attacks.
... I think that if something doesn't solve phishing, that's not a gate
to it being a recommendation
<PHB2> The sitekey authentication system tracks the IP address of
contact requests, they have a feedback loop that detects suspicious
patterns of multiple access attempts from the same IP address. This
restricts the volume of attacks that a perp can make from a botnet
ifette: especially now that most phishing groups are seeing phishing
declining in favour of malware and other attacks
<PHB2> It's not a perfect defense but it does help the bank to displace
the attacks to other targets
serge: I just wanted to say that this issue was driven off my original
point, that I hope we can agree that we shouldn't recommend things
based purely on conjecture, there should be a mechanism for
scrutinizing them
... just because someone has a great idea, doesn't mean we should
recommend it
... right now all we've been talking about is "I like this idea" or "I
don't like this idea" and volume carries the day
Mez: that is not what we've done so far
... we have been deliberately inclusive for this draft, which is now up
for review
... now is the time to go through them and have the discussions around
removing/modifying them
<tjh> sorry - have to leave for another call.
Mez: we are using issues to track that, but I'm happy to take specific
requests for agenda items too
<rachna> I think that we should go through proposal by proposal and
discuss the "empirical" evidence we have and what we need
<PHB2> +1 rachna
serge: I agree, now is the time to start going through them and
deciding which ones to keep
<PHB2> Working out how to get that information is the key
Mez: the only modification I would make to rachna's proposal is to map
it to parts of the rec track document, not proposals
<tlr> +1 to mez
Mez: one way we could do that, rachna, is to queue up a specific one of
those discussions, and see how the process works
<PHB2> The empirical evidence we have to date is very thin, it is very
specific and largely taken under lab conditions
<PHB2> The bias introduced through the lab conditions is a major
problem
serge: I have to go, but I'm just hoping (inaudible - little help?)
<serge> johnath: we need to map out the next steps really quickly on
how to proceed
thx serge
<PHB2> Did the Harvard study demonstrate that users ignore the absence
of sitekey indicata or did it show that they ignored the indicata in
the lab environment where they were possibly primed for demoware?
<rachna> Phil, if you can help provide real world empirical data from
deployments, that would be useful
tyler: it sounds like different members disagree about the utility of
the existing literature
... I think the existing literature is valuable, and am willing to be a
guinea pig with the safe form editor aspects of the document
<serge> I think we need to go through each recommendation, write down
which assumptions it makes for it to succeed
<PHB2> I think we have to work on separating out the questions and
identifying separate tests
<serge> and then how to test those assumptions, or whether they've
already been tested (and the result)
<tlr> +1 to PHB on this one
Mez: that sounds like a great idea, despite the density of the SFE
parts of the document
<PHB2> Don't tell me that it's impossible to get people to take notice
of what they see on the computer screen; if that were true then there
would be no phishing
<PHB2> The problem is that the bad guys are better at getting the user
to pay attention than we are
<rachna> To answer Phil, the Harvard study showed that when BOFA users
thought that they were doing a usability study about the design of the
BOFA website, 92% of people provided their own BOFA credentials when
the Sitekey was removed.
tlr: we need to be extremely careful in picking the questions we try to
address, because our language aggregates a lot of different practices
serge: just to repeat what I said in channel, I think the next step is
to get through each recommendation, and define the assumptions it makes
in order to succeed
<PHB2> To reply to Rachna, yes, but take people into a lab and maybe a
missing image is assumed to be due to a different cause?
<Mez> a reminder, a "recommendation" is a piece of normative text in
wsc-xit
<johnath> agree with serge that it's not useful to talk about how to
evaluate unless we know about assumptions? ... but wasn't that supposed
to happen? ... remember usability experts going through the existing
material at one point ... there was useful effort about assumptions ...
<rachna> johnath, yes we did that and only a few people (like you) read
it
<johnath> ... potential problems, all that ... sent mail to list, "here
are my interpretations" ...
<Mez>
[17]http://www.w3.org/2006/WSC/wiki/RecommendationUsabilityEvaluationFirstCut
<serge> yes
<PHB2> And furthermore, the problem here with that particular attack
seems to me to be that you can't create a 100% reliable security
indicator in the content area! That's not quite the same as saying
people don't look at the indicata.
serge: there was little discussion on the work that we did, going step
by step through the document might be the way to get people to pay
attention
<PHB2> I would very much like to replicate the study and get better
data, particularly if they proved the results of the original study :-)
Problem is that the only way I can see doing that would be to monitor
the success rate of a phishing attack.
Mez: serge, if you want to actually form a proposal on process, I can
straw poll it
serge: I don't know how our working group guidance is built, but I do
think we should have some kind of rules about how we evaluate our text
Mez: I've been falling back on "people should open issues"
<tlr> We are not in a process where "proposal" would be the right
granularity for decisions.
serge: right, but absent the issues that people raise, there should
still be an evaluation process
<rachna> Phil, the purpose of studies is not to give us perfect
replication of real world scenarios (we can't do that). It is only to
tell us where and why the problems exist and what useful remedies might
be.
tyler: I think, to move forward, we need the researchers to start
going through normative text and making specific recommendations for
changes
<serge> I really need to go
<PHB2> +1 rachna
<serge> can we make the appropriate action items so I can figure out
what I need to do
<PHB2> That's my point, we are not doing physics here, we are not going
to get physics-type experimental results
<tlr> PROPOSED ACTION: serge to put together for a single agenda item
the usability evidence, map that to some set of statements in wsc-xit,
put it into issue
<PHB2> There is a big difference between accepting that we have
problems with site key and concluding that 'users ignore all indicata'.
<rachna> It is not physics, but humans are not entirely unpredictable
either.
<PHB2> Actually electrons are entirely unpredictable, they only become
predictable in aggregate. Same is true for humans
<serge> Begin examining some of the recommendations, write down the
underlying assumptions for success, then list any prior studies that
have already examined those assumptions, and possibly how to test the
untested assumptions
PHB2: there are some things that we can agree on - "Do no harm" is a
first criterion
<scribe> ACTION: serge to Begin examining some of the recommendations,
write down the underlying assumptions for success, then list any prior
studies that have already examined those assumptions, and possibly how
to test the untested assumptions [recorded in
[18]http://www.w3.org/2007/11/14-wsc-minutes.html#action02]
<trackbot-ng> Created ACTION-343 - Begin examining some of the
recommendations, write down the underlying assumptions for success,
then list any prior studies that have already examined those
assumptions, and possibly how to test the untested assumptions [on
Serge Egelman - due 2007-11-21].
<serge> I don't plan on doing all of them, just a sample, since I'd
hope that others would help.
<serge> there's no way that's getting done in a week. :)
<tlr> serge, what's a realistic due date?
<Mez> right, please reset the due date to something you'll make serge
<serge> anyway, I need to go, I might stay on IRC from my other
meeting.
<serge> maybe 1 month
PHB2: another proposed criterion is whether users always deactivate it
<Zakim> Thomas, you wanted to make philosophical bad cop point
PHB2: if we could predict how people react to this stuff, we
wouldn't have the problems we do with lab research
<PHB2> ;-)
<PHB2> There is more than one set of success criteria, security is not
the only one
tlr: as we go into this discussion, we will have to have a very close
look at whether the success criteria that the studies come up with are
ones the group can agree on
<PHB2> I want to reduce crime, but my employer's interests and my
customer's interests and the browser providers' interests are all subtly
different.
Mez: I think it would be a bad use of our time for usability studies to
have one set of success criteria when the group reaches consensus on
other criteria
tyler: I think phil mentioned a couple things that we can move forward
on. Introducing new vulnerabilities, being so annoying that users
disable it, those are good things to use in evaluation
PHB2: we have to consider what level of user interference is necessary
to incorporate security into people's browsing habits
... we need to be a little careful here about making hard and fast
statements about what's acceptable to the end user
<Zakim> Thomas, you wanted to talk about attack vectors
PHB2: clearly if it's absolutely untenable, browser vendors won't
implement it, but I don't think we can make assumptions about knowing
how to make security usable at this stage
<PHB2> +1 tlr, its a balance of risk issue
tlr: I hear tyler say "if there is a new attack vector opened, it is
positively harmful" - there are cases where we have to trade attack
vectors off against each other
<PHB2> It also depends on the controllability of the attack vectors
tlr: particularly when the current state is more dangerous than the
proposed state, even if the proposed state introduces a new (less
dangerous) attack
Mez: 5 minutes to go - tyler, if you want to continue that discussion,
take it to the standard outlets please
... meeting again next week. I'm pretty close to being out of issues
without follow-up, which is a good thing, but means we have an emptier
agenda until we start digging through wsc-xit
<tlr> ACTION-284: Trusted Certs
tlr: I would suggest ACTION-284 go on the agenda
<tlr>
[19]http://www.w3.org/mid/2788466ED3E31C418E9ACC5C31661557084EBB@mou1wnexmb09.vcorp.ad.vrsn.com
Summary of Action Items
[NEW] ACTION: mez to write up "comment disposition process" in wiki
[recorded in
[20]http://www.w3.org/2007/11/14-wsc-minutes.html#action01]
[NEW] ACTION: serge to Begin examining some of the recommendations,
write down the underlying assumptions for success, then list any prior
studies that have already examined those assumptions, and possibly how
to test the untested assumptions [recorded in
[21]http://www.w3.org/2007/11/14-wsc-minutes.html#action02]
[End of minutes]
__________________________________________________________________
Minutes formatted by David Booth's [22]scribe.perl version 1.128
([23]CVS log)
$Date: 2007/11/21 16:10:09 $
References
1. http://www.w3.org/
2. http://www.w3.org/2007/11/14-wsc-irc
3. http://www.w3.org/2007/11/14-wsc-minutes.html#agenda
4. http://www.w3.org/2007/11/14-wsc-minutes.html#item01
5. http://www.w3.org/2007/11/14-wsc-minutes.html#item02
6. http://www.w3.org/2007/11/14-wsc-minutes.html#item03
7. http://www.w3.org/2007/11/14-wsc-minutes.html#item04
8. http://www.w3.org/2007/11/14-wsc-minutes.html#ActionSummary
9. http://www.w3.org/2006/02/lc-comments-tracker/39814/
10. http://lists.w3.org/Archives/Public/public-usable-authentication/2007Nov/0001.html
11. http://www.w3.org/2006/02/lc-comments-tracker/39814/WD-wsc-usecases-20071101/
12. http://www.w3.org/2000/06/webdata/xslt?inclusion=1&xslfile=http%3A%2F%2Fwww.w3.org%2F2003%2F12%2Fannotea-proxy&xmlfile=http%3A%2F%2Fcgi.w3.org%2Fcgi-bin%2Ftidy-if%3FdocAddr%3Dhttp%3A%2F%2Fwww.w3.org%2FTR%2F2007%2FWD-wsc-usecases-20071101%2F&annoteaServer=http%3A%2F%2Fwww.w3.org%2F2006%2F02%2Flc-comments-tracker%2F39814%2FWD-wsc-usecases-20071101%2Fannotations
13. http://www.w3.org/2006/02/lc-comments-tracker/39814/WD-wsc-xit-20071101/
14. http://www.w3.org/2006/02/lc-comments-tracker/39814/WD-wsc-usecases-20071101/
15. http://www.w3.org/2007/11/14-wsc-minutes.html#action01
16. http://www.w3.org/2006/WSC/track/issues/117
17. http://www.w3.org/2006/WSC/wiki/RecommendationUsabilityEvaluationFirstCut
18. http://www.w3.org/2007/11/14-wsc-minutes.html#action02
19. http://www.w3.org/mid/2788466ED3E31C418E9ACC5C31661557084EBB@mou1wnexmb09.vcorp.ad.vrsn.com
20. http://www.w3.org/2007/11/14-wsc-minutes.html#action01
21. http://www.w3.org/2007/11/14-wsc-minutes.html#action02
22. http://dev.w3.org/cvsweb/~checkout~/2002/scribe/scribedoc.htm
23. http://dev.w3.org/cvsweb/2002/scribe/
Received on Wednesday, 21 November 2007 16:13:02 UTC