Re: Proposal: a "clear site data" API.

On 13 Jun 2015 1:18 am, "Mike West" <mkwst@google.com> wrote:
>
> Hi Alex, Jonathan!
>
> On Sat, Jun 13, 2015 at 8:23 AM, Alex Russell <slightlyoff@google.com> wrote:
>>
>> I also don't understand how this will help us in the situation of an XSS
>> where there's a SW and live tabs. I.e., where there's a compromised
>> document that stuffs crap values into storage to defeat a server or SW's
>> attempts to clean things up. Without a variant that pauses and forces a
>> refresh of all documents at the origin, it doesn't solve my big problem.
>
> The strawman deals with this by sandboxing any active documents whose
> origins match the header's scope (see
> https://mikewest.github.io/webappsec/specs/clear-site-data/#neuter-contexts).
> It's not at all clear to me that this is the _right_ way to deal with the
> problem as it has a number of strange side-effects, so I look forward to
> suggestions. :)
>

Right, I saw that. My worry is the case where tabs might be re-stuffing bad
data. Neutering won't prevent existing entangled message ports, e.g., from
allowing collusion/duping, particularly if this happens serially (which I
didn't understand from the strawman). Also, how do we communicate to the
user that these tabs are "dead"?

That's why I'm focused on suspending/reloading. Suspending script execution
keeps collusion from happening. A hard reload after all contexts are
disconnected and storage is reset seems the only way to know a page is
"clean". Am I missing something?

Also: this is awesome! Thanks for drafting it!

>> On 12 Jun 2015 5:54 pm, "Jonathan Kingston" <jonathan@jooped.com> wrote:
>>>
>>> Does all data stored in the file system and IndexedDB always count as a
>>> cache? Would these merit exclusion directives?
>
>
> The strawman lumps those in with "DOM-accessible storage". Since they
> seemed to me to be the crux of the issue, it didn't make much sense to add
> exclusions (no one would use them). If we decide that Richard's suggestion
> to move to a blacklisting model (or a more extensive exclusion model) is
> the right way to go, then I agree that we'd want to add granular selection
> capability.
>
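
For reference, granular selection could be as simple as letting the server
name the buckets it wants wiped. The type tokens below are hypothetical
(the strawman's exact grammar may well differ):

    // Hypothetical header values for a type-based (granular) model; the
    // token names "cache", "cookies", "storage" are assumptions, not taken
    // from the strawman.
    const WIPE_EVERYTHING = `"cache", "cookies", "storage"`; // full reset
    const WIPE_CACHE_ONLY = `"cache"`;             // e.g. after shipping a fix
    const WIPE_SESSION_STATE = `"cookies", "storage"`;  // ordinary logout

    // Any server stack would attach one of these, e.g.:
    //   response.setHeader("Clear-Site-Data", WIPE_SESSION_STATE);
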
>>> I'm also a little worried about giving a JavaScript API the power to
>>> clear HttpOnly cookies and opaque data.
>
>
> If we were giving the ability to _set_ this kind of data, I'd be worried.
> I might even be worried if we gave the ability to clear _specific_ pieces
> of data. Clearing _everything_, on the other hand, is purely destructive,
> and seems pretty safe.
>
>>>
>>> Could the specification include an advisory to add a console message or
>>> similar reporting to ease debugging?
>
>
> When would you expect to see a console message? What would you expect it
> to say?
>
>>>
>>> A post-data-clear event might also be useful, so JavaScript knows to
>>> clean up the interface and show that the user is logged out.
>
>
> Given that this is (in the strawman) attached to an HTTP response, I'd
> expect the logout landing page to be something the site could build
> without such an event. What's the use-case you see?
>
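
A minimal sketch of that pattern, using Node's built-in http module (the
route, HTML, and header value are assumptions, not taken from the strawman):

    import { createServer } from "http";

    createServer((req, res) => {
      if (req.url === "/logout") {
        // Invalidate the session server-side here, then ask the UA to wipe
        // client-side state as part of the same response.
        res.setHeader("Clear-Site-Data", `"cookies", "storage"`);
        res.setHeader("Content-Type", "text/html; charset=utf-8");
        // The body *is* the logged-out UI, so no post-clear event is needed:
        // by the time any script runs, the page already reflects the new
        // state.
        res.end("<h1>You have been signed out.</h1>");
        return;
      }
      res.statusCode = 404;
      res.end("Not found");
    }).listen(8080);
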
>>> When all contexts are neutered, how will they handle stale user input?
>
>
> They won't handle it well. The idea is to give the origin a kill-switch
> for open contexts in order to prevent them from re-poisoning the well
> after we clean it up. Sandboxing them removes their access to things like
> local storage and IDB, but also prevents them from persisting any state
> they might _want_ to persist (because "good" data and "bad" data are
> indistinguishable to the UA).
>
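
On the page side, a defensive sketch of what a context could do when it
finds itself neutered (the probe key and message text are assumptions):

    // In a sandboxed/neutered context, storage access throws a
    // SecurityError, so a simple probe tells the page it can no longer
    // trust or persist anything.
    function storageIsAccessible(): boolean {
      try {
        localStorage.getItem("probe");
        return true;
      } catch {
        return false;
      }
    }

    if (!storageIsAccessible()) {
      // The only safe move is to get the user into a freshly loaded,
      // clean document.
      document.body.textContent =
        "This session has ended. Please reload the page.";
    }
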
> --
> Mike West <mkwst@google.com>, @mikewest
>

Received on Saturday, 13 June 2015 16:38:22 UTC