Re: Call for Adoption: Encrypted Content Encoding

On 8 December 2015 at 08:23, Eliot Lear <lear@cisco.com> wrote:
>> I can't make sense of this statement.  Is the intent to describe a
>> scenario where malware is encrypted so that only the intended victim
>> can decrypt it?  And that the victim relies on some intermediation for
>> defense against this kind of thing?  Why not say that?
>
> Maybe partially that in as much as an end system confers trust on an
> intermediary (think dropbox.com or some such), but as much that the
> intermediary itself might be unwittingly participating in a malware
> attack, in which case its resources are misappropriated.

I think that's a good angle to use.  I honestly doubt that many
intermediaries would be happy to be misappropriated like that :)

>> Of course, you need to approach this in a more careful fashion.  Your
>> model - if I'm right - presumes a lot about where malware
>> countermeasures are best deployed.  There's an unrecognized
>> architectural question that you are implying a particular answer to.
>> Having the text acknowledge that this is a choice is important, unless
>> you want to open that up for debate.
>
> I'm not quite certain what you're saying.  I'm happy to acknowledge an
> unrecognized architectural question so long as we can state that
> clearly (whatever it is) ;-)

I think that your proposal carried an implication that malware
scanning was the responsibility of some sort of intermediary.  That's
probably unintentional.  I think that all we need to do is acknowledge
that this is a (value-neutral) choice.

>>> of one or more individuals, where a file sharing service or a blind cache is
>>> either broken into or otherwise populated, leading to the potential
>>> infection of one or more recipients who process the content.
>> Note that there isn't any context in the draft that suggests that this
>> applies to blind caching (a very specific concept) or even file
>> sharing.
>
> What got me going were these two use cases that got discussed up
> thread.  I think they're fine use cases, but they pose certain risks
> that would be worth highlighting.  They can be mitigated as discussed
> below.  Perhaps the wording should be more general?

Maybe this can be more compact:

Some clients rely on an intermediary to filter unwanted content, such
as malware.  If such an intermediary is unable to decrypt content,
then it could be unable to fulfill its task. In this case, clients
might need to use other means of protection, such as acquiring
information about the acceptability of content via other channels.

This gives fewer clues about what you might do.  I'm not opposed to
providing more explicit hints, but we don't really have anything
concrete to point to, so I don't want to get into too much detail.
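
For what it's worth, here is one purely illustrative sketch (Python,
not text for the draft) of what acquiring acceptability information
via another channel might look like: the client checks the decrypted
payload against a digest it obtained directly from the origin.  The
digest-distribution channel and the names (decrypted_body,
digest_from_origin, process, discard) are just assumptions for the
example.

    import hashlib
    import hmac

    def acceptable(plaintext, expected_sha256_hex):
        # Compare the SHA-256 of the decrypted payload against a digest
        # the client learned over a separate, trusted channel (e.g. from
        # the origin directly).  compare_digest() keeps the comparison
        # constant-time.
        actual = hashlib.sha256(plaintext).hexdigest()
        return hmac.compare_digest(actual, expected_sha256_hex)

    # e.g.
    # if acceptable(decrypted_body, digest_from_origin):
    #     process(decrypted_body)
    # else:
    #     discard(decrypted_body)

That only addresses substitution at the intermediary, not filtering of
content the origin itself sends, which is part of why I'd rather not
put anything this specific in the draft.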

> It may be, but you have also done some work that could be the basis of
> such a system, which again I mentioned up thread.  I'd be happy to
> suggest more when that draft gets adopted :-)

Sure, but like I note above, that's only a tiny piece of what we'd need.
