Re: Draft of security disclosure best practices

I'm going to argue that this "responsible disclosure" policy shouldn't be
adopted; it should be discarded, because it's based on erroneous assumptions.

The explanation is lengthy.  I'm going to scatter some reference
examples throughout, but in the interest of brevity, I'm going to omit
hundreds more.  I'm including these references in part to show that
these are *not* isolated incidents: they're systemic, chronic problems
created by the vendors of software and services.   This is, unfortunately,
what the security/privacy landscape looks like.

The numbered points I'm going to make aren't in any particular order:
they overlap and complement each other, so it's difficult to come up
with an order that might be considered optimal.  I apologize for the
redundancy created by this presentation.


1.  "Responsible disclosure" policies are constructed based on the
supposition that security researchers owe the companies in question
something.  But we don't.  We're not their employees or business partners,
we're not their customers or users: therefore we owe them NOTHING.

Now, we may *choose* to give them something -- like a heads-up about
a bug -- because we think it's a good idea or because we think it's a
nice thing to do or because it's Thursday -- but the fact that we may
exercise that choice from time to time does not magically turn it into
an obligation.  It's not OUR responsibility to ensure the security/privacy
of their software and services: that's completely on them.


2.  It is the responsibility of companies to release software (firmware,
whatever) and to provide services without security/privacy issues.
They often fail to do so, because they use poor development practices
(e.g., closed source), because they rush its development (e.g., nearly
everyone), because they skimp on QA (also nearly everyone), because they
decide to harvest some data that they really really shouldn't, and because
-- particularly in the case of DRM -- they focus far more on denying
users the unfettered use of their own computing hardware than they do
on protecting users' security and privacy.

Let's be clear: a failure to do that is a lapse in *their* responsibility.

We're not required to compensate for all that.  We're not even required
to try.  It's not our product.  It's not our company.  We're not required
to spend our money and our time doing the things that they were unwilling
to spend their money and their time on.

After all: if they *did* spend sufficient money and time, if they *were*
sufficiently careful, there would be little, if anything, for us to find.

But that's not even close to what's happening.  They're rushing very sloppy
work out the door and demanding -- via "responsible disclosure" policies --
that we compensate for their appalling lack of due diligence.

That's a non-starter.


3.  By now we all know that the playbook for companies presented with
a security or privacy problem is some combination of:

	A. Deny
	B. Delay
	C. Obfuscate
	D. Threaten researcher
	E. Attempt to censor researcher
	F. Litigate against researcher
	G. Relabel as feature
	H. Deny and delay and obfuscate some more
	I. Reluctantly fix it poorly, likely introducing a new problem
	J. Take credit
	K. (later) Release new product with same problem, return to (A)

For example:

	https://boingboing.net/2008/03/17/sequoia-voting-syste.html

	https://boingboing.net/2014/10/21/inside-secure-threatens-securi.html

	https://www.eff.org/deeplinks/2011/11/carrieriq-censor-research-baseless-legal-threat

These "responsible disclosure" policies are an attempt to facilitate
this approach by recasting the security researcher's side of it as
somehow "responsible" for THEIR side of it.

This leaves researchers with various options, which I'll boil down
to two approaches:

	1. Try to do it their way.  It's more likely that this
	will result in threats, censorship, litigation and possibly
	prosecution than in a timely, accurate, complete fix and credit
	where it's due.

	2. Don't try to do it their way.  Either publish anonymously or
	sell the vulnerability on the open market.

Vendors have only themselves to blame for this.  Had they not, in the
aggregate, accrued a long and sordid history of completely irresponsible
behavior -- which they're adding to every day -- then perhaps other
choices would be viable.


4.  "Responsible disclosure" policies are based on the happy fantasy that
what one researcher has found has ONLY been found (and found recently)
by that researcher and not by half a dozen others (and some time ago).

This would be convenient, and perhaps some of the time it's true (although
there is no way to prove it -- only to disprove it), but it's not a solid
operating assumption.  Software comes under scrutiny because it's new,
or it's perceived as important, or because it's used in critical roles,
or because it has a history of problems, or because the vendor has a history
of problems, or because the vendor is a jerk, or because it resembles
other software with problems, or because it's deployed at a target,
or for myriad other reasons that would take up pages.  But the point is
that if it is of interest to one person, there are plenty of reasons
why it's of interest to other people.  Sometimes: many other people.

Thus when researcher A dutifully does the "responsible disclosure"
tango, there is absolutely no way to know that researcher B quietly
(and profitably) sold the exact same thing to parties unknown three
months ago, or that researcher C, who happens to work for a nation-state,
has been happily exploiting it for two years, or that researcher D will
come across it tomorrow.

The myth of "responsible disclosure" is that these things never happen
and can't happen, and thus "responsible disclosure" actually protects
the public.  And maybe sometimes it does.  But it's not a good bet, and
as the number of researchers and the sophistication of their tools and
the resources available to them all increase, it's a worse bet every day.

	"What one man can invent another can discover."
		--- Sherlock Holmes

A far better bet is to presume that highly competent well-resourced people
already have it and are actively exploiting it.  Right now.
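
To make that concrete, here's a rough back-of-the-envelope sketch in
Python.  The numbers are invented purely for illustration -- nobody knows
the real ones -- but they show how quickly "somebody else already has it"
becomes the smart-money bet as the number of independent researchers grows:

	# Hypothetical illustration only: if each of N independent researchers
	# finds a given bug with probability p within the window of interest,
	# what are the odds that at least one of them already has it?

	def someone_else_found_it(n_researchers: int, p_each: float) -> float:
	    """P(at least one finds it) = 1 - (1 - p)^N."""
	    return 1.0 - (1.0 - p_each) ** n_researchers

	for n in (5, 20, 100):
	    for p in (0.05, 0.20):
	        print(f"N={n:3d}  p={p:.2f}  ->  {someone_else_found_it(n, p):.2f}")

Even at the low end of those made-up numbers the odds are uncomfortable;
at the high end they're a near-certainty.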


5.  This is specific to DRM: DRM is an attempt to remove some of the
user's control over their own hardware and software.  It attempts to
constrain what a user can do with their property.  This is no different
from many forms of malware, so it's not surprising that it's fraught
with security and privacy issues by its very nature.

In other words, an attempt to institute DRM creates a set of problems that
didn't previously exist.  Vendors are *choosing* to do that: nobody's
making them do it.  And, having chosen to create this set of problems,
they're now trying to impose conditions on the researchers who wish to
investigate them.

If they're so concerned about those problems and the consequences of those
problems, then all they need do is refrain from creating them.  Don't use
DRM, and the entire problem space associated with it vanishes.


6.  DRM *will* be used as a delivery and enabling mechanism and/or
an excuse for malware:

	https://blogs.technet.com/b/markrussinovich/archive/2005/10/31/sony-rootkits-and-digital-rights-management-gone-too-far.aspx?Redirected=true

	https://www.techdirt.com/articles/20130207/03465521908/canadian-chamber-commerce-wants-to-legalize-spyware-rootkits-to-help-stop-illegal-activity.shtml

	https://www.techdirt.com/articles/20170204/17195136635/windows-drm-now-unwitting-ally-efforts-to-expose-anonymous-tor-users.shtml

Even if we make some enormous leaps of faith (to wit: none of these will
ever do it again, and even more unrealistically, nobody else will try),
what's to stop an attacker from piggybacking on them?  It's happened
before.  Repeatedly.  And there's no doubt it'll happen again: of course
it will, it's really quite an effective tactic.

DRM provides a dangerous attack vector, and it *will* be exploited early
and often -- whether via vendors themselves, via their partners, or via
third parties.  The best defense against this is prompt full disclosure.


7.  Another problem with "responsible disclosure" is that it denies
defenders timely access to critical information.  If it's known that,
let's say, there's a vulnerability in mod_abcdefg, a popular module in
the Apache web server, then perhaps those using it can disable it while
waiting for a fix.  Or modify their deployment of it.  Or monitor it
more closely.  Or *something*.
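
To make "or *something*" concrete: here's a minimal sketch (Python; the
module name is the same hypothetical mod_abcdefg, and the config paths are
assumptions) of the kind of quick audit an operator could run to find out
whether they're even exposed -- the sort of interim defense that's only
possible if they know the vulnerability exists:

	#!/usr/bin/env python3
	# Minimal sketch: scan Apache config files for a LoadModule line that
	# enables the (hypothetical) vulnerable module mod_abcdefg, so it can
	# be disabled or watched while waiting for a fix.  Paths and module
	# name are illustrative assumptions.

	import re
	from pathlib import Path

	CONF_DIRS = [Path("/etc/apache2"), Path("/etc/httpd")]  # typical locations; adjust to taste
	PATTERN = re.compile(r"^\s*LoadModule\s+abcdefg_module\b")

	for conf_dir in CONF_DIRS:
	    if not conf_dir.is_dir():
	        continue
	    for conf_file in conf_dir.rglob("*.conf"):
	        try:
	            lines = conf_file.read_text(errors="replace").splitlines()
	        except OSError:
	            continue  # unreadable file; skip it
	        for lineno, line in enumerate(lines, 1):
	            if PATTERN.match(line):
	                print(f"{conf_file}:{lineno}: {line.strip()}")

(On Debian-style systems the same end result is a one-liner -- something
like "a2dismod abcdefg" plus a reload -- assuming the module is packaged
that way.)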

But if they're denied knowledge of this vulnerability's existence, they
can and will do nothing.  They have no means to defend themselves and
don't even know that they should.  They'll remain not only vulnerable,
but unaware that they're vulnerable, until the vendor finally decides
to say something publicly.  (Which, by the way, the ASF is quite good
about, so don't read this as a critique of their efforts. It's just
a hypothetical example.)

This gives their adversaries two huge advantages: (1) knowledge of an
exploitable vulnerability and (2) the element of surprise.

The delays in public notification are now several orders of magnitude
larger than the time required to exploit this information.  The delays
for fixes are even longer.  If vendors were interested in exhibiting
responsibility, then the large ones would all operate 24x7 emergency
response centers tasked with notification, mitigation, and fixes -- ready
to leap into action minutes after a vulnerability was disclosed to them.

But none of them do anything remotely like that.  It's not uncommon for
delays of *months* to occur when even "days" is quite clearly unacceptable
and wholly unprofessional.

	https://arstechnica.com/security/2012/08/critical-java-bugs-reported-4-months-ago/

	https://news.slashdot.org/story/11/12/07/0057227/adobe-warns-of-critical-zero-day-vulnerability
	(note their timeline for a fix)

	https://arstechnica.com/security/2014/05/adobe-shockwave-bundles-flash-thats-15-months-behind-on-security-fixes/

And even then, after months of delays, sometimes they still can't get it right:

	https://tech.slashdot.org/story/13/09/11/2126238/microsoft-botches-more-patches-in-latest-automatic-update

	http://www.infoworld.com/article/2611984/microsoft-windows/microsoft-botches-still-more-patches-in-latest-automatic-update.html

	https://www.techworm.net/2014/12/microsoft-outdoes-putting-buggy-updates-time-kb-3004394.html

I think it's extraordinarily disingenuous for any of these to demand
"responsible disclosure" from researchers when they themselves have
completely failed to even come close to meeting minimally acceptable
standards.


8.  Another problem with "responsible disclosure" is that it presumes
vendors view the reported vulnerability as a bug and actually WANT
to fix it.   Decades ago, I paraphrased Arthur C. Clarke by writing
"Any sufficiently advanced bug is indistinguishable from a feature".
(This wound up in the Berkeley Unix "fortunes" file, if you're looking
for it.)  I thought I was engaging in hyperbole -- and maybe I was.
But not today: today, that's a fairly accurate assessment of many pieces
of software, which have deliberate privacy and security bugs designed
and built in to serve the vendors' needs at the expense of users.

This is getting much, much worse in a very big hurry thanks to the
world's most widely distributed dumpster fire: the "Internet of Things".

So suppose I find this:

	https://arstechnica.com/security/2015/02/lenovo-pcs-ship-with-man-in-the-middle-adware-that-breaks-https-connections/

	https://www.techdirt.com/articles/20150220/07171830085/lenovo-cto-claims-concerns-over-superfish-are-simply-theoretical.shtml

	https://arstechnica.com/security/2015/02/ssl-busting-code-that-threatened-lenovo-users-found-in-a-dozen-more-apps/

or this:

	https://www.engadget.com/2017/02/06/vizio-smart-tv-viewing-history-settlement-ftc/

or this:

	https://motherboard.vice.com/en_us/article/internet-of-things-teddy-bear-leaked-2-million-parent-and-kids-message-recordings

or this:

	https://boingboing.net/2017/02/20/the-previous-owners-of-used.html

or any of these:

	https://www.techdirt.com/articles/20160725/09460835061/internet-things-is-security-privacy-dumpster-fire-check-is-about-to-come-due.shtml

And suppose I dutifully report any of these to the vendor.  Do you think
they'll rush to assemble a team to fix it?  Do you think they'll issue a
preliminary advisory?  Do you think they'll fully analyze it and develop
a solid, tested fix that's distributed to everyone?

Or do you think it's more likely that their legal department will do its
best to silence me...because they don't view this as something broken?
Or because they don't want the bad publicity?  Or because they've been
happily selling all the data they've collected, profiting a second time
from their customers without their knowledge?

To them, many of these things are features, not bugs.  They're not accidents.

In this situation -- which is increasingly common -- I think the only
responsible option is to tell the world immediately.  Why?  Because if
I tell the vendor, and only the vendor, there is a substantial likelihood
they'll attempt to silence me either by request or by coercion.  And if
someone else finds this -- which is highly likely -- and they announce it
anonymously, then the vendor is likely to blame me and retaliate.  And I
have no effective means to show it wasn't me.  Which means that even if
I do the "responsible disclosure" dance their way from start to finish,
I can still find myself on the receiving end of ruinous litigation --
or worse -- and the public interest will be much less well-served than
if I'd said something up front.  And everyone will know about it anyway.

An observation: the combination of the IoT and DRM is toxic.


9.  We live at a point in time when vulnerabilities are considered quite
valuable -- when individuals, companies, organizations, and governments
look for them, pay for them, sell them, stockpile them, use them.  There's
a lot of money and power involved, which means that we have to consider
some things that we probably wouldn't have considered a decade ago --
things we would have laughed off as well into tinfoil hat territory.

For example: do you think that everyone who works AT Microsoft is working
FOR Microsoft?

If you were an entity interested in acquiring vulnerabilities as early and
as often as possible -- like a government -- in doing so surreptitiously,
in doing so at minimal expense, then one of the most effective things you
could possibly do would be to have your own people inside the bug triage
team at Microsoft.  And Apple.  And Oracle.  And Google.  And Adobe.
And so on.  Cheap.  Effective.  Low-risk.  High-reward.  Sustainable.
Deniable.

When I've brought this up before, the objection's been raised that this
would be difficult.  In response to that, keep in mind that intelligence
agencies routinely manage to infiltrate *each other*: I don't think
they'd find it particularly challenging to place or acquire someone inside
Adobe or Oracle or any of the others.  That's a much lower bar to clear.

Or they could just ask nicely:

	https://www.techdirt.com/articles/20130614/02110223467/microsoft-said-to-give-zero-day-exploits-to-us-government-before-it-patches-them.shtml

Which means that when a security researcher thinks they're reporting
a bug to one of these teams, they are: but those may not be the only
people they're reporting it to.

The generalization of this is: if a bug is serious, if it's exploitable,
then that information is far, far too valuable to stay in any one place.
It WILL propagate.  We could debate how and why and how fast and
everything else, and that would be an interesting conversation...but
whatever conclusions we arrive at will have to include the inexorable
reality that information which has high value is unlikely to stay put.

So let's stop pretending that it will: that's outdated wishful thinking.


10. Many of the vendors and operators who are clamoring for "responsible
disclosure" have failed to take even the most obvious and rudimentary
steps to ensure that anyone actually trying to do that can reach them.

To wit: RFC 2142 dates from 1997 -- twenty years ago.  Section 4 lists the
common mailbox names for contact points relevant to this: abuse, noc,
and security.  Surely any organization interested in timely two-way
communication would maintain those.  I invite you to test them at
the vendor of your choice and see what kind of response -- if any --
you get.   How can any vendor claim to be "responsible" if it has 
failed to take basic steps such as these?  How disingenuous is it
for them to demand "responsible disclosure" when they have their
fingers firmly in their ears and aren't listening?
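
For anyone inclined to accept that invitation in bulk, here's a minimal
sketch of a probe (Python; it assumes the third-party dnspython package
is installed, and "example.com" is a placeholder).  It simply asks a
domain's primary mail exchanger whether it will accept RCPT TO for the
RFC 2142 role addresses.  An acceptance proves nothing about whether a
human ever reads the mailbox -- but an outright rejection is an answer
all by itself:

	#!/usr/bin/env python3
	# Minimal sketch: does the domain's primary MX even accept mail for
	# the RFC 2142 role mailboxes (abuse@, noc@, security@)?  Assumes the
	# third-party "dnspython" package; the domain is a placeholder.

	import smtplib
	import dns.resolver  # pip install dnspython

	DOMAIN = "example.com"   # placeholder -- substitute the vendor of your choice
	ROLES = ("abuse", "noc", "security")

	mx_records = sorted(dns.resolver.resolve(DOMAIN, "MX"),
	                    key=lambda r: r.preference)
	mx_host = str(mx_records[0].exchange).rstrip(".")

	with smtplib.SMTP(mx_host, 25, timeout=30) as smtp:
	    smtp.ehlo()
	    smtp.mail("")        # null envelope sender, as a bounce probe would use
	    for role in ROLES:
	        code, message = smtp.rcpt(f"{role}@{DOMAIN}")
	        print(f"{role}@{DOMAIN}: {code} {message.decode(errors='replace')}")

(Caveats: many networks block outbound port 25, and some mail systems
accept everything at RCPT time only to discard or bounce it later, so
treat a 250 as a hint, not proof.)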

It's quite common among security researchers -- I've had this experience
more than once -- to find that a vendor of software or services has made
it impossible to actually do the "responsible disclosure" that they demand.
No working role addresses.  Phone number that reaches technical support
whose personnel claim there exists no possible escalation mechanism.
Or: email which disappears into /dev/null.  Or: carefully recited
problem descriptions that never generate a response of any kind.
Or: reports typed into an invariably horrible web form, also apparently
destined for /dev/null.  Or: boilerplate responses that indicate a
complete failure to read and understand the report. Or...

Admittedly, this has markedly improved in some cases -- and that's a
good thing.  Better communications between everyone helps us all.
But overall the situation is still miserable.



Conclusions:

The best disclosure is full disclosure.  Assume that worthy adversaries
already know all the details (or will know VERY soon) and that those
with sufficient resources and motivation have already acted (or will
act VERY soon).  This accurately reflects contemporary reality.

Full disclosure levels the playing field: it gives defenders a fighting
chance of taking effective action before it's much, much too late.

Yes, this will sometimes mean that vendors will have a fire to put out:
but given that they KNOW this is going to happen, they should already
be well-prepared for it: they should have a firetruck or two.  E.g.,
Adobe, a company with a market capitalization of $56B, could easily
afford to have a 24x7 emergency response team on standby.  And given
their history: they darn well should.

So should Microsoft and Oracle and Yahoo and all the rest.

Do keep in mind that vendors could greatly diminish the need for such
measures by doing a better job upfront: stop rushing software and services
to market.  Stop pushing developers so hard.  Stop shipping and deploying
known-buggy code with the intent to fix it in the field.  Stop skimping
on QA.  Stop trying to sneak in security- and privacy-destroying features
because someone thought it would be a good idea.  Take the time and
spend the money to at least *try* to do it right in the first place,
and then it won't be necessary to push the big red button at 4 AM on a
Saturday, call in everyone, start all the coffee machines, and light a
bunch of money on fire frantically trying to fix something.

Worth noting is that many open source projects have a vastly superior
track record on these points.  The reaction time and thoroughness of
the response have, in many cases, been exemplary.  Which raises the question:
if a loosely-organized distributed confederation of individuals can address
a vulnerability in a timely manner, why can't a corporation with far
more people and far more money to spend?


The security community and the software engineering community and the
programming community have been trying to get vendors and operators to
be responsible for decades.  We've been screaming it from the rooftops:
do X, don't do Y, for a large number of values of {X, Y}.  Granted,
things come and go from that list as we learn more, but it's 2017 and
there are still software and service vendors making 2007's and 1997's
and 1987's mistakes all day every day.  It is disingenuous to suggest
that *we* should be in any way responsible for their failure to heed
well-considered, proven advice.  To put it another way: if they were
serious about being responsible, they wouldn't be shipping software or
operating services loaded with defects that we all knew were defects
a long time ago.  They wouldn't be trying to harvest private data they
shouldn't (see examples above among many others) or trying to compromise
the security of the very people they would call their customers.

They're in no position to demand "responsible" anything from us or anyone
else until THEY start being responsible.


Yes, this does mean that from time to time security researchers will
disclose problems that make life uncomfortable for vendors -- let's say,
a bug that allows a bypass of DRM and lets people download movies.
This may be embarrassing and inconvenient, but it's hardly earth-shattering.
And do recall, as I said above, that vendors could easily avoid this
situation by eschewing DRM: then it can't possibly happen.

It also means that from time to time a security researcher will disclose
a problem that has serious consequences -- that is, something with real
impact on the privacy and/or security of a lot of people.  I think
the security community as a whole has long since put far more than
adequate evidence on the table to support the contention that we use good
judgment in such cases.  (I think we've shown FAR better judgment than
the vendors.)  Yes, once in a while someone has done something foolish --
out of ignorance, or spite, or for other reasons -- and others will do
so in the future, but (a) they're going to do it anyway and (b) they're
not going to care that the W3C or anyone else has a policy.


The best move for the W3C, the thing that best serves the needs of
the billions of Internet users out there, is to drop this proposal:
"responsible disclosure" isn't responsible.

---rsk
