W3C home > Mailing lists > Public > xmlschema-dev@w3.org > April 2009

Re: Conditional Levels of a Schema

From: Arshad Noor <arshad.noor@strongauth.com>
Date: Wed, 08 Apr 2009 03:39:41 -0700
Message-ID: <49DC7EED.5080700@strongauth.com>
To: Michael Kay <mike@saxonica.com>
CC: 'Dieter Menne' <dieter.menne@menne-biomed.de>, xmlschema-dev@w3.org

Michael Kay wrote:
> I'm no security expert but it seems very surprising to me that an argument
> based on security should lead you to include data in a message that the
> recipient doesn't want or need. I would have thought the "need to know"
> principle was still relevant.
	The paradigm we are used to currently, Michael, is to
	restrict access to data on a need-to-know (NTK) basis.
	In the paradigm that we promote, we stop worrying about
	data-flows and focus on access-control over decryption keys.

	The current model of security has failed because
	applications are designed with little or no data-security
	inherent in them.  They all assume that the network or
	host will provide the protection.  However, the evidence
	shows that this assumption is fallacious.

	We believe that the model needs to be turned upside-down;
	that security must begin with the data by armoring the
	data first.  This allows the data to be secure no matter
	where it goes - disks, network, log-files, CDROMs, flash-
	disks, databases, etc.  This is very unlike reality today,
	where data is safe only on the SSL/IPSec wire, but completely
	unprotected the moment it comes out of that pipe.  Where
	do you think the attackers are focusing their attention?
	Outside the encrypted pipe.

	Some vendors tout encrypted databases, encrypted disk-drives,
	encrypted file-systems.  All of these are point-solutions
	that do not cover all the risks.

	With encryption *in the application*, you've addressed the
	vulnerability once and for all, leaving key-management as
	your biggest headache.  And, that's the problem we solved
	three years ago.
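
	As a concrete (and purely hypothetical) sketch, armoring a
	single field with W3C XML Encryption might look like this.
	The xenc/ds element names and namespaces come from the XML
	Encryption and XML Signature specs; the key name, payload
	element, and cipher value are invented placeholders:

```xml
<CreditCard xmlns="urn:example:bank">
  <!-- The sensitive child element has been replaced, inside the
       application, by its encrypted form before serialization. -->
  <xenc:EncryptedData xmlns:xenc="http://www.w3.org/2001/04/xmlenc#"
                      Type="http://www.w3.org/2001/04/xmlenc#Element">
    <xenc:EncryptionMethod
        Algorithm="http://www.w3.org/2001/04/xmlenc#aes256-cbc"/>
    <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
      <!-- Hypothetical key name; resolving it is where
           key-management (access control on the key) lives. -->
      <ds:KeyName>payments-key-01</ds:KeyName>
    </ds:KeyInfo>
    <xenc:CipherData>
      <xenc:CipherValue>A1B2C3...</xenc:CipherValue>
    </xenc:CipherData>
  </xenc:EncryptedData>
</CreditCard>
```

	Wherever that document lands - disk, log-file, database,
	backup tape - the protected field stays ciphertext until a
	holder of the decryption key recovers it.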
>> That is the only downside: the data is always present.  But, in these days
>> of megabit speeds to mobile devices, and gigabit to desktop/laptops, I'm not
>> so sure it's an issue for new applications.
> Wrong, it's a big issue. In the system I mentioned with 400 messages, many
> trivial messages were reaching Gb size because the schema insisted on
> inclusion of data that the recipient of the message wasn't interested in.
> Rather than designing messages to match what the process model said was
> needed on a particular data flow, they were designing messages based on the
> static data model, so for example a complete bank account object was being
> sent when the recipient only wanted to know the current balance.

	I won't dispute that there are applications where this is a
	problem.  In those situations, the businesses must decide
	which cost they're willing to accept over the long-term:
	maintaining multiple schemas/application-logic that provide
	appropriate levels of data to an application, or dealing with
	extraneous data in a single schema/application-base.
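
	One way to keep a single schema while limiting the extra
	bytes is to mark the heavyweight content optional, so a
	sender can omit it on flows where the recipient only wants,
	say, the current balance.  A hypothetical XSD fragment
	(names invented for illustration):

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="urn:example:bank"
           elementFormDefault="qualified">
  <xs:element name="Account">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="Balance" type="xs:decimal"/>
        <!-- minOccurs="0" lets a sender drop the bulky history
             on messages where only the balance is needed. -->
        <xs:element name="Transaction" type="xs:string"
                    minOccurs="0" maxOccurs="unbounded"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```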

	The old expression - "you can't have your cake and
	eat it too" - comes to mind.  :-)

Arshad Noor
StrongAuth, Inc.
Received on Wednesday, 8 April 2009 10:40:23 UTC
