
Re: Blockchains and interledger (left-over question from the workshop)

From: Stefan Thomas <stefan@ripple.com>
Date: Wed, 2 Mar 2016 11:01:18 -0800
Message-ID: <CAFpK0Q3s+fQpyKtjfstyjAR_-rphesWSSYTG9qff78bbm01zUg@mail.gmail.com>
To: Dimitri De Jonghe <dimi@ascribe.io>
Cc: Melvin Carvalho <melvincarvalho@gmail.com>, Bob Way <bob@ripple.com>, Evan Schwartz <evan@ripple.com>, Interledger Community Group <public-interledger@w3.org>
> I admit I dont know the fine details here, but If we stick to two for
> both chains, then you could trigger a transfer in one ledger, then do a
> double spend in the easier ledger, leading to a need for a reversed
> transaction.  Have I missed something?

The whole point of ILP is to give people choice in terms of what ledger
they want to use. ILP protects you from faults in the ledgers that other
people choose, but if *you* (as a sender, recipient or connector) choose a
ledger that allows double spends, that's your problem.

A discussion of blockchain architecture is probably a bit off-topic, but
I'll say that explicit node selection as in Ripple, Liquid or Stellar is
far safer than using proof-of-work to choose nodes. In Bitcoin, validators
are chosen based on how well they can mine, which favors extremely scrappy
businesses with very thin margins. That doesn't leave a lot of room for
investments into availability and operational security. With explicit node
selection you can choose very reliable validators or simply so many
validators that it is unlikely they would be able to successfully collude.
(Collusion among large numbers of participants with different interests is
likely to be intractable.)

> Is it worth assessing possible vulnerabilities? They might impose
> constraints on ledgers that want to participate.

The paper is pretty clear on what ILP does and doesn't guarantee. It would
be a very bad idea to fix problems of the *ledger* (e.g. double spends,
loss of state, etc.) on the ILP level. We can assume that since ILP gives
all ledgers equal reach, people will prefer "good" ledgers where "good"
likely includes secure, cheap, permissive, etc. A "good" ledger must have a
very low risk of double spends, otherwise people wouldn't use it.

I want to comment on the scenarios you mentioned:

> - transfers of amounts beyond the connector liquidity

Payments that exceed available liquidity will be rejected by the connector.

> - multi-spends through various paths (will keep many connectors tied up)

Tying up liquidity incurs a fee that is freely chosen by the connector.
Connectors can create an appropriate pricing policy that prevents liquidity
starvation. The subject of fee policies does get fairly involved, because
there are many optimizations, but if you want to merely show the overall
feasibility of the system, you can imagine a fee policy where the fee is
inversely proportional to the remaining liquidity between two assets. In
that model lowering the available liquidity by 50% costs twice as much each
time. Since transfers time out, the cost of the attack is also linear in
time, and since the higher fee level attracts additional capital, the cost
rises further over time.

In that model, an attacker can succeed in (possibly significantly)
increasing fee levels at a realistic cost. Using the BAR model (see the
Interledger paper) we can say that rational actors (e.g. competing
connectors) would not carry out this attack, because the fees they would
earn would always be lower than the fees they would be charged. But
Byzantine actors could still perform it.

To reduce the effectiveness of the attack, a number of optimizations are
available. First, connectors may partition their userbase. If
they can divide their users into groups and the attacker falls into only
one of those groups, they can keep the fee level for the other groups
unaffected. So for example, if all the attacker's requests come from one IP
address, the fee level for all other IP addresses could be unaffected. If
none of the attacker's requests are from nodes with valid identity
attestations, then requests with valid attestations could be unaffected and
so on.
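A minimal sketch of that partitioning idea (the partition key and the doubling rule are illustrative assumptions, not part of ILP):

```python
class PartitionedFees:
    """Keep a separate fee multiplier per user partition, so raising
    fees in response to liquidity starvation from one group leaves
    all other groups unaffected."""

    def __init__(self):
        self.multipliers = {}  # partition key -> fee multiplier

    def partition_key(self, request):
        # Illustrative partition function: group by source IP.
        # An identity attestation would work the same way.
        return request["source_ip"]

    def multiplier(self, request):
        return self.multipliers.get(self.partition_key(request), 1.0)

    def record_starvation(self, request):
        # Double the fee level for the offending partition only.
        key = self.partition_key(request)
        self.multipliers[key] = self.multipliers.get(key, 1.0) * 2.0


fees = PartitionedFees()
attacker = {"source_ip": "192.0.2.1"}
honest = {"source_ip": "198.51.100.7"}
fees.record_starvation(attacker)
fees.record_starvation(attacker)
print(fees.multiplier(attacker), fees.multiplier(honest))  # 4.0 1.0
```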

Building the identity/reputation systems that allow partitioning like that
is out of scope for ILP, but there are other efforts underway at the W3C
and elsewhere to create those sorts of tools.

> - cyclic paths

Long paths and cyclic paths are two ways to tie up liquidity without
having to commit equal liquidity yourself. So we assume that all liquidity
starvation attacks would use one or the other. The mitigations are the ones
described above.

Other attacks we know about are:

> - DoS attacks on pathfinding

This includes creating fake paths etc. to try to tie up the pathfinding
algorithm indefinitely. The currently implemented algorithm is certainly
vulnerable to that, but based on our research so far we believe that
algorithms robust against these attacks are practical. Just to sketch one
out: connectors could keep track of how much liquidity each next hop has
successfully processed, as well as how often pathfinds through a certain
connector have failed.

An attacker could still create legitimate paths and get them to rise in the
priority list by having them execute correctly for a while and then turning
them off. But since it would take considerable time to build a reputation
and very little time to destroy it, this doesn't seem like a very impactful
attack.
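One way to sketch that bookkeeping (the weighting constants are arbitrary assumptions): reputation accrues slowly in proportion to successfully settled liquidity, while a single failure cuts the score sharply, so trust is slow to build and quick to lose.

```python
class HopReputation:
    def __init__(self):
        self.scores = {}  # next hop -> reputation score

    def record_success(self, hop, amount):
        # Reputation accrues slowly, proportional to settled liquidity.
        self.scores[hop] = self.scores.get(hop, 0.0) + 0.01 * amount

    def record_failure(self, hop):
        # A single failed pathfind wipes out most of the score.
        self.scores[hop] = self.scores.get(hop, 0.0) * 0.1

    def ranked(self):
        # Hops ordered from most to least trusted.
        return sorted(self.scores, key=self.scores.get, reverse=True)


rep = HopReputation()
for _ in range(100):
    rep.record_success("attacker", 10.0)  # a long run of good behavior
rep.record_success("steady", 300.0)
rep.record_failure("attacker")  # one failure destroys the lead
print(rep.ranked())  # ['steady', 'attacker']
```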

Note that we are essentially creating a modular standard:

- The bottom layer is the escrow layer. Any system that supports the
crypto-condition-based escrow can be made to participate securely in ILP
payments.
- The next layer is the ledger layer. Ledgers should have a consistent API,
but can be agnostic as to what type of flow (Universal, Atomic from the
paper, Optimistic, etc.) is being used.
- The next layer is the connector layer. Connectors do need to know what
type of flow is being used (Universal, Atomic, etc.), but don't care about
how the path is constructed.
- The next layer is the pathfinding layer. Pathfinding
(five-bells-pathfind) does need to construct the path, but doesn't care in
which order payments are set up, how retries are performed, etc.
- The next layer is the orchestration layer. Orchestrators
(five-bells-sender) do need ways to perform retries and track the state of
the payment, but don't care about the use case or how recipient
information is exchanged.
- The next layer is the use case layer. Applications need a way to exchange
the receipt condition, recipient account identifier and source or
destination amount, but otherwise don't care about the layers below.
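To make the escrow layer concrete, here is a minimal hashlock-style sketch (a simplified stand-in for crypto-conditions, not the actual spec): the condition is the SHA-256 digest of a secret preimage, and presenting the preimage is the fulfillment that executes the held transfer.

```python
import hashlib


def condition_from(preimage: bytes) -> bytes:
    # The condition is the SHA-256 digest of the fulfillment.
    return hashlib.sha256(preimage).digest()


class EscrowedTransfer:
    def __init__(self, amount: int, condition: bytes):
        self.amount = amount
        self.condition = condition
        self.state = "held"

    def execute(self, fulfillment: bytes) -> bool:
        # Funds are released only for the matching preimage, and only once.
        if self.state == "held" and condition_from(fulfillment) == self.condition:
            self.state = "executed"
            return True
        return False


# The recipient picks a secret and publishes only its digest; the sender
# escrows funds against that digest; revealing the secret executes.
secret = b"example-preimage"
transfer = EscrowedTransfer(100, condition_from(secret))
assert not transfer.execute(b"wrong-preimage")
assert transfer.execute(secret)
print(transfer.state)  # executed
```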

Note that individual participants may implement APIs from different layers.
Specifically, connectors may implement APIs that support pathfinding, such
as getting a feed of their liquidity data or statistical information about
which neighboring connectors are reliable.

Note further that this separation of concerns is aspirational. We may have
to compromise sometimes, but we are certainly trying to separate the layers
as much as possible.

On Wed, Mar 2, 2016 at 6:27 AM, Dimitri De Jonghe <dimi@ascribe.io> wrote:

> On Wed, 2 Mar 2016 at 07:01, Melvin Carvalho <
> melvincarvalho@gmail.com> wrote:
>> On 2 March 2016 at 05:13, Stefan Thomas <stefan@ripple.com> wrote:
>>> > How many confirmations are needed for inter ledger transfers between
>>> two block chains?
>>> Same as any other Bitcoin transaction. There are technically two
>>> transactions that happen, but the second one is not in the critical path.
>>> (In the execution phase, all transfers can execute simultaneously.)
>> But one block chain is bitcoin, for which confirmations are protected by
>> a lot of hashing.  The other block chain could be (and almost certainly
>> will be) far less robust in terms of hashing.  So how many confirms on the
>> second chain?  I admit I dont know the fine details here, but If we stick
>> to two for both chains, then you could trigger a transfer in one ledger,
>> then do a double spend in the easier ledger, leading to a need for a
>> reversed transaction.  Have I missed something?
> Very interesting point, and deserves some more elaboration... (please
> correct me where I am wrong)
> Per ledger one has two types of transactions:
> (1) the locking transaction that places the funds in escrow
> (2) the release transaction that either pays forward (2a) or reverses (2b)
> the funds.
> Considering double-spends may happen due to a simple mistake, a faulty
> ledger or malicious participant behaviour (collusion, sybil attack, etc.):
> In the case of (1), the funds are both in escrow and spent elsewhere. This
> might happen at the connector level when he receives a high rate of escrow
> requests on a certain ledger but can't keep account of his unspent outputs.
> One should be able to detect double-spends in the condition/fulfilment
> process of the escrow. Indeed, it may take arbitrary time to detect this,
> hence a safety margin might be in order. Typically for the bitcoin
> blockchain one waits about 6 blocks (i.e. 1 hour).
> In the case of (2), this means that the escrow would perform a
> double-spend, but this requires m-of-n signatures or other fulfilment
> conditions. A double-spend at this level seems unlikely. Can't think of a
> plausible scenario at the moment, but maybe let's keep the possibility open.
> That said, it looks like the slowest ledger will keep the funds locked for
> a long time (>1 confirmations). These opportunity costs will probably be
> taken into account as connector fees.
> One could think of more possible attack vectors and locking scenarios:
> - transfers of amounts beyond the connector liquidity
> - multi-spends through various paths (will keep many connectors tied up)
> - cyclic paths
> @stefan, @evan, @others
> Is it worth assessing possible vulnerabilities? They might impose
> constraints on ledgers that want to participate.
> Dimi
Received on Wednesday, 2 March 2016 19:02:09 UTC