
Re: failures during the preparation phase (universal mode)

From: Xavier Vas <xavier@tr80.com>
Date: Thu, 17 Mar 2016 08:19:57 +0000
To: Interledger Community Group <public-interledger@w3.org>
Message-ID: <06c98f02b3b51e98002eea450002f242@localhost>


Thanks for picking up on my "unify prep and execution phase" thread.

On 2016-03-16 03:51, Evan Schwartz wrote: 

> First, when the sender constructs the full path, they know whether they want a fixed source or destination amount and can quote it accordingly. If they are instead going to pass off the request to the next intermediary, and let's say they wanted a fixed destination amount, how would they determine the source amount? It might work if each node could also provide a quote of how much it would cost to route the payment, but I'm not sure how much better this would be than the current setup. Definitely worth discussing more though.

Well, the routing can begin at the destination end: invert the whole
thing, route backwards to the source, and add fees as you go. Routing
doesn't have to happen in the same direction the data/money moves. It
could even start in the middle. You are from Ripple Labs, no? Ripple's
path finding works from either end, depending on which amount is fixed.
This concern is independent of whether nodes route themselves, or
whether routing is onion-style or bird's-eye. 
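To make the backwards-quoting idea concrete, here is a minimal sketch (the fee model and numbers are my own assumptions, not from the thread): given a fixed destination amount, walk the path from the destination back to the source, grossing up at each hop so that every connector can take its fee and still forward enough.

```python
def quote_source_amount(dest_amount, hop_fee_rates):
    """Quote backwards from a fixed destination amount.

    hop_fee_rates lists each connector's proportional fee, ordered
    from source to destination; we traverse it in reverse, grossing
    up the amount so the destination still receives dest_amount.
    """
    amount = dest_amount
    for fee_rate in reversed(hop_fee_rates):
        # This hop must receive enough that, after taking its cut,
        # it can forward `amount` onwards.
        amount = amount / (1 - fee_rate)
    return amount

# Three connectors each charging 1%: the sender must put in
# ~103.06 to deliver exactly 100.00 at the destination.
source = quote_source_amount(100.0, [0.01, 0.01, 0.01])
```

The same loop run in the other direction (multiplying by `1 - fee_rate`) gives the fixed-source-amount quote, which is why the choice of direction is orthogonal to how the payment itself flows.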

> The second, somewhat related, issue that will come up with payment routing but not IP packet routing is that of fees. If each node takes some kind of fee there's a strong incentive to get payments to flow through you. If the sender is picking the path, they obviously have an incentive to find the best path possible. The other nodes will have an interest in having the sender (or receiver) pay as much as possible. Anybody have ideas about how to do the node-by-node routing such that it still gives the sender the best deal?

Not so fast: IP packet routing does have a "fee", and that fee is
dropped packets. Hence TCP's slow-start algorithm, which tries to find
some optimum of high throughput and low drop count. 
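For readers who haven't seen it, the slow-start mechanic referenced here can be sketched like this (a toy model; real TCP congestion control has more states and uses ACK clocking rather than a capacity parameter):

```python
def slow_start(path_capacity, ssthresh=64):
    """Toy model of TCP slow start: the congestion window doubles
    each round trip until a drop reveals the path's capacity, then
    the sender backs off."""
    cwnd = 1
    history = [cwnd]
    while cwnd < ssthresh:
        if cwnd * 2 > path_capacity:   # loss: we overshot capacity
            cwnd = max(1, cwnd // 2)   # multiplicative back-off
            break
        cwnd *= 2                      # exponential growth phase
        history.append(cwnd)
    return cwnd, history

# A path that can carry 40 segments per round trip:
cwnd, hist = slow_start(path_capacity=40)
# cwnd settles at 16; the probe sequence was 1, 2, 4, 8, 16, 32.
```

The relevant point for payments is only the shape of the strategy: probe cheaply, observe the "fee" (drops), converge on a working rate.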

But let's get back to money first: in the end it's all reputation and
competition, just like in today's markets, financial and tangible. Note
that with onion-type routing, all that any end user or connector node
will ever be communicating with are the other connectors, with which --
because of reputation concerns -- they will each have a relationship
that spans many transactions, and which is hence longer-lasting and
slower-changing relative to the speed of individual transactions.

As opposed to today's human/paper-driven markets, though, it all happens
at internet speed. If you as an end user want to send $100, you could
have your software agent first make 5 transactions of $1 with each of 5
initial connectors, sending $25 in total. Those transactions take 10
seconds each, in total a minute. It then picks the connector with some
optimum of throughput (destination amount) and low variability across
its 5 transactions. Within this minute, the agent has played the
competition and established some trust, and it uses that to send the rest.
That's conceptually not so different from the TCP slow start described above.
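The selection step the agent performs could look something like this (connector names, delivered amounts, and the scoring rule are all illustrative assumptions, not anything specified in the thread):

```python
import statistics

def pick_connector(probe_results):
    """probe_results maps a connector name to the list of destination
    amounts its small $1 test payments actually delivered.

    Score each connector by mean delivery minus spread, favouring
    high throughput AND low variability, then take the best."""
    def score(amounts):
        return statistics.mean(amounts) - statistics.pstdev(amounts)
    return max(probe_results, key=lambda name: score(probe_results[name]))

probes = {
    "connector-a": [0.97, 0.96, 0.97, 0.95, 0.96],  # steady delivery
    "connector-b": [0.99, 0.80, 0.99, 0.85, 0.99],  # better peaks, erratic
}
best = pick_connector(probes)
# Picks "connector-a": its lower variance outweighs b's higher peaks.
```

The scoring rule is where the game theory lives; a real agent might also weight in settlement speed or past reputation, but the probe-then-commit shape stays the same.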

If that seems overly complex, well, welcome to Really Large Internet
Scale Systems (TM), where nothing is deterministic and things are always
in flux, but game theory and intelligent agents can provide varying
degrees of certainty. I think a lot of people coming from the
Bitcoin/Ripple space, where everything is absolute (including absolute
failures), will have a hard time wrapping their heads around that.

If you want an instant mental picture to guide your thinking, think of a
BitTorrent swarm, assuming you have a rough idea how that works. There
is a constant joining and leaving of peers and varying availability of
data blocks, and connections are renegotiated constantly. Hence the end
user's download speed fluctuates. There's even some tit-for-tat game
theory in there somewhere. 

- Xav 

Received on Thursday, 17 March 2016 08:20:00 UTC
