>> Okay so we're delighted to
have Jacob Leshno from Columbia,
tell us about the Economics of
the Bitcoin Payment System.
>> Thank you. Very
happy to be here.
Thank you for coming. So, I'm
sure that you've all
heard about bitcoin.
I just want to frame
what I want to tell
you about bitcoin.
There's a lot of aspects to
the system that you
can think about,
the way that I
want to think about it
is, as a payment system.
So, there's a new currency,
there's a new coin,
but a lot of people have
been printing coins.
The really novel innovation of
Bitcoin is that it's
an independent system.
It's a system which
is not controlled by
anybody, it's fully
decentralized,
and it allows you to
do the same kind of
transactions or provide
the same kinds of
services as PayPal
does, or Fedwire.
It can hold a balance
in some currency,
and you can send the balance
to some other people.
So, you can make transfers.
And the really big
difference is that,
PayPal is operated by a company,
Fedwire is operated
by a company.
They maintain
the infrastructure,
there's no such counterparty
that operates Bitcoin.
Bitcoin kind of
operates as a collective.
So, I have to
have a slide showing that
bitcoin is kind of serious;
there are two or
three things to take from this slide.
One, is the bitcoin
market cap is pretty big,
it's like a lot of
billions of dollars.
Second, this is from September,
since then bitcoin
went from $4,500 here
to $19,000 and now it's
about $9,000 $10,000,
so it's incredibly volatile,
so things move
pretty quickly here.
And the third thing
that I want to
show you is that it's
not just about bitcoin,
there's a lot of
different cryptocurrencies,
and each one of
those cryptocurrencies
is its own system
in its own design.
And probably what we
want to think about is,
how you design those kind of
systems, kind of trade-offs.
So what is the system
supposed to do?
It's to provide service to users
just like PayPal does.
You want to hold an account,
have some balance,
and be able to transfer
to other entities.
And generally this is
the kind of business
where you think of
natural monopolies,
because it's better
for me to be in PayPal,
if all my friends
are also on PayPal.
And if a new technology comes
up but none of
my friends use it,
it's hard for this technology
to gain market traction.
So in economic terms,
that would be
a network externality
that makes this thing sort
of a natural monopoly.
So, we may all be very
happy to move to PayPal
but we know that once
we move to PayPal,
PayPal will have
a strong grip on the market,
it will be very hard for us
to move to somewhere else,
and that's kind of problematic
because PayPal will
want to increase
the fees once it's
locked everybody in.
And bitcoin is going to be
different because in bitcoin,
you have some sort of
protocol that connects users,
and some servers that
provide the service,
but nobody's controlling it.
So, the title of
the paper, "Monopoly without
a Monopolist", is not there
to suggest that
bitcoin will become
a monopoly, far from it.
But even if something like
bitcoin becomes a monopoly,
it will not do some of the
things that a monopolist does.
Because a monopolist has
full control of the system,
they decide how many
servers to deploy,
how much to charge the users,
but in bitcoin there's
no such entity that
gets to decide how
much to charge,
or how many servers to deploy.
And what we're trying to
do in this paper
is to try to understand how
all those decisions will
get determined under
the bitcoin system.
So you'll still need servers,
you still need to charge users,
you still need to pay
the infrastructure
that you are using,
how is this payment going
to be determined in bitcoin,
how are the servers
going to be financed,
and how many servers
will the bitcoin system
decide to get?
So let me skip
this and just say
what I'll do now.
I'm guessing that
a lot of people
here are familiar with bitcoin,
but let me just go through
a brief overview, at a high level,
of how the system operates.
From that I will
derive an economic model
that just focuses on
the properties of the system.
What does it mean
for the way that we
can model the system
as an economic system?
And then, we'll
derive some questions
that will tell us what will be
the prices that people
will pay under bitcoin,
how efficient
this pricing scheme will be,
and what does it mean for
the design of the system?
How can we think of
designing different kind of
bitcoin systems to make
them more efficient?
Okay. So, what does
bitcoin look like?
So the first thing is that
bitcoin does not store balances,
it stores all the transactions,
because that's much
easier to update.
If you want to
update the ledger,
you don't need to
rewrite the transaction,
you just append
a transaction to the end,
and that means that
everybody can update
by just appending the
recent transactions to
their ledger as well.
Each transaction says, "I
am the owner of address X,
I have 19.5 bitcoin
in this address,
and I want to send three of
those bitcoins to address Y,
and 16.4 to address Z,
and quite importantly,
I decide what is
the transaction fee that I'm
willing to pay for
this transaction".
So in the bitcoin system
every time you
send a transaction,
you choose how much
transaction fees
you want to pay for
this transaction.
And now the ledger
is just going to
be a long list of
all of those transactions,
organized into blocks.
So think of a block as
representing 10 minutes
of transaction data,
and if I know the
whole sequence of blocks,
I can calculate
the balances of everybody.
And then I can see whether
a new transaction is valid.
To make sure that the owner
authorized this transaction,
we have a cryptographic
signature.
And to make sure that the owner
actually has the balance
that he says he does,
you can look at
the whole ledger and
calculate that somebody
actually gave him
the money from a source
that's valid. Okay.
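The validity check just described can be sketched in a few lines of Python. This is a minimal illustration, not the actual Bitcoin data format, and the cryptographic signature check is stubbed out as a boolean flag:

```python
# Replaying the ledger to validate a new transaction. The dict-based
# transaction format and the "signed_by_owner" flag are illustrative
# stand-ins, not Bitcoin's real data structures.

def balances(ledger):
    """Compute every address's balance by replaying all transactions."""
    bal = {}
    for tx in ledger:
        spent = sum(tx["outputs"].values()) + tx["fee"]
        bal[tx["from"]] = bal.get(tx["from"], 0) - spent
        for addr, amount in tx["outputs"].items():
            bal[addr] = bal.get(addr, 0) + amount
    return bal

def is_valid(ledger, tx):
    """Valid if signed by the owner and the ledger-derived balance
    covers all outputs plus the transaction fee."""
    if not tx["signed_by_owner"]:  # stands in for a signature check
        return False
    spend = sum(tx["outputs"].values()) + tx["fee"]
    return balances(ledger).get(tx["from"], 0) >= spend

# Address X received 19.5 bitcoin and now sends 3 to Y, 16.4 to Z,
# leaving 0.1 as the transaction fee -- the example from the talk.
ledger = [{"from": "COINBASE", "outputs": {"X": 19.5}, "fee": 0,
           "signed_by_owner": True}]
tx = {"from": "X", "outputs": {"Y": 3.0, "Z": 16.4}, "fee": 0.1,
      "signed_by_owner": True}
print(is_valid(ledger, tx))  # True
```

Note that the validator needs only the ledger itself: anyone holding the full history can run the same check and reach the same answer.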
So, the big challenge
that Bitcoin or
the Blockchain
technology solved is
how do I make sure that
I have many miners,
many servers in the system,
that will all process
transaction simultaneously and
get to a consensus about which
transactions went through.
And the way that it's done,
is that I'm going to
have many servers.
Each server is going
to be called a miner.
I'm going to make them
all participate in
this kind of wasteful activity
that will randomly
select one of them.
And at some level of abstraction,
I can think of all of
them holding copies
of the blockchain,
transactions coming
in to some pool of
unconfirmed transactions that
is available to everybody.
And then, I need to append
those transactions to the ledger in
a way that will
maintain the consensus.
So what I do,
I select one of
those miners at random.
This miner that's got
selected gets to suggest what
is the block that's
going to be appended
to the blockchain.
He says, "I got selected,
I took some transactions
from this public mempool.
I want to process those
transactions, I will add them.
I confirm that all
of those are legal."
>> So, how is this miner
randomly selected?
>> I will get to this in a bit.
And you can think that
every 10 minutes on average,
it's not exactly
every 10 minutes.
It's a Poisson process
of selection, but
think of it as every 10 minutes
one of those miners
will get selected,
he gets to assemble the block,
and that block has 10 minutes
of transaction data.
There's a limit on
how much information,
how many transactions you
can put in this block.
So, as of July of this year
it was one megabyte.
Now, there are some amendments.
It's different between
different systems but let's
just think of this as
being one megabyte,
or roughly 2,000
transactions per block.
And this miner said,
"These are all the
transactions that
I decided to process."
He propagates it to
the rest of the miners,
the rest of the miners
validate that the block
is actually legal
and that all the transactions
that this miner
wanted to put through
are legal transactions.
They have their signatures.
They don't move balances
from accounts that didn't
have balances to begin with.
And once they all
agree that this block
is actually valid,
they'll reach consensus
on this block,
and then they can start
mining the next block.
And then, another random miner
will be selected for
the next block, and so
on, and so on.
So, because I have
this random selection,
I get that none
of the miners are
influential in the system.
I don't really need to pay
attention to any one of
those particular miners,
because the chance
that he will
actually get selected for
those 10 minutes is very small.
And even if he
does something bad,
he blocks my transactions,
my transaction will just
wait for the next block
which will just be processed
as usual by another miner.
So, let me answer a few
different, frequently asked questions.
So one is, why would
the miners do this?
The miners do this for profit.
We're not assuming that
the miners are honest or
that the miners are
benevolent in some way.
The miners are totally
here to make a profit,
and the way that
they make profit
is that they get paid
every time they get
selected to mine a block.
How do they get paid?
In two ways. One is that
the system allows them to
have a special transaction
that creates money from
nothing and moves it to them.
So, basically printing money.
And this is currently
the majority of the reward.
So, every block in
bitcoin, every 10 minutes,
the miner that mined the block is
allowed to create 12
and a half bitcoin,
and move it to their account.
But this is only a
short-term reward,
in the long term this
is going to go away.
About every four
years, approximately,
the amount of bitcoin
you print per
block gets cut by half.
>> When was the last time?
>> About two years ago,
it was 25 bitcoin per block.
Yeah, I don't remember
the exact date.
So, one question that
kind of started us
thinking about this is,
what will happen
once you halve it?
And at some point,
after you halve it enough
times, you go below
the smallest unit of precision,
and the reward per block
becomes literally zero.
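That arithmetic can be made concrete. The constants below (a 210,000-block halving interval and integer satoshis) are Bitcoin's actual parameters, which the talk does not spell out:

```python
# The halving schedule: the per-block subsidy starts at 50 BTC, kept
# as an integer number of satoshis, and is halved every 210,000
# blocks (about four years), so the integer division eventually
# reaches exactly zero.
SATOSHIS_PER_BTC = 100_000_000
HALVING_INTERVAL = 210_000  # blocks

def subsidy(height):
    """Block subsidy in satoshis at a given block height."""
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:
        return 0
    return (50 * SATOSHIS_PER_BTC) >> halvings

print(subsidy(0) / SATOSHIS_PER_BTC)                     # 50.0 at launch
print(subsidy(2 * HALVING_INTERVAL) / SATOSHIS_PER_BTC)  # 12.5, as in the talk
print(subsidy(33 * HALVING_INTERVAL))                    # 0 after 33 halvings
```

After 33 halvings the subsidy rounds down to zero satoshis, which is the point at which transaction fees become the only compensation.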
So, what is the other source
of compensation?
The other source is those
transaction fees
that I mentioned,
that whoever sends
a transaction,
the user can decide on
how much transaction fees
he wants to pay.
So, those are the two sources
that go to a miner.
If you process a bunch
of transactions,
all the transaction fees from
those transactions go to you.
>> So, the user,
when he makes a
transaction, just posts with it
how much he's willing to pay?
>> Yes.
>> Okay. And his bid
might be ignored
by one miner
who says that is too low,
and then be taken by
another miner.
>> Yeah.
>> Or maybe if he offers
too little everyone says,
"Well, I don't want this guy,
I'll take other transaction."
>> Yeah. So, we will
analyze this exact game:
how should you select
your transaction fees?
So, your question, how do you
select somebody at random?
That's a very marvelous
innovation of Bitcoin,
that we don't have
any trusted authority.
We don't have identities.
How do we do random selection?
Well, the answer is let's
ask everybody to do hashing,
and you'll hash and try to
find a hash that has
a low enough value.
Basically, the only way
that you can try to
solve this is
brute power, brute force.
If you do more hashes,
your probability of finding
one that's low is
going to be higher.
So, let's just adjust
the difficulty.
How low do you need to go
such that, collectively,
we all together will
find a hash that's this low
every 10 minutes on average?
And then, every time you find
a hash that's low enough,
you get permission to mine
a block, and that will be
a Poisson process with
an average arrival
of once every 10 minutes.
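A toy version of this hash lottery is below, with a deliberately easy target so it finishes instantly. This is illustrative only; real Bitcoin hashes a block header with double SHA-256 against an astronomically smaller target:

```python
# Toy proof-of-work lottery: hash (data, nonce) pairs until the hash
# value falls below a target. Lowering the target makes winning rarer;
# tuning it is how the network keeps one win per ~10 minutes.
import hashlib

def below_target(data, nonce, target):
    digest = hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()
    return int(digest, 16) < target

def mine(data, target):
    """Brute force: the only strategy is to try nonce after nonce."""
    nonce = 0
    while not below_target(data, nonce, target):
        nonce += 1
    return nonce

easy_target = 2**256 // 1000  # ~1-in-1000 chance per hash
nonce = mine("block with 2000 transactions", easy_target)
print(below_target("block with 2000 transactions", nonce, easy_target))  # True
```

Because each hash attempt is an independent coin flip, doing more hashes per second buys you proportionally more lottery tickets, and the aggregate arrival of winners is approximately Poisson.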
Okay. The last thing is
that you should also ask
a lot of questions about
whether it is really an
equilibrium for
all the miners to accept
other miners' blocks
and build on top
of them, or whether
maybe they can
do other shenanigans.
We are going to avoid
this question for today and
just assume that the system
is working properly
in equilibrium.
There are some very nice papers.
First, the original paper,
Nakamoto 2008, is the paper
that introduced Bitcoin.
Nakamoto is a pseudonym.
Eyal and Sirer is,
I think, one of the nicest papers
here; it actually says
that the original design
is almost correct,
but there's something
it kind of ignored
that miners can exploit,
and there's a fix for that.
And basically, if
all the miners are small enough,
then it's equilibrium
for all of them to be
honest and follow
what I described. Yes?
>> The Poisson process, is it
an assumption or is it actually
the way
the system is designed?
You repeatedly said
it was a Poisson process.
>> What about the hash function,
the difficulty of finding?
>> So, we have a lot of
people trying to solve.
>> Okay. Sorry, I
can see the limit.
>> So, collectively you can
think that this is
approximately Poisson.
>> The assumption is
that there is nothing else
you can do than go on and try
>> Yeah.
>> It does not matter
how many people we have,
the only thing they can do is to
go on and try it,
it's a [inaudible]
>> Okay.
>> If they can do other things,
that's an excellent question.
>> Yeah.
>> So, basically, boiling
those down to
a few properties of
the system that I'll
use for a model.
So you have users that choose
their transaction fees.
Miners, I will assume,
are all small and
are price takers, in the sense
that users do not
care about what
a particular miner does.
A miner can say,
"I don't process
any transactions
that give me less than two bitcoin
in transaction fees."
Nobody will respond,
because the chance
that he's going to get
selected is too small.
So, this is really
an assumption, okay?
We talk about this
in the paper and it's
actually not that problematic,
but it is an assumption.
Also, as another abstraction,
I'm going to assume
that all the miners
are symmetric.
They have the same costs,
and the same ability.
It's also false in practice,
but it's a useful abstraction.
And what will happen
if you generalize
this will be
pretty normal dynamics
of competition with
competitive advantage.
We have free entry
and exit of miners.
A miner is not
committed to the system.
They can start mining,
they can stop mining
whenever they want.
In practice, if you want
to start mining today,
you may want to buy
dedicated hardware,
so there is some fixed cost.
We're going to ignore that
and just assume that there is
zero fixed cost and
just the marginal cost
of electricity when
you mine. Yes?
>> Could you please clarify
that assumption a little bit?
Do you mean that
only small miners are price
takers or do you mean that,
in your assumption, that
miners are small, in general?
>> I'm going to assume
that all miners are
small and therefore
they're all price takers.
>> That's not quite true, right?
>> That's not quite true.
I can talk about why this is
still a good assumption
and a good way to analyze.
In a sense, once
you have free entry,
I don't really need this
assumption, but it's just easier
to explain when I assume
that everybody is small.
The really important assumption
here is the free entry
not the fact that
everybody is small.
>> As far as the model
is concerned,
there's free entry
and exit of miners,
but in the economics
of real life,
people tend to invest
in infrastructure,
invest real money in hardware,
and because of that they're
committed to stay mining,
to at least recover
their costs [inaudible].
>> There is a more
complicated dynamic and what
I'm going to do here is
a very simplified version,
but roughly you can think of
those results as
roughly holding.
Not exactly but roughly,
but I'll talk about that
in a couple of slides.
And last, very
important property
of the system is that
the capacity of the system,
the number of transactions
the system can serve,
is independent of
how many servers
or how many miners
are in the system.
So, this is a bit
of a weird property.
I add more computational
power to the system,
but I don't add
any more capacity.
And the reason is
that I just select
one miner at random
to process one block.
It doesn't matter
how many miners there are.
>> The difficulty
is just [inaudible].
>> Yep.
Okay. So, the throughput
is exogenously
determined by some parameters
in the protocol,
the somewhat arbitrary
parameters that Bitcoin
had in July:
one megabyte per 10 minutes.
So, in terms of
an economic model,
we will translate this to
the following kind of model.
We have capital-N
small miners.
I'll think of N as
a continuous number,
just for making my life easier.
They have equal computing power,
equal cost of mining,
c_m per period of time.
There's many potential miners
and free entry and exit.
A block is mined by one of
the miners at
the Poisson rate mu,
mu being once every ten minutes, let's say,
and every time
a block is mined, up
to K transactions can be
processed within the block.
A miner can post
a block with fewer
than the full number
of transactions,
but he has a limit
of one megabyte.
So, let's say,
one megabyte is up to
K transactions and
all transactions
take the same amount of space.
In practice this is also
another simplification
but, you know.
What about the users?
Users arrive also
at a Poisson rate.
This is really an assumption,
but we need some arrival
process for the users;
we might as well assume
Poisson to make life easier.
And we assume that
the arrival rate
of users is below
the capacity of the system.
So the total capacity
of the system is up to
K transactions at rate mu.
So, K times mu is on average
how many transactions the system
can process per unit time.
We assume that
the average number of
arrivals of transactions that
arrive is less than that.
So on average the system has
sufficient capacity
to process everybody.
And this is intentional
because I think
this is the more
interesting regime,
that the system can
process all the
transactions that come in.
I'm going to identify a user with
a single transaction:
a user that arrives is
a single transaction
he wants to send.
He can select the
transaction fee
he wants to bid.
It's going to be
some non-negative number,
b, that he wants to bid.
And users have
some value for getting
the transaction through that
I'm going to assume
is just large enough,
but they don't like
getting delayed.
They would like to be processed
as quickly as possible.
How much they dislike
delay is heterogeneous.
Some users have
a lower delay cost,
some have a higher
delay cost and
the distribution is known. Yes.
>> You said that everything
is decentralized,
like the random selection
of the miner,
but what about setting
these parameters?
Like mu, for example-.
>> Yeah, I'm going to talk
about this at the end.
I'm going to think about
how to set those parameters.
technically, who sets them?
>> Whoever designed
the protocol in the beginning.
>> But there are adjustments?
>> So then, adjusting
those protocol parameters is
a pretty difficult process,
because it's like updating
the Wi-Fi standard.
Like we all need
to agree to move
to a different system
with the new update,
and if you want to
adjust those parameters,
we need to get to
a new consensus
on a new protocol basically.
>> So this mu is fixed for
Bitcoin at 10 minutes,
so it's hardcoded in
the protocol [inaudible]
>> Because lambda, in fact,
is increasing, and so you
have to increase
mu, as well, right?
>> So, that's exactly what
I'm going to talk about.
So for the first part,
for like the next 10 or
10 slides or something,
think of mu being fixed,
lambda may change.
So you may have a lot of
congestion on the network,
less congestion on the network
but the bitcoin system will
not change its parameters.
Then we'll ask, what if you
could code something
that responds to that?
>> One thing that I
might have missed:
it looks like there
is a lot of symmetry
here in the miners,
but previously you
mentioned that there are
some miners with more power.
>> Yeah.
>> So you're ignoring this?
>> I'm ignoring that.
The implications will be
pretty straightforward
once, you know.
If you want to include
miners that are bigger or
have some cost advantage,
they'll make some profits
in the normal, like,
just like any other company
that has competitive advantage
or cost advantage.
So basically you need to
analyze two sides here.
You need to analyze what will
the miners do and what
will the users do.
The miners are easier.
So let's start with them.
Okay. Oh and I should say.
Okay. So I have
a couple of assumptions.
One is when a user comes
to post a transaction,
they don't observe the queue.
They know the steady
state distribution
but not the particular
state of the queue.
This is mostly for tractability.
Second, all the users
want to be served
in the system.
They have enough value
to want to participate.
And this is a
reasonable assumption.
You'll see in a few slides.
I'm assuming that there's
no new coins that
are being minted,
the entire compensation
to miners is
fully from their
transaction fees.
And you can include
the block reward,
nothing really changes.
And I'm going to assume that
the system operates
correctly throughout.
So there's enough miners,
the system works, it's trusted,
all the users work,
and under this
assumption I want to
analyze how much
will the users pay?
How many miners will I have?
So now, analyzing the miners.
If I'm a miner, if
I don't get selected,
there's nothing I can do.
If I do get selected,
what will maximize my profits?
I can assemble a block
of up to K transactions,
I should take the K
highest paying
transactions and
ignore the rest.
So basically, all the miners
do the same thing.
They all take the K
highest fee transactions.
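That selection rule is essentially a one-liner; here is a minimal sketch with made-up transaction ids and fees:

```python
# The selected miner's problem: out of the unconfirmed pool, put the
# K highest-fee transactions in the block and ignore the rest.
# Transaction ids and fee values below are made up.

def assemble_block(mempool, K):
    """Return the K transactions paying the highest fees."""
    return sorted(mempool, key=lambda t: t["fee"], reverse=True)[:K]

mempool = [
    {"id": "a", "fee": 0.0001},
    {"id": "b", "fee": 0.0050},
    {"id": "c", "fee": 0.0008},
    {"id": "d", "fee": 0.0020},
]
block = assemble_block(mempool, K=2)
print([t["id"] for t in block])  # ['b', 'd']
```

Because every miner faces the same problem and sees the same mempool, they all assemble the same kind of block, which is what lets the analysis ignore which particular miner wins.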
And what will be
the revenue that
each miner expects to get?
Well, if all the
transactions go through,
the total amount that miners
expect to receive is equal to
the total amount that
users pay in expectation.
And each miner
expects to get one
over N of
the total revenue,
because he's going to be
selected with
one over N chance.
So, let's say that
the revenue, the total transaction
fees that users
post, is this Rev that I
calculate from somewhere.
Given that, the number of miners
should be such that the revenue
divided by N is exactly equal
to the cost of a miner.
Every miner makes zero profits,
because if this was
bigger, I'll
have more miners
wanting to join, driving N up.
If this was less, I'll have
miners exiting, driving N down.
This can only be
stable if the revenue is
exactly offset by
the cost of mining.
And I have zero profit
for miners and that
basically says that
the number of miners that
they get is determined by
the total revenue
that the system
has divided by
the cost of mining.
So if I want to know
how much infrastructure
is going to be
deployed by the system,
I just need to ask, "How
much am I paying the miners?"
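The free-entry condition just derived, Rev/N = c_m, pins down N directly. A minimal sketch, with made-up dollar figures:

```python
# Free entry drives each miner's expected profit Rev/N - c_m to zero,
# so the equilibrium number of miners is N = Rev / c_m. The revenue
# and cost numbers below are illustrative.

def equilibrium_miners(total_revenue, cost_per_miner):
    """N such that each miner's 1/N share of revenue equals its cost."""
    return total_revenue / cost_per_miner

# $6,000 of fees per block period and a $3 per-period cost per miner
# support 2,000 miners; doubling revenue doubles the infrastructure
# deployed, with no change in the system's capacity.
print(equilibrium_miners(6000, 3))   # 2000.0
print(equilibrium_miners(12000, 3))  # 4000.0
```

The striking implication is the last comment: paying the miners more buys more infrastructure but not more throughput, since capacity is fixed by the protocol.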
So, is the zero profit
assumption crazy?
Do miners make
zero profit or not?
They invest in
infrastructure and hardware.
So here's a paper from 2016
that had some estimates of
how much the miners spend:
the total amount across all miners,
how much they spend on
electricity costs, on hardware.
And back in October 2015,
when they looked at the system
and did the estimates,
the system was processing
about one and a
half transactions a
second and the total
amount spent by
the entire bitcoin network
per transaction was
about six dollars,
that was spent in
trying to just get
the hash that will get you
to be selected for
mining the block.
And some extra, very small
amount that was actually
spent on the actual validation
of the transactions.
>> So the first row
is electricity?
>> So yes, so this
is electricity.
This is depreciation of
the hardware because they
assumed the hardware is good
for a year, something like that.
And you get, that six dollars
per transaction is
kind of crazy, right?
That's very expensive
per transactions.
The reason is that
the people who posted
these transactions were
not paying the six dollars;
the six dollars was coming mostly
from the printed Bitcoin.
And now, this gives you
the cost per transaction.
This column gives you what would
be the cost of transaction
if we scaled up the system
and things would
become more efficient.
I can say it's
still pretty high,
it's more favorable, but
it's still pretty high.
And now we can take
those numbers and compare
how much they paid versus
how much they earned.
So in total, if you
take those numbers and
calculate per 10 minutes,
in total the entire Bitcoin
network spent about $6,000.
And at that time the reward
was entirely from
the printing of new Bitcoin.
Transaction fees
were essentially zero.
And at that point in time,
each block got 25 BTC,
each with a market value
of about $300.
And, translating to dollars,
because the costs are in dollars;
the costs of the miners are
electricity, which is a cost in dollars.
So the reward in dollars
was something like $7,500.
So they're not doing
exactly zero profit,
but you can see that ballpark.
Seems like zero profit is
not a terrible approximation.
So, that's what I mean, okay?
And here's a picture of
what mining looks like.
It's big warehouses
with a lot of machines
that just do a lot
of hashing very quickly.
So that tells me what
the miners do and
how many miners there will be.
Now I need to ask,
how much will
the users actually pay?
And that's a bit of the,
I think the more
interesting part also,
because in Venmo, or PayPal,
or other systems you have
a monopolist that sets a price.
Here we don't have
anybody setting the price.
We just created such a system
and now the question is,
will anybody actually pay
anything? How much
will they pay?
Will we be able to extract
anything from the users
to finance this system?
So, let's think of
what we got from
the equilibrium of miners.
Each one of the miners
now looks at
the K highest transactions
and processes them.
So essentially what
happens is that
the users play a
congestion queueing game.
I don't really care
how many miners there are,
because one of them is going
to be selected, and
whoever gets selected,
they're all going to do the same thing.
So basically, I have a game
where blocks are
mined at rate mu.
Each block will process
the K highest-fee transactions.
So it's as if I can
bid for priority.
If I pay a higher
transaction fee,
I get more priority
in getting processed.
And now we want to ask,
what is the equilibrium of
this congestion queueing game?
Like each user will have
a trade-off between
how much they bid and
how much is
the delay cost, okay?
So the utility of a user
will have some value
for service R,
that hopefully makes
the entire thing positive.
Some delay cost
that depends on what
is my cost-per-unit delay.
What is my delay
given that I bid B?
And the equilibrium
distribution of bids is
given by G. And this function
will say what
the expected delay is, given
the stationary distribution
of the system.
And then I have to
also pay my bid.
So if I bid higher,
this will make me pay more,
but I'll have less delay cost.
So how does this look?
So the delay depends
basically just
on how many people come and
cut ahead of me in line.
There are basically two
kinds of people.
People who bid less than me:
I don't even see them,
because I cut ahead of them;
I don't care how many
of them there are.
And the people who
bid more than me:
they cut ahead of me. So,
what I care about is
how many people arrive
that will have
a higher priority than me.
Because people with
high delay cost
will want to cut
through the line more.
You'll get that, basically,
a standard argument
will give you that
the bid must be increasing
in the delay cost.
People who are more sensitive
to delay get high priority.
So the rate at which
higher-priority people arrive is
basically how many people
have a higher delay
cost than you,
oh sorry, where is this?
Yes, how many people have
a higher delay cost than you?
That's this F-bar of c_i.
And, basically
the parameter that
you want is the
congestion parameter,
like what is the ratio between
arrivals to the capacity
of the system,
multiplied by only the fraction
that's above you.
That's the relevant
congestion parameter for
you and your expected delay
is basically a function of this,
of your congestion parameter
given your place in
the distribution of cost.
So how long will you wait when
blocks are of size K, yes?
Also, notice that inside here,
actually, this does
not depend on mu.
This expression does
not depend on mu.
This is how many blocks
will you have to
wait if each block is of size K,
and the congestion parameter,
that's the effective load
of people that have
a higher priority than you,
is this rho times your place
in the distribution.
And this translates
from waiting
in blocks to waiting time.
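The delay in this batch-processing queue can also be seen in a quick Monte Carlo sketch. The parameter values are illustrative; only the arrivals that outbid you enter "your" queue, at the effective rate lam_eff:

```python
# Monte Carlo sketch of the batch-processing queue: transactions that
# outbid you arrive at an effective Poisson rate lam_eff, blocks
# arrive at Poisson rate mu, and each block clears up to K waiting
# transactions. All parameter values here are illustrative.
import random

def avg_wait(lam_eff, mu, K, horizon=50_000, seed=0):
    """Average waiting time of served transactions."""
    rng = random.Random(seed)
    t, queue, waits = 0.0, [], []
    while t < horizon:
        t += rng.expovariate(lam_eff + mu)          # time to next event
        if rng.random() < lam_eff / (lam_eff + mu):
            queue.append(t)                         # an arrival joins
        else:
            for arrival in queue[:K]:               # a block clears up to K
                waits.append(t - arrival)
            queue = queue[K:]
    return sum(waits) / len(waits)

# With capacity K*mu = 2: at load 0.25 the delay is short, at load
# 0.95 it blows up, matching the convex curve on the slide.
light = avg_wait(lam_eff=0.5, mu=1.0, K=2)
heavy = avg_wait(lam_eff=1.9, mu=1.0, K=2)
print(light < heavy)  # True: delay explodes as load approaches capacity
```

The simulation reproduces the qualitative shape of the closed-form expression: nearly flat at low congestion, then sharply convex as the effective load approaches one.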
Okay, so now each agent
will optimize this, and
then we can just solve
the first-order conditions
and get an expression.
We can also do some analysis
and try to get what is
this expected delay.
Using just queuing formulas.
>> It's like a
multi-queue multi-server
kind of queuing system, right?
>> It's a batch
processing queue.
So the fact that it's
batch processing,
like that's the previous slide,
why we have some expression.
And those expressions
are really nice;
you need to find the root of
a polynomial to actually
calculate this.
But, Mathematica actually
calculates it very nicely.
And I'll show graphs in
the next slide I guess.
The more high level
interesting thing is
that the amount
that the user pays,
is actually exactly equal to
the externality imposed
on other transactions.
So in equilibrium,
the congestion,
the delay that
each user suffers,
is the result
of an efficient assignment
of priority.
People that have high delay costs
get high priority.
And they pay
exactly the externality,
so it's as if you sold
priority in a VCG auction.
So that's actually pretty nice.
You can think it's
an efficient assignment
of priorities in the system.
The thing is that
your externality
will depend on the overall
congestion in the system.
If there's a lot of
congestion in the system,
you impose a much bigger
externality on other users.
So, this is the expected delay
for the user with
the lowest priority.
And now if the user
with the lowest priority
would bid a bit higher,
would just cut from
getting this delay
to a bit lower delay.
And this also tells
you what the externality
is going to be:
it's going to move some people a
bit higher at the margin.
You can see that
it's very flat here,
but gets very convex.
It's standard queuing intuition.
And because it gets
very convex here,
people will wait a lot
and will pay a lot
when the congestion parameter
gets closer to one.
So, here's how much
people will pay.
It's a graph of how much people
will pay assuming
that the distribution of
delay cost is
uniform, zero to one.
Each line is an agent with
a different delay cost.
And the X-axis is what
the overall congestion
in the system is.
If the congestion is low,
basically nobody pays.
If the congestion is very high,
people start paying a lot.
So, let me skip this.
So, what does it
mean for this kind of
system? Is this good?
So, it's good in some ways.
There are some things that's
very nice about the system.
One is no transactions
are excluded.
So if you have a firm that
tries to maximize profits,
usually they'll
increase the price
until some people go away,
go to another alternative.
In this case, we don't care
about the value of agents;
all the agents may
have very high value,
very high surplus
from using the system.
We're not excluding anybody.
And moreover, even if you
pay zero to the system,
we still process you.
We might just make
you incur a delay.
So on a social perspective,
this is actually something
very appealing for the system.
It's the opposite of what you'll
get if you set up
a monopolist around this.
>> If you pay actually no fee,
then why would the.
>> The slope is less
than what [inaudible].
>> So he can,
but he is not necessarily
motivated unless.
>> Eventually there will be a block with less than K transactions other than yourself, so you get in whether you're paying or not.
>> Yes, you can have a system
where some people are
sensitive to delay and they
pay and they finance the system,
and the people who
are not sensitive
to delay and don't
need to pay anything.
So in a sense,
if you want to think about
a financial system
that's widely accessible,
this is a pretty nice property.
You allow everybody access, but you discriminate on the speed of transactions. You let everybody participate, but still make it bad enough to participate for free that some people will be willing to pay.
So you get that
all users can have
a strictly positive net surplus.
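A minimal toy simulation of this property (my own setup, not the paper's model): blocks of capacity K are mined each period and filled with the highest-fee pending transactions, with spare room going to zero-fee transactions. As long as average arrivals are below capacity, zero-fee users are still served, just later:

```python
import random

random.seed(0)

K = 5             # block capacity (chosen for illustration)
PERIODS = 2000    # number of blocks simulated
pending = []      # (fee, arrival_period); fee == 0.0 for non-payers
waits_paid, waits_free = [], []

for t in range(PERIODS):
    # 0-6 arrivals per block time (mean 3 < K, so the queue is stable);
    # roughly half the users pay a positive fee, half pay nothing.
    for _ in range(random.randint(0, 6)):
        fee = random.random() if random.random() < 0.5 else 0.0
        pending.append((fee, t))
    # Miners fill the block with the K highest-fee transactions; spare
    # room is filled with zero-fee transactions (stable sort => FIFO
    # among equal fees).
    pending.sort(key=lambda x: -x[0])
    included, pending = pending[:K], pending[K:]
    for fee, arrived in included:
        (waits_paid if fee > 0 else waits_free).append(t - arrived)

print(f"mean wait, fee-payers: {sum(waits_paid) / len(waits_paid):.3f} blocks")
print(f"mean wait, zero-fee:   {sum(waits_free) / len(waits_free):.3f} blocks")
```

Fee-payers are never outranked by zero-fee transactions, so they wait less on average, but nobody is excluded.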
And even if the system is a complete monopoly, even if we were all locked in to using bitcoin and basically had no alternative, you still don't get rent extraction here. The prices will not rise even though this is a monopoly; basically, the system is committed to serving us at those prices.
>> What if the cost
of the miners
starts to exceed what they
are getting from that?
>> So, you'll have fewer miners. And so the number of miners will depend on the revenue that we can get from the system. But the system has fixed, in the protocol, what the pricing rules are. The caveat here is that the pricing is fixed, but fixed to be something that's a function of the delay.
So if there is
very little delay,
we're going to get very little
revenue that's maybe too
little and that may be
disastrous for the system.
If we get too much delay,
we may get too much revenue and
that's not good either. Yeah?
>> So, a question,
what happens in
a very lightly loaded system
in which maybe [inaudible].
>> So, I'll get to this in
a couple of [inaudible].
>> Right?
>> Yes.
>> So, we'll save
the long questions to the end.
>> So I'll get to this in a bit.
So, this is in a way appealing
because you get some protection
of consumers in a way that
monopolists will not give you.
Now, I guess you're
all alluding to
the question, is
this sustainable?
Will this actually work?
So, I need to do
the accounting and
see what is the revenue
that I'm actually getting
from this entire system?
The revenue must be sufficient
to fund enough miners.
So I have some expression here
that gives me what is
the total revenue generated
by the system per unit time.
And as we said before,
if I have the revenue, I
know how many miners I get.
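The accounting from revenue to miners can be written as a one-line free-entry condition (a stylized zero-profit sketch; the function and the numbers are my own, not from the talk):

```python
def equilibrium_miners(revenue_per_unit_time, cost_per_miner):
    """Free entry: miners enter while per-miner revenue covers cost,
    so the equilibrium count is roughly total revenue / per-miner cost.
    (Stylized zero-profit condition, assumed for illustration.)"""
    return int(revenue_per_unit_time // cost_per_miner)

print(equilibrium_miners(1000.0, 10.0))  # more revenue -> more miners
print(equilibrium_miners(50.0, 10.0))    # less revenue -> fewer miners
```

Miners enter until per-miner revenue no longer covers cost, so the number of miners scales with total revenue.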
The other thing I need to account for is that I impose delay costs on users. So in order to make the people who are sensitive to delay pay, I need to make those who don't pay wait a lot. And that's costly. It's costly to make people wait; that's inefficient. So, I also want to account for the delay cost.
So, these are the graphs for both of those expressions.
They both start from zero.
If the system is not
congested at all,
nobody waits and nobody pays.
And as the system
becomes more congested,
people pay more and there's more delay cost that wastes people's time. Yeah?
>> This delay cost at
the 0.1 or 0.2 level.
>> Yes.
>> Because at 0.1, or 0.2,
or lower enough levels,
it's hard to impose
delays which people won't-
>> So people.
>> How does it separate so fast?
>> So the delay here goes up because even if I'm processed for sure in the next block, I still have to wait for the next block.
>> I do understand that, but what if a miner has room in his block and he strategically doesn't process transactions, to make sure that people don't start paying less?
>> But each point is very small.
>> So you're assuming,
this already,
even if there's an available
room it will get processed?
>> Yeah. Yeah.
>> So no strategic delay
and yet there
is enough separation there?
>> What do you mean
by enough separation?
>> I mean, the system is very
underserved and 0.1
was very underserved.
>> Yeah.
>> Underserved.
>> There's just way more capacity for processing transactions than the number of transactions.
>> Yeah, so.
>> It compresses two kilobytes.
>> So you can see the revenue.
>> And apparently not two kilobytes but 200, whatever those bytes are, like a factor of 10 less.
>> Yeah. So you can
see that nobody pays.
The total revenue
is essentially zero.
>> Yeah, but why
is the delay cost
on zero? I think
it's another way to-.
>> Because people
still have to wait for
the next block. And
so, if you have-.
>> That's not fair
in some sense.
Because if you get processed in
the very first block that
you could have been-.
>> Oh, I see there is
this straight line up there.
I got it, I got
it, I understood.
This is a straight line up,
so I just like,
there is no actual.
>> Yeah.
>> No, it should be a fake.
No, I do not understand.
>> So you still need to wait for
the first block to arrive and
that's relevant because
you can think of having frequent blocks or having infrequent blocks.
>> But you fix the frequency,
right? You fix it.
>> Yeah, he didn't.
>> So I said this
is a parameter Mu.
>> But Mu is fixed.
>> Yeah. And the slope
will depend on Mu.
>> No, I don't understand.
>> But this is
the total delay cost
or the delay cost per user?
>> It's the total delay cost.
>> That's why it's [inaudible].
>> No, it should be flat.
>> No, it should be linear.
>> Because it's try to-.
>> Because it's looking,
it is not normalized by users.
>> Okay.
>> I see, okay, okay.
>> Okay.
So we can show that
both of those things are
increasing with
the congestion parameter.
I think this is
the more interesting slide.
Well, the black curve
is again the revenue,
the same one that it had before.
This is what comes
out of our model,
when we just plugged in
the distribution of
the delay cost to be uniform between zero and 0.1, just eyeballing.
And K to be 2,000
transactions and then
we get this curve.
The blue dots are
empirical data,
where we took days on
the bitcoin network.
And for each day, we
calculated what is
the total revenue divided
by number of blocks,
what is the total revenue per
block from transaction fees?
Versus what is
the average size of a block?
So, how full were the blocks during that day? The percentages will be between zero and one.
The average size of
the blocks will be
zero to one megabyte.
One megabyte means that all the blocks that day were full, and 0.1 means that they were 10 percent full. So this gives us a measure that we can line up with congestion.
So we align those two things
together and we
can see that during
the days when the blocks
on average were
pretty empty,
indeed nobody paid.
In the days when blocks were
starting to get pretty full,
like 0.8, 0.9, there's still, on average, some excess capacity on the blocks.
But people start paying already,
that's what you get
from the queuing model.
That's because of randomness: maybe just at the time that I came, there was some temporary congestion, and therefore I wanted to pay more to get ahead in this block even though there is enough capacity for everybody, just like in queuing once you approach 100 percent utilization.
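The empirical measure described here is straightforward to compute; a sketch with invented numbers (the talk uses real daily data from the bitcoin network, so these figures are placeholders only):

```python
# Each day: (total fees that day in BTC, number of blocks that day,
# average block size that day in MB, against the 1 MB cap).
# The numbers below are invented for illustration.
days = [
    (0.5,  145, 0.12),
    (2.0,  150, 0.55),
    (30.0, 148, 0.85),
    (90.0, 152, 0.98),
]

points = []
for fees, n_blocks, avg_mb in days:
    congestion = avg_mb / 1.0            # fraction of the 1 MB cap used
    revenue_per_block = fees / n_blocks  # fee revenue per block
    points.append((congestion, revenue_per_block))
    print(f"congestion={congestion:.2f}  fees/block={revenue_per_block:.4f} BTC")
```

Plotting these (congestion, fees-per-block) points against the model's curve is exactly the alignment described in the talk.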
>> So, how do these people know?
>> There are lots of
websites that can give
you this information.
And I have an app that has some bitcoin. I'm not investing in bitcoin, I just have a bit so as to be able to credibly talk about this.
And if you look at the apps
that maintain your bitcoin,
they will actually tell you,
if you want to get
your transaction processed
in the next hour,
this is how much I
estimate you need to pay.
If you want to have it processed
in the next 10 minutes,
you probably need
to pay more and
your apps will give you
some estimates on this.
>> Okay.
>> So I think that what we do
is not crazy and it kind of
shows that congestion is really driving up the fees.
There's been a lot of dynamics
over the bitcoin network.
So for the last six months or so, transaction fees were very high. A year ago, they were essentially zero; now they're back to being pretty low again, and this moved with the transaction volume.
So again, what does it mean,
some positives and
some negatives.
One is that the miners provide the infrastructure basically at cost, because of the free entry. We have a lot of miners, they're competing with each other, and they're providing the service basically at cost.
The revenue determines the infrastructure level, but the revenue varies with the congestion. So the number of miners that we're going to have in the system varies without any regard for whether we need more miners or not. It just depends on the congestion level.
And one very problematic scenario, as you already pointed out, is: suppose we have no delays, then we have no revenue. No revenue means that the miners leave. But the miners leaving doesn't create congestion, because even if we have a small number of miners, the capacity stays the same; the only thing it does is make the system less reliable.
Which in turn should lower
the demand for the system
and lower the
congestion further.
So you can think this is a
potentially downward spiral
for the system.
There are some reasons why you shouldn't get it. Some miners may have a stake such that they will be willing to continue mining anyway, but this is a very problematic aspect of this pricing mechanism.
There's a bunch of
other costs in this design.
So one is there's a lot of
redundancies in the work
that the miners have to do.
The tournament that does the random selection of miners is very wasteful.
So it generates
a random selection
which is pretty hard to
generate in other ways,
so it serves a purpose.
But it's pretty costly to do.
You must have delay cost
to get anybody to pay.
The infrastructure level
may not be optimal
and we just talked about
this potential instability.
So in the last five minutes, what I want to talk about is, can we do anything by just adjusting Mu and K? So suppose that this lambda, the arrival rate of transactions, can vary over time, and suppose I can respond in some way and adjust how frequent the blocks are and how big the blocks are to address this, to get my target revenue.
So we do some analysis, and basically what we show is that once the blocks are big enough, the system kind of looks very similar; the system can be analyzed almost uniformly for all large block sizes at once.
And what you get is that when the congestion is very low, when the congestion is close to zero, the revenue is going to be of a very low order, but the delay costs are going to grow linearly. And that's again the total delay cost and the total revenue; it's not normalized by the number of agents.
So that means that the system
is going to be very
inefficient at raising
low amounts of revenue. Okay.
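The low-congestion asymmetry can be illustrated numerically (a toy FIFO batch queue of my own, not the paper's equilibrium model): every transaction still waits for its block, so total delay cost grows linearly in the arrival rate, while fees can only be collected when blocks are full, and full blocks essentially never happen at low congestion:

```python
import random

def simulate(rho, K=20, periods=5000, seed=2):
    """FIFO batch queue: each period one block of capacity K serves the
    oldest pending transactions.  Arrivals are a Bernoulli sum with
    mean rho * K (a toy stand-in for Poisson arrivals).  Returns
    (total delay per period, fraction of full blocks).  Delay counts
    the block you wait for, so every transaction waits at least one
    block time."""
    rng = random.Random(seed)
    pending, delay, full = [], 0, 0
    for t in range(periods):
        arrivals = sum(rng.random() < rho / 2 for _ in range(2 * K))
        pending.extend([t] * arrivals)
        served, pending = pending[:K], pending[K:]
        delay += sum((t - a) + 1 for a in served)  # +1: wait for own block
        full += (len(served) == K)
    return delay / periods, full / periods

for rho in (0.1, 0.5, 0.9):
    d, f = simulate(rho)
    print(f"rho={rho:.1f}  delay/block={d:6.2f}  full-block frequency={f:.3f}")
```

Full blocks here are a crude proxy for "someone had a reason to pay a fee": they vanish at low load while total delay stays linear in the arrival rate, so raising a small amount of revenue is disproportionately costly in delay.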
So, we can think of fixing
the blocksize and
changing the frequency of
the blocks to adjust
the congestion parameter,
and then we have
this parameter curve
that we can travel on.
I want to travel on this plane. On the x-axis I have my revenue, on the y-axis I have my delay cost. If I set my congestion to zero, I start from this point: I have zero delay cost for everybody, and zero revenue.
As I increase
my congestion parameter,
I'll start traveling upwards
in both of those dimensions.
My delay cost will go up
and my revenue will go up.
And what we are interested in,
how will they go up?
You can see that
at the beginning,
I just increased my delay cost,
and my revenue stays flat.
Once I get to some
significant delay cost,
then I start getting these
curves to become very concave.
So, the revenue starts out flat and then the curve becomes very concave, and the shape of this thing is the opposite of what you would want to get.
So, this means that if I want
to raise a low amount
of revenue,
I actually need
substantive delay cost.
And once I have
some amount of revenue,
it's easy to get much more
without much more delay cost.
I would like to have an easy time getting the necessary amount of revenue to fund the system, and to make sure that I don't get more than that by accident, but instead I get this inefficiency.
And if I put in the blocksize, I get that if I look at different sizes of blocks, the curves that I get are just scaled versions of the same curve.
So, here, the vertical axis, the delay cost, is logarithmic; the revenue is still linear. And you can see that basically you always want to take smaller blocks.
If I look at the curve of how much delay cost I need to impose on the users to raise a certain amount of revenue, the answer is: if I have a lower blocksize, I can raise this revenue at a lower delay cost for users.
Basically that says, if you want to adjust the parameters, then between making the blocks more frequent or making the blocks bigger, it will be better to adjust the blocks to be smaller and more frequent. And still, you'll get that the system is going to have to impose some significant delay cost on users to raise any revenue.
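One ingredient of that comparison can be sketched in a simulation (my own toy model; it only captures the raw waiting-time side, since the fee side needs the equilibrium analysis in the paper): holding throughput per unit time fixed, smaller and more frequent blocks mean everyone waits less for the next block:

```python
import random

def mean_wait(K, block_time, lam=50.0, horizon=500.0, seed=3):
    """FIFO batch queue in continuous time: a block with room for K
    transactions is mined every block_time units; arrivals are Poisson
    with rate lam.  Returns the mean real-time wait until inclusion."""
    rng = random.Random(seed)
    arrivals, t = [], 0.0
    while t < horizon:
        t += rng.expovariate(lam)
        arrivals.append(t)
    waits, i, block_t = [], 0, block_time
    while i < len(arrivals):
        served = 0
        # include up to K of the oldest transactions that arrived in time
        while i < len(arrivals) and arrivals[i] <= block_t and served < K:
            waits.append(block_t - arrivals[i])
            i += 1
            served += 1
        block_t += block_time
    return sum(waits) / len(waits)

# Same throughput (K / block_time = 100) in both regimes:
big = mean_wait(K=100, block_time=1.0)    # big, infrequent blocks
small = mean_wait(K=10, block_time=0.1)   # small, frequent blocks
print(f"big blocks:   mean wait = {big:.3f}")
print(f"small blocks: mean wait = {small:.3f}")
```

With big, infrequent blocks you wait on average about half a block time; with small, frequent blocks at the same throughput the wait shrinks by roughly the same factor of ten.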
So, I guess I'm
out of time. Yes.
>> [inaudible].
>> The limit on?
>> The SI unit on the y-axis?
>> So, both are in
dollars, both are dollars.
>> Dollars per time?
>> Yes.
>> Because you had
a conversion for time as 0.1?
>> A uniform, like they
have a distribution.
>> Of what was it?
>> [inaudible].
>> So, I think here's
uniform zero, one.
>> Curve, the right
there, [inaudible].
I just want to translate
that into an actual time of
how it will affect the process.
You should be able to
roughly translate it.
>> I think multiply by
two and per 10 minutes.
So, the units are
per 10 minutes.
I think the units we
use are per block time.
>> So, there's
another time parameter
in your system that
you are not mentioning.
Is that the estimated time between subsequent blocks that the miners can coordinate on, to make sure it's identical?
>> Yes.
>> So, if you make
the blocks very
frequent then it
just won't work.
There are some real
limits out there.
>> So, yes.
So, that's a great comment. This is an abstraction from technological constraints. This is, if we could set the parameters however we wanted, what we would want. So that says: try to find a design that makes the blocks more frequent. It may be better to push in this dimension and try to find other ways we can design those cryptocurrency systems, to push how frequent we can make the blocks.
So, yes. So, let
me just wrap up.
So, we think that the innovation in bitcoin is not just in terms of the technology, but also in the economic landscape that it generates.
It generates a system that is committed to some protocol and doesn't have an owner, which does a lot of positive things if you think of it as somebody committing not to raise prices.
It also has a lot of challenges, because the pricing may need to adapt over time and there are some questions of how to do this adaptation.
Congestion as a revenue-generating mechanism has some pluses and minuses; I think it's interesting to look at this.
But there's still a question of
what other rules can
you generate here?
So, this is one example
of the rules that were
set up in Bitcoin.
You can think of many other
rules that you can implement.
It's not that you
can do anything
because it's still
a decentralized system,
but I think it's
very interesting to explore whether there are better mechanisms, to try to find revenue-generating mechanisms for those cryptocurrency systems.
Thank you very much. Yes.
>> I really like the work, this connection with the queuing games.
But one question here: the assumption is that the delay costs are basically a key factor in the model, and I wonder how realistic that is in practice.
Let's say, for example, I want to make a transaction with bitcoin. Does it really matter? Maybe I'm making such a big transaction in terms of the volume that I don't really care if I have to wait for a certain amount of time. Is it really something that you believe is key in these kinds of systems?
>> So, the model
allows some users to care
and some users not care.
>> I understand but there is.
>> And I think this is realistic. You can tell me that the distributions should look different from what we assume. By no means do I think that the uniform zero-one delay cost is anywhere close to being correct.
But I think that, yes, there are users who want the transaction to be processed more quickly. Nobody's paying at a restaurant with bitcoin, but if you were paying at a restaurant with bitcoin and you had to sit there until the transaction went through, you'd probably be more time sensitive than if you're sending some bitcoin to a friend because you owe him money for last night's dinner.
>> What time scale are you
thinking about when you say,
wait more, wait less.
It's like, is it
a question of seconds,
minutes? I'm just curious.
>> So, right now, transactions on bitcoin can wait 10 minutes for the next block, or they can wait two days.
>> So, I was thinking that what really matters is this K times Mu, because that's like the capacity of the queue. But you seem to indicate that it's not just the product that matters, because of the scale. Where is that coming from? Can you give some intuition?
>> Sorry. I don't have
a great intuition for
this but I can give you
this handwavy kind of story.
Suppose I take a block of 1,000 transactions and I cut it into many hundred-transaction blocks. When I had a 1,000-transaction block, I didn't care where I was inside the thousand. When I cut it into many hundred-transaction blocks, now I care.
Am I in the first hundred, or
the second hundred or the third?
So, I create basically
more incentives to
compete for the top places.
So, this is not
a great explanation but
this is kind of handwavy.
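The handwavy story can still be made concrete with simple arithmetic (my own numerical illustration, not from the talk): 1,000 transactions are processed either as one big block after 10 time units, or as ten 100-transaction blocks, one per time unit. Throughput is identical, but only in the split regime does your rank determine your delay:

```python
def delay_big_block(rank):
    """One 1,000-transaction block at time 10: every rank waits 10."""
    return 10.0

def delay_small_blocks(rank, block_size=100, block_gap=1.0):
    """Ten 100-transaction blocks at times 1, 2, ..., 10: your delay
    depends on which hundred you fall into."""
    return (rank // block_size + 1) * block_gap

delays_big = [delay_big_block(r) for r in range(1000)]
delays_small = [delay_small_blocks(r) for r in range(1000)]
print("big block: distinct delays =", sorted(set(delays_big)))
print("split blocks: rank 0 waits", delays_small[0],
      "while rank 999 waits", delays_small[999])
```

In the big block, position is irrelevant, so there is no reason to bid for it; once the block is split, being in the first hundred rather than the tenth saves nine time units, which creates the incentive to compete for the top places (and, as a side effect, average delay also falls).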
>> Is it because of this batch processing, as opposed to a set of identical servers, that something like this would really matter?
>> Yes.
>> Question is, you're
making the cost even to
the very first [inaudible].
>> Yes.
>> So, I think we
have to finish here
and continue offline,
so Jacob is here until
Saturday. Thank you again.
>> Thank you. Thank
you very much.
