[APPLAUSE]
MARC DONNER: I'm not
the speaker.
I'm just the introducer.
So I'm Marc Donner.
I'm an engineering director here
in the New York office.
It's my distinct pleasure and
honor to introduce Joe
Halpern, who's the head of the
CS department at Cornell.
He started off as an honest
mathematician and then fell
into computer science, sort
of the way many of
the rest of us did.
Over the course of years, he's
done very many interesting
things, including teaching
mathematics in
Ghana for two years.
But I'll let him tell the
rest of the story.
He's going to talk tonight
about scrip systems--
not script, scrip.
So funny money to you, or
something on that order.
But there's lots of them around,
and I will let him
tell the story.
Joe.
[APPLAUSE]
JOSEPH HALPERN: Can you
hear me back there?
OK.
So the rules of the game are,
do not wait until the end to
ask questions.
Ask them whenever you feel
like asking them.
If I think it's inappropriate,
or it makes more sense to wait
until the end, I'm in charge.
I'll tell you to wait.
So this is joint work with Ian
Kash, who was my student.
He's at Microsoft in
Cambridge, England.
And Eric Friedman, who
is now [INAUDIBLE].
He was at Cornell.
So as Marc said, scrip
is funny money.
It's all over the
place, actually.
You can think, as Marc
pointed out--
oh, how'd I get to the end?
Scrip has been widely used.
I'll explain the Babysitting
Co-op story.
That's coming up.
It's used in systems like Karma
and Brownie Points and
Dandelion and AntFarm.
So these are all computer
systems, so are the
following--
Agoric, Mariposa, Yootles,
Mirage, Egg.
So here, they've been used
to prevent free riding--
I'll explain that--
and for resource allocation.
But they're also used
in the real world.
Think of airline miles.
They're scrip, right?
It's a way of keeping track of
things, which you can use to
get free trips and
other things.
In Ithaca, we have Ithaca Hours,
which is scrip money
that you can use to pay
some businesses.
It can be very effective,
because you can think of scrip
as a market mechanism.
And they "can yield orderly
systems beyond the ability of
any individual to plan,
implement, or understand." So
this is saying, basically, the
market is a good thing.
But they are far from perfect.
So we wanted to understand
scrip systems.
Yeah?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: They're sort
of on the border of scrip and
real money.
So I mean, this isn't a talk
on what is really scrip.
But I think to some extent,
they share some of the
features of scrip.
So let me explain the Capitol
Hill Babysitting Co-op story.
This is a story that
I read about in an
article by Paul Krugman.
Yes, Nobel Prize winning
Paul Krugman.
But there is a journal article
in an econ journal that his
article is based on.
So the story goes like this.
A bunch of yuppies in Washington
all had kids.
And they decided they would
form a babysitting co-op.
They would babysit
for each other.
High tech.
And so of course, they didn't
want to pay each other.
But somehow, they wanted to
keep track of who was
babysitting, because they
didn't want free riding.
They didn't want it to be the
case that, gee, people babysat
for me 20 times.
And I never babysat
for anybody else.
So the way they did it
is they had tokens.
Tokens are scrip.
So everybody had some tokens.
And if you wanted somebody to
babysit for you, you had to
give them a token.
And you got a token when you
babysat for someone else.
So think of this scrip as
functioning like bookkeeping.
It had no value in the
outside world.
It was just a system
of tokens.
Now, what was interesting is at
the beginning, the system
worked terribly.
Nobody was going out.
Everybody was staying home.
We have this great system.
How come nobody is going out?
So these were Washington
yuppies, right?
So they legislated it.
You have to go out.
It's good for you.
Go out at least once a month.
Get out of the house.
Didn't work.
Then they brought in a bright,
young economist, said, your
problem is you haven't got
enough tokens in the system.
So they printed more tokens.
Worked like a charm.
People started going out.
They said, well, gee, if
printing some tokens is good,
it's got to be even better
to print more tokens.
They printed more tokens.
Yes, Washington.
The system crashed.
Nobody went out anymore.
OK, time out.
So think about, why did nobody
go out at the beginning?
And why did people stop
going out when they
printed more tokens?
Any intuitions here?
Everybody is afraid
of running out.
If you've only got one
or two tokens--
of course, if you have zero
tokens, you can't go out,
because if you don't have a
token, you can't get somebody
to babysit for you.
That's the whole point.
And if you only have one or two,
you might be wondering,
well gee, what happens if my
mom gets sick, and I really
need to go out?
Or there's some special
occasion?
So it's clear if there are very
few tokens in the system,
people are afraid of going out
because they're worried about
running out of scrip.
Now, what's the problem
when they printed a
whole bunch of scrip?
What went wrong then?
Yeah?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Yeah, so
let's make precise why
they don't have any--
you're right.
They don't have any value.
Why don't we have any value?
What's some intuition?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Sorry?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: There's
a lot of them.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: They don't
want to babysit anymore.
They don't need any more.
That's exactly right.
In other words, if everybody has
20 tokens, they say, why
the hell do I need to go babysit
for somebody else?
20 tokens?
So there's no problem.
I'm happy to go out.
I've got 20 tokens.
But I don't have anybody who
wants to babysit for me,
because everyone else
has 20 tokens, too.
And they would say they
don't need a 21st.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: OK, so
in our framework, and
also in this story--
which, by the way, was based
in a real-life situation.
There really was a Washington
Babysitting Co-op.
There's no auctions, right?
So it's not you can say,
OK, I'll pay you
three tokens to babysit.
So throughout this talk--
I'll mention this again at the
end, because I think it's an
interesting research
question here--
you have to assume that the
price of a job is fixed, one
token per job.
No bargaining, no
side payments.
That's the market.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Regardless.
There's no inflation, no
nothing, although I'll talk
about that, too.
OK, so that's where
we came in.
We wanted to understand.
Now, why do we want
to understand?
What was the research question
to us as computer scientists?
What's the right amount of money
to pump into the system?
What's the right amount of scrip
to pump into the system?
Well, if you say, well,
too little is bad.
Too much is bad.
Now, too little is bad because
people don't have enough.
They're worried they're
going to run out.
Too much is bad because
everybody has too much.
They don't want to babysit.
Well, what's the sweet
spot in the middle?
We wanted to understand that.
OK, so that was our technical
question.
Are we all together?
So now I'm going to have some
technical slides, which I hope
I can explain well.
But you understand that's
our motivation.
That's what we want to do.
Oh, sorry.
Let's do this again.
So our approach, we wanted to
develop a microeconomic model.
Think game theory.
I'm coming at it from the point
of view of game theory.
I'll explain all of this.
And I want to try to figure out
what the optimal behavior
is for agents in that model.
What's the right thing to do if
you're rational and smart,
understand how the
system works?
What should you do?
I want to show there's a stable
outcome that's a Nash
equilibrium.
I'll explain what Nash
equilibrium is.
In fact, let me explain
it now.
So this is game theory.
So how many people don't know
what Nash equilibrium is?
It's OK if you don't know.
So let me explain.
So have you heard of the
movie "A Beautiful
Mind?" John Nash, right?
He got a Nobel Prize.
This is what he got
the prize for.
It was his Ph.D. thesis,
by the way.
So in game theory, the point
is we have a bunch of
strategic agents.
They all have a strategy.
Strategy is just what
you're going
to do in every situation.
Now, we're going to say a
collection of strategies is an
equilibrium--
is a Nash equilibrium.
If everybody is best responding
given what
everybody else is doing, that
even if I know what all the
rest of you are doing, I have
no motivation to do anything
else other than what
I'm doing.
So fix a strategy
for everybody.
We're going to say that set of
strategies, that collection of
strategies, is a Nash
equilibrium if even if you
knew what everybody else was
doing, you would have no
temptation to change what
you were doing.
That's a Nash equilibrium.
It's a stable situation because
nobody wants to change
what they're doing.
It's not necessarily
a good situation.
You might be totally
unhappy with the
equilibrium you're in.
But nevertheless, you can't
do better by unilaterally
changing to something else.
So we're trying to understand.
And a Nash equilibrium
isn't perfect.
I could give a whole other talk
on problems with Nash
equilibrium and ways
to get around it.
But that's not this talk.
It is the case that in many
situations of interest,
especially when people really
understand the system, what
they end up doing is
playing a Nash equilibrium.
So we wanted to understand
Nash equilibria in this
setting with scrip systems.
Are we together?
That's the goal.
And we wanted to use the
understanding to maybe tell
system designers how to
build better systems.
So here's the formal model.
It's not as bad as it looks.
So let me explain
it intuitively.
This is some of the math
you're going to get.
But I'll try to make it easy.
So in our picture, we've got
a bunch of agents, n of them.
And for the purposes of this
slide, let me assume that all
agents are the same.
And I'll explain in what
ways they're the same.
So in each round, one
agent is chosen
randomly to make a request.
What I mean by make a request,
you need babysitting.
So I'm going to assume that the
need for babysitting comes
from the outside.
It's not something you're
planning strategically.
You wake up one morning.
You read in the paper
great movie playing.
I want to go out, right?
So I want babysitting tonight.
So nature chooses somebody at
random to need something, to
want babysitting.
So think in terms
of babysitting.
Now in our general model,
we assume that there are
different types of players.
So some players, if you like,
are needier than others.
So if you like, what this is
assuming is everybody's
equally likely to want
babysitting.
But you can imagine some
people tend to want
babysitting more than others.
They like going out.
That's OK.
We allow that in the
general model.
But for this talk, let
me not assume that.
Now, the one strategic choice
you have is if I say, look, I
need some babysitting,
who's willing to
babysit for me tonight?
Raise your hands.
That's a strategic choice.
You might decide, I'm not
interested in babysitting.
Why might you say I'm
not interested?
It's because I have 20 tokens.
I don't really want a 21st.
So you have to decide whether
or not you want to babysit.
Well, once you've decided, out
of all the people who raised
their hands that said, I'm
willing to babysit, I'm going
to choose one at random.
Now again, in our general model,
we assume that the
choice isn't necessarily
totally random.
You could imagine some people
advertise better than others.
So if you've got good
advertising, you're more
likely to be chosen than
somebody who's a lousy
advertiser, or there are
reputation effects.
But for the purposes of this
talk, just to simplify things,
let's assume you're
chosen at random.
Everybody's equally likely
to get chosen.
Now, the rest of it is
the obvious thing.
The person who makes the
request gives one token to the
other person-- so think of a
token as being $1, but it's
funny money, scrip--
and gets one unit
of utility.
Utility, think of it as a
measure of happiness.
So the person who gets the
babysitting done gets one unit
of utility, and the person
who does the
babysitting and pays $1--
so dollars and utilities
are not the same.
The tokens are just
bookkeeping.
What you really care
about is happiness.
So you get one unit of
happiness if you get
babysitting done for you.
You lose alpha units of
happiness if you do
babysitting.
And we're going to assume that
alpha is a lot less than one.
Otherwise, the system would
never get off the ground.
If the amount of unhappiness
you incurred by babysitting
was more than the amount of
happiness you were going to
get by having somebody babysit
for you, of course you'd never
babysit, right?
Because all you're getting is
a token, and the token buys
you babysitting in the future.
So if future babysitting isn't
worth the pain of doing
babysitting now, you'd
never do it, right?
So we're going to assume that
you get one unit of happiness
if you get babysitting done
for you, and you lose alpha
units of happiness doing
the babysitting.
But alpha's a lot
less than one.
Everybody else breaks even.
And of course, the person who
asks to have the babysitting
done has to pay $1 to the
person who does the
babysitting.
But the dollar has nothing
to do with happiness.
Think of it simply
as bookkeeping.
It's a token.
And the other thing we're
going to assume is that
there's what's called
discounting.
So a unit of happiness tomorrow
is not as good as a
unit of happiness today.
So I'll pay you back on Tuesday
for the dollar you're
giving me today.
The dollar's actually worth
less on Tuesday than it's
worth today.
So we're assuming--
so technically, we have a
discount factor delta.
Think of delta as being less
than one but close to one.
So $1 today is worth delta
dollars tomorrow.
So think of delta as 0.9, delta
squared the next day,
delta cubed the day after that,
delta to the fourth the
day after that.
Question?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: So yeah, I'm
told to repeat questions.
And the question is, why do
I assume a delta at all?
So if you don't want
to assume that--
and we're going to think of
this-- think of delta as being
very, very close to 1.
But in fact, it seems that in
real life, people act as if
there's a delta less than 1,
that getting something today
is better than getting the
same thing tomorrow.
So this seems to be a real
psychological phenomenon.
It's not like we made it up.
Now, in fact, for our results,
we need to assume that n is
relatively large, although it
turns out 100 is good enough,
as a practical matter.
And delta is pretty
close to 1.
Yeah?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: So the
question is, is this
independent of a termination
effect?
So in this world, there's
no termination.
People live forever.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Stay tuned.
You'll see what the results
are saying.
Think of me as the
system designer.
This is the total utility
for an agent.
So this is saying for each
round r, look at how much
utility you get in round r,
which is either 0 or 1 or
negative alpha.
Those are the only possible
utility quantities.
If you get babysitting
done for you in round
r, then u_ir is 1.
If you do babysitting for
somebody in round r, u_ir is
negative alpha.
And if you don't babysit
or get babysitting
done, it's 0.
And the delta to the r just
says: something 10 rounds out,
you multiply by delta
to the 10th.
So this is your total utility.
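As an illustration, the discounted sum just described can be written out in a few lines. This is only a sketch of the formula from the slide; the function name and the values of alpha and delta below are made up for the example.

```python
# Sketch of the total-utility formula from the slide:
# U_i = sum over rounds r of delta**r * u_i(r),
# where u_i(r) is 1 if agent i gets babysitting in round r,
# -alpha if agent i does the babysitting, and 0 otherwise.

def total_utility(per_round_utilities, delta):
    """Discounted sum of per-round utilities."""
    return sum((delta ** r) * u
               for r, u in enumerate(per_round_utilities))

# Illustrative values: alpha = 0.1, delta = 0.9.
# Receive babysitting in round 0, perform it in round 1:
alpha, delta = 0.1, 0.9
print(total_utility([1, -alpha, 0], delta))  # close to 0.91
```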
For me, the system designer,
what I care about is
maximizing social welfare.
The social welfare is just
going to be the sum of
everybody's utility.
I want everybody to be
as happy as possible.
I want to design a system where
I'm going to have very
high total utility.
Are we together?
So I just add everybody's
utility.
Here, I'm weighting everybody
the same way.
And of course, I'm going to
maximize total utility-- just
sort of keep this in the back of
your mind-- by making sure
that whenever somebody wants
babysitting done, they'll have
a token to pay for it.
And I'll have somebody who's
willing to do babysitting.
So again, think of
the Washington
Babysitting Co-op story.
So there are two things
that can go wrong.
If there are very few tokens--
if you've got a system of
10,000 people and you have 100
tokens, and it's clear that
most people most of the time
don't have a token, bad.
Bad from the point of view of a
system designer who wants to
maximize social welfare because
if you don't have very
many tokens, even if you have
10,000 people and 20,000
tokens, there's a non-trivial
chance somebody
won't have a token.
Just look at fluctuations.
So I want a job done.
I don't have a token.
I'm unhappy.
I could have been happier
if I had a token
to pay for it, right?
Conversely, if I want a job
done and there's lots of
tokens floating around,
intuitively--
I haven't said why yet-- but
nobody's going to volunteer.
I'm also unhappy.
I have lots of money to pay for
you, but nobody is raising
their hand.
So I want to maximize two
prob-- and as you'll see
later, what's going to happen
is too few tokens in the
system will be bad for social
welfare, because a lot of
times when somebody wants a job
done, they won't have a
token to pay for it.
Too many tokens in the system
will be bad for social
welfare, because a lot of times
when somebody wants a
job done, nobody will
raise their hands.
Question?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: OK.
So again, I'll repeat.
So the question was, they could
start asking for five
tokens for babysitting
rather than one.
I'm not allowing that.
So in this system,
prices are fixed.
Now, again, it's very
interesting to ask what
happens if you allow markets,
where you can say, look, how
much are you willing to
bid for this job?
I won't do it for one, but maybe
I'll do it for three.
So we're not allowing that.
Again, I claim this is
quite realistic.
There are many markets in the
world where prices are fixed.
Certainly, in Ithaca, where I
live, they have Ithaca Hours.
There are fixed prices for
things that are posted.
Think of all the things that
you're aware of in the world
where there are fixed,
posted prices.
Now, I understand there's other
situations where there's
bargaining.
But bargaining incurs
overhead.
So customers and producers like
it much better when there
are fixed prices, again, as
an empirical statement.
So again, this is Google.
Google works by bids
and market design.
I understand that.
And it's very interesting to ask
how things would change in
this framework if we
allowed bidding.
But let me at least claim there
are lots of real-world
scenarios where there are
fixed, posted prices.
So it's not like we're making
up something that never
happens in the real world.
And as a technical matter, what
I'm about to tell you in
this talk says nothing about
what happens when there's
auctions for goods, although
I'll come back
to that at the end.
Questions?
Yeah.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: So the question
was, are my results
robust under a no-trade negative
utility penalty?
Except I don't understand
what that is.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Well,
the utilities are
just what I have there.
So your utility is either
0, negative alpha,
or 1 in each round.
And there's nothing hidden.
That's the utility.
Yeah?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: So think of
social welfare as overall
happiness, which is literally
defined as the sum of
everybody's utility.
So this is the utility for
a particular agent, i.
So 10,000 agents, each one
has their utility.
Just add them all up.
That's social welfare,
by definition.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: So the question
is, is this situation
going to result in a situation
where a few
people do all the work?
In fact, not just in practice,
we can prove
theoretically, no.
Things will be quite uniform.
But stay tuned.
Let me get to technical
results.
So I think at this point,
I will stop
questions and keep going.
So our assumptions--
so let me repeat.
I was saying this before.
For ease of exposition, and this
is not what we assume in the
full paper, I've assumed that
agents are homogeneous in the
following sense.
They have the same cost, alpha,
for performing work.
And of course, in the real
world, that might be true.
But in many cases,
it might not.
So I might love babysitting
your kids.
My cost for babysitting
your kids is very low.
Or I might hate babysitting
your kids.
They're real brats.
And the cost would be high.
So there's no reason to assume
that all agents--
some people love babysitting.
Some people hate it.
I've assumed that everybody has
the same probability of
being chosen if they volunteer.
So among the people
who raise their
hands, I choose at random.
So everybody is equally
likely.
I don't have to assume that.
I've assumed that everybody
gets the same utility for
having a job done.
Of course, in the real
world, I might love
going out to a movie.
Movies are good, but
I don't mind
staying at home and reading.
I've assumed everybody uses
the same discount factor.
Think of discount factor as
measuring your patience.
So roughly speaking, it's
saying, how patient are you?
If you get $1 today, but you're
not going to be able to
use it for three weeks,
are you OK with that?
So your discount factor
is between 0 and 1.
I'm going to assume that people
are pretty patient.
Discount factor is close to 1.
But a discount factor that's
close to 0 is saying, hey, if
I don't get to spend my money
right away-- think of my kids
when they were young--
it's not good for anything.
So money two days from now
is basically worthless.
So think of a discount
factor like a quarter.
So a dollar tomorrow is worth
a quarter today, a dollar two
days out is worth 1/16, and a
dollar three days from now is
worth 1/64, pretty
close to 0.
So if you're very impatient,
your discount
factor is close to 0.
If you're very patient, your
discount factor is
very close to 1.
Are we together?
That's the technical content.
So again, I'm assuming
that everybody has
the same discount factor.
That might not be true.
And in our general model, we
have what economists call
different types of agent, where
a type is characterized
by their alpha, the probability
of being chosen, and so on.
All these five factors,
we have a
number for each of them.
And that tuple of five numbers
characterizes an agent.
And again, let me repeat.
I said it three times,
I think.
The price of a job is fixed.
So there's been other work, some
of it based on our work.
So Hens et al did their
work independently.
They're economists.
A slightly different model.
They said that there was no
cost for volunteering.
That is, babysitting
is no pain at all.
But they assumed that agents'
utilities change over time.
We don't.
And they assumed that agents
choose whether to provide
service, request service,
or opt out.
So they have a slightly
different model.
But again, they're investigating
scrip systems.
Aperjis and Johari actually had
a paper that followed onto
ours, but their focus was on
finding equilibrium prices,
which ours isn't.
But I'm being [INAUDIBLE],
telling you about other work.
So back to what we were doing.
We're interested in what seems
to be the most natural kind of
strategy, which is that
you have a threshold.
And the way to think about the
threshold, so suppose my
threshold is $7.
Why should I work?
Under what circumstances
will I work?
What's the intuition?
So again, let me ask you, to
make sure that we're all on
the same page here.
What would induce you to raise
your hand if somebody says, I
need a babysitter?
Yeah?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: You know you
have something coming up?
Well, in this model,
you don't.
Because remember, you're
chosen at random.
But you can figure out the
probability that you'll have
something coming up.
But why might one person be
more likely to raise their
hand than another?
Yeah?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Yeah,
less tokens, right?
So basically, the intuition
is, if you have very few
tokens, you're nervous.
Why are you nervous?
So suppose you have
two tokens.
You're nervous that you might
want something three times in
a row before you get
a chance to--
I mean, raising your hand
doesn't guarantee that you're
going to get work, right?
Raising your hand just says, I
am one of the 25 people who
raised their hand.
I have a probability 1 over
25 of being chosen.
So intuitively, you volunteer
for a job if you feel like
you're running low, whatever
that means.
And that's what we're going
to be talking about.
At what point do I start
getting nervous?
When do I start feeling
like I'm running low?
And if you feel like you have
a lot of money, you don't
raise your hand.
So that's the way
you're thinking.
You're following a threshold
strategy.
So a threshold strategy says,
I have a threshold,
let's say, of $7.
And what that means is if I
have less than 7
tokens, I volunteer.
If I have 7 or more, I don't.
Are we together?
That seems like the most
natural strategy.
You have some fixed threshold.
Very easy to implement,
obviously,
in a computer system.
You have some fixed threshold
that says, below
that I raise my hand.
Above that, I don't.
That's a threshold strategy.
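As a concrete illustration, here is a minimal simulation of the model where every agent plays the same threshold strategy S_k: volunteer exactly when you hold fewer than k tokens. All parameter values here (n, k, alpha, delta, the starting number of tokens) are made up for the example; this is a sketch of the setup described in the talk, not the paper's actual code.

```python
import random

def simulate(n=100, rounds=10000, k=7, tokens_per_agent=4,
             alpha=0.1, delta=0.999, seed=0):
    """One run of the scrip system where every agent plays S_k:
    volunteer iff you hold fewer than k tokens."""
    rng = random.Random(seed)
    tokens = [tokens_per_agent] * n
    utility = [0.0] * n
    discount = 1.0
    for _ in range(rounds):
        requester = rng.randrange(n)
        if tokens[requester] > 0:
            # Everyone below the threshold raises a hand.
            volunteers = [i for i in range(n)
                          if i != requester and tokens[i] < k]
            if volunteers:
                worker = rng.choice(volunteers)
                tokens[requester] -= 1   # pay one token...
                tokens[worker] += 1      # ...to whoever does the job
                utility[requester] += discount * 1.0
                utility[worker] -= discount * alpha
        discount *= delta
    return tokens, utility

tokens, utility = simulate()
# Tokens are only moved around, never created or destroyed.
assert sum(tokens) == 100 * 4
```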
Oh, I did it again.
Two buttons--
better human factors.
So why do I want to volunteer,
and why don't I?
This is just an argument
for thresholds.
If I have lots of money,
I don't raise my hand.
If I'm running short,
I do raise my hand.
Yeah?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Right.
So the question is, do
all the agents have
prior independent alphas?
And the answer is, they all
have the same alpha.
In the general model,
they don't.
So in the general model, part of
your type or personality is
your alpha.
And we allow for finitely
many types.
There could be a very large
number of agents, but there
might be seven types
of agents.
So your type is characterized
by five numbers.
One of them is the alpha.
So your second question was,
what about auctions?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: So the question
is, do they have
insight into the success
of prior auctions?
In a precise sense, as we'll
see, they don't have to.
So the answer is,
it's irrelevant.
But let me make that precise.
OK, so everybody understands
the threshold strategy?
You understand why it's the
obvious thing to do?
So threshold strategy, let me
call it Sk, is a volunteer if
I have less than k dollars.
So think of k as your
comfort level.
And so the question we're asking
technically is, is it
reasonable--
[PHONE RINGING]
JOSEPH HALPERN: Ah,
not my phone.
Your phone.
So I feel like the airline
pilot that says everybody
should turn off their
cell phones.
So is it reasonable to play a
threshold strategy, and how
does the system behave
if everyone
plays a threshold strategy?
The way we're going to formalize
that, we're going to
prove that there's a Nash
equilibrium where everybody
plays a threshold strategy.
Actually, there's an easy Nash
equilibrium where everybody
plays a threshold strategy.
That's where everybody plays the
threshold strategy of 0.
A threshold strategy of 0 says
you never volunteer, because
you volunteer if you have less
than $0, and you'll never have
less than $0.
So that's a threshold strategy
where k is 0.
That's a special case,
where k is 0.
And I claim if everybody plays
the strategy S0, which is
never volunteer, that's
an equilibrium.
Why is it an equilibrium?
Well, I claim if none of you are
ever going to volunteer,
the best thing for me to
do is not to volunteer.
Why should I volunteer?
Well, if I volunteer,
you will happily--
so if you need babysitting, if
I say, OK, yes, I'm going to
do it, you say, great,
I'll take you.
Babysit.
Here's a token.
OK, now you have a token.
What can you do with a token?
Well, next time you want
babysitting, you say, oh, who
wants to volunteer?
But all the rest of you are
never volunteering.
What good is my token doing?
So clearly, if everybody else
is following the strategy of
never volunteer, my best
response is never volunteer.
So there is a Nash equilibrium
where nobody ever volunteers.
That's obviously not
very interesting.
So we're interested in the
question of, is there a
non-trivial equilibrium at
threshold k where everybody
plays k, and they're happy with
that, and k is not 0?
And how does it behave?
And you'll see it does very
interesting things.
So you had asked, do you know
what other people are doing
and the successive--
let me sort of start addressing
that question.
So here is some technical
stuff.
So bear with me a few seconds.
I think I can explain this, and
there isn't that much of
it, anyway.
But I think it's sort of cool.
So out of curiosity, how
many people have
heard of maximum entropy?
Even if you don't know what it
is, how many people have at
least heard of it?
A fair number of you.
OK.
So it turns out-- I'll explain
it, I don't assume you know
it-- that maximum entropy
characterizes the distribution
of money in the system.
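For intuition on what that means: over a finite set of holdings {0, ..., k} with a fixed mean, the maximum-entropy distribution has an exponential-family (truncated-geometric) form, and since its mean is increasing in the exponent parameter, it can be found by bisection. The function name and the numbers below are mine, not from the paper; this is only a sketch of what a maximum-entropy money distribution looks like.

```python
import math

def max_entropy_dist(k, mean, iters=100):
    """Maximum-entropy distribution on {0, ..., k} with the given
    mean: p(j) proportional to exp(t * j), with t found by
    bisection (the mean is increasing in t)."""
    def dist(t):
        # Normalize stably by subtracting the largest exponent.
        m = max(t * j for j in range(k + 1))
        w = [math.exp(t * j - m) for j in range(k + 1)]
        z = sum(w)
        return [x / z for x in w]

    lo, hi = -50.0, 50.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if sum(j * p for j, p in enumerate(dist(mid))) < mean:
            lo = mid
        else:
            hi = mid
    return dist((lo + hi) / 2)

# Illustrative numbers: holdings 0..20, average 4 tokens per agent.
p = max_entropy_dist(k=20, mean=4.0)
```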
And knowing that, we can use
that to prove that there is
not technically a Nash
equilibrium but an epsilon
Nash equilibrium.
So a Nash equilibrium says,
given what everybody else is
doing, what I'm doing is
the best response.
Epsilon says, what I'm doing is
the best response to within
a very small epsilon.
Like, there might be something
I could do that's better.
But it can't be more--
think of epsilon as
being, like, 0.001.
It's not going to be more
than epsilon better.
So if I don't want to spend
hours thinking about what's
the best thing to do, I'm happy
with just playing my
threshold strategy.
Well, maybe out there, there's
something a bit better, but
it's not going to
be a lot better.
You can make epsilon as small
as you want, as it happens.
So I mentioned that it's also
a Nash equilibrium if
everybody plays a threshold of
0, never volunteering, but
it's not interesting.
Now, here's the interesting
thing for system designs.
What we're going to show is,
so no matter how much money
there is in the system, there's
an equilibrium.
But, now it turns out as you
pump more and more money into
the system-- start at 0, pump
more money into the system--
social welfare improves.
So if you're a system designer,
you want to pump
more and more money into the
system up until a certain
critical point.
So again, going back to
Washington Babysitting Co-op
story, you can see that
if you start with $0--
that means nobody can ever do
any babysitting, because they
have no tokens--
putting more money into
the system, that
increases social welfare.
People are going
to be happier.
They're going to be able to pay
for babysitting until you
get to a certain point where
there's lots of money floating
around in the system,
and nobody is
going to want to volunteer.
That's the intuition,
but what we show is
it's a critical point.
So things get better and better
and better and better
until there is a crash.
And it's a sharp crash.
It's like you fall
off a cliff.
So if you're a system designer,
there's sort of a
magic amount of money.
All that matters, it turns out,
is the average amount of
money per person.
So there's a particular
number, like 7.
If you're a system designer,
independent of the number of
people in the system, you want
to have an average of $7 per
person in the system.
You want to get as close
to that as you can.
But if you have a little
bit more than that--
so putting in more and more
money up until you get to $7
makes things better and
better and better.
The total amount of happiness in
the system keeps increasing
up until you get to
this cliff point.
And then you fall off the cliff,
and then nobody's ever
going to volunteer anymore,
and people
are extremely unhappy.
No babysitting gets
done at all.
So now, this assumes, of course,
that everybody's
totally rational.
Now, of course, in the real
world, even if you totally
understand this, not everybody's
rational.
So if I were a system designer,
I wouldn't push my
luck and go all the way
to 7 or even 6.999.
You want to back off a bit.
And I'll talk about that
later in the talk.
But this is the lesson for
system designers, that there
is a magic number, which is the
average amount of money
per person.
And that's what you
want to hit.
Well, you don't exactly
want to hit it.
You probably, in the
real world, want
to get fairly close.
But you don't want to
push your luck.
But if you go over it, the
system will crash.
Very bad.
Crash in the sense of nobody
will ever volunteer.
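One mechanical way to see such a freeze is a small simulation. This is a sketch in Python, not the paper's experiments: the agent count, token supply, and fixed threshold below are my own illustrative choices, and everyone's strategy is clamped rather than derived from the equilibrium analysis in the talk.

```python
import random

def unserved_rate(n, total_money, threshold, rounds=20000, seed=0):
    """Fraction of requests that find no volunteer when everyone plays
    the same fixed threshold. Each round a random agent wants a job
    done; if they can pay, a volunteer is drawn uniformly from the
    other agents holding fewer than `threshold` tokens."""
    rng = random.Random(seed)
    money = [0] * n
    for d in range(total_money):  # deal the tokens out round-robin
        money[d % n] += 1
    misses = 0
    for _ in range(rounds):
        r = rng.randrange(n)
        if money[r] == 0:
            continue  # chosen, but can't pay: nothing happens
        volunteers = [i for i in range(n) if i != r and money[i] < threshold]
        if not volunteers:
            misses += 1  # money to spend, but nobody will work
            continue
        v = rng.choice(volunteers)
        money[r] -= 1
        money[v] += 1
    return misses / rounds

low = unserved_rate(100, 200, threshold=7)   # average $2 per person
high = unserved_rate(100, 700, threshold=7)  # average $7 per person
```

With an average of $2 per person, every request that can pay finds a volunteer; at an average of $7, equal to the threshold, everyone sits exactly at the threshold and trade freezes completely.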
Are we together?
So that's what the mathematics
is telling us.
They've actually observed this
phenomenon in the real world,
I think in "Second
Life," actually.
If somebody knows more
about this--
I mean, somebody mentioned
this to me after a talk.
"Second Life" is
a game, right?
And apparently, this phenomenon
has been observed.
So we can sort of simulate it.
But it happens.
OK, next few slides
are technical.
Bear with me.
If you're totally
non-mathematical, it's
probably, maybe, a
time to tune out.
But I will try to make it as
accessible as possible.
And then we'll get back to what
we learned from this.
So formally, we can view this
system as what's called a
Markov chain.
OK, how many people have
heard of Markov chains?
Oh, fair number of you.
OK, but I'll try to make
sense of this.
So a Markov chain consists of
a collection of states and
transitions between states.
So if you're in this state,
you'll move to another state
with a certain probability.
So think of it as what
we call a graph.
So a bunch of nodes, and
nodes are the states.
The states are connected by
edges, and the edges have
probabilities on them.
And the way to think about this,
if you're at this node,
you look at where the
edges are going.
Each edge is labeled
with a number.
That's a probability.
The sum of the numbers is 1.
It says with a probability
1/3, you're going here.
With a probability 1/4, you're
going here, and with a
probability of, whatever,
about 5/12,
you're going over here.
So each of the edges is labeled
with a number that's a
probability.
That's a Markov chain.
So it doesn't tell you exactly
what's going to happen.
You can sort of figure out
how likely is each path.
If I'm here, I'm going
to go here.
Then I'm going to go here.
What's the probability
of doing this?
That's a Markov chain.
So what are the states?
The states are just a tuple that
describes how much money
each agent has.
So imagine I've got 1,000
agents, and I've got $2,000 in
the system, 2,000 tokens.
A state just describes how those
2,000 tokens are split
up between the agents.
You've got three, you've
got two, you've got 27.
You're sitting there
with 300 of them.
A bunch of people have zeros.
OK, that's a state.
Who has how much money?
Are we together?
Now, if you're in a certain
state, what's the probability
of moving to another state?
Well, the only other state I
can move to, if I'm talking
about tokens, is one person
gets one more token, and
somebody else gets
one less token.
That's the only move that
I'm allowed, right?
So if you have 300, and you need
babysitting done, and you
volunteered, then one of your
300 is going over to him.
You have 299, and you have
whatever you had plus one.
Are we together?
Those are the state transitions,
but there are
lots of possible transitions,
because somebody
is chosen at random.
So everybody is equally likely
to be chosen to want
babysitting.
Now, I can't tell you what the
transition function is until I
tell you what strategy
everybody's following.
So suppose we fix everybody else
at following a threshold
of 7, let's say.
Once I fix everybody else at
following a threshold of 7,
then I could describe this
as a Markov chain,
because I can say, look.
With equal likelihood, each
one of you is going to be
chosen to want something.
We're in a particular state.
Remember, a state is a tuple
saying how much money
everybody has.
OK, so if there are 200 of you
here, with probability 1
over 200, Marc is chosen
to want something.
If he has greater than $0,
he can say, OK, who's
willing to do a job.
If he has 0, that's it.
He's chosen, but nothing
happens.
So with a probability 1,
we stay in the same
state if he's chosen.
But otherwise, everybody who
has less than $7 will
volunteer, because they will
all have a threshold of 7.
So everybody with fewer than $7
will volunteer, and one of
them is chosen at random
to do the job.
And Marc's money goes down by
$1, and whoever's chosen to do
the job, their money
goes up by $1.
That's the transition.
So it's easy, once we know
everybody else's strategy,
everybody else's threshold, to
compute the transitions.
Is that clear?
Even if you don't understand--
there's no deep math going on.
That's the Markov chain.
Now, a key fact about this
Markov chain is if you run it
for a while--
nature chooses somebody at
random, you see what happens.
What's going to happen
after a while--
and this is just-- there are books
and books on Markov chain
theory, so I won't prove
why this happens.
You're going to end up in a
situation where each state is
equally likely.
This is standard, like theorem
two in any standard
book on Markov chains.
I'm not going to prove it, but
a key observation is that the
transitions are symmetrical.
What I mean by that, and that
I will explain, is that the
probability of going from state
1 to state 2 is exactly
the same as the probability
of coming back from
state 2 to state 1.
And I'll explain why
in a second.
So then we'll open
up your favorite
book on Markov chains.
And there's lots.
And one of the theorems says
in any situation where the
probabilities are symmetric,
after a while, you end up in a
situation where all states
are equally likely.
So let me explain
the symmetry.
So again, symmetry means that
the probability of going from
state 1 to state 2 is exactly
the same as the probability of
coming back from 2 to 1.
Well, what's the probability
of going from
state 1 to state 2?
Remember, the only transitions
I'm interested in are
ones where I have
$1 less, and let's
say Stu has $1 more.
Those are the only possible
transitions.
So let's look at a situation
where I'm the one who wanted
babysitting done.
So that happened with
probability 1 over n that I
was chosen.
And let's look at
all the people--
again, I fixed the threshold,
let's say, at 7.
So let's fix all the people
who have less than $7.
They're going to volunteer.
And Stuart is one of them.
And each one is equally
likely to get chosen.
So the probability of making
that transition is 1 over n
times 1 over m, where m is
the number of volunteers.
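As a sketch, that 1 over n times 1 over m computation can be written out directly in exact fractions. The four-agent state and the threshold of 7 below are made-up examples, not from the talk:

```python
from fractions import Fraction

def transition_prob(money, threshold, requester, worker):
    """Probability of the one-token move where `requester` pays $1 and
    `worker` earns $1, when everyone plays the same threshold: 1/n to
    pick the requester, times 1/m to pick the worker among the m other
    agents holding fewer than `threshold` dollars."""
    n = len(money)
    if worker == requester or money[requester] == 0:
        return Fraction(0)
    if money[worker] >= threshold:
        return Fraction(0)  # this worker would not volunteer
    m = sum(1 for i in range(n) if i != requester and money[i] < threshold)
    return Fraction(1, n) * Fraction(1, m)
```

For example, in the state [3, 0, 7, 2] with threshold 7, agent 0 paying agent 3 has probability 1/4 times 1/2, which is 1/8, and agent 2 (already at the threshold) can never be the worker.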
What's the probability
of coming back?
Well, it's the probability that
Stuart is chosen to want
babysitting, which is 1 over n,
and the probability that I
volunteer, and I get chosen.
But in the long run, if the
threshold that everybody is
playing is k, after a while,
nobody will ever have more
than k dollars, because if you
have more than k, you're going
to come down to k.
You'll never volunteer until
you're at k or below, and
you'll never go above k.
So after the system's employed--
do you see that?
So if the threshold is 7, you
might start out life--
I don't care how you
start out life.
You might start out
with, like, $50.
But if everybody is playing
a threshold of $7, and
you have $50,
then you want something done.
You'll go to $49, then $48.
You'll never volunteer until
you're below $7.
And once you're below $7, you'll
never rise above it.
So after some initial period--
I don't care what state
we started out in--
we'll be in a situation
where nobody has
more than $7, right?
So let me go back to the
probability of coming back.
So again, the probability of me
wanting something is 1 over
n, and Stuart being chosen
is 1 over m.
That means that m people
had less than $7.
Well, coming back, there's the
probability of Stuart being
chosen-- that's 1 over n.
And what's the probability
that there's
going to be a volunteer?
Well, there's one fewer
volunteer--
Stuart.
But there's one more
volunteer--
me.
And I will have less than $7,
because even if I
had $7 before,
I gave Stuart $1, so I
have less than $7.
So it's the same 1
over n times 1 over m.
There's exactly the same
volunteers when I wanted a job
done as when Stuart wants a job
done, except that before
Stuart volunteered and I
didn't-- because you don't
volunteer for yourself--
and now I'm volunteering
and Stuart isn't.
So it's trivial.
And as I say, there's this
theorem, which I'm not going
to prove, that says if you
have that situation, all
states are equally likely.
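A quick numerical sanity check of that theorem, on a toy three-state chain of my own rather than the scrip chain itself: if the transition matrix is symmetric, pushing any starting distribution through the chain flattens it to uniform.

```python
def evolve(pi, P, steps=500):
    """Push a distribution `pi` through a row-stochastic matrix `P`
    (given as a list of rows) `steps` times."""
    n = len(P)
    for _ in range(steps):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# A symmetric transition matrix: P[i][j] == P[j][i], each row sums to 1.
P = [[0.5, 0.3, 0.2],
     [0.3, 0.4, 0.3],
     [0.2, 0.3, 0.5]]
pi = evolve([1.0, 0.0, 0.0], P)  # start with all mass on state 0
```

Even starting from all the probability on one state, the chain ends up (to numerical precision) at the uniform distribution 1/3, 1/3, 1/3.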
Seems good.
We understand the
system, right?
There's one slight problem.
Oh, god.
I did it again.
You can see I'm a high-tech
kind of guy.
So I don't care about exactly
how much money Stuart has and
exactly how much money Marc
has and exactly how much money
[INAUDIBLE] has.
I really care about the
distribution of money.
What I really care about is what
fraction of people have
less than k dollars.
Why do I care what fraction
of people
have less than k dollars?
Why should I care about that?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Sorry?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Well, let's get
a more practical answer.
Why do I care about the
number of people who
have less than k dollars?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: It tells me
the number of volunteers.
And why do I care about that?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Yeah.
So look, I'm trying
to figure out--
so suppose all the rest of
you are following threshold
strategy k.
I'm trying to figure out
what I should do.
I'm noise.
I'm not going to affect
anything.
So I'm trying to figure
out my best response.
All the rest of you are
clamped at following a
threshold strategy of 7.
Are we together?
And I'm trying to figure out
what I should do in response.
Well, what should I be
comfortable with?
Should I set my threshold
at 10?
Should I set my threshold
at 3?
What's the right number?
Well, what I want to
know is, what's my
competition if I volunteer?
I mean, what I need
to figure out--
here, I'm sitting there
with 10 tokens.
And is that a good
number or not?
Should I feel comfortable
having 10 tokens?
Well, what do I need to
know to figure it out?
Well, what I need to know is,
how likely am I to run out of
tokens before I can replenish?
If I'm worried that I'm going to
run out of tokens before--
in other words, I'm worried that
I'm going to be chosen to
want something 11 times before
I get a chance to volunteer
and be chosen.
Are we together?
Because then I'll be unhappy.
I mean, it's always possible
that I get chosen
11 times in a row.
I mean, this is random.
A coin could land heads
11 times in a row.
But I have to figure out how
likely that is, right?
So what I have to figure out is
the probability that I'll
be chosen 11 times before I get
a chance to make a token.
That's what I'm interested in.
So I need to know what my
competition's going to be like
when I volunteer.
Now, if all the rest of you
are following a threshold
strategy of 7, my competition
is exactly the number of
people who have less than $7.
Does that make sense?
That's what I want
to understand.
So I don't care whether it's
Stuart who volunteers or David
who volunteers or Marc
who volunteers.
I don't care who volunteers.
I just want to know how
many of you volunteer.
The names of the people who
volunteer is totally
irrelevant.
So I don't want to know who
has $5 and who has $3.
I want to know what fraction
of people have
$5 and $3, and $6--
well, actually, all I really
care about is what fraction of
people have less than $7.
Are we together?
That's the technical
point here.
You've got that, you're
in good shape.
Now look at what happens.
Suppose I tell you there
are $2 in the system.
Well, what could happen?
It could be-- you're
the agents.
One person has the $2.
How many ways are there
for that to happen?
Do you remember your [INAUDIBLE]
from high school?
We have n agents.
How many ways are there that
one of them has $2?
Not very hard.
n ways, right?
OK.
The other possibility is
those $2, two different
people each have $1.
So I'm thinking, how
could the $2 be
spread around the system?
Well, either one person has both
dollars, or two different
people each have $1.
How many ways are there
for two different
people to have $1?
Remember this from
high school?
n choose 2--
n times n minus 1 over 2.
So n choose 2-- that's how
you choose 2 people out of n
to have $1--
is roughly n squared.
It's n squared over 2, or n
times n minus 1 over 2.
So there's n ways that one
person has $2, n choose 2 ways
that two people each have $1.
n choose 2 is much bigger than
n. n squared is much bigger
than n for large values of n.
So it's far more likely,
if I have $2 in the system,
that two people will
each have $1 than that one
person has $2.
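The counting is one call to the standard library; n = 1,000 agents is just an example:

```python
import math

n = 1000  # number of agents, for example
ways_one_holds_both = n                   # pick the one agent with $2
ways_two_hold_one_each = math.comb(n, 2)  # pick the two agents with $1
# n choose 2 = n*(n-1)/2 grows like n**2 / 2, so for large n it dwarfs
# the n ways of putting both dollars on a single agent.
```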
Now, that generalizes.
And in fact, this is an instance
of what physicists
call the concentration
phenomenon.
OK, here's the math.
Actually, I'll skip the
math, because I'm
getting close to 7:30.
So what maximum entropy
does, so given--
OK, let me say a little bit.
Given a distribution mu--
think of mu of i as
the fraction of
agents that have i dollars.
So it's n of you.
Suppose n is 200.
If 18 of you have $3, that
means 9% of you have $3.
So mu of 3 would be 0.09,
9 over 100, right?
So mu of i is the fraction
of agents with i dollars.
Now, the entropy of the
distribution mu--
this is the entropy function--
is mu of i log mu of i.
So it's the fraction of agents
that have $3 times the log of
the fraction of agents
that have $3.
Sum that up, put a minus sign
in front, that's the entropy
of a distribution.
For those of you who are
familiar with it,
that's what it is.
If not, don't worry about it.
But it turns out that this is
the key mathematical fact.
There are lots of different ways
to distribute capital N
dollars, like $10,000 among
500 people-- one
person could have $5,000,
another could have $0.
There's lots of different
ways of doing it.
But it turns out that ways
that end up having a
distribution of money that's
very close to the maximum
entropy distribution dominate
all other ways.
So the likelihood of having the
distribution of money be
characterized by probability
mu is characterized by the
entropy of mu.
The distribution with the
greatest entropy or
distributions close to that are
far, far more likely than
anything else.
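For the mathematically inclined, here is a sketch of both pieces: the entropy function, and the maximum entropy distribution on 0 through k with a given average amount of money. Under a mean constraint the maximizer has the truncated-geometric form mu(i) proportional to lambda to the i (a standard Lagrange-multiplier fact), so the code below just solves for lambda by bisection; this is my own illustration, not necessarily the procedure used in the paper.

```python
import math

def entropy(mu):
    """H(mu) = -sum_i mu[i] * log mu[i], with the convention 0 log 0 = 0."""
    return -sum(p * math.log(p) for p in mu if p > 0)

def max_entropy_dist(k, mean, iters=200):
    """Maximum-entropy distribution on {0, ..., k} with the given mean.
    The maximizer is mu(i) proportional to lam**i, so we solve for lam
    by bisection on the mean, which is increasing in lam."""
    def dist(lam):
        w = [lam ** i for i in range(k + 1)]
        z = sum(w)
        return [wi / z for wi in w]
    lo, hi = 1e-9, 1e9
    for _ in range(iters):
        mid = math.sqrt(lo * hi)  # bisect on a log scale
        if sum(i * p for i, p in enumerate(dist(mid))) < mean:
            lo = mid
        else:
            hi = mid
    return dist(math.sqrt(lo * hi))

mu = max_entropy_dist(7, 2.0)  # threshold 7, average $2 per person
```

With an average below half the threshold, lambda comes out below 1, so the resulting distribution puts more mass on small holdings than large ones.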
What does that mean
in practice?
This is an answer to your
question about why I don't
have to know anything about
previous actions.
I know if n is reasonably
large--
yes, so as long as I know how
much money is in circulation,
if n is reasonably large, then
I know almost for sure, like
with probability--
in 9,999 circumstances
out of 10,000--
very, very close to 1.
I know for sure what fraction
of agents have $0, what
fraction of agents have $1, what
fraction of agents have
$2, within a very small fudge
factor of epsilon.
It's characterized by the
maximum entropy distribution,
the one that maximizes this
entropy expression.
The distribution that
maximizes the entropy is the
one that almost surely
characterizes the fraction of
agents that has each
amount of money.
So I don't have to know
anything else.
I know what my competition
is going to be like.
If the threshold is 7, I know
almost for sure, like I would
bet huge sums of money on it--
you're much safer betting on
this than crossing the street.
That's how sure you are--
how many agents are going to
volunteer in each round.
I don't know who they are.
So I don't know which agents
have $3, which ones have $5.
But I know what fraction of
agents will have $3, what
fraction will have $5, and in
particular, if the threshold
is 7, what fraction will
have less than $7.
So I know what my competition
is going to be like.
I know every time I raise my
hand, almost for sure, how
many other people are going
to raise their hands.
Each round is going to be 112
people raising their hand,
almost for sure--
different people.
Are we together?
That's sort of the power
of mathematics here.
So I can now figure out--
it's a fairly straightforward
computation--
what my threshold ought to be.
That's the point where the
likelihood of the cost of not
raising my hand exceeds the
cost of raising my hand.
In other words, how likely is
it that I'm going to run out
of money if I have 10 tokens?
How likely will I run out?
Knowing what my competition's
like, I can figure out how
likely I am to be chosen every
time I raise my hand.
I can figure out how many times
it's going to be before
I run out of money, with
all probability.
Because I have the complete
probabilistic model, I can
figure out what's the right
amount of tokens I should have
if I'm going to do
a best response?
OK, that's what this tells me.
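A crude back-of-the-envelope version of that computation, with simplifications of my own (it ignores that the number of volunteers m fluctuates and that my own threshold caps my balance): per round I spend with probability 1 over n, and earn with probability roughly the chance someone else is chosen times 1 over m; conditioning on my balance changing at all, I can ask how likely my next 10 changes are all spends.

```python
def drain_prob(tokens, n, m):
    """Rough probability that my next `tokens` balance-changing events
    are all spends, in a system of n agents where about m agents
    volunteer each round."""
    p_spend = 1.0 / n                   # I'm the one chosen to request
    p_earn = ((n - 1) / n) * (1.0 / m)  # someone else requests and I'm
                                        # the volunteer picked out of ~m
    s = p_spend / (p_spend + p_earn)    # simplifies to m / (m + n - 1)
    return s ** tokens
```

With the talk's numbers, 200 people in the room and 112 hands going up each round, draining 10 tokens straight comes out well under one in ten thousand.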
Now, here's a sanity check, even
if you don't understand
any of the mathematics.
So what I've done now is I
said clamp everybody at
following a threshold of 7.
Now, suppose I change that and
say clamp everybody at
following a threshold of 10.
Should my threshold
go up or down?
What's your intuition?
So if I know all the rest of you
are following a threshold
of 10, compared to 7--
so I figured out my
best response.
Let's say my best response, if
you're following the threshold
of 7, the right thing for me
to do is to have 10 tokens.
Suppose I raised your
threshold to 10.
So you're going to raise your
hand as long as you have 10
tokens or less.
Should I raise my threshold or
lower my threshold or no clue?
Any intuitions?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Raise.
Why?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: OK, let's make
it a bit more precise.
You're sort of right.
So that's the right answer.
If other people are raising
their threshold, I should
raise my threshold.
But why?
There's a really good
intuitive answer.
Anybody else?
Yeah?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: You'll have
more competitors.
That was exactly the
right answer.
If everybody else is raising
their hands--
before, they were all raising
their hand if they
had less than $7.
Now they're raising their hand
if they have less than $10.
You have to be a little bit
careful, because if the
threshold is 10, the
distribution changes.
Because before, they only had
amounts between 0 and 7.
Now they have amounts
between 0 and 10.
So there's a slight
subtlety here.
But nevertheless, it's
the case-- and
it's not hard to show--
that if everybody's raising
their threshold to 10, you're
going to have more competition
than when the threshold was 7.
So you ought to raise
your threshold.
So your best response function,
as a function of
what everybody else's threshold
is, is increasing.
Yeah?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: No, it's not
raising their price.
It's raising when they're
willing to volunteer--
raise their threshold.
It's not a price.
The price is always one token.
There's no change in price.
So it's raising your
comfort level.
Before, you were only going to
volunteer if you had fewer
than 7 tokens.
Now you're going to volunteer
if you have
fewer than 10 tokens.
Does that help?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Yeah,
but you shouldn't--
I mean, let's just stick
to thinking about
tokens and not price.
So using this, what happens
is there's--
so for the technically oriented,
there's the theorem
by a famous mathematician named
Alfred Tarski, that more
or less says, if you have a
monotonic function, so the
best response function, which is
what should I do given what
you're doing?
So you as everybody else.
So if I know everybody else's
best response is 7, what's my
best response?
If I know everybody else
is clamped at 10,
what's my best response?
If everybody's clamped at 15,
what's my best response?
That best response function
is monotonic.
As I raise what everybody
else's threshold is, my
threshold, my best
response goes up.
If you have a monotonic
function,
there's a fixed point.
The fixed point means everybody
else is doing 12, my
best response is 12, that's
an equilibrium.
In other words, if everybody
is playing 12, they're best
responding to everybody
else playing 12.
So even if I know that all the
rest of you are playing 12,
because 12 is a fixed point,
then I'm playing the best
response, and that's
a Nash equilibrium.
So this theorem basically says
that as long as delta is close
to 1, and there's enough
agents-- enough turns out to
be, like, 100--
then this is basically saying
that there's an epsilon best
reply that's a threshold
strategy.
And the best reply function
is monotonic.
And from that, we can conclude
that there's a greatest and
least fixed point, and we can
find the fixed point iterating
best replies.
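Iterating best replies can be sketched as follows; the best-reply function here is a toy monotone stand-in of my own, since the real one comes out of the probabilistic model above.

```python
def find_equilibrium(best_reply, start=0, max_iter=10_000):
    """Iterate a monotone best-reply function on integer thresholds
    until it stops moving. Monotonicity (the Tarski fixed-point
    argument) guarantees a fixed point exists, and iterating upward
    from below walks to one."""
    t = start
    for _ in range(max_iter):
        t_next = best_reply(t)
        if t_next == t:
            return t
        t = t_next
    raise RuntimeError("did not converge")

# Toy monotone best reply: if everyone else's threshold is t, my best
# threshold creeps up toward 12.
toy_best_reply = lambda t: min(12, (t + 13) // 2)
```

Starting from 0, the iterates climb 6, 9, 11, 12 and then stay put: 12 is a fixed point, so everyone playing 12 is a best response to everyone else playing 12.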
Let me skip over this, and let
me jump to the punchline, that
we have a Nash equilibrium
and threshold strategies.
But this is what I was
saying before.
How much money--
suppose you know that.
You're the system designer.
No matter how much money you
pump into the system, there's
a threshold.
That's good.
There's a magic number, like
12, that the agents can
actually learn by just playing
and discovering.
Just best responding to each
other, they can figure out
what the best response is.
So there is a Nash
equilibrium.
We're all going to play 12.
We're all going to
happily play 12.
And it's nontrivial.
Again, it's a Nash equilibrium
if we all play threshold of 0.
That's not interesting.
But there's a non-trivial
Nash equilibrium.
But that hasn't answered the
question, that what's the
right amount of money to
pump into the system?
And there, the key is--
we did experiments.
We can both prove this
mathematically and observe it
by simulation, that as you pump
more and more money into
the system, you look at the
total happiness of the system.
Remember, you're
totally happy.
You've increased happiness by
making sure that when somebody
wants a job done, they have
money, following the threshold
strategy, and they're unlikely
to run out of money.
Again, it's always possible
to run out.
You're using a threshold of 10,
it's always possible that
10 times in a row, very quickly,
you'll want a job
done before you get a chance
to make an extra dollar.
It's possible, but unlikely.
As you pump more money
into the system, that
likelihood goes down.
But then you have to ask
whether people will still
volunteer, and so there's a
magic threshold where people
all of a sudden stop
volunteering.
And that was a surprise.
There was a cliff, that it
happens all of a sudden that
you reach this magic number, and
people stop volunteering.
And this says something.
So now, let me talk about
system design issues.
I'll try to finish up in the
next five minutes or so.
Yeah?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: No.
So we're not clamping
the threshold.
We're looking at what's
the equilibrium.
And the equilibrium will change
as you pump money into
the system.
So now I'm not setting
the threshold.
Now I'm assuming everybody
is a rational agent.
They're all going to play in
that Nash equilibrium.
So in other words, if I set
everybody at 7, then you might
think, well gee, why
should I play 7?
12 is better for me, right?
So the equilibrium is
everybody's playing 12, and
that's also the best response.
12 is the best response to
everybody playing 12.
Am I making sense?
Not so happy.
So
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: OK, so what
I'm claiming is there's a
unique, technically greatest,
fixed point.
So once you fix the money
supply, there is a magic
number that you can compute--
and I can even give you
the procedure, but let
me not do that now--
that gives you a fixed point.
That is, there is a
magic threshold--
let me say, 12, but
it's not 12.
But it is a function of
the money supply and
the number of agents.
What really matters, it turns
out, is not the money supply
or the number of agents
separately.
It's the average amount
of money per person.
So it's the money supply
divided by
the number of agents.
So that turns out to be the
key parameter, the only
parameter that matters.
But once you fix that-- so for
every average amount of money
per person, once
you fix that--
there's a magic threshold--
let me say 12, but it's not--
such that that's the greatest
fixed point.
So it's everybody playing 12
is the best response to
everybody else playing 12.
And there might be other fixed
points, but they will be
smaller than 12.
So that's what we're
looking at.
But that was assuming that the
system size was fixed.
Now, imagine a real-world
system that's dynamic.
People enter and leave.
And particularly, you're
interested in systems where
lots of people are coming in.
You have a growing peer-to-peer
system, and you
want people to do
jobs for you.
This really happens in
real systems, right?
It's not like I'm
making it up.
And the system grows.
And now you say, well, OK.
I'm the system designer.
What's my goal?
Well, what I've learned from
this talk is what I want to
keep is the average amount of
money per person fixed.
So if the right average amount
of money was, let's say, 7, so
if another 200 people enter
the system, I want to pump
1,400 more tokens
into the system.
So I don't want to do what the
Babysitting Co-op did and pump
lots of tokens in for
no good reason.
There's a magic number.
But of course, if more people
join the co-op, then you want
to increase.
Now, what's the right
way of doing this if
you're a system designer?
So one obvious way of doing
this is, OK, every time
somebody comes into the system,
you give them $7.
Why is that not a good
idea on the internet?
Now, you're a system designer,
and you want to keep the
average amount of money per
person fixed at 7, now your
system's growing.
You just had 1,000 people
joining the system.
This really happens.
And you could say, OK, one way
to keep the average amount of
money-- it was $7 before.
Make sure each person
coming in gets $7.
Of course, the average is
still going to be 7.
Why is that a bad idea?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Zombies.
Two accounts.
You're all saying
the same thing.
It's called--
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Right.
It's called the sybil problem.
That's the technical
terminology.
But it's I come in,
I get my $7.
I ask for seven jobs,
now I'm down to 0.
No problem.
I leave.
My twin brother comes
in, gets the $7 and
spends the $7 and leaves.
But then I also have
a triplet--
I have another sister
out there.
She comes in, gets $7, spends
it all, and leaves.
This is the zombie, or
sybil, problem--
one person posing as many.
So that's a very bad idea if
you're a system designer.
So what do you do instead?
Redistribute.
So what we suggest is bring
people in with $0. So suppose
the system doubles in size.
Before, you had 1,000.
Now you have 2,000 people.
You want to bring in a new 1,000
people with $0, because
otherwise, you're going
to get the sybil
problem, or zombie problem.
But you still want to keep the
average amount of money at $7
per person.
Well, you just give everybody
double the amount of money
they had before--
the people who were
there before.
The new people get $0.
So we have inflation here.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Yes, but you
still keep the average amount
of money at $7 per person,
and that turns out
to be what you want.
You want to keep the
averages right.
Oh, equivalently, you have
the price of a job.
Notice that giving everybody
twice as much money as they
had before, effectively, is
saying the price of a
job used to be $1.
Now it's $0.50.
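The bookkeeping for that rescaling is simple; this is a hypothetical helper with made-up numbers, not anything from the paper:

```python
def admit_with_rescale(money, n_new, target_avg):
    """Bring n_new agents in with $0 and scale up the incumbents'
    holdings so the average stays at target_avg. Newcomers getting $0
    removes the sybil incentive; the scale-up is the 'inflation'
    (equivalently, cutting the price of a job when the system grows).
    Assumes the incumbents hold a positive total."""
    needed = (len(money) + n_new) * target_avg
    scale = needed / sum(money)
    return [m * scale for m in money] + [0.0] * n_new
```

Doubling a 1,000-person system that averages $7 per person doubles every incumbent's holdings to $14, gives the newcomers $0, and keeps the average at $7.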
Yeah.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: That would be
another way of doing it.
And again, this would tell you
what would be the right price
to set it at to solve
the zombie problem.
So you could say it costs $7 to
enter the system, and I'm
going to give you $7.
So that would be another
way of doing it.
It would maintain the average
amount of money at $7, but you
have no temptation, although it
turns out you do, and I'll
come back to that,
to have zombies.
Are we together?
I mean, this is the kind of
thing a system designer has to
think about.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: So you've got to
think about these things as
a systems person.
And the hope is that the number
of people who are
entering is relatively small
relative to the number of
people that are there.
So what it means is when
you enter, you'd
have to do some work.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: OK, so maybe
now you have to--
I'm not denying what
you're saying.
So you need to think, maybe
the right thing to do is
charge a small amount
to enter.
You can't win here.
So if you give everybody $7,
then you're going to have a
sybil problem.
And otherwise--
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: So we didn't,
because we didn't have the
exponential growth in
our simulation.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Right.
So again, these are
things you--
so we didn't do it.
And I think as a systems
person, you
clearly have to do it.
So let me just do one
or two slides.
I know I'm over 7:30, so let me
try to finish up quickly.
Now, in the real world,
as I said, not
everybody is rational.
Now, these are tokens.
There's no happiness
value associated to
these tokens, right?
You don't die happy if you have
$1 million stuffed under
your mattress, I'm told.
But--
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: In the real
world, there are people who
like to have money for the sake
of having money, but we
call them hoarders.
And there's other people who
are altruistic and love
babysitting your kids.
You don't have to pay me.
Your kids are just so great.
I'll do it for free.
Well, we call those altruists.
So I mean, I think there
are many types
of irrational behavior.
I'm not trying to suggest--
irrational only in the sense
that they're not treating
these as just pure
tokens, right?
I don't want to say,
necessarily, altruists are
really irrational.
Hoarders are really irrational,
but the
irrational--
Marc, just give me one
or two more minutes.
From the point of view of--
I know I'm over time.
From the point of view of our
framework in the system.
But my guess is that in the real
world, you'll find all
sorts of different flavors of
irrationality but a few rather
common flavors of
irrationality.
And I'm suggesting hoarding and
altruism are going to be
fairly common flavors
of irrationality.
So it turns out that hoarding
has the effect of removing
money from the system.
So if you understand that 10% of
the people in the world are
going to be hoarders, well, so
what happens now is that
critical point just moved
over a little bit.
You should pump a bit more
money into the system.
And that turns out to be just
right mathematically, as well
as intuitively.
Does that make sense?
Putting money under your
mattress is taking the money
out of the system.
So you put in a bit more money
into the system to compensate.
Now, altruists have the opposite
effect, sort of, but
it's a bit more complicated.
Roughly, they're like adding
money to the system, because
they're going to do
work for free.
Having some altruists
in the system makes
everybody better off.
So if I know there's, like, 10
people out there who will work
for free, well gee, great.
Because those few times when
with very, very low
probability I ran out of money,
you're there to babysit
for me because you just love my
kids, and you're willing to
do it for free.
Great.
Social welfare increases,
right?
You like babysitting my kids.
It didn't hurt you.
And I'm happier, right?
So a few altruists are good.
AUDIENCE: Grandparents.
JOSEPH HALPERN: Grandparents.
That's right.
They're called grandparents.
But too many can hurt
social welfare.
Let me give you the intuitive--
or I can
give you the graph.
So this is what happens.
You add altruists
to the system.
Things get better and better.
But then there's a crash.
Any intuition for the crash?
I can give it to you, but does
anybody have a sense for--
again, real-world phenomenon.
It has a real-world
explanation.
And this, actually, I mean, in
the academic world, I can give
you phenomena like this.
If you think there's a lot of
people who are willing to
referee a paper.
[INAUDIBLE], any thoughts?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Exactly.
So suppose I'm perfectly
rational.
And I say, hey, why should
I work to get a token?
There's a bunch of guys out
there who are willing to
babysit for me for
nothing, right?
Now, it's not true-- so
actually, this only makes
sense when we assume that--
so in our slightly more
complicated model, well, you
don't always volunteer, even if
you're an altruist, or even
if you need the money.
Let's say Friday night I'd
love to volunteer.
Sorry, I can't.
With some probability
1 minus beta, I'm busy, and
I just can't volunteer.
So we've assumed in this paper
that beta is 1, that as long
as you have less than your
threshold amount of money,
you'll volunteer for sure.
But in our more general result,
we assume that there's
a certain probability that even
if you're an altruist, or
even if you need the money, you
won't volunteer, because
hey, you're busy.
In the real world-- in the
real systems world--
right now, as it happens, you
don't have spare cycles to
do the job.
It's just a busy time.
So we assume that with some
probability 1 minus beta, you
won't volunteer, even if
normally, you would.
Well, in that case, imagine
there's a bunch of people who
are altruists, and I say,
why should I volunteer?
Because when I want something
done, there is a good chance
that somebody out there
is an altruist and
will do it for free.
Now, unfortunately, that chance
isn't high enough.
So it's still not worth it for
me to volunteer, but there'll
be a bunch of times where I'm
going to want something done,
and there simply won't
be a volunteer.
So in a system that's tuned
right with no altruists, you
can be better off than in
a system where there are
altruists, because in terms of
total social welfare, there
will be times when I'm just
going to have to do without,
because there's nobody out
there volunteering for me.
And we can do this
by simulation,
but that's the intuition.
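The free-riding intuition above can be sketched with one probability calculation. This is an illustrative model, not the paper's: assume k altruists, each independently available with probability q, and a rational agent who has stopped earning tokens because the altruists exist.

```python
# Hedged sketch of the altruist crash: a free-riding agent's request finds
# no available altruist with probability (1 - q)**k. When rational agents
# stop volunteering, those requests simply go unserved, dragging down
# social welfare. The numbers below are illustrative, not from the paper.

def p_unserved(num_altruists, q_available):
    """Chance that no altruist is free when a free-riding agent asks."""
    return (1 - q_available) ** num_altruists

for k in [1, 5, 10, 20]:
    print(k, round(p_unserved(k, 0.2), 4))
```

The chance of going unserved shrinks with more altruists but stays nonzero, so if enough rational agents free-ride, a well-tuned system with no altruists can beat one with them, which is the crash in the graph.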
OK, just about done.
Sybils have some subtleties.
So remember, sybils
are these zombies.
So you can stop the obvious
sybil attack by saying, OK,
you come in with $0, so it
looks like you have no
incentive to bring in sybils.
Ah, not so fast.
You do have an incentive
to bring in
sybils, provided that--
so suppose you raise
your hands.
Now, suppose I had
five sybils.
And suppose, OK, I tell all my
sybils, who are just me with
different IP addresses or
something, OK, I need a job.
All of us raise our hands.
And of course, if any one of us
is chosen, it's like I do
the work, and I get the token.
If we can do that-- and again,
whether or not we can do that
depends on features of the
system, and I'm not really
talking about that.
But if we can do that, that
means having sybils, I can
arrange it so my probability
of getting chosen is higher
than it would be
without sybils.
Does that make sense?
Having a sybil just doubled my
probability of getting chosen,
because now there are
two hands going
up, instead of one.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Right.
So it works if I have sybils
and nobody else does.
Now, of course, then we
get an arms race.
Everybody has sybils, right?
If everybody has another
sybil, I'm right
back where I started.
So there's all sorts
of issues here.
So it turns out that, again, we
can show, by simulation--
and I can't remember if
we actually had a
theorem about this--
that sybils have diminishing
returns.
Like even if nobody else is
going to have sybils, two
sybils, that helps.
Three sybils helps.
More sybils helps, but the
amount it helps goes down
pretty rapidly.
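The diminishing returns can be seen in a toy version of the selection model. This is an illustrative assumption, not the paper's simulation: suppose the requester picks one raised hand uniformly at random, n other agents volunteer honestly, and I control s identities.

```python
# Hedged sketch of diminishing returns from sybils: with s of my identities
# among s + n raised hands, my chance of being chosen is s / (s + n).
# Each extra sybil raises that chance by less than the one before.
# The count of 9 honest volunteers is illustrative.

def p_chosen(my_sybils, other_volunteers):
    """Probability one of my identities is picked, uniform selection."""
    return my_sybils / (my_sybils + other_volunteers)

prev = 0.0
for s in range(1, 6):
    p = p_chosen(s, 9)
    print(s, round(p, 3), "marginal gain:", round(p - prev, 3))
    prev = p
```

Under this toy model the marginal gain from each additional sybil strictly shrinks, which matches the simulation result mentioned above and the suggestion that a small entry charge is enough to blunt the attack.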
So it seems to me, in this
kind of a situation, you
probably do want to charge
a little bit to enter the
system, even if you're going
to come in with $0.
But again, this is a systems
question, not
a theoretical question.
A few sybils can be good.
Let me skip this.
So we have other results
on how to infer
the types of agents.
What's your alpha?
What's your beta, the
probability that you'll be
able to work, even
if you want to?
So if we have different kinds
of agents, unlike this
[INAUDIBLE], where everyone's
homogeneous, it turns out we
can infer that-- which is
something that marketers want
to do-- from the distribution
of wealth.
We understand how the system is
going to evolve over time,
so we do have some answers
to questions
about convergence time.
It turns out that in general,
multiple equilibria will
exist, even in threshold
strategies.
But the one that we looked at
was the greatest fixed point
that has these properties.
So these are technical results
for those interested in them.
One thing we looked at--
last thing.
This is the very last thing I
want to say, is that we said,
well, how hard is it to
learn what to do?
It's not like somebody tells
you, hey, the right threshold
to play is 12, right?
You're entering the system.
How do you know?
Well, it turns out-- again,
precisely, if you don't have
this exponential growth, if the
amount of growth in the
system is relatively small, so
you have relatively few new
entrants for the amount of
people there, you can learn
the right thing to do simply
by experiment.
Let me try 10.
Let me try 11.
Let me try 12.
Let me try 15.
Very little experimentation
will get you
to the right place.
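The trial-and-error procedure just described can be sketched in a few lines. Everything here is a stand-in: in the real system you would observe your own payoffs over time, whereas this illustrative utility function just has a noisy peak at a "right" threshold of 12.

```python
# Hedged sketch of learning a threshold by experimentation: try a few
# candidate thresholds, estimate the average utility of each by playing
# for a while, and keep the best. The utility below is an illustrative
# stand-in, not the game's actual payoff.

import random

def estimated_utility(threshold, trials=2000, best=12):
    """Noisy peak around an assumed 'right' threshold of 12."""
    random.seed(threshold)  # deterministic noise, just for the sketch
    noise = sum(random.uniform(-0.01, 0.01) for _ in range(trials)) / trials
    return -abs(threshold - best) + noise

candidates = [10, 11, 12, 15]  # the thresholds tried in the talk
best_threshold = max(candidates, key=estimated_utility)
print(best_threshold)
```

This only works, as the talk says next, when almost everyone else is already playing their equilibrium threshold, so your experiments are measured against a stable background.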
This works if most people in
the system know what to do.
So they're not experimenting.
But now, if you have a bunch of
people entering the system,
and they're all experimenting
at the same time, totally
breaks down.
You get chaotic behavior.
You get really weird stuff.
We don't totally understand
what's going to happen.
So we do provide some
theoretical results for
learning in large, anonymous
systems.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: I'm not sure.
It depends very, very much on
what you assume about--
I can make it converge
if I make enough
assumptions, so yes.
But I'm not going to try to tell
you those assumptions are
all reasonable.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: No, because
we're still keeping the price
of a job fixed.
So the question is,
is it equivalent
to an auction where--
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Right.
So let me just stop, so Marc
won't feel like I'm abusing
everybody's time here.
So just to summarize: it
turns out, rather amazingly--
like I said, we certainly
didn't expect this.
And actually, the way we got
here is I had Ian, who was my
student at the time,
do simulations.
Simulations were saying,
there really is a
nice best reply function.
There really is an
equilibrium.
And it felt like, there must
be something going on
mathematically.
And I had done other work where
I used maximum entropy.
And somehow, I sort of stared at
it for a while, and I said,
this sort of feels like
the other work.
Ian, go think about it.
That's why we have students.
And it turned out, magic,
maximum entropy worked.
So I don't want to say that,
boy, we were so brilliant, we
knew all this stuff was
going to happen.
We really did it--
and I'm a theoretician.
I usually start by
proving theorems.
This is probably the first
paper of my life where I
started by telling a student,
do simulations, because I
don't really have a good sense
of what's going to happen.
This simulation showed we were
getting really rapid
convergence with about 100
agents, and it looked like
there was a nice equilibrium
in threshold strategies.
And we said, god, this is
happening every time.
There must be some theorem
here that explains
what's going on.
So then we proved the theorem.
But I told the story not
the way it happens.
Of course, now that I'm giving
the talk, I tell you about the
theorem, and I explain it.
But that's not how
it happened.
So for a theoretician, it's
sort of nice that you can
totally characterize what's
going on in the system for a
fixed number of agents using
maximum entropy and using the
monotonicity of best-reply
functions.
That's the technical meat of
the paper here, but if you
don't follow that, don't
worry about it.
For the system designer, the
message here is the right
quantity to manage, the only
quantity you care about, is
the average amount of
money per person.
And more money is better up to
a point, but that point is a
critical point.
There's a cliff there, and you
might want to trade off
efficiency for some
robustness.
You want to back off from the
cliff just a teeny bit.
We can sort of understand how
to deal with, if you like,
standard irrational behavior,
like hoarders and altruists
and with sybils.
But I think there's a
lot more to be done.
And one of the things that I
think I would really like to
do-- so let me just close with
this-- is understand the
impact of auctions.
So people were asking
at the beginning,
shouldn't you have auctions?
And my intuition says it depends
on the kind of market.
If you have a market with many
buyers and sellers, auctions
won't buy you much, whereas
if you have rather small,
specialized markets, auction's
the right thing to do.
Now, that's an intuition.
The mathematician in me wants
to formalize that.
But let me just give
you the intuition.
I can go to the large grocery
store near my house, which is
a Tops or a Wegmans.
And even at 10 o'clock at
night-- now, it happens that
Wegmans is open 24 hours a day,
but even imagine a store
that closes at midnight.
And I can see there's a
lot of tomatoes left.
And I know that by tomorrow,
they're going to go bad.
They do not auction tomatoes.
There's a fixed price
for tomatoes.
Now, I know I can also
go to the [INAUDIBLE]
in Egypt or in Jerusalem, and
there, you can bargain for
vegetables.
But certainly in large grocery
stores, you can't, and they've
decided it's not worth it.
Now, there's lots of
reasons and stuff.
There's overhead in allowing
auctions, right?
You don't have them in
large grocery stores.
You do have them in more
personal, smaller settings
like markets.
So I think there's a technical
question here.
Assuming you're a system
designer, there's real
overhead in building
a system with--
OK, I know Google does it, so
it's the wrong place to be
saying this--
but certainly in some kinds
of markets, instituting an
auction, there's
real overhead.
And definitely, producers and
consumers like the certainty
of fixed prices.
I can plan a whole lot better
if I know this is going to
cost $5 for the next year.
I mean, I can figure out what
my expenses are going to be.
My intuition is, with very large
markets with a large
number of buyers and sellers,
you don't have to have an
auction, because you're going
to end up at a certain fixed
price, anyway.
Things aren't going to change
much, again, assuming
everything's stable.
Things change if you have, all
of a sudden, explosive growth
in the number of players.
Then you get inflation.
And we can see these phenomena.
Yeah?
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: OK,
so they do.
So again, I'm not saying
it's silly.
It's not that common, though.
So again, I definitely-- you
should not take what I'm
saying as auction's bad,
fixed price is good.
I am saying that auctions incur
some overhead, and you
have to think hard about
when to do them.
They don't have them
all the time.
I assume they do it at the
very end of the day.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Maybe so,
but these were yuppies.
They didn't want to get into--
I mean, again, think about
the sociological context.
You don't want it
to be the case.
So it's not like I'm going out
to the workforce and posting
an ad on the internet.
This is a group of 30 people.
They all know each other.
They're not going to bargain
about the price of
babysitting.
I mean, maybe they should.
OK, there's something interesting there.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Yeah, they're
part of the economy.
So markets-- it's 12
minutes to 8:00.
I mean, I don't mind cutting it,
and you come to me and ask
me questions.
I'm here.
Couple more questions,
and maybe we'll--
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Absolutely, it's
a cultural phenomenon.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Believe
me, I know.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Well,
I don't know about
the slug in the face.
And they like to
tell tourists--
oh, I've been to--
well, not Tehran,
but many other countries.
And my daughter calls me in to
bargain for her when she wants
to do some shopping at
the Jerusalem market.
And she says I'm a better
bargainer than she is.
But they like to pretend
the prices are fixed.
They certainly won't slug you in
the face if you take their
first offer.
They're very happy--
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Well, they might
think you're a fool.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Right.
Well, depends.
But yeah, it's definitely
a cultural phenomenon.
There are many issues here.
I'm not trying to say
there aren't.
But I think there's more to it
than just a cultural effect.
I think there are markets
where it's much simpler.
As I say, for a producer,
there's a comfort knowing a
fixed price.
There are some real benefits.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: So this
is the interesting--
I talk a lot to economists.
So I think you can say you leave
money on the table if
you don't have auctions.
It's more efficient
to have auctions.
In some settings, I
believe it's true.
And the question is, what's
the cost of inefficiency?
I mean, I have the paper
title written.
The paper's going to be called
"The Cost of Inefficiency."
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: There's
how much time, and
you're a system designer.
It's more complicated.
I mean, I think there are
many, many issues.
So--
MARC DONNER: Two
more questions.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: To take
a random stock.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Sure.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: Again,
it depends
very much on the setting.
That's an instance of where you
might argue that there's a
price to be paid for the
efficiency of auctions.
And so my own feeling is I would
love, as a theoretician,
to be able to characterize the
features of markets that make
fixed prices the way to go.
And you've sort of given
me one instance.
And my gut tells me, as a
theoretician, that there ought
to be a theorem here that
characterizes them.
But future work.
OK, well, 20 questions, and
there's room for one.
AUDIENCE: [INAUDIBLE].
JOSEPH HALPERN: That's
a great question.
My results have nothing
to say about it.
So we were assuming there's
only one market.
There's a fixed price.
You can't go elsewhere
to get your job done.
Now, you're talking about a
situation where there's
several markets, right?
So that if you don't like my $1
price, you can go somewhere
else and get it for $0.75.
AUDIENCE: Right.
JOSEPH HALPERN: Right.
That introduces a whole bunch
of complicating factors.
I'm sure all of my
results go away.
So as a technical matter, I have
nothing to say about it.
But it's certainly an
interesting question.
But again, why doesn't
everybody go there?
So you have to assume--
as an economist, you have to
understand what are the
features of the various
markets.
I mean, if all markets are the
same, then in the end, there
will be one market.
Because if the discount market
is in every way as good as the
non-discount market, then why
would you ever shop at the
non-discount market, except
maybe if you don't realize the
discount market is there?
So it's founded not in
irrationality but lack of
information.
So Marc says that was the last
question, but I'm willing to
stick around if you have
more questions.
MARC DONNER: Thank you
all very much.
[APPLAUSE]
