DOUGLAS CROCKFORD: So
I'm going to show
you some code today.
Ordinarily, I don't like talks
with a lot of code in them,
because I think they're
tedious and boring.
And this talk promises
to be no exception.
But code is the story.
And so it's not possible
to tell this otherwise.
So the topic this morning
is monads.
Monads, what are monads?
Monads are this strange,
mysterious, weird, but
powerful, expressive pattern
that we can use for doing
programming.
And they come with a curse.
The monadic curse is that once
someone learns what monads are
and how to use them, they lose
the ability to explain it to
other people.
There's lots of evidence
of this on the web.
Go and search for monads
tutorials, burritos, any of
those things, and you're going
to find a lot of really
confusing material.
So today I'm going to attempt to
break that curse, and show
you monads for what
they actually are.
So we're going to start with
functional programming.
If you can't do functional
programming, then none of the
rest of this is going
to make sense.
But I'm assuming you're all
professionals, and this should
not be difficult material.
So there are two senses of the
term functional programming.
One of them simply means to
program with functions, so
we'll look at that first.
Functions enter mainstream
programming with Fortran II.
This is a Fortran II function.
At that time, lowercase hadn't
been invented yet, so
everything was in uppercase.
But it looks quite a lot
like a modern function.
It starts with the
word function.
It has the name of the function,
and parameters
wrapped in parentheses,
separated by commas--
really conventional stuff.
So it's just a subroutine
that had the
ability to return a value.
And it returned that value in a
slightly strange way, using
an assignment statement, which
would assign a value to the
name of the function.
And then the return
statement would
deliver that to the caller.
That has been changed since
then, and now everybody has a
return statement that takes a
value, and that works better.
Or in expression languages, the
last expression evaluated
by a function is the implied
return value.
But what we called functional
programming only becomes
interesting when we
have functions
as first class objects.
So we can take a function and
pass it as a parameter, or
store it in any variable,
or stick it in
the field in a record.
That's when it becomes
interesting.
Then we can have higher order
functions, in which you have
functions which operate on other
functions as arguments.
And this all became really
good stuff with the
invention of Scheme.
In Scheme we have what's called
closure, which means that any
function has access
to the variables
in the outer function.
And it turns out that was an
enormous breakthrough.
And that is now finding its way
into mainstream languages.
Anyone have any idea which was
the first mainstream language
to have this feature?
It was JavaScript.
That's right.
JavaScript is leading the way.
So the other sense of functional
programming is
sometimes called pure functional
programming.
And it means functions are
mathematical functions, which
is quite different from
subroutines that return values.
Because a subroutine is allowed
to do mutation.
And in a purely mathematical
function, that doesn't happen.
A function with some input
is always going to
have the same output.
And the argument in favor of
that style of programming is
it makes programs a lot easier
to reason about, because you
don't have bugs that will occur
as a result of side
effects of something
else happening.
If something works, it's
guaranteed to work that way
correctly until the
end of time.
And that could be hugely
beneficial.
But there are problems
that come with that.
So you could think of
mathematical functions as
being maps.
It will simply take some
parameter or parameters, and
map them to some value.
And that mapping is constant,
and so it
always works that way.
And so it's easy to predict.
So whether functions are
implemented as computation or
as association is just an
optimization trick.
All functions should
work exactly the
same way, either way.
So the difference is like the
difference between memoization
and caching.
Caching is a deal with
the devil, right?
We have something, which is
likely to be wrong but is
sometimes right, and we don't
know when it transitions from
being right to being wrong.
With memoization, that
never occurs.
Once it's memoized it's right
and will always be right.
We can take any ordinary
programming language and turn
it into a purely functional
programming language by
deleting stuff.
So we can remove everything from
the language which could
have side effects.
So we could remove the
assignment statement.
We can remove loops, and use
recursion instead if we need
repetition.
We can freeze all array literals
and object literals so
that once minted, they
cannot be modified.
In languages like JavaScript,
it means removing Date.
Because every time
you call Date, it
returns a different value.
That's mathematically absurd.
Every time you call a function,
it should return the
same thing.
Same with random.
Every time you call random,
you hope to
get a different value.
But that doesn't make
sense mathematically.
So we want to remove that.
So if we remove all those things from
our languages, then we'll
have purely functional
languages, and we'll also find
we're not going to be able to
get anything done anymore.
Because most of our programs
are not doing pure
computation.
They're interacting
with the world.
And the world is constantly
changing.
And if our programs can't change
with them, then they
become brittle and
ineffective.
In the real world, everything
changes.
And immutability makes it hard
to work in the real world.
Now there are pure functional
programming languages like ML,
and Haskell, which is
a better example.
And Haskell is a brilliant
language.
And they were doing interesting
computations with
it, and then there was the
idea, let's try to do
practical stuff in it.
And then suddenly it gets
really, really hard.
That kind of hard is sometimes
called impossible, because you
can't have any side effects.
You can't do I/O. How do you
interact with things without
being able to do I/O, without
being able to mutate?
So they discovered a trick.
There was a loophole in
the function contract.
And that loophole is that a
function can take a function
as a parameter.
And once it does that, every
time a function is called with
a function, the result can
be different, because
every function is a unique
thing, which is closed over
the context that created it.
And so it sort of gives us
a way to escape from the
side-effect-free thing.
You just need to start thinking
about how you compose
functions in such a way that
you have the illusion of
having mutability without
actually mutating anything.
So the Haskell community had
reason to adopt something
called the I/O monad, because
it gave them the ability to
act as though the language had
I/O, even though it didn't.
And so as they're calling
things, they can be assembling
state, and that state kind
of follows forward in the
computation.
And it kind of works.
It's like, hurray.
It could be used for practical
programming, except
it's kind of hard.
And it turns out that you want
mutation, because that is how
the world works.
And so the I/O monad is a
solution to a problem you
should never have.
But it turns out there are lots
of other monads, and so
they're still worth
taking a look at.
There's some who will say, in
order to understand monads,
you first have to learn Haskell,
and you have to learn
Category Theory.
If you don't know these things,
there's just no
way to start.
You can't learn monads.
I think that is true in exactly
the same way that in
order to understand burritos,
you must first learn Spanish.
And they're true in the same way
in that neither is true.
It turns out you can do a lot of
stuff with burritos without
knowing any Spanish.
You can order them.
You can eat them,
and enjoy them.
You can even learn
to make them.
You can even learn to invent new
kinds of burritos, which
you and your friends
can enjoy.
And you can do that without
learning any Spanish.
I'm not saying you shouldn't
learn Spanish.
There are lots of good reasons
to learn Spanish.
You can learn a lot more about
Mexican cuisine, for example.
And that will allow you to
make burritos and other
things, which are more
authentic, and
perhaps even better.
It'll allow you to have
interactions with some
wonderful people you would never
interact with otherwise.
And if you're one of the job
creators, it gives you an
opportunity to talk directly to
the people who are actually
doing your work for you.
That's a good thing, right?
So in the same way, it's
good to learn Haskell.
Haskell has a lot of
good stuff in it.
It can teach you a lot.
It's a good language to learn.
It's just not necessary to
learn Haskell in order to
understand monads.
Some people will disagree with
me, and say no, you have to
start with Haskell.
And I say no.
If you have the chicharrones,
you can learn
monads without Haskell.
Some people will say, well you
at least have to start with
the types, that you need
a really strongly typed
language, or a super strongly
typed language in order to
manage monads.
You have to understand
this before we begin.
And I'm going to tell
you, no, that's
actually not true, either.
Haskell has a wonderful
type system.
And it does a lot of type
inference, so that you can
under-specify what the program
does, and it will try to
figure out all the types.
But it doesn't always
go right.
And so if it is trying to solve
your program, and it
finds an inconsistency,
it stops.
And you get this completely
opaque message, which is
indicating that it found an
inconsistency in a place which
is probably miles away from
where the error actually is.
And so getting that stuff
to work and compile
can be really hard.
And once it's done, there's
folklore which says, having
gone through that wringer, you're
guaranteed your program
is going to be error free.
And it turns out it's not, that
there are subtle errors
that happen in Haskell, as
happen in all other languages.
And the type system actually
gives you no leverage in
dealing with that stuff.
Stuff gets complicated, and
when there's complexity,
things go wrong.
And that happens in
all languages.
It turns out that it's easier to
reason about monads if you
don't have to deal with
the type stuff.
You don't actually need to
understand what that is in
order to build a monad.
Now there's some who will say,
no, that's not true.
You don't dare go that
way otherwise.
But I say, my friends, if you
have the huevos, you can.
So what do you say we sack up
and look at some monads?
It turns out, the language you
need to learn first is
JavaScript.
Because it has the higher
order function
stuff that we need.
And it doesn't have any of the type
stuff to get in your way.
So you can just think about
what's going on.
So what we have here is a story
of three functions.
We have a function called
unit that takes a value
and returns a monad.
We have a function called bind,
which takes a monad and
a function; that function takes a
value and returns a monad.
That's it.
So all three of these
functions return
monads, and that's it.
That's monads.
Thank you very much.
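What those signatures look like in JavaScript can be sketched directly; this is a minimal illustration of the pattern, not code from the slides:

```javascript
// unit takes a value and returns a monad.
function unit(value) {
    return {value: value};          // the monad just wraps the value
}

// bind takes a monad and a function that takes a value
// and returns a monad.
function bind(monad, func) {
    return func(monad.value);       // give func access to the value
}

// The third function is the one we pass to bind:
var result = bind(unit(5), function (value) {
    return unit(value + 1);
});                                 // result wraps 6
```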
So you're probably wondering,
still, what is a monad.
A monad is an object
in this case.
It could be something else.
But generally it's an object.
So if you know anything about
JavaScript, you're looking at
the unit function, you're
going, wait, unit is a
function that takes a value and
returns an object, so it
must be a constructor.
And yeah, that's
exactly right.
So unit is a constructor.
So nothing magical there.
So all the magic must be
in the bind function.
So that's it.
And there's not a
lot of magic.
So there are three axioms that
you have to hold in order to
be a monad.
The first two describe the
relation between bind and
unit, which is basically that
unit creates a monad that
represents some value, and the
bind function allows another
function to have access
to that value.
The interesting axiom is the
third one, which tells us how
we can do composition
on the bind method--
that we can have nested bind
methods, and that does the
same thing as calling
bind with a
function that calls bind.
That's it.
That is monads.
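The axioms can be checked with a trivial wrapper monad; the comparisons below are on the wrapped values, since this is only an illustration:

```javascript
function unit(value) {
    return {value: value};
}
function bind(monad, func) {
    return func(monad.value);
}

var f = function (value) { return unit(value * 2); };
var g = function (value) { return unit(value + 3); };

// Axiom 1: bind(unit(value), f) acts like f(value).
var axiom1 = bind(unit(7), f).value === f(7).value;

// Axiom 2: bind(monad, unit) acts like monad.
var axiom2 = bind(unit(7), unit).value === unit(7).value;

// Axiom 3: nesting binds does the same thing as calling bind
// with a function that calls bind.
var axiom3 = bind(bind(unit(7), f), g).value ===
        bind(unit(7), function (value) {
            return bind(f(value), g);
        }).value;
```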
Now we can make it easier to
deal with these monads by
converting it from functional
notation
into methodical notation.
That's a really easy thing
to do in JavaScript.
Everybody does that
all the time.
You need to understand the
mapping between functions and
methods, and once you can do
that, then we can very easily
transform the way we invoke
the bind method.
Instead of saying bind passing
a monad, we'll call the
monad's bind method.
It's just an easier
way to do that.
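A sketch of the same pattern in methodical notation, where bind is a method on the monad (again an illustration, not the slide's code):

```javascript
function unit(value) {
    return {
        bind: function (func) {
            return func(value);     // func: value -> monad
        }
    };
}

// Composition now reads left to right instead of inside out:
var result = unit(1)
    .bind(function (value) { return unit(value + 1); })
    .bind(function (value) { return unit(value * 10); });
```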
In some languages, say Common Lisp,
you would recognize that
there are lots of different
possible varieties of monads
that you might want to
implement, but they all have
the same basic pattern.
So you'd want to implement some
kind of macro to make it
easier for defining all those
different kinds of monads.
JavaScript unfortunately
does not have macros.
But it does have functions, and
so we can create a special
kind of function called a
macroid, which acts like a
macro, which helps us
to do that kind of
thing that macros do.
So this is one of
those macroids.
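The slide itself isn't reproduced in this transcript; a minimal macroid matching the description might look like this:

```javascript
// MONAD manufactures unit constructors, which in turn
// manufacture monads.
function MONAD() {
    return function unit(value) {
        var monad = Object.create(null);
        monad.bind = function (func) {
            return func(value);     // give func access to the value
        };
        return monad;
    };
}
```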
I want to say something
about the coloring.
You've all seen syntax
coloring, right?
That's something we put in our
text editors to make it easier
for kindergartners to
do programming.
Because each of the elements of
the language is a different
happy bright color, and so it's
easy to recognize, oh
that's a variable, and that's
a string, and so on.
I don't get a lot of value out
of that because I am more of a
grown up, and I'm a professional
programmer.
And I really don't need the
colors to figure out what's a
variable and what's a comment.
But when I'm doing functional
programming, I would like to
have color help me deal with the
nesting of functions, and
deal with the closure.
So I wish someone would
make a text editor for
me that does this.
So I want all my global-level
stuff to be white.
All the top-level functions,
I want them to be green.
The functions defined inside
of those would be yellow.
The ones inside of those would
be blue, and so on.
But the color of a variable is
the color in which it was
defined, and that allows me
to see how things close.
It gives me a view of the
visibility of the variables
and their life expectancy,
and so on.
And that turns out to be
really useful stuff.
So I'd really like to have this
kind of colorization.
And I'm going to be using this
sort of colorization through
the rest of this talk.
So we've got our macroid.
And we're going to use
it to define a monad.
And we're going to start with
the identity monad.
So the macroid will
build the identity
constructor.
And we'll call the identity
constructor, passing it the
"Hello world" string.
And then when we call
monad.bind, passing it the
alert function,
we'll get the
"Hello world" alert.
So this is the simplest slightly
useful monad, the
identity monad.
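Putting that together, assuming a MONAD macroid like the one just described, and with console.log standing in for the browser's alert:

```javascript
function MONAD() {
    return function unit(value) {
        var monad = Object.create(null);
        monad.bind = function (func) {
            return func(value);
        };
        return monad;
    };
}

var identity = MONAD();              // the identity constructor
var monad = identity("Hello world");
monad.bind(console.log);             // logs "Hello world"
```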
Let's look at the axioms again,
using the methodical
notation that we just
came up with.
We previously looked
at it functionally.
Now we look at it
methodically.
And I think it actually makes
more sense in this notation.
I think it's easier to see what
the relationship between
unit and bind is.
But even better is the thing
that happens in composition.
We now have this thing where
we've got a monad.
And we call bind.
And it returns another monad.
And we can call bind again.
This is a much easier
composition pattern than the
other one, in which we had bind
nested inside of bind,
because with the nesting, you
have to read the expressions
from the inside out.
And that's a hard thing
for us to do.
But in the methodical
form, we can read it
from left to right.
And so composition
is a lot easier.
I can just keep tacking things
on, and the thing keeps
getting longer, and more
interesting, more complex.
Any of you who have ever done
any Ajax programming might
notice there's something about
this pattern that's familiar.
I've got an object.
I call a method on it.
Then I call another method
on the result.
Where have I seen that before?
It's the Ajax monad.
All of the Ajax libraries
do this--
jQuery, YUI, everybody.
We've been doing
this for years.
It turns out we've been doing
monads all along.
This is an example of something
I did in 2001.
The Interstate library was my
third JavaScript library.
The first one was just something
to help me manage
the differences between Netscape
4 and IE 4, which
were horrendous.
And after I wrote that, I
looked at what sort of
patterns we were using in using
it, and then trying to
figure out a way to incorporate
more of that into
the library to make
it easier to use.
And this is my third
iteration.
And in this one I realized that
if I have an object which
wraps a DOM node, in this case,
a text form node, and if
that object, that monad keeps
returning itself, then I can
cascade all of these
other things on it.
And it becomes really
expressive.
And lots of other people
figured this
trick out as well.
And so this is now standard
equipment in Ajax libraries.
In 2007, I developed a system
called ADsafe which was
intended to make the web safe
for doing online advertising.
And it took the same idea of
taking a node and wrapping it
in a monadic object, but
also added a security
dimension to it.
So it would guarantee that there
was no way that the node
could be extracted
from the object.
So that meant we could give one
of these ADsafe nodes to a
piece of untrusted code, for
example, an advertisement, and
be confident that it could not
break the containment, that it
was only able to do with that
node what we intended it to be
able to do with that node,
and nothing else.
It couldn't use it to traverse
the rest of the document.
It couldn't use it to
get to the network.
It couldn't use it to
steal our cookies.
It couldn't do any
of those things.
All it could do was display
an ad in that window.
And ADsafe worked.
And it used the same
monadic pattern.
So let's improve our macroid to
allow us to do Ajax stuff.
So we've already seen we
can take an object and
call bind on it.
But what we really want
to be able to do is
call a method on it.
Also, we want to be able to
have methods pass some
variable number of parameters,
as well.
So we'll expand our bind method
to now take an optional
second parameter, which is an
array of the arguments that we
want to get to the method.
And we'll extend our macroid by
first creating a prototype
object, which will be an object
which inherits nothing.
This is where we're going to
keep the methods of the monad.
We can use Object.create(null)
to make that for us.
It makes an object that
inherits nothing.
This is a great new feature
that came in ES5.
And then when we create our
monad, we will have it inherit
from that prototype object.
So anything that goes into the
prototype object will be
inherited by the monads
that we make.
Then we'll modify the bind
method to take the second
argument, which is the set of
arguments that we want to pass
into the method.
And unfortunately, because of
a profound stupidity in the
way JavaScript was designed,
we have to manipulate the
arguments object.
And that's really
hard, because it
is not a real object.
And so the things you have
to do to it are horrible.
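A sketch of the extended macroid as described, including the awkward arguments-object handling (a reconstruction, not the slide's exact code):

```javascript
function MONAD() {
    var prototype = Object.create(null);   // methods will live here
    function unit(value) {
        var monad = Object.create(prototype);
        // bind now takes an optional array of extra arguments.
        monad.bind = function (func, args) {
            // The ugly part: fold the value and the extra arguments
            // into one argument list, coping with args possibly
            // being a fake array-like arguments object.
            return func.apply(undefined,
                [value].concat(Array.prototype.slice.call(args || [])));
        };
        return monad;
    }
    return unit;
}

var ajax = MONAD();
var sum = ajax(5).bind(function (value, n) {
    return value + n;
}, [3]);                                   // sum is 8
```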
Fortunately, ES6, the next
edition of the language, will
probably have this dot dot dot
operator, which happens to do
exactly the right thing.
This is going to be my second
favorite feature in ES6 if it
ever comes out.
So I'm looking forward
to that.
Because it took those three
extremely ugly lines, and
turned it into one very neat
line, which obviously does
what it does.
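A sketch of the same bind using the ES6 spread syntax, which collapses the argument juggling into one line:

```javascript
function MONAD() {
    var prototype = Object.create(null);
    function unit(value) {
        var monad = Object.create(prototype);
        monad.bind = function (func, args) {
            // spread does exactly the right thing with the args array
            return func(value, ...(args || []));
        };
        return monad;
    }
    return unit;
}

var ajax = MONAD();
var sum = ajax(5).bind(function (value, n) {
    return value + n;
}, [3]);                                   // sum is 8
```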
We're then going to create a
method called method on the
constructor, which will allow
us to add additional methods
to the prototype.
It simply takes its arguments,
and assigns them to the
prototype, so that's
pretty easy.
And it also returns unit, so
that we can then call dot
method dot method dot method
on the constructor.
So it's monadic in that
dimension, as well.
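A sketch of that chainable method function on the constructor (the method name and example here are illustrative):

```javascript
function MONAD() {
    var prototype = Object.create(null);
    function unit(value) {
        var monad = Object.create(prototype);
        monad.bind = function (func, args) {
            return func.apply(undefined,
                [value].concat(Array.prototype.slice.call(args || [])));
        };
        return monad;
    }
    // method installs a monad-aware function on the prototype
    // and returns unit, so .method calls can be chained.
    unit.method = function (name, func) {
        prototype[name] = func;
        return unit;
    };
    return unit;
}

var ajax = MONAD().method("double", function () {
    // a monad-aware method: it reaches the value through bind
    return this.bind(function (value) {
        return value * 2;
    });
});
// ajax(21).double() returns 42
```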
But we can do even
better than that.
It assumes that the functions
that get passed to method
understand about monads.
But in some cases we want to be
able to wrap functions that
know nothing about monads,
but have them work
in the monadic context.
So we're going to add another
method to the constructor,
called lift.
And lift will take an ordinary
function and
add it to the prototype.
But it will wrap that function
in another function, which
will call unit and bind,
as suggested in
the first two axioms.
So that it allows that function
to act as though it
knew about monads, even
though it didn't.
So it'll call bind for us, and
it will then wrap its result
in a monad if it needs to.
So this makes things
a lot easier.
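A sketch of lift as described; a fuller version would also check whether the wrapped function already returned a monad before wrapping:

```javascript
function MONAD() {
    var prototype = Object.create(null);
    function unit(value) {
        var monad = Object.create(prototype);
        monad.bind = function (func, args) {
            return func.apply(undefined,
                [value].concat(Array.prototype.slice.call(args || [])));
        };
        return monad;
    }
    // lift wraps an ordinary function so it works in the monadic
    // context: it calls bind for us and wraps the result in a monad.
    unit.lift = function (name, func) {
        prototype[name] = function () {
            return unit(this.bind(func,
                Array.prototype.slice.call(arguments)));
        };
        return unit;
    };
    return unit;
}

var ajax = MONAD().lift("increment", function (value, n) {
    return value + (n || 1);        // knows nothing about monads
});

var result = ajax(1).increment().increment(10);   // wraps 12
```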
So let's use this one.
We're going to call our
macroid again to
make our Ajax monad.
And we're going to use
lift to turn alert
into a method on it.
So we could change
the name here.
But we're going to keep
the name the same.
We're going to take the alert
function, and use it as the
alert method.
So now I can call the Ajax
constructor to make my monad.
And the monad now has
an alert method.
And if I call it, it'll say,
"Hello world." Hooray.
It turns out we've been
doing this for years.
We just didn't know it.
Ajax has always been monadic.
One of the difficulties we have
in some of our modern
languages is the problem
with null.
Null is the thing that
represents a
value that is not there.
And if you try to do something
to null, generally something
bad will happen.
Sometimes it seems that Java
was optimized for the
generation of null pointer
exceptions.
So you end up having--
the null pointer exception
doesn't actually ever tell you
anything, except that there
was a null that you didn't
check there to avoid.
And so your code tends to get
filled with lots of, if null,
don't do that, if null,
don't do that.
Which is just a waste of time.
It's completely unproductive.
We knew we didn't
want to do that.
We shouldn't have to say so
every time we touch a variable
that might be null.
So there's a thing called
the maybe monad.
The maybe monad takes null
pointer exceptions and simply
removes them from your model,
so you never have to worry
about them anymore.
It's similar to the way
that NaN works.
A long time ago, in Fortran and
other languages, if you
ever accidentally divided
something by zero, your
program would stop.
Exception, thing crashes,
cores, done, stops.
So as a result, you had to
write in front of every
division, but if we're
dividing by zero,
then don't do it.
You never intended to have it
happen, but you had to guard
against it all the time.
So we now have NaN.
And NaN represents
Not a Number.
It's sometimes the result
of dividing by zero.
And so if we divide by zero,
we get NaN instead.
And the program keeps going.
At the end you can ask, by the
way, is the answer NaN?
If so, something happened, and
we can ignore the result.
So that's much nicer than having
to put guards around
every operator to
make sure that
nothing's going to go wrong.
The maybe monad allows us to
do a similar thing with
pointers or references,
so we don't have to
worry about that anymore.
Those errors don't happen.
So we're going to modify our
macroid in order to be able to
deal with maybe monads.
So we're going to add a
parameter to the macro, which
takes a modifier function.
And that modifier function will
allow us to intercept
things that we're
constructing.
So the unit method, as part of
its doing work, will look to
see if the modifier function
is present.
And if so, it will call
it, passing the
monad and the value.
And that will allow that
function that we pass in to do
something with the thing
that we're making.
One way we can do that is we can
use the macroid to make a
maybe monad by passing in this
function, which will look at
the value, and see if the value
is null or undefined.
And if it is, then we say, this is
going to be a null monad.
And it's going to have this
amazing property, in that
we're going to change its bind
method to do nothing.
It will simply return
the monad.
So it turns it into an identity
function, and crashes
don't occur.
So now we can make
our maybe monad--
this case, we'll make
a null one--
and if I call bind on alert,
nothing happens.
It's great.
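A sketch of the modifier parameter and the maybe monad built with it (console.log stands in for alert):

```javascript
// The macroid now accepts a modifier that can intercept
// construction of each monad.
function MONAD(modifier) {
    var prototype = Object.create(null);
    function unit(value) {
        var monad = Object.create(prototype);
        monad.bind = function (func, args) {
            return func.apply(undefined,
                [value].concat(Array.prototype.slice.call(args || [])));
        };
        if (typeof modifier === "function") {
            modifier(monad, value);     // let it inspect monad and value
        }
        return monad;
    }
    return unit;
}

var maybe = MONAD(function (monad, value) {
    if (value === null || value === undefined) {
        monad.is_null = true;
        monad.bind = function () {
            return monad;               // do nothing; crashes don't occur
        };
    }
});

var monad = maybe(null);
monad.bind(console.log);                // nothing happens
```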
So if you incorporate this
kind of stuff into your
system, you never again
have to worry about
null pointer errors.
It's just amazing.
They just go away.
Bind will prevent it even from
getting called, and everything
works right, which is nice.
It's a liberating thing.
So that's our friend the monad.
That's it.
And we looked at three specific
monads-- the identity
monad, the Ajax monad,
the maybe monad.
There are lots more,
but they're all
variations on this pattern.
And now that you've been through
this talk, you might
want to look at the other
monad tutorials.
Go to Bing and Google for
monad burrito, and
see what you find.
It's going to be
baffling stuff.
But it'll work.
So I have some time left.
So I want to talk about
concurrency.
Concurrency is when you try to
make lots of things happen at
the same time.
And there are a number of models
for how you do that.
The most popular is
to use threads.
And the problem with threads
is that they are evil with
respect to mutation.
If you have a process that's
trying to do read, modify,
write, and another process
that's trying to do read,
modify, write, and they're doing
it on the same memory,
there's a strong likelihood that
they're going to clobber
each other.
That's called races.
And races are horrendously
bad for reliability.
The way we mitigate that is
with mutual exclusion.
Mutual exclusion has its
own set of problems.
It can cause bad performance
problems, or more likely it's
going to cause deadlocks
and starvation.
And that's a bad thing too.
There are a couple of
other alternatives.
One of them is to go with purely
functional programming.
Because when we're purely
functional, we never mutate.
And so that's not a problem.
But not mutating is
its own problem.
Another alternative is turn
based processing.
This, I think, is the
right way forward.
So in a turn based system,
everything is single-threaded.
And as a result we are race
free and deadlock free.
That turns out to be great.
But it requires that we respect
the law of turns.
The law of turns says, your
code must never wait.
It must never block.
And it must finish fast.
A turn cannot sit there
and loop, waiting for
something to happen.
It has to get out as
quick as it can.
So not all programs can be
easily adapted to that.
But it turns out quite
a lot of them can.
So event driven systems
tend to be turn based.
Message passing systems
tend to be turn based.
You don't need threads.
You don't need mutual
exclusion.
It's a much simpler programming
model, and it's
really effective.
It turns out all web browsers
use this model.
Most UI frameworks
use this model.
So it's something we've been
doing a long time, anyway.
We're now seeing this model
getting more popularity on the
server side.
So there's Elko for Java.
There's Twisted for Python.
Node.js for JavaScript.
Take the same turn based model,
and make it available
on the server.
Now some people complain
that asynchronicity
can be hard to manage.
And there are some things
you need to do in
order to adapt to it.
One of the problems is that if
you have multiple things that
each serially depend on each
other, the naive way to write
that is with nested
event handlers.
And that turns out to be
an extremely brittle pattern.
So you don't want to do that.
A much better alternative
is to use promises.
Promises are objects which will
represent a future value,
and a way of interacting
with that future value.
So promises are an excellent
mechanism for dealing with
asynchronicity.
Every promise has a
corresponding resolver, which
is used ultimately to assign
a value to that promise.
And once the value is assigned,
then interesting
things can happen.
So a promise, when it's created,
will have one of
three states.
First state will be pending.
And depending on what happens
to it in the future, it can
change to either
kept or broken.
Or it may always be pending.
So a promise is an
event generator.
Once the promise is resolved,
then it can fire events on
consumers who are interested in
the result of the promise.
So at any time after making a
promise, an event handling
function can be registered
with the promise.
And those will then be called
in the order in which they
were registered when the
value was known.
And a promise can accept
functions that will be called
with a value, once the promise
has been kept or broken.
And we can do this with
the when method.
The when method is sort of like on.
It allows us to register event
handlers, but it will register
two of them-- one to be called
if the promise is kept, and
the other to be called if
the promise is broken.
So here's a function for
making up a promise.
I'm calling it vow.
So a vow will produce
an object.
And that object will have
three methods--
keep, break, and promise.
Promise is not actually a
method.
It's an object, which represents
the promise itself.
So I can take the promise
and hand it to you.
And in the future, when I know
what the result of that
promise is, I can call either
keep or break, and then your
promise will change its state,
and good things happen.
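A minimal synchronous sketch of such a vow maker; a real implementation would deliver notifications in a future turn, per the law of turns, and when would return a fresh promise for chaining:

```javascript
function vow() {
    var status = "pending";
    var value;
    var waiting = [];               // handlers registered before resolution

    // Call the appropriate handler for the current status.
    function notify(entry) {
        var func = (status === "kept")
            ? entry.kept
            : entry.broken;
        if (typeof func === "function") {
            func(value);
        }
    }

    // Make the keep and break resolvers; the first call wins.
    function settle(state) {
        return function (v) {
            if (status === "pending") {
                status = state;
                value = v;
                waiting.forEach(notify);
                waiting = [];
            }
        };
    }

    var promise = {
        when: function (kept, broken) {
            var entry = {kept: kept, broken: broken};
            if (status === "pending") {
                waiting.push(entry);    // resolution hasn't happened yet
            } else {
                notify(entry);          // already resolved: fire now
            }
            return promise;
        }
    };

    return {
        keep: settle("kept"),
        break: settle("broken"),
        promise: promise
    };
}
```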
So here's an example.
One of the problems with
filesystem APIs, going all the
way back to Fortran,
is that they block.
If I want to read something from
the card reader, or from
the terminal, or if I want to
send something to the printer,
or to the disk drive, my program
stops until that
operation has completed.
In some cases, my program can
stop for a long time.
I don't want to stop, because
that breaks the law of turns.
The law of turns says
I never stop.
I never break.
I have to finish.
A way to do that would be to
have the file system, instead
of blocking, it immediately
returns a promise.
So my program can then
continue going.
I'm not blocked on it.
So here I've got a read
file instruction.
Name is probably the
name of the file.
And it's going to return
a promise.
And I can tell that promise,
when you are resolved, call my
success function.
And it will receive the result
of the file operation, and
things are good.
And if the file operation failed
for some reason, like
file not found, or whatever,
then call my
failure function instead.
You might be wondering why do
you call my failure function?
Why don't you just throw
an exception?
You have to think about this
stuff as time travel.
So what an exception does is it
unwinds the stack to some
earlier point in time.
And we can then recover
from that point.
But in a turn based system, the
stack gets reset all the
way down to zero at the
end of every turn.
So there's no way I can unwind
into a previous turn, because
the stack is gone.
There's no way to
go back in time.
You can only go forward
in time.
So we need a time travel
mechanism which goes forward.
It turns out promises do that.
That's the whole point.
So promises can have a positive
consequence or a
negative consequence.
That negative consequence
is like an exception.
So the way we deal with
exceptions is by having
failure functions instead, which
will be called in the
future, once we know that the
thing actually failed.
I think I said all of that.
Exceptions modify the
flow of control by
unwinding the stack.
In a turn based system, the stack
is emptied every turn.
One of the nice things about the
way failure functions work
is that we can nest promises.
So each when actually returns
another promise, based on the
resolution of that
particular when.
And we can cascade
those, as well.
So we can say, when we know the
result of that, do this.
When we know the result
of that, do that.
And so on.
And if any of those fail, and
if they don't specify their
own failure, then the
failure propagates.
It's contagious.
It goes forward.
And so the last one, the last
failure specified, will catch
all of the previous things.
So it acts like a try, except
this is something that's
happening over many,
many turns.
So it gives us a way to
manage asynchronicity.
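The propagation rule just described can be sketched with a toy, synchronously-resolved promise. The resolved function here is invented for illustration; the real library defers every call to a future turn, but the rules are the same: a throw becomes a failure, a when with no failure function passes the failure forward, and the last failure function catches everything before it.

```javascript
// Toy promise that is already resolved, either kept or broken.
function resolved(value, broken) {
    return {
        when: function (success, failure) {
            try {
                if (broken) {
                    if (failure === undefined) {
                        return resolved(value, true); // propagate forward
                    }
                    return resolved(failure(value), false);
                }
                return resolved(success(value), false);
            } catch (e) {
                return resolved(e, true); // a throw becomes a failure
            }
        }
    };
}

var log = [];
resolved(1, false)
    .when(function (v) { return v + 1; })            // 1 becomes 2
    .when(function () { throw new Error("boom"); })  // turns into a failure
    .when(function () { log.push("skipped"); })      // no failure function: skipped
    .when(
        function () { log.push("unreachable"); },
        function (e) { log.push("caught " + e.message); }
    );
// log is ["caught boom"]
```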
So one of the nice things about
promises and the when
method is how they compose.
So when I say .when.when, that's
doing the same thing as
when passing in a function
which calls
when on another promise.
And some of you might be
thinking, wait a minute, this
looks eerily familiar.
Where have I seen this before?
You might be thinking, this
looks like the third axiom.
I would say, you are right.
This is the third axiom.
It turns out promises
are monads.
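That third-axiom claim can be checked with a toy already-resolved promise, stripped down to bind alone. unit, f, and g are invented for this sketch; the point is only that chaining two whens gives the same answer as one when whose function calls when on another promise.

```javascript
// unit wraps a value in a minimal promise-like object whose when
// is exactly monadic bind: it hands the value to f, and f must
// return another promise-like object.
function unit(value) {
    return {
        when: function (f) {
            return f(value);
        }
    };
}

var f = function (x) { return unit(x + 1); };
var g = function (x) { return unit(x * 2); };

var chained, nested;
// Form one: when, then when.
unit(5).when(f).when(g).when(function (v) {
    chained = v;
    return unit(v);
});
// Form two: one when whose function calls when on another promise.
unit(5).when(function (x) {
    return f(x).when(g);
}).when(function (v) {
    nested = v;
    return unit(v);
});
// chained === nested === 12
```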
Now they're a different kind
of monad, because all the
other ones we've looked at,
the value of the monad is
known at the time that
it's created.
And because it's purely
functional, that value cannot
be modified.
This is a little different,
because we don't know the
value at the time that
the thing's created.
That's going to be resolved
in the future.
So it gets filled in later.
Also, because the other monads
don't have the problem where they
might fail to ever get that value,
bind only needs to take
one function.
But when needs the ability to
have two functions, because it
has to deal with
a failure case.
But otherwise, it works exactly
like the monads.
Let's look at how we could
build the vow function in
order to do this.
Now this is about a page
worth of code.
And I'll show you the page at
the end, but you're not going
to be able to read it.
So I'm going to be zooming
in on pieces of it.
So we've got a function that is
going to return an object.
And that function is going to
have a couple of functions in
it that the yellow function
will close over.
That's one of the nice
construction
patterns that we have.
And the thing that's
assigned to vow is not the
green function, it's the result
of the green function.
Because we're invoking it
here, at the bottom.
So that little pair of parens
is really easy to overlook.
But it turns out it's
really critical to
understanding this.
So just leaving it hanging out
there like a pair of dog
balls, I don't think is
useful to the reader.
I want the reader to
have a bigger clue
that this is important.
So I recommend wrapping
the entire invocation
expression in parens.
So it makes a lot easier for the
reader to see this is all
part of the same thing.
This is important.
If I see a function in parens,
that probably means something.
And I need to look for that.
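The recommended style looks like this. The body here is a placeholder with a bit of closed-over state, not the real vow implementation; the point is the outer parens, which tell the reader that vow holds the result of invoking the function, not the function itself.

```javascript
// The outer parens wrap the whole invocation expression, so it is
// obvious that the function runs immediately and vow gets its result.
var vow = (function () {
    var counter = 0; // state that the returned function closes over
    function make() {
        counter += 1;
        return counter;
    }
    return {make: make};
}());
// vow is the returned object; vow.make() returns 1, then 2, ...
```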
OK, so let's zoom in on
the make function.
It's going to have a couple of
arrays where it's going to
keep the success functions and
failure functions that get
registered with it.
It's going to have a variable
for the ultimate fate of this
promise, once it's known.
Its status starts
off as pending.
JavaScript doesn't
have symbols.
It doesn't really need them,
because strings were
implemented correctly.
Two strings containing the same
letters are equal, which--
it would be stupid for them not
to be equal, wouldn't it?
And then it returns an object
containing the break
method, the keep method,
and the promise
itself, which is an object.
We'll look at its construction
in a moment.
And break and keep both call
a herald function, which is
going to announce
to the world the
resolution of this promise.
So we'll look at herald.
Herald can only be
called once.
So if the current state is not
pending, then we can throw.
In this case, throwing is OK,
because it's a local thing.
It's not something that
we need to throw
into a different turn.
We'll set the fate, and we will
enlighten the queue of
waiters, and let them know
that the fate is known.
And we'll then zero out the two
queues to make sure that
none of those functions
ever get called again.
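Herald's rule that a promise may be resolved only once can be sketched on its own. The names keep, break, and herald follow the talk; makeResolver and the bodies are simplified guesses at the shape, with the queues and the promise object omitted.

```javascript
// A pared-down resolver: herald records the fate exactly once, and
// throws locally (still within the same turn) on any second attempt.
function makeResolver() {
    var status = "pending"; // plain strings work fine as states
    var fate;

    function herald(state, value) {
        if (status !== "pending") {
            throw new Error("overpromise");
        }
        status = state;
        fate = value;
    }

    return {
        keep: function (value) { herald("kept", value); },
        break: function (value) { herald("broken", value); },
        status: function () { return status; },
        fate: function () { return fate; }
    };
}

var resolver = makeResolver();
resolver.keep(42);
// resolver.status() is now "kept"; a second keep or break would throw
```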
So let's zoom in on something
else now.
We're going to zoom
in on the promise.
The promise will have a property
that just identifies
itself as a promise, to make
it a little easier to
recognize it.
And it contains the
when method.
The when method is going to
register the two event
handlers that depend on the
result of the promise.
And how it will do that will
depend on the current state of
the promise.
If the promise is pending, then the
handlers simply get added to the queue.
If the promise has already been
resolved, then, depending on
whether we're dealing with the
failure case or the success case,
it will queue the handler
and then enlighten
immediately.
So it doesn't matter when the
promise is resolved versus
when we register with
the when method.
So there's no race there.
You can register after the
promise is resolved, and it
works exactly the same way.
You don't need to care about
how that race may occur.
And then at the end, we
return the promise.
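The pieces described above fit together roughly like this. This is a condensed sketch, not the real library: it assumes synchronous enlightenment (the real code defers each call to a future turn), and its when returns the same promise rather than a new derived one. What it does show is the no-race property: registering with when before or after resolution behaves the same.

```javascript
// Condensed make: two queues, a fate, a status, a herald that may
// fire only once, and a when that works before or after resolution.
function make() {
    var breakers = [];      // registered failure functions
    var keepers = [];       // registered success functions
    var fate;               // the eventual value or reason
    var status = "pending";

    function enlighten(queue) {
        queue.forEach(function (func) {
            func(fate);
        });
        keepers = [];       // never call a waiter twice
        breakers = [];
    }

    var promise = {
        is_promise: true,   // identifies itself as a promise
        when: function (kept, broken) {
            if (typeof kept === "function") {
                keepers.push(kept);
            }
            if (typeof broken === "function") {
                breakers.push(broken);
            }
            if (status === "kept") {
                enlighten(keepers);   // already resolved: fire now
            } else if (status === "broken") {
                enlighten(breakers);
            }
            return promise; // the real when returns a new promise
        }
    };

    function herald(state, value, queue) {
        if (status !== "pending") {
            throw new Error("overpromise");
        }
        fate = value;
        status = state;
        enlighten(queue);
    }

    return {
        break: function (value) { herald("broken", value, breakers); },
        keep: function (value) { herald("kept", value, keepers); },
        promise: promise
    };
}

var seen = [];
// Register before resolution...
var early = make();
early.promise.when(function (v) { seen.push("early " + v); });
early.keep(42);
// ...or after; the result is the same.
var late = make();
late.keep(42);
late.promise.when(function (v) { seen.push("late " + v); });
// seen is ["early 42", "late 42"]
```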
This is the business that
happens when we queue the thing.
I'm getting bored now, so I'm
just going to skip through
this stuff.
And there's that.
So the code is available
on GitHub, if you want
to play with it.
It's all written
in JavaScript.
This is it.
There's just one page.
You might want to
write that down.
So our friend, the monad.
So we saw the identity monad,
the Ajax monad, the maybe
monad, and the promise monad.
I contend that promises
are monads, which--
this is my contribution
to this, I guess.
I don't think this result
had been known before.
Don't forget your semicolons,
people.
It's really important.
I've got some further
viewing for you.
Carl Hewitt was at MIT.
I think he's at Stanford now.
He came up with the actor model,
which inspired the
development of the scheme
language, and a
lot of other stuff.
I think the actor model contains
the solution to most
of our problems.
But Carl has not written any
accessible material about it.
It's all kind of--
you read it, obviously.
It's scary stuff.
But he did do an interview with
Channel Nine at Microsoft.
It's actually a very nice
introduction to the stuff.
So I recommend you take
a look at that.
Channel Nine apparently likes
the really long URLs.
I doubt that there's anybody
in this room who could
actually type that in.
I'm sure if you go search
for it, you should be
able to find it.
Then Mark Miller is one of the
first people to come up with
promises, which is based on an
idea called futures, which
also came out of Carl's
actor model.
He's implemented it in a
number of languages.
And you just saw an
implementation in JavaScript.
He has a really interesting
couple of talks that are
available on YouTube, which talk
about this, and some of
its implications for doing
automated contracting, and
financial instruments, and
lots of other really
interesting stuff.
So I recommend you take a
look at that as well.
It's certainly worth
your time.
And Miller works here,
by the way.
He is a Googler, good guy.
And that's it.
That's all I've got
for you this hour.
Thank you and good night.
MALE SPEAKER: I'm guessing you
have a few minutes, maybe to
answer a couple of questions?
In an effort to keep the video
in sync with questions, I've
got a lav here.
So I walk around to anybody who
wants to ask a question,
just so we can capture it.
And while we're talking
about that, nothing
confidential, please.
This will be going public
afterwards.
Do you want to start?
AUDIENCE: Thanks for the talk.
I was wondering, what's
your take on the
cancellation of promises?
Because that's one of the
questions that hasn't been
really resolved.
Cancellation of promises, when
I register my listener to a
promise, and I'm not interested
in the result anymore.
DOUGLAS CROCKFORD: So that's a
really easy thing to resolve
on your side.
You can have a Boolean at the
top of your responder, which
says, am I interested
anymore or not.
So you can do it that way.
But promises also compose
really nicely.
You can cascade them
together and stuff.
So you can have a canceller
in that chain.
You can compose cancellation.
There are lots of higher level
patterns that you can build
out of the simple promises.
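The Boolean-guard pattern suggested in that answer can be sketched like this. makeCancellable is an invented name for illustration: the promise still resolves, but a cancelled responder simply ignores the result.

```javascript
// A responder wrapped with a Boolean guard: cancel flips the flag,
// and any later resolution is silently dropped.
function makeCancellable(onSuccess) {
    var interested = true;
    return {
        cancel: function () {
            interested = false;
        },
        responder: function (value) {
            if (interested) {
                onSuccess(value);
            }
        }
    };
}

var calls = [];
var watcher = makeCancellable(function (v) { calls.push(v); });
watcher.responder("first");   // recorded
watcher.cancel();
watcher.responder("second");  // ignored
// calls is ["first"]
```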
AUDIENCE: Is there a way to
communicate to the resolver
that you are not interested
anymore, therefore it doesn't
need to do the work
to generate--
DOUGLAS CROCKFORD:
Is there a way to
communicate to the resolver?
Not in the promise
system itself.
But you can always
send it a message.
And it's similar to a problem
you might have in, say, timers.
You can have a queue
of timers.
You might want to say, I'm not
interested in this timer
anymore, and so you
want to cancel it.
But it might be that by the time
you get the cancellation
to it, it's already fired.
So it's likely you're going
to experience races.
So that's probably not a pattern
you want to pursue.
But it is available to you.
AUDIENCE: It seems that
debugging is a real challenge
for turn based programming.
If you set a break point in a
conventional program, you can
see the call chain that
gave you the context
for why you're there.
You could step over
subfunctions, and skip whole
sub trees of the evaluation
tree.
And that seems to be a lot more
complicated in debugging.
Are there practical tools for
debugging turn based programs?
DOUGLAS CROCKFORD: I think it's
a lot easier than trying
to debug threads.
The hardest thing I've ever done
in my career is try to
debug a real time problem where
two threads were chasing
each other.
I think turns are a lot easier
to manage than that.
Debugging is always
going to be hard.
And as we get more temporal
complexity, it gets harder.
But I think turns manage that
complexity much better than
threads do.
MALE SPEAKER: As soon as I have
the floating mic, I'm
going to make my way to
the front in a second.
We've got one more here.
AUDIENCE: Do you have any
thoughts on emulating
do-notation in JavaScript?
Have there been any
attempts at that?
DOUGLAS CROCKFORD:
Thoughts on what?
AUDIENCE: Emulating
do-notation?
The monadic do syntax.
DOUGLAS CROCKFORD: No.
One thing we're seeing in
JavaScript now is a lot of
experimentation with
new syntax.
CoffeeScript launched that--
there were other examples
before that.
But there are lots of people who
were trying to experiment
with changing the language, or
making it more expressive.
I expect we'll see that
research continue.
We won't see promises
as a feature in ES6,
possibly ES7.
But I'm not confident as to
what's going to make it
into ES6 right now.
So it's dangerous to predict
what's going to be in seven.
And whether it will bring new
syntax with it as well, I
don't know.
AUDIENCE: Does the cost of making
closures in JavaScript
make these monadic approaches
infeasible right now?
DOUGLAS CROCKFORD: No,
closures aren't that expensive.
They're just function objects.
And they're great.
A lot of stuff gets
enabled by this.
You could take a simpler
approach to
some of these things.
There is a cost.
But very few of our JavaScript
programs are compute bound.
We're mostly bound by
everything else.
So I don't think
it's a concern.
I think this is easily
affordable.
AUDIENCE: Is this
a public video?
MALE SPEAKER: Yes.
Any non-confidential questions?
You guys are quite a bunch.
All right, I think you've stunned
them into silence, Douglas.
DOUGLAS CROCKFORD: All right.
Thank you.
