So let’s talk about machinery. Let’s say
you’ve got a robot friend who’s been smashed
into a million pieces. You’d probably hope
he has the right technology to put himself
back together, and in the right order. And
it’s the same for language - we want to
be able to build up meanings from broken-up
parts. But our current semantic machinery
just isn’t advanced enough to reconstruct
all the words in even the simplest of sentences.
To make sure we’ve got what it takes, we’re
going to have to take a more functional approach.
I’m Moti Lieberman, and this is The Ling
Space.
One of the guiding principles of semantics
is compositionality. This principle says that
the meaning of an expression is composed out
of the meaning of its parts. In other words,
when we work out the syntactic structure of
a sentence, every part of our tree has to
have some kind of meaning attached to it,
along with some rule telling us how to connect
those meanings together.
But what is meaning, exactly? Well, a big
part of semantics is about taking sentences,
which can be either true or false, and figuring
out what makes them true or false. This means
working out their truth conditions — the
exact circumstances under which people judge
a sentence to be true. Knowing the meaning
of a sentence means understanding its
truth conditions. And because the meanings
of sentences are composed out of the meanings
of their parts, finding out the meanings of
words means digging into just how each word
contributes to the overall truth conditions
of a sentence.
Let’s look at an example! If I told you
that Hogarth lost his rifle, and Agent Mansley
found it, you’d know that the only way this could be true was if both parts of
it were true. That’s because you already
know how “and” works.
So, as we discussed in a previous episode,
the meaning of logical words like “and”
can be represented using a truth table. This
truth table depicts the contribution of the
word “and” to the truth conditions of
the sentence. Namely, it tells us that the
overall statement will only be true if both the first
and the second half are true, and false otherwise.
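If you’d like to see that spelled out, here’s a minimal sketch in Haskell, treating “and” as a function from two truth values to one; the name conj is just ours for illustration.

```haskell
-- A sketch of the truth table for "and": true when both inputs
-- are true, false otherwise. The name "conj" is invented here.
conj :: Bool -> Bool -> Bool
conj True True = True   -- both halves true: the whole thing is true
conj _    _    = False  -- any other combination: false
```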
When we try to draw the structure underlying
this sentence, though, we start to run into
some problems. First, let’s see how this
all works. If we represent our sentence using
a tree, we get something like this:
And we can watch how the meanings of the parts
contribute to the whole. If that first clause
is true, and that second one is, too, then
the whole thing comes out true!
Now, you might notice that we haven’t said
anything about how each half of this sentence
got its meaning. So, how do we know when each of those is true? What are their truth conditions?
Well, like we’ve seen before, logic can
get us at least part of the way to an answer.
The tools of predicate logic, in particular,
give us the power to break open these sentences
and poke around inside, to see how their meanings
are put together. Let’s back up a bit and
start with a simple sentence, like “Annie
sighed.”
If we think of the verb “sighed” as a
predicate and the noun “Annie” as an individual
that the predicate might apply to, predicate
logic can tell us the contribution of each
piece to the truth conditions of the sentence.
Specifically, predicate logic tells us that the constant “a” refers to an individual like Annie,
that the predicate Sx refers to a list of
every individual it applies to - so every
person that sighed - and that the whole sentence
ends up as true, just as long as that person
can be found somewhere inside that list.
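To make that concrete, here’s a minimal Haskell sketch; the set of sighers and its members are invented for illustration.

```haskell
-- The predicate-logic picture: "sighed" names the set of
-- individuals it applies to, and the sentence is a membership check.
sighers :: [String]
sighers = ["Annie", "Dean"]   -- a made-up set of sighers

-- "Annie sighed" comes out true because "Annie" is in the list:
annieSighed :: Bool
annieSighed = "Annie" `elem` sighers   -- True
```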
Okay; so far, so good. But when we ramp up
the complexity of our sentences, even just
a little, logic fails us. The problem is that,
in a sentence like “Hogarth lost his rifle,”
treating the verb phrase “lost his rifle”
like a simple predicate just won’t work.
If we do it this way, we ignore the internal
structure of the verb phrase, and lose the
fact that it shares some bit of meaning in
common with the verb phrase in “Agent Mansley
found it” -- namely, Hogarth’s rifle!
If we want to be able to say how each and
every piece contributes to the overall meaning,
this just won’t do.
And it only gets worse if we put “his rifle”
back in the tree and treat “lost” like
a two-place predicate — one that applies
to pairs of individuals. If we try that, we
end up supposing that our predicate combines
with both “Hogarth” and “his rifle”
simultaneously. That means that the verb and
its object can’t form a separate constituent
before taking in the subject. And, so, our
tree ends up looking a bit out of this world.
Our syntax wants the verb to join with the
object first, but our semantics wants to mash
everything together at once.
The basic problem is that we know from our
syntactic rules that verbs combine with their
objects first, forming verb phrases. And only
then do verb phrases combine with a
subject to make up the rest of the sentence.
There’s an order to it. And because there’s
an order in the syntax, there has to be one
in the semantics as well. But even predicate
logic, as sophisticated as it is, can’t
handle that. We need some kind of new machinery
that gives transitive verbs like “lose”
a meaning that can combine with one thing
at a time, while keeping track of the order.
Enter the lambda calculus! Named after the
Greek letter lambda, it’s a handy way of
expanding on the tools we’ve built so far.
And the best part is, if you’ve learned about functions in school, you already know
how it works! Take a simple function like
“y = x + 1.” It’s really just a rule
that relates two numbers; if you start with
a number like “1,” you can just plug it into
the function, which adds “1” and outputs
“2.” Plug in “2” and you get “3”;
“3” gives you “4”... you get the idea.
The lambda calculus is just another way of
spelling out functions. Instead of writing
“y = x + 1”, we write “λ x . x + 1.”
And it works the same way: apply the function
to “1,” you get “2”; “2” and you
get “3.” And so on.
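In Haskell, for instance, you can write that lambda almost verbatim; the name plusOne is just ours:

```haskell
-- "λ x . x + 1" written as a Haskell lambda.
plusOne :: Int -> Int
plusOne = \x -> x + 1

-- plusOne 1 == 2, plusOne 2 == 3, plusOne 3 == 4, and so on.
```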
It may not look like much, but the real power
of the lambda calculus comes from the fact
that it lets us break up larger, complicated
functions into smaller, simpler ones. Like,
if we had the function “y = x ÷ z,” we’d
have to put in two numbers before it gave
us anything. Say we plugged in just “4,”
or “2”; what would that even mean? We
don’t get anything from the function until
we feed it both numbers. And we have to be
careful about the order we do it in, too,
or else we'll only get a fraction of what we want.
The lambda calculus solves these problems
by breaking up a two-place function, like
this one, into two one-place functions. So,
“y = x ÷ z” becomes “λ x . λ z . x ÷ z.”
That first lambda says to plug whatever number
you give it -- say, 4 -- into the “x”
position. And when that’s done, you’re left
with another function which says to plug the
next number -- say, 2 -- into the part with
the “z.” And when everything’s in place,
you’re ready to math!
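Here’s that same move as a minimal Haskell sketch, with invented names:

```haskell
-- "λ x . λ z . x ÷ z" as two nested one-place functions.
divide :: Double -> Double -> Double
divide = \x -> \z -> x / z

-- Plugging in 4 alone doesn't give a number yet; it gives back
-- another function that's still waiting for z:
fourOver :: Double -> Double
fourOver = divide 4

result :: Double
result = fourOver 2   -- 2.0

-- And order matters: divide 2 4 gives 0.5, not 2.0.
```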
To see how the lambda calculus works for language,
let’s start out with an intransitive verb, like
“sigh.” In the language of set theory, we might think of this word as a set -- the set of everyone who sighed. But
every set has what’s called a characteristic
function, which is just the function that
spits out “true” every time it’s applied
to a member of that set. So, we can just reinterpret
“sigh” as that function -- specifically,
“λ x . x sighed.” When it’s applied
to a noun, like “Annie,” it comes out
true just so long as Annie’s somewhere in
the set.
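In our Haskell sketch, that reinterpretation is a one-liner; the set is made up, as before:

```haskell
-- Reinterpreting the set as its characteristic function:
-- "sigh" = λ x . x sighed. The set sighSet is invented.
sighSet :: [String]
sighSet = ["Annie", "Dean"]

sigh :: String -> Bool
sigh = \x -> x `elem` sighSet

-- sigh "Annie" comes out True just so long as "Annie" is in the set.
```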
Just by sprinkling a few lambdas around, we’ve
managed to describe exactly how the verb contributes
to the overall truth conditions of a sentence.
So, where we used to say that a word like
“rusted” just refers to the set of all
rusty things, we can now replace that set
with a function, like “λ x . x is rusted”,
which can combine with whatever else it ends
up with in the sentence in a more straightforward
way, as functions do.
So, a sentence like “that car by the side
of the road is rusted” comes out true, just
as long as we can find the car in question
in the set that the function replaced.
Most importantly, though, we finally have
a way of defining more complicated words,
like transitive verbs. Let’s take another
look at the sentence “Hogarth lost his rifle”.
We can see that our meanings finally fit
into our tree. Assuming that “Hogarth”
and “his rifle” just refer to the individual
entities that they pick out, we can see how the
function associated with “lost” works.
First, it combines with Hogarth’s rifle to produce
a second function: “λ x . x lost Hogarth’s rifle.”
Then, that function combines with
Hogarth to spit out the value "true"!
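Here’s how that two-step combination might look in a Haskell sketch; the Entity type and the single “lost” fact are invented for illustration:

```haskell
-- "lost" as a curried two-place predicate: object first,
-- subject second.
type Entity = String

lost :: Entity -> Entity -> Bool
lost object subject = (subject, object) `elem` lostFacts
  where lostFacts = [("Hogarth", "Hogarth's rifle")]  -- made-up fact

-- Step 1: combine with the object, leaving the one-place function
-- λ x . x lost Hogarth's rifle:
lostTheRifle :: Entity -> Bool
lostTheRifle = lost "Hogarth's rifle"

-- Step 2: combine with the subject to spit out a truth value:
sentenceTrue :: Bool
sentenceTrue = lostTheRifle "Hogarth"   -- True
```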
And once we start thinking with lambdas, we
make some pretty neat predictions! If you
remember back to our video about generalized
quantifiers, like “some” and “most”,
we said they’re the sorts of words that
compare sets. In the language of the lambda
calculus, they compare characteristic functions.
So, the meaning of a word like “most”
would look like this.
It takes two functions, F and G, and spits
out true just so long as more than half of
the things referred to by the first function,
F, are also part of the second, G.
In a sentence like “most townspeople saw
the Giant,” it combines with the subject
“townspeople” first, followed by the verb
phrase “saw the Giant”, and then spits
out a truth value!
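As a rough sketch, assuming a small finite domain (everything below, the domain included, is invented), “most” might look like this:

```haskell
-- "most" takes two characteristic functions, F and G, and is true
-- just so long as more than half of the Fs are also Gs.
domain :: [String]
domain = ["Annie", "Dean", "Hogarth", "Mansley", "Marv"]

townsperson, sawTheGiant :: String -> Bool
townsperson x = x `elem` ["Annie", "Dean", "Hogarth"]
sawTheGiant x = x `elem` ["Annie", "Hogarth", "Mansley"]

most :: (String -> Bool) -> (String -> Bool) -> Bool
most f g = 2 * length (filter g fs) > length fs
  where fs = filter f domain

-- "Most townspeople saw the Giant": true, since 2 of the 3
-- townspeople are also Giant-seers.
example :: Bool
example = most townsperson sawTheGiant   -- True
```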
But in a sentence like “the Giant saw most
townspeople,” we seem to have a problem.
The phrase “most townspeople” looks like this:
While the verb “saw” looks like this:
And neither of those functions can combine
with the other. “See” is a complex function
looking for an individual, but “most townspeople”
isn’t an individual - it’s another function
that’s looking for something it can’t
find, something simpler than “see”. So
it doesn’t look like they can connect, and
everything has to leave unsatisfied. And that
should make the sentence bad.
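In type terms, here’s a minimal sketch of that mismatch, with the function bodies left as placeholders:

```haskell
type Entity = String

-- "saw" is looking for an individual (its object) first:
saw :: Entity -> (Entity -> Bool)
saw = undefined   -- the details don't matter for the type point

-- "most townspeople" is looking for a one-place predicate,
-- not an individual:
mostTownspeople :: (Entity -> Bool) -> Bool
mostTownspeople = undefined

-- Neither fits the other's argument slot, so both of these
-- fail to type-check:
--   saw mostTownspeople      -- wanted an Entity
--   mostTownspeople saw      -- wanted an Entity -> Bool
```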
But we know from things like questions that
words can move around. Let’s say that the
noun phrase “most townspeople” can sneak
past the verb unnoticed, up into a higher
part of the tree. That leaves it in a position
where it can comfortably combine with what’s
left, since “the Giant saw” represents
exactly the kind of function “most townspeople”
is looking for.
Now, the idea of parts of your sentences moving
around by themselves like automatons might
be tricky to get your mind around, but here’s
the thing: the formal framework of syntax
and semantics is there to reflect how we think
our brains work out the nuts and bolts of
language. It’s there to capture the data.
So one way we can work out whether a theory
makes sense is to see whether it makes predictions
that fit with how people interpret sentences.
In this case, if this kind of sneaky syntactic
movement happens generally, even if we don’t
hear it, we expect to find ambiguity in sentences
with two or more quantifiers, depending on
which one ends up higher in the tree.
And, that’s exactly what we see! A sentence
like “every soldier fired a gun” can mean that every soldier fired the same gun, or that they all fired different guns, depending
on what moves where. This is what underlies
the kinds of ambiguities we’ve been talking
about since our eighth episode.
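(In predicate-logic terms, the same-gun reading is roughly ∃y[gun(y) ∧ ∀x[soldier(x) → fired(x, y)]], while the different-guns reading is ∀x[soldier(x) → ∃y[gun(y) ∧ fired(x, y)]]; which one you get depends on which quantifier ends up taking wider scope after movement.)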
We can even find evidence that this kind of so-called covert movement is really taking place. We’re saying there’s movement because we can get multiple interpretations. So if we want to
test this, we should stick a word somewhere
where it can’t move, like a syntactic island.
If we only get one interpretation, that’s
pretty strong evidence that words move
around when we can’t see it.
So let’s take the sentence “some general
who every soldier obeys dislikes Mansley.”
That “every” is trapped inside a relative
clause, and so it can’t move out. Sure enough,
we’re down to one interpretation, where
it’s all the same general. We just can’t get the reading where there’s a different general for each soldier doing the obeying. Just like we predict!
So the lambda calculus has the power to handle
even the most complex of sentences. It gets
our semantic machinery functioning more smoothly
than ever before.
So, we’ve reached the end of The Ling Space
for this week. If you managed to apply the
right functions, you learned that the meaning
of a sentence is composed out of the meanings
of its parts; that logic alone can’t handle
the complexity of language; that the lambda
calculus works by defining how words contribute
to the truth conditions of a sentence, using
functions; and that it gives us a nifty way
of explaining certain kinds of ambiguity.
The Ling Space is produced by me, Moti Lieberman,
and directed by Adèle-Elise Prévost. This
week’s episode was written by Stephan Hurtubise.
Our editor is Georges Coulombe, our music
is by Shane Turner, and our graphics team
is atelierMUSE. We’re down in the comments
below, or you can bring the discussion back
over to our website, where we’ll have some
extra material on this topic.
Check us out on Tumblr, Twitter, and Facebook,
and if you survived your venture into the
lambda calculus, try dropping by our store,
where we have just the shirt for you! And
if you want to keep expanding your own personal
Ling Space, please subscribe. And we’ll
see you next Wednesday. Tókša akhé!
