Now that we've seen what a VP looks like, we can
start thinking about what the structure of
a whole clause might be. So if all phrases
are projections of a head, what projects the
whole clause? What projects the sentence?
And one very reasonable way to think about
this is to say that actually the head of the
sentence is the verb. So that would mean that
that label we've been giving it, S, we should
replace with VP. So, this is saying that what
we've been thinking of as sentences are really
VPs. Notice that means that what we've been
thinking of as VPs would now be the smaller
projections, those would be v-bars. And that
is a very reasonable hypothesis, and it looks
like it would allow us to account for at least
simple sentences, such as "Donald forgot
Pauline's name." But we may run into some
problems when we look at slightly more complex
sentences, so if we look at sentences which
include, for example, a modal or another auxiliary,
so 'have' or 'be', we need more structure
to accommodate those, so where do those appear?
Again, we could hypothesize that modals and
auxiliaries are themselves kinds of verbs,
since they seem to have verbal properties:
in English, for example, they carry the same
kind of tense information that verbs do. If
so, we have a kind of recursion here.
That is to say, what you would want to say
about auxiliaries and about modals is just
that they are verbal heads, they then project
VPs, and they take a complement. What introduces
the recursion here is that their complement
is also a VP. And again, that's a pretty reasonable
analysis for sentences.
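If it helps to see the recursion concretely, here is a toy sketch in Python, using nested tuples as labelled trees. The particular sentence and labels are just for illustration; this is not a standard formalism, only a picture of the hypothesis that each auxiliary is a verbal head taking another VP as its complement.

```python
def leaves(tree):
    """Return the words of a labelled tree (nested tuples) in
    left-to-right order."""
    label, *children = tree
    if len(children) == 1 and isinstance(children[0], str):
        return [children[0]]
    out = []
    for child in children:
        out.extend(leaves(child))
    return out

# "Donald may have forgotten Pauline's name" under the recursive-VP
# analysis: each auxiliary heads a VP whose complement is another VP.
recursive_vp = (
    "VP",
    ("DP", "Donald"),
    ("V'",
     ("V", "may"),
     ("VP",
      ("V'",
       ("V", "have"),
       ("VP",
        ("V'",
         ("V", "forgotten"),
         ("DP", "Pauline's name")))))))

# Every auxiliary adds one more VP layer: that's the recursion.
assert leaves(recursive_vp) == ["Donald", "may", "have", "forgotten",
                                "Pauline's name"]
```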
However, there is a different way to think
about this. Not very different, but slightly
different. And one thing to notice is that
modals and auxiliaries also have different
properties from other verbs. We see this particularly
with modals. So one property that's different
from other verbs is that modals only have
finite forms. We don't find infinitive forms
of modals, so you can't say "It would be nice
to can swim", although that seems a perfectly
sensible thing to want to be able to say,
we have to express it differently, because
there is no infinitive form of 'can'.
Similarly, there's no past participle form
of 'can' or 'will'. So modals can't occur
in non-finite forms. And in fact, this is
one reason why thinking of this in terms of
recursion looks a little bit odd. Because
modals require that the verb that follows
them be non-finite. So you say things like
"He may be ready". If modals themselves can't
be non-finite then, in fact, you can't get
sequences of modals: a modal can't take another
phrase headed by a modal as its complement.
And this is true in standard varieties of
English. So it seems that actually the structure
isn't really recursive.
So we can instead focus on some of the things
that are different about modals in particular,
and hypothesize that they actually belong
to a different category. It's a category that's
associated with verbs, but it's a distinct
category. So it's a closed-class category;
a category that has a very small number of
different members. Such a category is what
has come to be called in syntax a functional
category, and we'll be seeing more of these
functional categories as we proceed. So the category that people
associate with modals is the category of Inflection
or Infl or I. And inflection here covers agreement.
It also covers tense. An alternative category
that people sometimes propose for modals is
in fact to say that they belong to category
T, for tense. For now, we're going to stick
to the hypothesis that it's I, but the distinction
is not that important for us at the moment.
So let's say that modals are of category I,
for inflection. So, following the X-bar schema,
they're going to project an I-bar and an IP.
So now instead of saying that the modal is
a verb which takes another verb phrase as
its complement, we would simply have the same
kind of structure except that we would say
that the modal is an I category, it projects
an I-bar, and that I takes a VP as its complement.
So that now a sentence, rather than being
a VP, is an IP. And it contains a VP, the
VP that's the complement of the modal.
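Here's a toy sketch of that IP structure, again using nested Python tuples as labelled trees, purely for illustration, and assuming the subject sits in the specifier of IP:

```python
def leaves(tree):
    """Return the words of a labelled tree (nested tuples) in
    left-to-right order."""
    label, *children = tree
    if len(children) == 1 and isinstance(children[0], str):
        return [children[0]]
    out = []
    for child in children:
        out.extend(leaves(child))
    return out

# "Iris will paint the door": the modal is an I head, it projects
# an I-bar and an IP, and it takes the VP as its complement.
ip = ("IP",
      ("DP", "Iris"),
      ("I'",
       ("I", "will"),
       ("VP",
        ("V'",
         ("V", "paint"),
         ("DP", "the door")))))

assert leaves(ip) == ["Iris", "will", "paint", "the door"]
```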
This will actually allow us to explain another
difference between the behaviour of modals
and that of main verbs: this time a syntactic
difference rather than a morphological one.
And that is the relative position of modals
and main verbs with respect to negation, for one,
and also with respect to certain adverbs, for another. So,
modals always precede sentential negation.
So you get examples like "Iris
will paint the door", "Iris will not paint
the door." Main verbs in English, on the other
hand, can't occur before negation. So in modern
English you can't say, for instance, "Iris
paints not the door." You find a similar pattern
with adverbs. So adverbs have a lot more freedom
in the way they occur than negation does,
so adverbs sometimes occur at the beginning
of sentences, so you'll get things like "Quickly,
Arthur opened the tin", or they can occur
at the very end, so "Arthur opened the tin
quickly", but many adverbs can also occur
in the middle of the sentence, in which case
we call them sentence-medial adverbs. And
if we look at adverbs occurring in the middle
of sentences, we notice that modals can
immediately precede them. So you'll
get things like "Arthur may quickly open the
tin". What you don't get is where the main
verb immediately precedes the adverb, in which
case the adverb would separate the verb from
the direct object. So what is ungrammatical
is something like "Arthur opened quickly the
tin". So again, that's a difference in the
relative order of a verb with respect to something
in the middle of the sentence: an adverb or
a negation, and the order that a modal takes
with respect to those elements. And now, we
actually have a way that we could explain
that, or at least describe it in a systematic
way.
So, focusing for the moment just on negation,
the generalization would be that negation
can immediately follow modals, but has to
precede main verbs. So what we could say is
that negation is itself a head, again it would
be a functional category, so we could say
there's a functional category of negation.
Following the X-bar schema, that's going to
project a phrase, so we'll have Neg and NegP,
and if we say that that phrase takes the VP
as its complement but doesn't take IP as its
complement, that will have the effect that
negation will occur immediately to the left
of the main verb, but will never occur to
the left of a modal.
So now, all we need to do is to make certain
that modals can take either Neg phrases as
their complements or VPs as their complements,
and that will give us those orders correctly.
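To see that the geometry alone derives the order modal > negation > main verb, here's a toy sketch with nested Python tuples as labelled trees (illustrative only):

```python
def leaves(tree):
    """Return the words of a labelled tree (nested tuples) in
    left-to-right order."""
    label, *children = tree
    if len(children) == 1 and isinstance(children[0], str):
        return [children[0]]
    out = []
    for child in children:
        out.extend(leaves(child))
    return out

# Neg is a functional head: it projects a NegP and takes the VP as
# its complement, while I takes the NegP as *its* complement.
negated = ("IP",
           ("DP", "Iris"),
           ("I'",
            ("I", "will"),
            ("NegP",
             ("Neg'",
              ("Neg", "not"),
              ("VP",
               ("V'",
                ("V", "paint"),
                ("DP", "the door")))))))

# The tree geometry fixes the word order: the modal precedes negation,
# and negation immediately precedes the main verb.
assert leaves(negated) == ["Iris", "will", "not", "paint", "the door"]
```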
But we now have a problem if we think about
how we got to this final structure.
If we set aside negation for the moment, and
just think about what the elementary trees
would have to be like for the modal and for
the main verb to produce the sentence we've
just seen, we'll wind up with these two elementary
trees, in which the subject appears
in the elementary tree for the modal, and
the object is going to appear in the elementary
tree for the verb. But, that can't be right,
because the subject is not an argument of
the modal; the subject is an argument of the
verb. It gets its participant role from the
verb, just like the object does. And also,
in fact, semantically, modals take scope over
the entire proposition.
So a sentence like "Donald may forget Pauline"
means something like: "It's possible that
Donald forgets Pauline", not "Donald"... and
then some possibility about something. So
the scope of the modal should include the
subject and all of the rest, that whole proposition.
So that leads us to suggest that those elementary
trees can't be right. They really don't reflect
the selectional properties of those heads,
which is what elementary trees do; that's
what they represent. Instead, we really want
to show that the modal isn't selecting the
subject. So the subject doesn't appear in
the elementary tree for the modal. It does
appear in the elementary tree for the verb,
just as we've already seen when we've been
looking at elementary trees for verbs. But
of course, the problem with these trees is
if we assemble them by the process of substitution
that we've already seen, so we put these trees
together and we put in the arguments, of course
the problem we have now is we have the wrong
order for a declarative sentence. The subject
is appearing at the left edge of the verb
phrase not in front of the inflectional node,
not in front of the modal.
It's also worth noting that I've focused here
on the selectional properties of the items,
and stressed that the subject as an argument
is selected by the verbal head. It is also
true, however, that the subject is entering
into some kind of relation with the modal.
It's not a thematic relation; it's not getting
a thematic role from the modal. But it is
entering into a kind of grammatical relation.
You actually can't see this with modals in
English, but you see it if you use another
auxiliary, so if we switch now to 'have' and
'be', what you see is that there is a relation
of agreement between the subject and the auxiliary.
So you get "the children are playing" not
"the children is playing", even though 'the
children' is an argument of the verb 'play'.
So the phenomenon that we're seeing here is
that that subject argument seems to be participating
in two different relations: in a relation
of agreement with what we're calling the inflectional
node, and in a relation of selection with
the verb. And not only that, but one of those
relations looks as though it would require
that subject to be in a different position.
What is selected by the verb should be within
the projection of the verb.
So this kind of situation, and we're going
to see other examples of this kind of situation,
forces us to propose another kind of operation
in our syntax, and that operation could be
called "movement" or it's sometimes also called
"copying". The idea is that once a phrase
has been substituted into the structure in
the position where it's selected, so in the
case of the subject of a verb that would be
the subject position inside that VP, because
it's an argument of the verb, after that operation
has taken place, it's part of the structure,
it can subsequently move to a different position
in the structure; it can move up in the structure
to satisfy a different relation at a different
position in the structure. So in this case,
the subject would be substituted in the subject
of the VP. It would satisfy the verb's selectional
requirement for an agent argument. And then it would
move to the specifier position of the IP,
a position in which it agrees with the inflectional
node. So now we've introduced another way
of building structure. We had substitution,
and now we've added this operation of movement.
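A minimal sketch of substitution-then-movement, using nested Python tuples as labelled trees and an unpronounced placeholder for what movement leaves behind. The function and the representation are my own toy choices for illustration, not a standard formalism:

```python
TRACE = ("DP", "t")  # an unpronounced placeholder left by movement

def pronounce(tree):
    """Words in left-to-right order, skipping the unpronounced
    placeholder."""
    if tree == TRACE:
        return []
    label, *children = tree
    if len(children) == 1 and isinstance(children[0], str):
        return [children[0]]
    out = []
    for child in children:
        out.extend(pronounce(child))
    return out

# Base structure: the subject is substituted where it is selected,
# in the specifier of VP, because it's an argument of the verb.
base = ("IP",
        ("I'",
         ("I", "may"),
         ("VP",
          ("DP", "Donald"),
          ("V'",
           ("V", "forget"),
           ("DP", "Pauline")))))

# Before movement, the order is wrong for a declarative sentence:
assert pronounce(base) == ["may", "Donald", "forget", "Pauline"]

def move_subject(ip):
    """Move the VP-internal subject up to the specifier of IP,
    leaving a placeholder in its original position. Assumes the
    fixed shape of `base` above; a toy, not a general algorithm."""
    _, (ibar_label, i_head, (vp_label, subject, vbar)) = ip
    moved_vp = (vp_label, TRACE, vbar)
    return ("IP", subject, (ibar_label, i_head, moved_vp))

derived = move_subject(base)
assert pronounce(derived) == ["Donald", "may", "forget", "Pauline"]
```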
Now there are different ways to theorize this
relation of movement. In particular, there's
different ways to think about what is left
behind when the phrase moves to the higher
position. One way to think about this is to
hypothesize that what happens is that the
element that moves is actually a copy, so
that you copy the nominal phrase in this case,
and it's the copy that appears in the higher
position. And if this were to happen again,
you would wind up with a chain of copies of
the same phrase. Clearly, what you'd need
in addition, or as part of this process of
copying, is you would need a process that
would guarantee that only one element in that
chain of copies is pronounced: the highest
element.
An alternative way of describing this is to
say that what gets left behind by movement
is a special type of category. It's similar
in all respects to what's moved, but it is
inherently unpronounced, and this special
type of category is called the trace. It's
often given a subscript matching a subscript
on the moved element so that it can be clearer
what element it is that moved and left that
trace.
These two proposals for how to think about
what gets left behind in movement, whether
it's a copy or a special trace category, can
actually be distinguished in their empirical
predictions, so they're not entirely equivalent,
but for many purposes they are equivalent,
and here we're not going to go into what
the consequences of choosing one variety
over the other might be down the line. So we're
not going to be making a distinction between
these two ways of theorizing this. The crucial
point here is this idea that a phrase can
be moved from the position in which it's substituted
into the structure to a position higher in
the tree.
There are various kinds of data that people
have observed or discovered in looking at
this question of the structure of the clause,
and in particular, at this hypothesis that
we've now been examining, namely that the
subject of the clause, what you see in the
specifier of the IP, originates in the specifier
of the VP. That hypothesis has been termed
the VP-internal subject hypothesis, that is
that the subject of the sentence originates
in a position internal to the VP.
So, just as a couple of pieces of evidence
that relate to this... so, one has to do with
sentential idioms. So, idioms come in various
sizes, but there is a class of idioms where
the subject is part of the idiom, and the
verb phrase is also part of the idiom, so
it looks like maybe the whole sentence
is idiomatic. So examples of that would be
things like "heads will roll" or "the shit
hit the fan". But actually, when you look
more closely at those idioms, it seems it
isn't exactly the whole sentence. The idiom
seems to skip a bit in the middle. That is,
the subject has to have exactly that form,
the verb phrase has to have exactly that form,
but you could have different modals and auxiliaries,
and that doesn't disturb the idiom, it still
remains idiomatic, so if you say "heads may
roll", "heads will roll", "heads have rolled",
it doesn't make any difference. The "heads"
part is part of the idiom, the "roll" part
is part of the idiom, but it seems to skip
the bit in the middle.
But now that we've seen that subjects actually
originate in an elementary tree together with
the verb, this is really as expected. That
is to say, in these idioms the chunk that
is parceled together and memorized is the
VP, which, on this way of looking at things,
includes the subject. So now
it's unsurprising that there should be idioms
where the modal is free, but the subject and
the rest of the sentence form the idiom.
There's an argument from the way quantifiers
behave, or rather, the position in the sentence
in which some quantifiers occur, which seems
to follow from the VP-internal subject hypothesis,
and this is an argument which was one of the
original arguments supporting this idea in
fact. When you have a quantifier like "all"
modifying the subject, it can occur in more
than one place. One common option is for it
to immediately precede what looks like
a DP, in which case it actually forms a constituent
with that DP. So you can get a sentence like
"All the workers may leave". And in this case,
'all the workers' forms a constituent, so
"All the workers may leave. Who may leave?
All the workers." Works fine.
You can also get 'all' following the subject,
so you can get "The workers all left." And
in this case, it doesn't seem to form a constituent
with the subject. So if you say "The workers
all left. Who left? The workers all." That's
ungrammatical. And what's perhaps even more
surprising is that that quantifier doesn't
even have to be next to the subject when it
follows it. So if you have a modal, you can
get something like "The workers may all leave."
So now, the quantifier is occurring after
the modal, whereas the subject is occurring
before it.
So one possible hypothesis here is that this
happens because the quantifier and the noun
phrase such as 'the workers', they begin together,
perhaps as a noun phrase inside another noun
phrase, and then either the whole structure
can move, in which case you'd get "All the
workers may leave", or the sub-part, just
'the workers', may move on its own, in which
case you'd get "The workers may all leave."
So that's, that's a case where the quantifier
is described as being stranded, so it looks
as though the quantifier's been left behind.
Notice, then, that that gives us a kind of
clue as to where the subject originally was.
That is, after the modal, just before the
verb.
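The two derivations can be sketched in a toy style, with nested Python tuples as labelled trees and the quantifier and the nominal starting out together in the VP-internal subject position. The QP/DP labels and the helper functions are assumptions made for illustration:

```python
TRACE = "t"  # an unpronounced placeholder left by movement

def pronounce(tree):
    """Words in left-to-right order, skipping placeholders."""
    label, *children = tree
    if len(children) == 1 and isinstance(children[0], str):
        return [] if children[0] == TRACE else [children[0]]
    out = []
    for child in children:
        out.extend(pronounce(child))
    return out

# 'all' and 'the workers' begin together in the specifier of VP.
base = ("IP",
        ("I'",
         ("I", "may"),
         ("VP",
          ("QP", ("Q", "all"), ("DP", "the workers")),
          ("V'", ("V", "leave")))))

def move_whole(ip):
    """Move the whole quantified phrase to the specifier of IP."""
    _, (ibar, i, (vp, qp, vbar)) = ip
    return ("IP", qp, (ibar, i, (vp, ("QP", TRACE), vbar)))

def move_inner_dp(ip):
    """Move only the inner DP, stranding the quantifier."""
    _, (ibar, i, (vp, (qp_label, q, dp), vbar)) = ip
    stranded = (qp_label, q, ("DP", TRACE))
    return ("IP", dp, (ibar, i, (vp, stranded, vbar)))

assert pronounce(move_whole(base)) == ["all", "the workers",
                                       "may", "leave"]
assert pronounce(move_inner_dp(base)) == ["the workers", "may",
                                          "all", "leave"]
```

The stranded quantifier in the second derivation marks the position the subject started from: after the modal, just before the verb.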
So an obvious question that's probably occurred
to you already is how to deal with simple
sentences, so now we've dealt with sentences
with modals, and we've said that they're IPs,
that the inflection node is projected from
the modal, and that that creates then an IP,
and that the verb phrase is the complement
of the I. So sentences with modals are IPs...
would we want to say that sentences that don't
have modals are some different category? And
clearly we wouldn't. Apart from anything else,
sentences that have modals and sentences that
simply have finite verbs in them have exactly
the same distribution. So you never find an
environment where something says "I want to
have as my complement a sentence that has
to contain a modal" or "I want to have as
my complement a finite sentence, but it mustn't
contain a modal".
So it looks as though we want sentences containing
modals and sentences just containing a finite
verb to have the same category, so we want
them all to be IPs. So what we'd say for a
sentence which just contains, say, a past-tense
verb or a present-tense verb, is that tense
itself is the inflectional element. This tense
head is actually itself silent. It has an
effect on the form of the verb in the VP.
That's what seems to happen for English. So,
one way to think of this starts from how a
modal occurs with a particular form of the
verb in the VP that it takes as a complement:
a modal selects for the bare infinitive form
of the verb in its VP complement. In the same
way, you could say that a past tense selects
for a particular form of the verb in its VP
complement, the past-tense form, and a present
tense selects for the present-tense form.
Exactly how the relationship between tense
and verbs works seems to differ in different
languages, and this is something that we're
going to come back to in a later class.
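A toy way to render the selection idea, with a hand-written three-form lexicon. The head names PAST and PRES, the form labels, and the two-verb lexicon are my own simplifications; real morphology is much richer:

```python
# Which verb form each I head selects in its VP complement: modals
# select the bare form; the (silent) past and present tense heads
# select the past- and present-tense forms.
SELECTS = {"may": "bare", "will": "bare", "PAST": "past", "PRES": "pres"}

# A tiny hand-written lexicon of verb forms, purely for illustration.
FORMS = {
    "open":  {"bare": "open", "past": "opened", "pres": "opens"},
    "leave": {"bare": "leave", "past": "left", "pres": "leaves"},
}

def spell_out(i_head, verb):
    """Return the string for an I head plus its verb. Silent heads
    (PAST, PRES) contribute no word of their own; they only fix
    the form of the verb."""
    form = FORMS[verb][SELECTS[i_head]]
    return form if i_head in ("PAST", "PRES") else f"{i_head} {form}"

assert spell_out("may", "leave") == "may leave"   # modal + bare form
assert spell_out("PAST", "open") == "opened"      # silent past head
assert spell_out("PRES", "leave") == "leaves"     # silent present head
```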
For many clauses, and in particular for root
clauses, the structure that we've seen so
far seems to be sufficient, but if we look
now at embedded clauses, it seems that we're
going to need a bit more structure, because
embedded clauses are often (though not always,
in English) introduced by another element.
So in a root clause, you might have something
like "Betsy would never go to Peru"; as an
embedded clause, you can get "Ian thought
Betsy would never go to Peru" or "Ian thought
that Betsy would never go to Peru". And in
fact, in many languages that optionality isn't
there: an element corresponding to the 'that'
that we sometimes see in English is always present.
There are other elements like 'that' that
introduce subordinate clauses, so 'that' introduces
subordinate declarative clauses. Subordinate
interrogative clauses can be introduced by
another element: what corresponds to a yes/no
question is introduced as a subordinate clause
by 'if', so "She wondered if Betsy was going
to Peru".
So we've got these elements 'that' and 'if',
and there are other cases as well, which introduce
these subordinate clauses. These elements
do possibly two things. One is to signal that
what they're introducing is indeed a subordinate
clause. The other is to say what
type of clause it is, so 'that' is associated
with declarative clauses, whereas 'if' is
associated with interrogative clauses. These
elements that introduce subordinate clauses
are called complementizers. You can think
of this as related to the fact that
the subordinate clause is going to be the
complement of the verb that introduces it.
So, a complementizer is another functional
category. We've already seen a functional
category of inflection, and now we've got
a functional category complementizer, which
is often abbreviated to C, so the 'that' that
you get in "Thomas knew that Betsy would never
go to Peru" is an instance of a complementizer.
So now we've introduced this new lexical item,
and it is going to have an associated elementary
tree, so following the X-bar schema we'll
have an elementary tree rooted in a complementizer
or C category, and it's going to project a
C-bar and a CP. So, that complementizer then
will take as its complement the structure
we've been seeing so far, an IP.
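As a sketch, using nested Python tuples as labelled trees, here is the CP structure for the embedded clause. The exact attachment site of 'never' is simplified; the labels are just illustrative:

```python
def leaves(tree):
    """Return the words of a labelled tree (nested tuples) in
    left-to-right order."""
    label, *children = tree
    if len(children) == 1 and isinstance(children[0], str):
        return [children[0]]
    out = []
    for child in children:
        out.extend(leaves(child))
    return out

# "that Betsy would never go to Peru": the complementizer C projects
# a C-bar and a CP, and takes the IP as its complement.
cp = ("CP",
      ("C'",
       ("C", "that"),
       ("IP",
        ("DP", "Betsy"),
        ("I'",
         ("I", "would"),
         ("VP",
          ("AdvP", "never"),
          ("VP",
           ("V'",
            ("V", "go"),
            ("PP", "to Peru"))))))))

assert leaves(cp) == ["that", "Betsy", "would", "never", "go", "to Peru"]
```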
One case where the X-bar schema doesn't seem
to give us enough positions to accommodate
all the phrases that we want is modification.
So we've seen with modifiers that, unlike
complements and specifiers, you seem to be
able to get a virtually unlimited number of
modifiers. We've seen this both for VP modifiers
and for modifiers inside noun phrases.
So we don't want to include positions for
modifiers inside our elementary trees for
verbs, for example. We want the elementary
tree for a verb to include the phrases that
it selects, but not all the possible modifiers
that it could occur with. For two reasons:
one is because there's an almost unlimited
number, so that would suggest that these items
we're storing in our lexicon would have to be
extremely large, which is not the way we think things
are, and also, and this is a related fact,
what we're putting in the lexicon is the items
that are selected by the head, the phrases
that are selected by the head, but modifiers
don't seem to be selected by heads, so it's
not a fact about a particular verb that you
can modify its verb phrase by something which
tells you about the time of the event or the
reason for the event or the person for whom
the event was done. So we don't want to have
those in those elementary trees. But that
means we need another way of getting those
modifiers into the structure.
So one crucial insight about modifiers is
that when you add a modifier to a phrase,
result is a phrase of exactly the same type.
Now that's quite different from what happens
if you add a complement, or if you add a specifier.
And we've seen this, for example, with verbs.
So we've seen that you can't replace a verb
by 'do so'. So you can't say, for example,
"John ate a banana, and Geraldine did so an apple".
On the other hand, adding a modifier doesn't
affect whether you can replace something by
'do so' or not. So you can say "John ate a
banana and Geraldine did so too", and if you
added another modifier you would also be able
to replace that with 'do so'. So when you
add a modifier to any category, the result
is something of that same category, so this
is now going to give us a recursive structure.
So modifiers introduce recursion. That is
to say, a modifier has a sister and a mother
that are of the same category. So how are
we going to achieve that in our trees? To
do that, we're going to introduce a third
operation, so we have substitution, we have
movement, and now we're going to add adjunction.
So the way adjunction works is this: you have
some node, let's say a V-bar, and you want
to adjoin some phrase, so let's say in this
case a PP, it could be an adverb phrase. So
you take your V-bar, and you create another
V-bar above it, you create another node of
exactly the same type. And now, the modifier,
the PP, is going to be added as a daughter
of this new node, and it becomes then a sister
of the original node. And we call such an
element, then, an adjunct. So this is an adjunct
position, the modifier occupies an adjunct
position. Now because the mother that you
added is the same category as the daughter,
you can do this again. So, you could add yet
another modifier, you would take the V-bar,
you would create another V-bar node, you would
take your modifier, you would make it a daughter
of the new node and it'd become a sister of
the old node. So you could do this an indefinite
number of times, and that's exactly what allows
you to have a whole stack, a whole staircase
of modifiers. In this case, I've illustrated
it with adjunction to V-bar, but you can adjoin
to other categories as well, and we've already
seen adjunction of modifiers inside noun phrases,
so that process will work exactly the same
way.
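The adjunction operation itself is easy to sketch with toy nested-tuple trees ('with a knife' is just an invented second modifier for illustration):

```python
def adjoin(node, modifier):
    """Adjoin `modifier` to `node`: create a new node of the same
    category whose daughters are the original node and the modifier.
    (Right-adjunction; left-adjunction would put the modifier first.)"""
    label = node[0]
    return (label, node, modifier)

def leaves(tree):
    """Return the words of a labelled tree (nested tuples) in
    left-to-right order."""
    label, *children = tree
    if len(children) == 1 and isinstance(children[0], str):
        return [children[0]]
    out = []
    for child in children:
        out.extend(leaves(child))
    return out

vbar = ("V'", ("V", "opened"), ("DP", "the tin"))
once = adjoin(vbar, ("AdvP", "quickly"))
# The new mother is again a V', so adjunction can apply again:
twice = adjoin(once, ("PP", "with a knife"))

assert once[0] == "V'" and twice[0] == "V'"
assert leaves(twice) == ["opened", "the tin", "quickly", "with a knife"]
```

Because each application returns a node of the same category, the operation can repeat indefinitely, giving the stack of modifiers described above.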
