Jun 18, 2020

*** UPDATE: While I still stand behind the content of this post, the proofs provided below were, upon reflection, not as precise as I would have wanted, and I have since worked to “tighten” them up. So when you get to the parts in the post where the proofs are discussed, I suggest mentally “swapping in” the proofs from these slides (in particular, slides 5 through 21) in their place. ***

For a while now, I have been pondering the prospects of a realizational/interpretive theory in which spellout to PF and spellout to LF involve separate collections of rules, and where individual spellout rules crucially map from sets of syntactic terminals to exponents or to meaning primitives. (For a given spellout rule to be applicable, the set of nodes in its input specification must appear contiguously in the input structure. In this post, I’m not going to concentrate on how contiguity is defined at each of the interfaces. Email me if you would like some details on my in-progress thinking on these matters.) The sets may happen to be singleton sets, but that is not an architecturally-privileged state of affairs. This way of thinking of spellout makes a number of desirable predictions; for example:

  • There should be instances where the applicable PF-spellout rule and the applicable LF-spellout rule stand in a relation of partial overlap.
  • There should be terminals for which there happens to be no singleton PF-spellout rule, or for which there happens to be no LF-spellout rule; such terminals will have no elsewhere form, or no context-free interpretation, respectively.

As I’ve noted elsewhere, these things are well-attested.
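
The rule format just sketched can be modeled very directly. Here is a minimal toy in Python; all names, and the use of flat sets in place of genuine syntactic structure, are my own illustrative assumptions, and contiguity is simply presupposed rather than defined (per the caveat above):

```python
# Toy model of the rule format described above: each spellout rule maps a
# SET of syntactic terminals to an exponent (PF) or a meaning primitive (LF),
# and PF-spellout and LF-spellout are entirely separate rule collections.
# All names here are illustrative assumptions, not a committed implementation.

PF_RULES = {
    frozenset({"ROOT_GOOSE", "n", "Num[pl]"}): "geese",
    frozenset({"ROOT_GOOSE", "n"}): "goose",
    frozenset({"Num[pl]"}): "-z",   # elsewhere plural exponent
}

LF_RULES = {
    frozenset({"ROOT_GOOSE", "n"}): "GOOSE-MEANING",
    frozenset({"Num[pl]"}): "PLURAL",
}

def applicable(rules, terminals):
    """Rules whose input set occurs within the given stretch of terminals
    (contiguity is assumed, not checked, in this sketch)."""
    ts = frozenset(terminals)
    return {r: out for r, out in rules.items() if r <= ts}
```

Note that singleton and multi-terminal inputs are formally uniform here, which is the architectural point: singleton inputs are possible, but nothing privileges them.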

The picture that emerges is in some ways similar to Nanosyntax, where the candidates for spellout are also sets of nodes (that are contiguous in a specific technical sense). It departs from Nanosyntax in entirely divorcing spellout to PF from spellout to LF (see above), as well as in defining contiguity differently for PF-spellout rules and LF-spellout rules.

Such a system, in which the input to PF- and LF-spellout rules is a set of terminals rather than an individual terminal, seems to fly in the face of the claim made in Embick & Marantz’s 2008 paper, Architecture and Blocking (henceforth EM08). This paper argues that architecturally speaking, “words” do not enter into competition and blocking with phrases, nor do they even enter into competition and blocking with other “words”. The only blocking that is part of the grammatical architecture occurs at the morpheme level, where one exponent competes with another for insertion (see also Embick 2017). Crucially, this requires a framework in which lexical insertion specifically targets individual syntactic terminals.[1] It is this claim, that vocabulary insertion (what I termed PF-spellout, above) is restricted to individual syntactic terminals, that I am interested in here – since it appears at first glance to stand in direct conflict with the system I am envisioning. I am going to argue instead that, depending on how one chooses to understand EM08, this claim about vocabulary insertion is either vacuous or wrong. That is not to say that EM08’s other claims, e.g. about the empirical picture concerning “word” and phrase blocking effects, are incorrect. I am only taking aim at the particular claim about the locus/granularity of vocabulary insertion.

Let’s consider the alternations in (1) as our test case:

(1) a. dog (dɔg) – dogs (dɔgz)
    b. goose (gu:s) – geese (gi:s)

There are a few clarifications to make about (1) before we get started. First, the plural morpheme is not some Agr node “added at PF” (though you will find claims like this in the DM literature). Number in the noun phrase comes from a dedicated functional projection, NumP. The degrees of freedom are whether the overt phonological material associated with the plural is the spellout of the head Num0, or the spellout of a feature (call it [plural]) whose occurrence on the noun (or more accurately, on n0) is conditioned by the presence of the right variant of Num0, perhaps via syntactic Agree. But neither of these things (the plural variant of Num0 or the feature [plural] on n0) is “added at PF.”[2] Second, I am going to treat geese as suppletive, relative to goose, even though some treatments of alternations like these would characterize this as a “readjustment rule.” What is clear here is that there is no reasonable phonological rule of English (please note: “rule” implies productive knowledge) that would trigger this alternation. If your phonological theory is sufficiently sophisticated to treat geese as the result of affixing some autosegmental/suprasegmental material to goose, more power to you, but then you’ll have to change this example, in your mind’s eye, to one in which such a move is not possible (person-people or whatever you prefer).

Okay, with that out of the way, let’s talk about (1) from the perspective of EM08. If the English plural /‑z/ is the (elsewhere) spellout of plural Num0, then there is actually no way for geese to block gooses at the level of the individual syntactic terminal. An EM08-adherent would therefore be forced into one of two positions. The first is what I’ll call featuralization, and the second is mutually-conditioned allomorphy.

Featuralization
Suppose that, rather than /‑z/ being the (elsewhere) spellout of Num0, the spellout of English plural Num0 were invariably null, and /‑z/ were the spellout of the feature [plural] on n0 – a feature whose occurrence is conditioned by the presence of plural Num0 (e.g. via syntactic Agree). Now geese could block goose, but we would have to assume that, in those cases where [plural] surfaces as its own exponent (e.g. /‑z/), Fission has applied to enable [plural] to be targeted for vocabulary insertion separately from the rest of the content of the noun.

Crucially, we can generalize what we just did with Num0 and [plural] into a recipe for turning any blocking effect that seems to involve multiple successive heads into a version that is EM08-compliant. Suppose we are looking at the following state of affairs:

(2) a. Let <X10, …, Xn0> be a series of successive heads, where for every 1 ≤ i ≤ n−1: XiP is the complement of Xi+10.
    b. Let none of <X10, …, Xn-10> have overt specifiers.
(I will refer to this as a PF-contiguous span of heads.)

And suppose that we see what seems to be a blocking effect involving <X10, …, Xn0>; for example: the spellout of {√GOOSE, n0, Num0[pl]} as geese competing with, and blocking, its spellout as gooses. We can bring this into compliance with EM08 as follows:

(3) a. Assume the spellout of <X20, …, Xn0> is invariably null.
    b. Define a set of features <F2, …, Fn>, where:
        i. For every 2 ≤ i ≤ n: Xi0 is base-generated with a valued [Fi] feature.
        ii. X10 is base-generated with unvalued [Fi] features for all values of i (2 ≤ i ≤ n).
        iii. For every 2 ≤ i ≤ n: X10 enters into Agree in Fi with Xi0 (thus acquiring valued [Fi]).
We can now recast any blocking involving <X10, …, Xn0> as blocking that is occurring exclusively at X10. Anything that looked previously like vocabulary insertion at Xi0 (2 ≤ i ≤ n) can now be handled as Fission of Fi from X10. We thus have a recipe for bringing any multi-terminal blocking that meets the criteria in (2) into compliance with EM08, indicating that the restriction of formal competition (and thus, blocking) to individual syntactic terminals is not doing the work that EM08 suggested it was doing.
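
Rendered schematically, the featuralization recipe in (2)-(3) amounts to the following Python sketch (heads as feature dictionaries, Agree as brute-force copying onto X10, and the vocabulary itself all illustrative assumptions of mine):

```python
# Featuralization, schematically: the features of X2..Xn are copied onto X1
# via Agree (3b-c), the spellout of X2..Xn is invariably null (3a), and all
# vocabulary insertion -- hence all blocking -- now happens at the single
# terminal X1. Names are illustrative only.

def featuralize(span):
    """span: heads X1..Xn, each a dict of its valued features.
    Returns X1 enriched, via Agree, with every other head's features."""
    x1 = dict(span[0])
    for xi in span[1:]:
        x1.update(xi)
    return x1

def insert_at_x1(x1):
    """Terminal-level vocabulary insertion at X1; when no portmanteau
    exponent wins, [plural] is realized separately (via Fission) as -z."""
    if x1.get("root") == "GOOSE" and x1.get("num") == "pl":
        return "geese"                    # blocks goose (plus fissioned -z)
    stem = {"GOOSE": "goose", "DOG": "dog"}[x1["root"]]
    return stem + ("-z" if x1.get("num") == "pl" else "")
```

On this toy model, a GOOSE-n-Num[pl] span comes out as geese while the same structure with DOG comes out as dog-z, with the blocking stated entirely at X10.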

An EM08-adherent might find solace in the fact that, since (3b.iii) involves Agree, there is a built-in restriction (e.g. phases) on how far away from X10 a head can be while still contributing an Fi feature that will participate in blocking at X10. That is all well and good, but it is still equivalent to just saying that there is blocking among sets of terminals, so long as the members of those sets all occur inside a single phase (which seems trivial on any approach where spellout is cyclic).

I am about to turn the discussion to mutually-conditioned allomorphy, but before I do so, I think it’s worth pointing out that it’s not at all clear how one could possibly rule out what is described in (2‑3) (at least as long as one assumes there is such a thing as Fission). This is important because it means that even if one prefers the mutually-conditioned allomorphy treatment of goose-geese (or person-people, etc.), the loophole described here still exists. Thus, whether we like it or not, the restriction of competition to insertion of exponents at individual syntactic terminals is unable to do any work that is not already done by restricting competition to PF-contiguous spans that are contained in a single phase.

Mutually-conditioned allomorphy

As an alternative to featuralization, suppose we instead attempt to rescue the EM08-compliant treatment of (1) in a different way:

(4) a. √GOOSE → geese / __ [plural]
    b. Num[pl] → ∅ / √GOOSE __

On this view, geese (or people, etc.) arises as something of a conspiracy, wherein the elsewhere form of the plural Num0 (/‑z/) is overridden in the presence of the root GOOSE, while the elsewhere form of this root (goose) is overridden in the presence of the plural Num0.

At this juncture, it is useful to take note of a particular property of the goose-geese example, which I have been neglecting so far, and which demonstrates that (4) is in any case too simplistic a treatment. Consider (5):

(5) The corrupt accountant gooses(/*goose/*geese/*geeses) the earnings reports every quarter.

I point out (5) because it shows that the occurrence of the form goose isn’t dependent on nominal number even being present in the structure. That is, goose is not the counterpart of geese in the presence of Num[sg]; it really is the elsewhere form, and presence of Num[pl] triggers a contextual allomorph of that form. For concreteness, let us assume that goose is the spellout of {√GOOSE, n}, and that the verbal use in (5) involves the (common in English) null v denominal verbalizer. In other words, the verb stem in (5) is the spellout of {√GOOSE, n, vdenom}, or more accurately, the spellout of {√GOOSE, n} (which is goose) plus the spellout of {vdenom} (which is null).

(6) a. {√GOOSE, n, Num[pl]} → geese
    b. {√GOOSE, n} → goose
    c. {vdenom} → ∅

This is not the only way to capture what is going on with goose-geese (incl. the verbal paradigm), but this way of characterizing the data explicitly sets up a scenario where the spellout of one span (6a) competes with, and preempts, the spellout of a smaller span contained therein (6b) – precisely the sort of thing that EM08 wants to architecturally rule out. So if we can show a recipe that translates (6) into an EM08-compatible characterization involving mutually-conditioned allomorphy, but where vocabulary insertion is restricted to terminals, we will again have shown that the architectural restriction in EM08 is not doing the work it is purported to do.
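
For concreteness, here is the span-based competition that (6) sets up, as a Python sketch (the biggest-match-wins resolution, the flat list of terminals, and all rule names are my own illustrative assumptions):

```python
# Span-based spellout as in (6): rules target contiguous spans of terminals,
# and a rule spelling out a larger span preempts rules for smaller spans
# contained within it. Illustrative sketch, not a committed implementation.

RULES = [
    ({"ROOT_GOOSE", "n", "Num[pl]"}, "geese"),  # (6a)
    ({"ROOT_GOOSE", "n"}, "goose"),             # (6b)
    ({"ROOT_DOG", "n"}, "dog"),
    ({"v_denom"}, ""),                          # (6c): null verbalizer
    ({"Num[pl]"}, "-z"),                        # elsewhere plural
]

def spell_out(terminals):
    """Left-to-right insertion into the sequence of terminals, always
    taking the rule that consumes the largest span (competition)."""
    exponents, i = [], 0
    while i < len(terminals):
        best = None
        for span, exp in RULES:
            k = len(span)
            if set(terminals[i:i + k]) == span and (best is None or k > best[0]):
                best = (k, exp)
        if best is None:
            raise ValueError(f"no rule for {terminals[i]}")
        exponents.append(best[1])
        i += best[0]
    return [e for e in exponents if e]   # drop null exponents
```

On this toy, the noun structure with GOOSE yields geese ((6a) preempting (6b) plus the elsewhere plural), the one with DOG yields dog plus -z, and the verbal structure of (5) yields the stem goose, since the span rule for (6a) finds no Num[pl] to consume.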

As before, I will start by translating this particular example into an EM08-compatible implementation, and then generalize the mechanism of inter-translation. Let us begin with the following ‘elsewhere’ rules for the exponents of √GOOSE, n, and Num[pl]:

(7) a. √GOOSE → goose
    b. n → ∅
    c. Num[pl] → /‑z/

One thing we could do at this juncture is observe that (7b) is a null exponent, and that (7a) and (7c) would therefore be adjacent as far as the overt structure is concerned. This is the approach taken by Embick (2010) (though it’s worth noting that it is explicitly rejected by Bobaljik 2012, for example, in his treatment of comparative & superlative morphology). But since this would reduce the span in question to a rather trivial one – involving only 2 nodes – let us make things harder on ourselves, and assume that we cannot ignore (7b): it still intercedes between √GOOSE and Num[pl], disrupting the kind of adjacency required for contextual allomorphy in an EM08-style system. (Everything I’m about to say will of course also work if n0 is “pruned” à la Embick 2010, but as I said, I’m intentionally choosing the route that will make an EM08-style treatment harder to construct.)

Nevertheless, we can take a page from the featuralization approach, above, and assume that n0 acquires the [plural] feature from Num0 derivationally (e.g. via Agree). In a move that may seem more controversial – but I will argue, shortly, is not – I will assume that n0 can also enter into a syntactic relation with √GOOSE resulting in the identity of the root being reflected in the syntactic content of n0 itself.

The reason this last move may seem controversial is that a tenet of conventional DM dogma holds that roots are not individuated in the syntax. In essence, the thinking goes, there is only one root object in the narrow-syntactic lexicon (the list of available syntactic atoms). Because roots are assumed to be featureless, syntax wouldn’t have any way of telling multiple root objects apart anyway. If this were true, the last step, above – where n0 derivationally acquires featural content reflecting the identity of its root complement – would be impossible.

However, there are both empirical and conceptual reasons to reject the conventional DM premise regarding roots being “featureless” and, therefore, unindividuated in the syntax. Empirically, Harley (2014) has shown that roots cannot be individuated semantically or phonologically, leaving syntactic individuation as the only option still standing. (Importantly, this conclusion holds both on the strong version of her 2014 claim, whereby all arguments are selected by roots, and also on the weaker version of the claim, which Harley settles on in the reply to the commentaries on her target article, whereby some arguments are selected by roots and some are selected higher up, by syntactic categorizers. The latter view, as far as I can tell, is also compatible with Merchant’s 2019 observation that categorizers often do affect the selectional properties of the roots they attach to.) But I think the conceptual argument, in this case, is even stronger: as I have discussed elsewhere, any version of DM in which the identity of roots is negotiated post-syntactically is an equivocation of modularity, anyway. Ignore what DM declares itself to be doing: any line of communication between PF and LF is syntax, and so there is no version of DM in which roots are not individuated in the syntax. And what does it mean to be individuated in the syntax? It means individual roots (like √GOOSE) have properties legible to the syntax. Let me repeat that: √GOOSE has properties legible to the syntax that distinguish it from √DUCK. I am not proposing this, so much as I am pointing out that it follows from any reasonable definition of how the grammar is modularized.

Given this, there is also no obstacle to assuming that n0 acquires, in the course of the derivation, syntactic properties reflecting that its root complement was √GOOSE (and not √DUCK, or √ESSAY, or …). This is possible because the difference between √GOOSE and other roots is, by definition, legible to the syntax.

After these feature transmissions occur in syntax, the structural representation handed over to PF will be as follows:

(8) Num0[pl] » n0[pl, GOOSE] » √GOOSE       (where ‘»’ indicates immediate c-command)

We can now recast (6) as in (9):

(9) a. √GOOSE → geese / __ n[pl]
    b. Num[pl] → ∅ / n[GOOSE] __
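
Under simplifying assumptions of my own (the Agree steps described above have already put [pl] and the root-identity feature on n0, and each head is spelled out strictly on its own), the rules in (9) can be sketched as follows:

```python
# Mutually-conditioned allomorphy with insertion restricted to individual
# terminals, as in (9): each head gets its own exponent, but the choice is
# conditioned by features that n0 acquired derivationally via Agree.
# Illustrative names only, not a committed implementation.

def spell_root(root, n_features):
    if root == "GOOSE" and "pl" in n_features:     # (9a)
        return "geese"
    return {"GOOSE": "goose", "DOG": "dog"}[root]  # elsewhere forms

def spell_num(num, n_features):
    if num == "pl" and "GOOSE" in n_features:      # (9b)
        return ""                                  # null allomorph
    return "-z" if num == "pl" else ""

def spell_np(root, num):
    n_features = {num, root}    # n0's features after Agree with Num0 and the root
    return spell_root(root, n_features) + spell_num(num, n_features) + "∅"[:0]
```

The conspiracy is visible in the two conditions: the root's elsewhere form is overridden in the presence of [pl] on n0, while the plural's elsewhere form is overridden in the presence of [GOOSE] on n0, with no single rule ever targeting more than one terminal.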

At this juncture, one might object on the grounds that there are reams of work in morphology indicating that allomorphy is a highly local business, and the kind of non-local interactions just sketched are unavailable, empirically speaking. (Another way of putting this: Embick and others had good empirical reason for proposing their stringent conditions on allomorphy.) My response to that is that those empirical generalizations apparently still await explanation, because the mechanisms put forth to account for them (i.e., to rule out non-local interactions of the kind seen here) are technically unable to do so. In other words: I don’t deny the empirical basis DMers had for proposing these restrictions; I deny that the restrictions proposed get the job done.

It is now time to generalize this treatment, i.e., to show that any account like the one in (6), above – where an exponent associated with one PF-contiguous span competes with and blocks the insertion of an exponent associated with a smaller PF-contiguous span – can be restated in terms of mutually-conditioned allomorphy, with lexical insertion restricted to individual terminals. To the extent that we are able to provide a general recipe of this sort, we will have shown once again that the restriction of insertion to individual terminals does no empirical work.

Let us start again with the state of affairs in (10) (repeated from (2)):

(10) a. Let <X10, …, Xn0> be a series of successive heads, where for every 1 ≤ i ≤ n−1: XiP is the complement of Xi+10.
    b. Let none of <X10, …, Xn-10> have overt specifiers.
(A PF-contiguous span of heads.)

We can then recast any interaction among multiple heads inside this span in terms of mutually-conditioned allomorphy of individual terminals, as follows:

(11) Define a set of features <F1, …, Fn>, where:
    a. For every 1 ≤ i ≤ n: Xi0 is base-generated with a valued [Fi] feature.
    b. For every 1 ≤ i ≤ n and every j ≠ i (1 ≤ j ≤ n): Xi0 is base-generated with an unvalued [Fj] feature.
    c. For every 1 ≤ i ≤ n and every j ≠ i (1 ≤ j ≤ n): Xi0 enters into Agree in Fj with Xj0 (thus acquiring valued [Fj]).

We can now implement mutually-conditioned allomorphy of any head in the span in (10) based on the features of any other head in the same span, up to restrictions on the locality of feature transmission in (11c) (e.g. up to the phase boundaries restricting Agree). As was the case with featuralization, above, it seems natural enough that competition for span-based PF insertion would have to occur within the bounds of a single phase, anyway, so there seems to be no meaningful distinction here, either.
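
On the same simplified modeling (heads as feature dictionaries, Agree as symmetric copying, both my own assumptions), the saturation step in (11) is just:

```python
# Generalized featural saturation as in (11): every head Xi Agrees with every
# other head Xj, ending up with a valued copy of each [Fj]. Allomorphy of any
# head can then be conditioned on any other head's feature while insertion
# remains strictly terminal-by-terminal. Illustrative sketch only.

def saturate(span):
    """span: heads X1..Xn, each a dict holding its own valued [Fi].
    Returns the span with every head carrying every head's features."""
    pooled = {}
    for head in span:
        pooled.update(head)                  # collect every valued [Fi]
    return [dict(pooled) for _ in span]      # each head acquires all of them
```

As noted just below, most of these copies do no work in any given case and can be pruned in practice; the point is only that nothing beyond Agree is needed to make them available.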

While (11b) requires n features on each head in the span, and (11c) requires a number of Agree relations that is on the order of n², in practice many of these will do no work in the translation of span-based competition to competition based on individual terminals. For example, in translating the example in (6) along the lines in (11), any n0-based features copied to Num0 will play no actual role in conditioning any allomorphy, and so in practice they need not exist, and any Agree relations they are involved in need not occur. This will be the case for many of the feature relations generated in principle by (11c). None of this is relevant, however, to our main point, which is that nothing beyond Agree is necessary to recast span-based competition in terms of competition at individual terminals.

It is important to note that there is no sui generis mechanism of mutually-conditioned allomorphy at play here, only Agree and garden variety feature-based contextual allomorphy. Thus, unlike the conclusions in the Featuralization section, this result obtains independently of one’s position on the existence of particular operations like Fission.


We have seen that imposing a restriction on competition and blocking, so that they only take place among different exponents vying for insertion at a single syntactic terminal, does not achieve anything that is not already achieved by restricting competition to PF-contiguous spans within a single phase.

I presented two different recipes for recasting competition and blocking among PF-contiguous spans in terms of competition and blocking at individual terminals only. Eliminating the operation of Fission from the grammar would rule out one of the two recipes, namely, the featuralization one; but it is much less clear how one would rule out the other recipe – which, as noted, does not appeal to any sui generis mechanisms beyond Agree and feature-based contextual allomorphy. One could imagine adding some sort of meta-principle that rules out what we have descriptively characterized as mutually-conditioned allomorphy. But this, as far as I am able to tell, would render the system incapable of capturing alternations like goose-geese (or person-people, or …). It is for this reason that I stated at the beginning of this post that the restriction in question on competition and blocking is either vacuous (on the assumption that Fission and/or mutually-conditioned allomorphy exist) or wrong (if they don’t).

As a side note to all this, banning both Fission and mutually-conditioned allomorphy may not even be sufficient to tear down the equivalence between insertion at terminals only and insertion into spans. As Pavel Caha points out in his thesis (pp. 57‑60), and again here (pp. 7‑9), any system with Fusion and insertion into terminals is also equivalent to a system with insertion into spans.

I see all this as very good news, since I think the view whereby the locus of insertion is a span of contiguous heads has a lot going for it (more on that some other time), and so I’m happy to discover that adopting such a view does not cede any meaningful ground to the EM08 alternative.

Thanks to Pavel Caha, Neil Myler, and Asia Pietraszko for helpful discussion. They are not responsible for the contents of this post.

1. Confusingly enough, the DM literature refers to syntactic terminals as ‘morphemes’. I have translated this back to more sane terminology, here.
2. One might even say that the “added at PF” gambit suggests that when syntacticians talk, DMers nap. (This is a callback to Marantz 1997, and a loving one at that!)