May 24, 2021
 

Artemis Alexiadou & Uli Sauerland have recently put forth a proposal for an alternative view of generative grammar, which they term the Meaning First Approach (MFA). The animating idea behind MFA is that the grammar is fundamentally a “compression” mechanism, which maps between complex (i.e., hierarchical) structures of thought on the one hand, and structures suited to the articulatory & perceptual systems on the other.

Understood in the weakest possible way, this is neither controversial nor innovative; in Chomsky’s “inverted Y” model, grammar is also a device that relates hierarchical representations at the meaning interface (“Cā€‘I”) with articulatory/perceptual representations at the morpho-phonological interface (“SM”). The same is true for the (arguably superior) cousin of the inverted Y model, Bobaljik’s (1995, 2002) “Single-Output Syntax.” But, on my understanding, Alexiadou & Sauerland are making a stronger claim (and thus, one that is innovative), namely, that hierarchical conceptual representations are computationally prior to syntactic ones. I suspect this is the reason why, in the couple of texts they have made available describing MFA, they make a fairly big deal out of late-insertion phenomena (by which I mean, those phenomena that motivate a theory with late insertion) at the morpho-phonological level. If linguistic computation begins from structured meaning, then it would be perfectly natural ā€“ unavoidable, perhaps ā€“ for morpho-phonological content to be inserted at some point later than the start of the derivation. Thus, this kind of evidence in favor of late-insertion seems to be a feather in the MFA cap.

Unfortunately for MFA, the very same arguments that motivate late-insertion of morpho-phonological content can be replicated as it pertains to semantic content. Just as the minimal units of morphological computation must be associated with derived structures assembled by syntax, so must the minimal units of semantic computation be associated with derived structures assembled by syntax.

Before turning to the evidence in question, I’d like to clarify a few other theoretical matters that loom over this discussion, to avoid potential confusion. (Not that I’m suggesting Alexiadou & Sauerland are confused about these matters; it is clear that they are not.) First, we should not lose sight of the fact that this is a discussion about what is computationally prior, not what is psycholinguistically prior. Psycholinguistically, it is fairly obvious that, in acts of language comprehension, some amount of morpho-phonological representation (which is computed on the basis of the perceptual intake) is constructed prior to the construction of its syntactic and semantic counterparts. And in acts of language production, it is arguably the other way around: intended meanings probably precede, in psycholinguistic time, the construction of corresponding syntactic (not to mention morpho-phonological) representations. This is the classic pedagogical challenge when teaching beginners in theoretical linguistics what a competence theory is a theory of: the fact that we draw a derivation that starts from lexical items and churns them through syntax and finally into PF and LF representations does not mean that we find out only at the end of this process what we intended to say, or what we heard (lol).

This brings me to the second clarification. Even on a syntax-first view (like the canonical inverted Y model, or its Single-Output cousin), there is a way to algorithmically start with one of the two outputs of the system ā€“ a PF representation or an LF representation ā€“ and find a derivation that leads to that output. This is in fact what production and comprehension are, on a standard, syntax-first model of grammar. Here’s a restatement of that:

(1) COMPREHENSION:
a. given a PF representation x;
b. find the set D of all syntactic derivations whose PF output is x; and let L be the union of the possible LF outputs of each derivation d āˆˆ D;
c. associate one of the meanings in L with x.

(2) PRODUCTION:
a. given an LF representation y;
b. find at least one syntactic derivation d whose LF output is y;
c. associate the PF output of d with y.
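
Just to make the procedural construal concrete, here’s a toy sketch of (1) and (2) in Python. To be clear, this is my own illustration, not anything from Alexiadou & Sauerland’s texts or from any actual parsing/generation model; Derivation, pf, lf, and choose are hypothetical stand-ins for whatever the real theory delivers.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Derivation:
        """Hypothetical stand-in for a syntactic derivation, reduced to its two outputs."""
        pf: str  # the PF output
        lf: str  # the LF output

    def comprehend(x, derivations, choose):
        """(1): given PF representation x, gather the LF outputs of all
        derivations whose PF output is x, and associate one of them with x."""
        D = [d for d in derivations if d.pf == x]   # (1b): derivations that yield x
        L = [d.lf for d in D]                       # (1b): their possible meanings
        return choose(L)                            # (1c): pick one

    def produce(y, derivations):
        """(2): given LF representation y, find a derivation whose LF output
        is y, and associate its PF output with y."""
        d = next(d for d in derivations if d.lf == y)  # (2b)
        return d.pf                                    # (2c)

    # e.g., an ambiguous string yields multiple candidate meanings:
    ds = [Derivation("bank", "RIVERBANK"), Derivation("bank", "FINANCIAL-INSTITUTION")]
    print(comprehend("bank", ds, choose=min))  # one of the two LFs

Note that nothing in this sketch requires the grammar itself to run “PF-first” or “LF-first”; both procedures consult the same syntax-first space of derivations.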

And that brings me to the third clarification. It is well-established that any derivationally-stated model of grammar can be translated into a representationally-stated one. At worst, one could build a data structure consisting of tuples of structures, where each member of the tuple represents what-in-the-derivational-model-would-have-been an intermediate state of the derivation; one could then translate the derivational theory into a theory of admissible and inadmissible consecutive pairs of members of these tuples. Anything that was an “output filter” in the derivational theory can be a constraint on admissible last-members-of-a-tuple. Finally, stipulate that the “output” of this system is the last member in any well-formed tuple. And voilĆ : a representationally-stated theory that does the same thing.
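
For concreteness, here is the same construction in toy code ā€“ again a sketch of my own, with admissible_step and output_ok as hypothetical placeholders for the theory’s constraints on consecutive tuple members and its former output filters:

    def well_formed(tup, admissible_step, output_ok):
        """A tuple plays the role of what-in-the-derivational-model-would-have-been
        a derivation: every consecutive pair of members must be admissible, and
        the last member must satisfy any former output filters."""
        pairs_ok = all(admissible_step(a, b) for a, b in zip(tup, tup[1:]))
        return pairs_ok and output_ok(tup[-1])

    def output(tup):
        """By stipulation, the 'output' of the system is the last member of any
        well-formed tuple."""
        return tup[-1]

Nothing derivational survives here: both functions are static checks and projections over tuples, which is the sense in which the two formulations do the same thing.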

Now, in a representationally-stated system, there is no content to the notion of “computationally prior”; these are all static, declarative statements about well-formedness. “Then what are we even talking about??” you might ask. Well, I cannot speak for Alexiadou & Sauerland, but I think that those of us who like phrasing our theories derivationally do so in part because derivationally prior stands in correspondence with ontologically prior. Thus, for example, late-insertion of morpho-phonological content (Ć  la Distributed Morphology) is not just a claim situated within a derivational vernacular, whose contents evaporate if we transition to the (mathematically equivalent!) representational parlance. Rather, it is a claim that exponents are ontologically secondary, in that they are associated with derived syntactic structures. Exponents are of course epistemologically prior; we see/hear them,[1] whereas the syntactic structure has to be inferred. But ā€“ and this is the crucial part ā€“ if exponents were ontologically prior, and exponents are pairings of phonological content with bits of syntactic structure (one of the lessons of DM), then bits of syntactic structure would have to exist… independent of syntactic computation as such.[2] This would be quite problematic, and thus the argument in favor of late-insertion is contentful even in a representationally-stated model of grammar. Claims about “late”-insertion are really claims about ontological primacy, with the “late”/“early” lingo serving as a derivationally-stated proxy for the ontological claims.

Okay, so back to MFA: I take the claim to be that hierarchical structures of meaning are ontologically prior to syntax. And if this is the claim, then evidence for late-insertion of semantic content would show that it is wrong.

There’s way too much of that evidence to go through here, and furthermore, most of it was unearthed long before I started educating myself on these matters. First and foremost, Neil Myler’s thesis and subsequent 2016 monograph are, in a sense, one extended (and convincing!) argument for late-insertion of semantic content. Second, as Borer has discussed in detail, the moment we are decompositionalist about our morphosyntax, the semantic indeterminacy of roots entails late-insertion of semantic content. (Harley has recently stressed this, as well.) Third, recall the existence of expressions like in cahoots, newfangled, short shrift, etc.; these are complex expressions that contain sub-constituents devoid of any identifiable meaning (cahoot, fangle, shrift), but these complex expressions in their entirety are nevertheless associated with a meaning. (And don’t forget: “complex expression” means “assembled by syntax.” See also Pesetsky’s 1985 paper, Morphology and Logical Form, which touches on many of the same themes ā€“ and was, in many ways, ahead of its time on these issues.) These are all arguments for late-insertion of semantic content.

But I couldn’t end the post without walking through one example in a little more detail. Even this example doesn’t originally come from my own work, but was instead “gifted” to me by Dan Siddiqi. Consider (3):

(3) I read the shit out of this book.

There’s lots to say about this construction, but I will confine myself to what is strictly necessary here. First, its meaning is roughly that of a verbal intensifier. (I.e., (3) means something like I read this book intently / intensely / comprehensively / enthusiastically.) That is already interesting, given that the material in this “idiom” (the shit out of) seems to occur in a position where it would compose with the book before composing with the verb. But that, in the grand scheme of things, is unremarkable ā€“ at least given the existence of cases like I drank [a quick cup of coffee], where [quick] is also located way too low for its event-modifier interpretation. (A cup of coffee is neither quick nor slow in isolation from the event in which it is involved.)

Second, and more important: [the shit] in (3) is a constituent to the exclusion of [out of this book], as can be seen in the have + small-clause passive example in (4) (something Luke Adamson first pointed out to me).[3]

(4) This book had the shit read out of it.

Third, the verb (read) is not part of the “idiom.” Cf. (5):

(5) I ironed the shit out of these clothes.
(roughly: “I ironed these clothes intently / intensely / comprehensively / enthusiastically.”)

Taken together, this means that the elements of the “idiom” in (3) ā€“ the shit and out of ā€“ are spread over multiple constituents:

(6) I read [ the shit ] [ out of this book ].

Now, if (6) were a transparent representation of the expression’s meaning, one could imagine its hierarchical structure being first assembled in the meaning component, and then mapped to a syntactic structure through whatever transduction MFA might posit. But the whole point is that this is an “idiom” ā€“ which is the name we give a piece of syntax whose structure does not stand in this kind of relation to its meaning. Furthermore, the pieces of this “idiom” are distributed over multiple constituents, which do not themselves form a single, bigger constituent to the exclusion of other material. What we’re looking at here, then, is a case where a semantic unit (roughly, the aforementioned verbal-intensifier meaning) is associated with a piece of derived syntactic structure. Hence, semantic content must be late-inserted, just like morpho-phonological content.

ETA: Let me try to make this even clearer. There is no constituent with which this meaning ā€“ the verbal-intensifier meaning in question ā€“ can be paired. It is paired with a series of nodes in a derived structure. There is no way to capture this in a meaning-first system, since, by hypothesis, there is no way ā€“ computationally prior to syntax ā€“ to state what it is that the meaning in question is paired with in the first place!
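
If it helps to see the problem in miniature, here is a deliberately crude encoding of my own (not a fragment of DM or of anyone’s published analysis): the “entry” for the intensifier meaning is keyed to a configuration of nodes in a derived structure, not to any single terminal or constituent.

    # Toy derived VP: (verb, object-DP, PP), each phrase a tuple of words.
    def is_shit_out_of(vp):
        """True iff the VP realizes the non-constituent configuration
        V [the shit] [out of DP], as in (6)."""
        _, obj, pp = vp
        return obj == ("the", "shit") and pp[:2] == ("out", "of")

    def interpret(vp):
        """Late insertion of semantic content: the verbal-intensifier meaning
        attaches only once this multi-node configuration exists."""
        verb, _, pp = vp
        if is_shit_out_of(vp):
            return f"{verb} {' '.join(pp[2:])} intensely"
        raise NotImplementedError("compositional cases omitted")

    print(interpret(("read", ("the", "shit"), ("out", "of", "this", "book"))))
    # -> 'read this book intensely'

The condition in is_shit_out_of is stated over the derived VP as a whole; there is simply no single sub-object here that a meaning-first system could have assembled “before syntax” and handed over for transduction.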

In light of this evidence ā€“ and all the other evidence cited (but not detailed) above ā€“ it seems that the ontological claims underlying MFA, if I have understood them correctly, are DOA. (Okay, that was obviously over-the-top. I just couldn’t resist rhyming two acronyms. The reasonable and responsible version would have ended with “… seem to be problematic” or something like that. But this is a blog! Rhyming puns should count for something.)

ETA: In case anyone harbors doubts about these conclusions on the grounds that late-insertion of semantic content seems restricted to “open class” meanings or “idiomatic” expressions: (i) it’s not that simple (see Myler’s work, cited above), and (ii) this wouldn’t matter even if it were true (for the reasons discussed towards the end of this post on the semantic indeterminacy of roots).

footnotes:
1. Well, we don’t really. There are no “exponents” in the sensory input. Exponents are abstractions, just like syntactic structures are. But they are ever-so-slightly closer to the percept, which is the point here.
2. If you have only a cursory familiarity with DM, you may be surprised to encounter the claim that the pairing of exponents with bits of structure is characterized here as “a lesson of DM.” The formalism of DM, after all, seems to pair exponents with individual syntactic terminals. But the truth of the matter is that the mechanism of contextual allomorphy, pervasive in DM, renders it equivalent to a system that would pair exponents with structures rather than terminals. (Because the context in contextual allomorphy is, at least some of the time, structural.) See here for further discussion.
3. It is worth noting that the corresponding verbal passive doesn’t seem to be possible: #The shit was read out of this book. But it is a well-known (and ill-understood) fact that different idiomatic expressions allow/resist verbal passive versions to different degrees. And so the fact that this one seems to resist a verbal passive is interesting, but not necessarily relevant to the constituency facts under discussion here. What is relevant is that (4) is clearly possible.