Mar 11, 2021
 

It’s been longer than usual since my last blog post. I’ve been busy with various research- and teaching-related activities, which I love dearly, but which have kept me away from blogging. But I’m back with a doozy, length-wise. So strap in…

(Also, it’s worth pointing out that this post is an extended and somewhat informal version of my WCCFL 39 abstract. So if this stuff interests you, come see my poster (shameless plug…), and tell me all the ways in which it is right, and/or all the ways in which it is wrong!)

·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·

If you follow this blog, you know that I have become very interested in the idea that the fundamental units of the mapping from syntax to semantics, and from syntax to morpho-phonology as well, are (contiguous) sets of syntactic terminals, rather than individual syntactic terminals. Since it will be important to what follows, I’ll point out that on the view I have in mind, there is nothing in the architecture of grammar that forces the sets relevant to the mapping to semantics to align with the sets relevant to morpho-phonology, in the general case. (See here for a representative example.) This is a departure from, e.g., the notion of “Spans” as it is used in some varieties of Nanosyntax.
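To make this concrete, here’s a toy sketch (mine, not anyone’s official formalism; all labels and “meanings” are invented) of what it looks like for the two interfaces to each traffic in sets of terminals, without the two partitions having to align:

```python
# Toy sketch: both interfaces interpret *sets* of syntactic terminals,
# and nothing forces the two partitions of [ROOT, v, T] to coincide.
# All labels and "meanings" here are invented placeholders.

SEM = {
    frozenset({"ROOT", "v"}): "event-concept",  # root+verbalizer: one semantic unit
    frozenset({"T"}): "PAST",                   # tense interpreted on its own
}

PHON = {
    frozenset({"ROOT"}): "stem",                # root spelled out on its own
    frozenset({"v", "T"}): "-ed",               # verbalizer+tense: one portmanteau exponent
}

# The two partitions cross-cut each other: {ROOT,v}|{T} vs. {ROOT}|{v,T}.
```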

As anyone familiar with Montague Grammar and its intellectual descendants (up to and including Heim & Kratzer 1998) knows, those frameworks are primarily concerned with the composition of meaning – that is, capturing the systematicity in the meanings of complex expressions relative to the meanings of their subparts. Now, everyone who has thought about meaning for more than a second acknowledges that not all meaning is compositional. The meaning of /dɔg/, for example, cannot be computed from the meanings of /dɔ/ and /g/, or of /d/ and /ɔg/. This is an exceedingly mundane observation; compositionality must bottom out at something. And so an essential component of any discussion of compositionality is the question of where compositionality stops: what precisely the units are that cannot be interpreted compositionally. In the Heim & Kratzer (1998) textbook, the answer given is the syntactic terminal (cf. their “Terminal Nodes” rule; pp. 43, 48).
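For concreteness, here is a deliberately crude sketch – again mine, with invented stand-in meanings – of the picture the “Terminal Nodes” rule presupposes: a recursive interpretation procedure whose base case is lookup at individual terminals.

```python
# Toy rendition of the terminal-centric picture: composition recurses
# over binary-branching trees, and bottoms out at individual terminals.

LEXICON = {"every": "EVERY", "dog": "DOG"}  # invented stand-in meanings

def interpret(tree):
    if isinstance(tree, str):      # "Terminal Nodes": plain lexical lookup
        return LEXICON[tree]
    left, right = tree             # non-terminal: compose the two daughters
    return ("APPLY", interpret(left), interpret(right))

print(interpret(("every", "dog")))  # ('APPLY', 'EVERY', 'DOG')
```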

What I would like to show in this post is one particular source of evidence that this is the wrong answer.

·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·

Our story begins with Semitic languages. It is often remarked that some roots in Semitic have a downright dazzling array of different interpretations when placed in different nominal and verbal templates. Aronoff (2007) notes the root /k-{b,v}-ʃ/ in Hebrew, which, in combination with different templates, runs the gamut of meanings from ‘pickles’ to ‘roads’ to ‘conquest’:

(1) a. /k-{b,v}-ʃ/ + CaCuC: kvuʃim (‘pickles’)
b. /k-{b,v}-ʃ/ + CCiC: kviʃ (‘road’)
c. /k-{b,v}-ʃ/ + Ci(C)CuC: kibuʃ (‘conquest’)

This way of framing things invites the conclusion that there is something unusual, and thus noteworthy, about roots like /k-{b,v}-ʃ/. But this is actually not the case. Consider the root /x-ʃ-{b,v}/ (etymologically, /ħ-ʃ-b/). This root is supposed to contrast with a root like /k-{b,v}-ʃ/, above, in that the meanings of all nouns and verbs derived from /x-ʃ-{b,v}/ have something to do with cognition or computation:

(2) a. /x-ʃ-{b,v}/ + CaCaC: xaʃav (‘think’)
b. /x-ʃ-{b,v}/ + CiC(C)eC: xiʃev (‘calculate’)
c. /x-ʃ-{b,v}/ + hiCCiC: hixʃiv (‘consider’)

Impressionistically, the examples in (2a-c) may seem better behaved, interpretation-wise, than those in (1a-c). Crucially, however, it is still the case that the meanings in (2a-c) are not predictable from their respective root + template components. Particular verbal templates – like CaCaC, CiC(C)eC, and hiCCiC, in (2) – do have implications for the meanings of the verbs derived from them. But these implications concern voice (e.g. active vs. passive), causativity, reflexivity/reciprocality, and the like (see Kastner 2020 and references therein). Importantly, the examples in (2a-c) are not distinguishable on these grounds. Each of the meanings in (2) could in fact have been associated with either of the other two templates on the list. That is: there is no way (that I am aware of) to predictively determine which of the encyclopedic meanings in (2a-c) would be associated with which of the forms.

On the assumption that (i) consonantal roots are grammatically real entities of the mental grammar of Hebrew speakers, and (ii) all combinatorics that are not exclusive to the phonology or exclusive to the semantics take place in syntax[1] (i.e., no multiplicity of generative engines; Marantz 1997, i.m.a.), it follows that both roots and templates are (separate) syntactic terminals, and that:

(3) Expressions like (1a-c, 2a-c) involve mappings from sets of syntactic terminals to listed meanings.

That’s because combining a root with a template affects both the phonology and the semantics of the resulting expression. Such combinatorics are syntactic by hypothesis, and thus both the root and the template must correspond to syntactic constituents. (Technically, this doesn’t entail that the root and template are themselves terminals; they could each be internally complex bits of syntax. But that would make things even harder for the terminals-are-interpreted view, so let’s assume for now that roots and templates each correspond to a single syntactic terminal.)
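If it helps, here is (3) rendered as a toy data structure – ASCII stand-ins for the Hebrew segments, and no pretense that this is a theory of anything – where listed meanings are keyed by sets of terminals:

```python
# Listed meanings keyed by *sets* of terminals (root + template together),
# since neither member determines the meaning on its own. ASCII stand-ins.

LISTED_MEANINGS = {
    frozenset({"k-b-sh", "CaCuC"}): "pickles",       # (1a)
    frozenset({"k-b-sh", "CCiC"}): "road",           # (1b)
    frozenset({"k-b-sh", "Ci(C)CuC"}): "conquest",   # (1c)
    frozenset({"x-sh-b", "CaCaC"}): "think",         # (2a)
    frozenset({"x-sh-b", "CiC(C)eC"}): "calculate",  # (2b)
    frozenset({"x-sh-b", "hiCCiC"}): "consider",     # (2c)
}

def listed_meaning(terminals):
    """Return the listed meaning of this set of terminals, if there is one."""
    return LISTED_MEANINGS.get(frozenset(terminals))

print(listed_meaning({"x-sh-b", "hiCCiC"}))  # consider
print(listed_meaning({"x-sh-b"}))            # None: the root alone has no listed meaning
```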

·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·

We could “conceal” (3) if we wanted to, by saying that listed meanings are associated with individual syntactic terminals but, e.g., /x-ʃ-{b,v}/ is many-ways polysemous, and its syntactic context (in this case, the verbal template it combines with) disambiguates among these various allosemes. This is essentially the approach taken in Distributed Morphology, as far as I can tell. But there are several reasons to reject such a move.[2]

First, this DM treatment is equivalent, in its expressive power, to the theory outlined at the top of this post. In terms of descriptive adequacy, then, the two tie, and so the DM treatment is just a less perspicuous description of the same thing.
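To see the expressive equivalence, note that either encoding is mechanically recoverable from the other. A toy demonstration, using the same invented stand-ins as above:

```python
# (i) Set-based encoding: meanings keyed by root+template together.
SET_BASED = {
    frozenset({"x-sh-b", "CaCaC"}): "think",
    frozenset({"x-sh-b", "CiC(C)eC"}): "calculate",
    frozenset({"x-sh-b", "hiCCiC"}): "consider",
}

# (ii) DM-style encoding: a "polysemous" root whose context picks the alloseme.
ALLOSEMES = {
    "x-sh-b": {"CaCaC": "think", "CiC(C)eC": "calculate", "hiCCiC": "consider"},
}

# Converting (ii) back into (i) loses nothing:
recovered = {frozenset({root, ctx}): meaning
             for root, table in ALLOSEMES.items()
             for ctx, meaning in table.items()}
assert recovered == SET_BASED  # same facts, different bookkeeping
```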

Second, the DM treatment is less explanatory. Here’s why. As I learned from Heidi Harley (and as I’ve noted elsewhere on this blog), there are cases where things that are unambiguously syntactic constituents have no interpretation by themselves, and only “gain” an interpretation by virtue of occurring in the context of other, very specific syntactic material. I’m talking about elements like cahoots, fangled, shrift, and dribs in expressions like these:

(4) in cahoots, newfangled, short shrift, dribs and drabs

This is crucially different from an “idiom” like kick the bucket, which, alongside its non-compositional interpretation, also has the compositional interpretation involving a foot making contact with a bucket. The relevant elements in (4), on the other hand, lack any identifiable meaning unless they are in the context of the appropriate syntactic material.

It is also important to note that, syntactically speaking, the expressions in (4) are all exceedingly ordinary: in cahoots has the structure of a PP containing a plural indefinite noun phrase (cf. in rows), short shrift has the structure of an English adjectivally modified noun (cf. short film), and so forth. The only interesting thing about the expressions in (4), then, is their respective mappings to semantics – and, in particular, the fact that they contain syntactic parts that lack any meanings of their own.

Which brings us to the following point: from the terminal-centric, DM perspective, there would be nothing odd about a natural language that lacked cases like (4) entirely. Such a language would constitute an entirely unremarkable DMian entity. In contrast, consider the system outlined above, where the fundamental units of the mapping from syntax to semantics are sets of terminals. In such a system, nothing guarantees that for every available syntactic terminal δ, there will exist a mapping from the singleton set containing δ alone to a listed meaning. English, then, just happens to contain no mappings from {√FANGLE} alone, or from {v,√FANGLE}, to any listed meanings. There may perhaps be certain quantitative learning pressures that favor a system with many singleton mappings over one with fewer such mappings; but there is nothing in the architecture of grammar that affords singleton mappings any kind of privileged status. On this view, a language in which there exists a singleton mapping from {δ} to some listed meaning for every δ in the language represents an extreme edge case. It is an exceedingly unlikely natural object. And I’m willing to wager that there is in fact no natural language that is completely devoid of cases like (4). The sets-based rendition (of what are, again, expressively equivalent systems) gives us an explanatory handle on that fact.
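In other words, nothing privileges singleton keys: an entry for a multi-member set can exist without any entry for its individual members. A toy illustration (labels invented, as before):

```python
# Toy listed-meaning store. Multi-member keys can exist without
# corresponding singleton keys; singletons are possible but not special.
LISTED = {
    frozenset({"P_IN", "CAHOOTS"}): "conspiring-together",
    frozenset({"a0", "NEW", "ROOT_FANGLE"}): "newfangled-concept",
}

print(frozenset({"CAHOOTS"}) in LISTED)      # False: 'cahoots' alone means nothing
print(frozenset({"ROOT_FANGLE"}) in LISTED)  # False: no singleton mapping for this root
```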

·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·

Now back to Hebrew. There is absolutely nothing exceptional about the state of affairs shown in (2), as far as Hebrew is concerned. This is essentially the state of affairs for every root in the language that can combine with at least two distinct derivational templates. We can therefore assume, without loss of generality, that this is so even when it comes to those few roots that only ever combine with one derivational template (say, /ʔ-ʃ-l/ + CeCeC = ʔeʃel ‘tamarisk’). That is, even for such roots, we could still assume that the meaning arises from a joint mapping from root+template to a lexical meaning, as we must anyway assume for the cases in (1-2). Essentially, then, every open-class item in Hebrew is an “idiom”, insofar as that term is taken to denote many-to-one mappings from syntactic terminals to lexical meanings.

Nevertheless, there is something fundamentally misleading about consigning these facts to the terminological bin of “idiomaticity”. The term “idiom” is typically understood to indicate a marked departure from the linguistic norm – an expression whose syntax-semantics mapping is unusual. But these many-to-one mappings in a language like Hebrew are not a departure from the norm; they are the norm, at least as far as open-class items are concerned. Under this usage of “idiom”, more or less the entire open-class lexicon of Hebrew would consist of idioms only.

It’s possible, of course, that Semitic languages are “special”, and that in other languages, like English, an expression like dog represents a one-to-one mapping of a syntactic terminal to a lexical meaning. However, given that English n0 is often phonologically null, the phonological spellout of the root √DOG alone would be string-identical to the spellout of this root and n0 together. English would therefore look the same even if it worked exactly like Hebrew does (i.e., if open-class items always involved, at minimum, a joint mapping from root+categorizer to lexical meaning, and never a mapping from the root alone). In fact, we have already seen reason to believe that this is so. The assumption that the fundamental units of the mapping from syntax to semantics are sets of terminals (rather than individual terminals) provides a more explanatory handle on why semantically uninterpretable terminals would exist (see (4) above, and the surrounding discussion). And if sets of terminals are the fundamental units of the mapping, there is no reason not to adopt the working assumption that English is exactly like Hebrew – for example, that the English dog is a mapping from the set {n0, √DOG} to a listed meaning.
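Here is a toy illustration of why English surface strings cannot decide the matter (spellouts and labels invented): if n0 is spelled out as null, then spelling out the root alone and spelling out root+categorizer jointly yield the very same string.

```python
# Hebrew-style analysis: one joint spellout for root+categorizer.
PHON_JOINT = {frozenset({"n0", "ROOT_DOG"}): "dog"}

# Terminal-by-terminal alternative: null exponent for n0.
PHON_PIECEWISE = {frozenset({"ROOT_DOG"}): "dog", frozenset({"n0"}): ""}

joint = PHON_JOINT[frozenset({"n0", "ROOT_DOG"})]
piecewise = PHON_PIECEWISE[frozenset({"ROOT_DOG"})] + PHON_PIECEWISE[frozenset({"n0"})]
print(joint == piecewise)  # True: the two analyses are string-identical
```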

·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·

So far, I have been talking almost exclusively about the interpretation of open-class items. And it is a fact that most contemporary work in formal semantics is concerned with the interpretation of closed-class items. Nevertheless, my understanding is that this focus on closed-class items is a heuristic choice. The idea – which seems exceedingly reasonable to me, for what that’s worth – is that whatever rules and principles are involved in the interpretation of an expression like beauty, and in how it composes semantically with other expressions, will also be involved in the interpretation of things like every and how they compose semantically with other expressions. Except that accounting for the interpretation of beauty will also involve the added challenge of developing a theory of the conceptual encyclopedia. The latter has bedeviled philosophers and linguists for centuries, and there is no sign of this bedeviling abating any time soon.[3]

As noted above, this heuristic choice seems like a very reasonable one. But one should not lose sight of the fact that it is an opportunistic choice: a gamble, predicated on the assumption that what we learn from the semantics of one sub-class of vocabulary will carry over to the other. And more to the point, there is no theoretical principle (that I am aware of) that says that closed-class vocabulary is guaranteed to provide a more direct window into the workings of semantic interpretation than open-class vocabulary is.

By parity of reasoning, then, in those instances where we do manage to learn something from the interpretation of open-class items, those lessons should be taken to be general, as well. And specifically, they should be taken to hold of closed-class vocabulary too, unless and until that is shown to be incorrect.

Here, I have provided you with an argument that context-free interpretation of individual terminals is not something that exists in the domain of open-class items. Not in Hebrew, and maybe not in English, either. The assumption that individual closed-class terminals (say, the determiner every) are ever submitted to semantic interpretation all by themselves is therefore suspect, on the same, widely accepted methodological grounds. In other words: there’s every reason to think Heim & Kratzer’s (1998) “Terminal Nodes” rule never applies.

Footnotes:
1. This one isn’t so much of an “assumption”, actually; unless and until someone comes up with an adequate, non-circular definition of ‘word’ (one that’s not about orthography or phonology, that is), syntax-all-the-way-down is really the only game in town. And, well, if I were you, I wouldn’t hold my breath.
2. As I am reminded by Jeffrey Punske, there is a bit of a bibliographic curio here, in that Alec Marantz made some of the same observations 25 years ago in his paper Cat as a phrasal idiom – and yet his work in the years since has remained wedded to the very same terminal-centrism that this blog post argues against.
3. There has been lots of progress on theories of the distribution of open-class items (like dog and beauty), making use of sparse vector representations, for example. But if you ever feel tempted to conflate the distribution of an item with the conceptual content associated with that item, please grab your friendly neighborhood philosopher-of-language and have them disabuse you of that notion.