Omer

Oct 20, 2019
 

I just got home from Oslo, where I had many really interesting interactions with several linguists. One of them was a conversation with fellow visitor Jonathan Bobaljik (who, it should go without saying, should not be held accountable for any of what I write here). We were talking about the relatively well-known observation that for many alleged “syntax-semantics mapping phenomena,” the expected mappings only go through if the syntax independently allows at least two different configurations. As Jonathan helpfully points out, this is an observation that goes back to Grice, if not Jespersen. But just because an observation is “old,” we shouldn’t overlook the consequences it has for contemporary syn-sem theories. And the consequences are very interesting.

Here’s a prime example. (It’s one which I turn to often, though that doesn’t mean it’s not a good one! That said, if you are truly bored with this test case, see the bottom of this post for a partial list of other empirical domains I could have run the same argument on.) The example involves the Heim-Diesing Mapping Hypothesis: the idea that specificity and/or definiteness are determined by structural height relative to one or more structurally-fixed operators. So, for example, when it comes to Object-Shift and specificity (Diesing 1992, 1997), the idea is that you are interpreted as non-specific if you remain within VP (because there is an existential-closure operator at the VP periphery), and specific if you manage to make it out of VP (and thus, out of the scope of the aforementioned operator).

This would be a truly beautiful example of syntax-semantics mapping, if it, you know, worked. But as Diesing herself already noted (see also Vikner 1997), it only seems to work for those noun phrases that can move out of the VP. So, for example, if you are in a Scandinavian language, and you’re in a clause where the verb cannot undergo head movement, the object is stuck in the VP (viz. Holmberg’s (1986) Generalization). And, magically, the “mapping” part in the Mapping Hypothesis then goes away: you can be interpreted as either specific or non-specific, all while remaining in situ. Diesing’s solution to this was to say that the relevant constraint on movement (the “stuck” part) only holds in overt syntax, and covert movement is exempt from it, so that the object still moves out of the VP at LF (salvaging the “mapping” part). This may seem like a reasonable idea for the verb-movement case (where it has been argued that the nature of the constraint may be related to linearization in the first place; see Fox & Pesetsky 2005, i.a.). But the pattern in question is quite a bit more general. In many languages, only the structurally-highest noun phrase in the VP can undergo object shift. When this is the case, a lower noun phrase in the VP (e.g. the other internal argument in a ditransitive) gets to be specific or non-specific all while remaining in situ. Now, the constraint limiting object shift to the highest nominal in the VP is almost certainly structural, not linear. (E.g. it is operative in Tagalog, where it is dissociable from the actual linear order of the elements involved; see Rackowski 2002, Rackowski & Richards 2005.) And therefore, there is absolutely no reason to assume that it would hold for overt but not covert movement.

Indeed, we can go to a higher level of generalization, and say the following: for a great many cases of contrasts related to alleged syn-sem “mapping” (see the bottom of this post for a partial list), the mapping breaks down when independent factors conspire to make one of the two contrasting structures unavailable. Unless every single one of these independent factors turns out to be PF-related – an exceedingly unlikely eventuality; see above – then the maneuver of exempting “LF movement” from the movement-limiting factors is entirely ad hoc, and amounts to a restatement of the mapping breakdown, not an account of it.

What seems to be going on here is this: when the rest of the grammar happens to make some contrast available (e.g. a given noun phrase can vacate the VP or stay in situ), semantics can pin some meaning contrast on this grammatical contrast (e.g. a specificity contrast). But this is not because semantics is “read off” of the syntactic structure, and it is certainly not because of some semantic operator at the VP periphery. The latter types of explanation fail to account for the fairly general fact that when the rest of the grammar makes the same contrast unavailable, semantic interpretation seems perfectly fine “reading” both options off of one and the same structure.

This may seem like a fairly subtle distinction to be drawing, but I think it’s actually quite important, if what we’re interested in is not just a description of the facts, but an understanding of the causal forces at work. On one view – the Mapping-Hypothesis-style view – the different meanings each arise because of the respective syntactic structures: high object → specific, low object → non-specific, where the arrows don’t just represent correspondences, they represent causation. On the other view, the causation is not so direct: when syntax happens to make a given contrast available (i.e., when it allows at least two variants of the structure), semantics can pin a contrast in meaning to this grammatical contrast. But there is no sense in which each of the different structures is “driving” each of the different meanings, since this would fail to account for how both meanings can arise from just one of these structures, too. Importantly, the facts quite strongly favor the latter view: meaning contrasts are parasitic on syntactic contrasts, not caused by them.

This is bad news on at least one front: one of the charms of, e.g., the Mapping Hypothesis, was that it provided an explanation of why this grammatical contrast (VP-internal noun phrases vs. VP-external ones) got mapped onto this meaning contrast (non-specific vs. specific). The view that seems empirically correct, however, takes a hammer to this explanation. So it would seem that other semantic contrasts could in principle be pinned on the contrast between VP-internal noun phrases and VP-external ones, not just specificity. Wait a minute, that’s actually true! In other languages, it is animacy/non-animacy (rather than specificity/non-specificity) that is pinned on the same grammatical contrast – and good luck deriving this from some kind of “inanimacy closure” operator at the VP periphery. At the same time, it’s also true that not any semantic contrast can be pinned on any grammatical contrast. For one thing, there probably aren’t “flipped” languages where VP-internal noun phrases are interpreted as specific and VP-external ones as non-specific. Furthermore, while an animacy contrast can replace a specificity one in correlating with object shift, the set of things that can do so is far from unbounded. (There might be as few as three options: specificity, definiteness, and animacy. Maybe also pronominality.) So there’s still a major gap in our understanding.

Okay, let’s wrap this up: if you think that grammatical contrasts map onto semantic contrasts because of very strictly compositional semantic interpretation, neutralization patterns are bad news for you. These patterns exist – and to repeat: they’re not actually restricted to Object-Shift/specificity, that’s just my favorite example; do have a look at the list below – and they suggest that causation between syntax and semantics doesn’t flow all that directly. To put it as it is phrased in this post’s title: meaning contrasts are not generated by syntax, they are parasitic on it.

Appendix: some other empirical domains that show the same pattern

  • viewpoint aspect: when outside conditions block the appearance of perfective, the imperfective can be interpreted perfectively (courtesy of Sabine Iatridou)
  • the Definiteness Effect:
    • only holds of those DPs that could have moved to canonical subject positions – where, crucially, this could have is modulated by purely morphosyntactic factors – and not of those DPs that couldn’t have (me, building on the work of Halldór Ármann Sigurðsson)
    • conversely, when the definite article is required for DP-internal reasons, no Definiteness Effect arises (courtesy of Sabine Iatridou)
  • when a relative clause allows both a resumptive pronoun and a gap, the two result in different scope (in particular, the resumptive triggers obligatory reconstruction); but when a resumptive is the only grammatically-allowed option, it is compatible with both a reconstructed reading and non-reconstructed one (cf. Sichel 2014)
  • when a quantificational DP (say, every NP) can occupy two different overt positions, above and below another scope-bearing element, each position tends to be associated with a distinct scopal reading – as is the case, for example, with scrambling in German or Japanese – but when it cannot (as in English), the single available structure can have multiple readings (cf. Bobaljik & Wurmbrand 2012). (We’re used to thinking of this one as a difference “only at PF,” i.e., that the English ambiguity involves Quantifier Raising. I have to say, looking at it against the backdrop of these other patterns, I’m now starting to wonder to what extent that is a necessary assumption: it seems to fit quite nicely into the rest of this picture even without assuming QR.)

Oct 03, 2019
 

Here’s a thing that I’m sure happens to everyone from time to time:

  1. You read or hear about phenomenon X or generalization X or theoretical proposal X.
  2. Time passes.
  3. You happen upon some new data or a new idea, for which X proves relevant.
  4. However, it turns out that you have imperfect recall of X. Unbeknownst to you, what you have in your head is actually some rejiggered version of X – let’s call it X’ – which conveniently-and-suspiciously suits your current theoretical or empirical needs.

Now, if this were the end of it, this would be a story about how you misremembered some X as X’; were briefly under the impression that it was a perfect fit for your current interests; and were then disabused of that notion when it was pointed out to you that the data / generalization / proposal in question was actually X, not X’.

But sometimes, that’s not what happens.

Sometimes, this imprecise recall can actually turn out to have value of its own.

By way of illustration, I want to share three times that this happened to me in recent years, and what I learned from them.

The first story concerns Rackowski & Richards 2005 (henceforth, R&R). This paper is about how long-distance extraction in Tagalog requires the verbs along the movement path to each agree with the XP argument out of which long-distance extraction proceeds. So, for example, in the example below, the matrix verb shows “OBL” agreement, even though the element being relativized is an indirect object (=”DAT”) of its original predicate. That’s because the clause out of which extraction is proceeding (here, the clausal argument of ‘say’) is an “OBL” argument in its own clause.

(Tagalog example; see R&R:586, ex. (51b))

In their paper, R&R propose that this effect arises because:

  • featurally-speaking, CPs are themselves candidates for the relevant kind of movement;
  • this triggers a minimality effect (in particular, an A-over-A type of minimality effect), making the CP node closer to the higher probe than any XP properly contained inside the CP would be;
  • however, agreeing with the CP in its entirety satisfies the Principle of Minimal Compliance (PMC; Richards 1997, 1998, 2001), rendering subsequent probings into this CP exempt from the relevant minimality constraint;
  • then, and only then, can an XP be extracted from inside this CP.
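The derivational logic in these bullet points can be sketched in a few lines of toy code. To be clear, this is my own illustrative paraphrase, not R&R's formalism; the class and function names are invented for the example, and "accessibility" here simply stands in for what a higher probe is permitted to Agree with:

```python
# Toy sketch of the agree-then-extract logic: a probe must first Agree
# with the CP as a whole (the A-over-A/minimality effect), and doing so
# earns a PMC exemption that renders the CP's contents accessible.

class CP:
    def __init__(self, contents):
        self.contents = contents      # XPs properly contained in the CP
        self.agreed_with = False      # has a probe Agreed with the CP itself?

def accessible_goals(cp):
    """What a higher probe can target: the CP node itself is always a
    candidate (and the closest one); its contents become candidates only
    once the CP has been Agreed with (the PMC exemption)."""
    return [cp] + (cp.contents if cp.agreed_with else [])

cp = CP(contents=["DP_indirect_object"])
print("DP_indirect_object" in accessible_goals(cp))  # False: CP blocks it
cp.agreed_with = True                                # probe Agrees with the CP
print("DP_indirect_object" in accessible_goals(cp))  # True: PMC exemption earned
```

Note that, on this way of setting things up, nothing labeled "PIC" appears anywhere; opacity of the unagreed-with CP just falls out of the accessibility calculation, which is exactly the point made in the next paragraph.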

Crucially, on R&R’s story, there is no Phase Impenetrability Condition (PIC) as such. The PIC arises as a consequence of the logic above. (The measure of structural distance used in R&R’s minimality/A-over-A calculation is such that the specifier of CP is already equidistant to CP relative to higher probes, whether CP has been agreed with or not. This means that anything in SpecCP is already in what is effectively an “escape hatch,” irrespective of agreement with the entire CP.)

Fast-forward several years, and I have been asked to write a commentary paper for NLLT, responding to Mark Baker’s (2011) When Agreement is for Number and Gender but not Person, a paper summarizing and extending the SCOPA proposal from his 2008 book. For reasons I won’t bore you with (they are interesting reasons! But you can read about them yourself here), I was making use of (something like) R&R’s proposal. But I reconstructed it imprecisely: I assumed there was a sui generis PIC, but also that agreement with the phase had the effect of “disabling” its phasehood / rendering the PIC irrelevant for that particular phase.

Fast-forward a few more years, and the same Richards is co-authoring a paper with van Urk, about successive-cyclicity in Dinka (Nilo-Saharan). And lo and behold, van Urk & Richards (vU&R) are adopting essentially the same “imprecise” version of R&R that I had mistakenly reconstructed: “… we propose a modification of Rackowski and Richards 2005, in which the need for a syntactic relation between v and the CP from which extraction takes place is independent of phase impenetrability. This allows us to preserve the insight behind Rackowski and Richards’s proposal without jettisoning the traditional view of successive cyclicity, for which Dinka offers such striking evidence” (vU&R 2015:114).

Now, in isolation, there are several possibilities here:

  • This could be a coincidence!
  • Or, it could be that R&R had it right, and both I and vU&R have it wrong.
  • But it could also be that the R&R story was close-but-not-perfect, and the data I was trying to account for (SV‑VS agreement asymmetries in many head-initial languages), as well as the Dinka data that vU&R were trying to account for, both exerted the same subtle force pulling us in the same direction.

If you allow for the possibility that this third option might be correct, then I find it interesting that – at least for me – it manifested itself via the imprecise recall of R&R’s original proposal. That is: staring at the data I was trying to account for, I had reconstructed a version of R&R in my head that was inexact in a very particular way. And that way, while ad hoc in the context of what I was up to at that moment, turned out to have a little more than that going for it: it did useful work in an entirely unrelated empirical domain (successive-cyclic movement in Dinka), as well.

In case of any lingering doubts, let me be clear: none of this excuses imprecision in the final product. (In the 2011 commentary paper, I was explicit about the theory I was assuming. Though I must come clean and point out that I mistakenly attributed it verbatim to R&R…) Instead, it is meant to highlight the fact that sometimes – just sometimes – your head rearranges stuff in subtle but productive ways. If you misremember phenomenon X or generalization X or theoretical proposal X, it can be fruitful to ask yourself:

  • Why do you misremember it as X’?
  • If X was a proposal or a generalization, how does X’ fare in accounting for the original data that X was put forth to capture?
  • What other arguments (besides what you are currently after) can be marshaled in weighing X against X’?

The second story, which is much more recent, concerns the Anaphor Agreement Effect (AAE). In his seminal discussion of the AAE, Rizzi (1990) characterizes the AAE as a constraint on where (reflexive) anaphors can & cannot occur. In the last couple of years (starting from the wonderful LinG1 workshop), I have developed an interest in the AAE. But it took me a couple of years and two polite-but-stern anonymous reviews to finally realize that I had been working under an entirely different assumption than Rizzi about what the AAE even is.

The characterization of the AAE that I had in my head has significant precedents in work by Woolford (1999) and Tucker (2011); but my view is that they didn’t go far enough. As I argue here, there is good reason to think that the AAE is about restricting nontrivial agreement with anaphors (where “nontrivial agreement with XP” means the verb has at least two overtly-distinguishable forms, and the choice between these two forms is governed by the person/number/gender features of XP). And, crucially, I argue that the AAE, properly construed, is essentially mum about the distribution of anaphors. There are still languages where it looks like Rizzi’s distributional constraint holds, of course, and anaphors cannot even occur in the relevant positions (English, Italian, Icelandic). But there are plenty of languages where it doesn’t, and anaphors can occur in the relevant positions so long as nontrivial agreement is avoided (Albanian, Georgian, Basque). Importantly: I know of no theory that can predict which of these two behaviors you’ll get. The form of the anaphor (varying or fixed, simplex or complex) doesn’t adequately predict which kind of language you’ll be, nor do any apparent properties of the language itself (as far as I’m aware). The only statement of the AAE that enjoys any cross-linguistic generality is the one about nontrivial agreement, not the distributional one.

The point of the story, though, is that I already had this characterization of the AAE in my head without having even noticed that (i) it differs from Rizzi’s, and (ii) I can argue in favor of it and against the alternative.

As with the R&R story, this is not meant to excuse imprecision in the finished product. The reviewers had it exactly right in pressing me to clarify and motivate this distinction. But again, it seems that my brain had played a useful trick on me: I had glimpsed some fleeting, peripheral-vision-of-the-mind’s-eye image of the data I was looking at, and my mind had already reformulated the AAE in a way that worked where the original formulation wouldn’t have. This reformulation needed to be interrogated, to reveal that it can be argued for etc.; but without it having happened, it’s entirely possible that I would have just stared at the data, baffled and puzzled.

The third story is the chronologically earliest, and in my view somewhat less interesting, so I’ll try to be brief. It goes like this: my 2009 paper on Basque agreement morphology presupposes without argument that agreement relations (as well as clitic doubling) can fail without “crashing” the derivation, and concentrates on what happens when they do fail. It was only later that I fully realized that most contemporary generative syntacticians didn’t think this presupposition was valid at all, and thought this was something that needed to be argued for. (Thankfully, in this case, the reviewers of the original paper were as oblivious to this under-motivated premise as I was!)

With the help of my grad advisors, as well as Rajesh Bhatt, I eventually realized that my implicit assumptions on this front were not universally shared, and needed to be argued for. From that point on, I sort of “had my antennas up” for data that could help make that case in particular. Soon thereafter, I attended a reading-group presentation by Lauren Clemens about the Agent-Focus construction in K’ichean, the light clicked, and I had a dissertation topic.

Again, imprecision here was anything but an endpoint. In fact, coming to terms with this imprecision and the arguments needed to bridge it turned out, in this case, to be a monograph-sized project. But the start of it was still a mental “autocorrect” which turned out to be in the right general direction.

So there you have it, three stories about imprecise recall that turned out to be rather valuable. Of course, there are many many cases where I’ve recalled something imprecisely and my version was both wrong and useless. My only point here is that’s not always the case. So the next time you find out that, “No, actually, that’s not what SoAndSo says,” take a moment to reflect on the differences between what you remembered and what you “should” have remembered, and what might underlie the difference!

Talk in Oslo

Sep 19, 2019
 

In October 2019, I will be giving a talk titled The Anaphor Agreement Effect: further evidence against binding-as-agreement, at the University of Oslo. See my talks & handouts page for further information.

You can download the handout here.

Published in Glossa: “The Agreement Theta Generalization”

Aug 29, 2019
 

My squib with Maria Polinsky, “The Agreement Theta Generalization,” has been published in Glossa. In this squib, we propose a new generalization concerning the structural relationship between theta assigners and heads showing morpho-phonologically overt agreement, when the two interact with the same argument DP. This structural generalization bears directly on the proper modeling of syntactic agreement, as well as the prospects for reducing other syntactic (and syntacto-semantic) dependencies to the same underlying mechanism. (This work began as Section 7 of the unpublished manuscript “Agreement and semantic concord: a spurious unification,” but has now been expanded into a standalone squib.)

The published version is freely available for download here. (Yay for Open Access!)

If you prefer a pre-print, that is still available here.

bibtex

@article{PolinskyPreminger:2019,
	Author = {Polinsky, Maria and Preminger, Omer},
	Doi = {10.5334/gjgl.936},
	Journal = {Glossa},
	Pages = {102},
	Title = {The {{\em{{A}greement {T}heta {G}eneralization}}}},
	Volume = {4(1)},
	Year = {2019}}

Aug 07, 2019
 

This is a post about listedness: the nature of the idiosyncratic information that is listed in the grammar.

In traditional, lexicalist approaches, the listed atoms were lexical items. A lexical item contained, at minimum, a phonological form, a semantic interpretation, and some syntactic information. The syntactic information included syntactic category, subcategorization and/or c-selection properties, and potentially other stuff too.

However, as all right-thinking linguists now know, lexicalism is wrong. That is because lexicalism is founded on the hypothesis that the minimal unit of idiosyncratic meaning aligns with the minimal unit of listed sound (and that – allowing for potential exceptions for “idioms” – the two in turn align with what can act as a syntactic terminal). This is a substantive, empirically-contentful hypothesis, and it is one that has turned out to be overwhelmingly false. (Illustrative examples will be given later in this post.) Thus, no lexicalism for you!

What replaces the traditional ‘lexicon’, then? One line of thought, the one associated with Distributed Morphology (DM), divides the listed information in the grammar into three lists:

  1. the Narrow Lexicon: contains the list of possible syntactic atoms
  2. the Vocabulary: the list of insertion rules, i.e., pairings of contexts (made up of morphological features) with phonological material; insofar as there is “listed sound” in the traditional DM model, it consists in what’s on the righthand side of these insertion rules – pieces of phonological material that the morphological component makes reference to
  3. the Encyclopedia: the repository of idiosyncratic meaning

There is much, much more to say about all of these, of course; for example:

  • The entries in (2) are normally thought to stand in a specificity‑based relation to one another, so that when a given morphological context is compatible with multiple insertion rules, the most specific among these wins out. This, of course, raises questions about the mathematical nature of this ordering. Is it total? (I.e., are every two insertion rules guaranteed to stand in an asymmetric relation with respect to specificity?) And if not, what happens in the case of ties?
  • Does (1) contain just one ‘root’ object corresponding to what we used to think of as “lexical root” (with the differentiation between different roots emerging as a “negotiation” between (2) and (3)), or are roots already individuated in the syntax (in which case, (1) contains a list of roots)?
  • What is the nature of the items in (3), and what does the context for their insertion look like? Is it reserved for meanings of ‘roots’, or are all meanings (“grammatical” meanings and “lexical” ones) stored in there?
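The specificity competition described in the first bullet can be made concrete with a small sketch. Everything here is a hypothetical illustration (the feature labels and the toy plural entries are invented), and it assumes one particular modeling choice: contexts as feature sets, with "more specific" meaning "properly larger context." On that choice, specificity is indeed only a partial order, and the tie question from the bullet arises immediately:

```python
# Toy model of specificity-ordered Vocabulary Insertion (list (2) above).
# Entries and feature names are illustrative, not claims about English.

def more_specific(a, b):
    """a is more specific than b iff a's context properly contains b's."""
    return b["context"] < a["context"]  # proper-subset test on frozensets

def insert(morphological_context, vocabulary):
    # Collect every insertion rule whose context is satisfied here.
    candidates = [v for v in vocabulary if v["context"] <= morphological_context]
    # Keep only maximally specific candidates: those not outcompeted.
    maximal = [c for c in candidates
               if not any(more_specific(d, c) for d in candidates if d is not c)]
    if len(maximal) > 1:
        # Specificity is only a partial order, so ties are possible;
        # the theory owes us an answer about what happens here.
        raise ValueError(f"tie among {[m['exponent'] for m in maximal]}")
    return maximal[0]["exponent"] if maximal else None

vocabulary = [
    {"context": frozenset(), "exponent": "-s"},               # elsewhere plural
    {"context": frozenset({"pl", "ox"}), "exponent": "-en"},  # more specific
]

print(insert(frozenset({"pl", "ox"}), vocabulary))  # -en (most specific wins)
print(insert(frozenset({"pl"}), vocabulary))        # -s  (elsewhere form)
```

Adding a third entry whose context overlaps with, but neither contains nor is contained in, the {"pl", "ox"} context would produce two incomparable maximal candidates, and the sketch deliberately crashes there: that is precisely the "what happens in the case of ties" question.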

Relatedly, we could ask: what is the fate of a morpheme in this brave, non‑lexicalist world? Traditionally, the morpheme was thought to be the minimal, non‑decomposable pairing between sound and meaning. (E.g. /dɔgz/ – whether or not it was a “word” – was assumed to consist of two morphemes, /dɔg/ and /z/, because these were the units in this expression whose meanings could not be computed from meanings of their parts). If you’ve been following along, you know that this definition of ‘morpheme’ died along with lexicalism itself, since this definition too assumes that units of idiosyncratic meaning align with units of listed sound (more on this below). So is there any useful notion of ‘morpheme’ within this view of the grammar? Terminologically, DM seems to use ‘morpheme’ to refer more or less to syntactic terminal (or to whatever a syntactic terminal has been mapped onto by the time we’re looking at a morphological, rather than syntactic, structure). But this is both somewhat redundant (we already have ‘syntactic terminal’, not to mention membership in list (1), to refer to such entities), and insufficient. As will be shown below, there is a natural entity for the term ‘morpheme’ to refer to even in this non‑lexicalist theory, and for which no other term exists (as far as I can tell).

We’re now at the point where looking at some representative data would be instructive. What follows is pretty much cribbed from various passages in Heidi Harley’s (magnificent) textbook English Words (Blackwell, 2006).

First, consider the following:

  (4) a) horrify, horrible, horrific
      b) terrify, terrible, terrific

Clearly, there are entities – let’s call them ‘roots’ – corresponding to horr(i)- and terr(i)-, whose contribution (in meaning and sound) to the expressions horrify/horrible and terrify/terrible, respectively, is a systematic and predictable one. But while horr(i)- seems to again make the same systematic and predictable contribution in meaning and sound to the expression horrific, this is markedly not so when it comes to the relation between terr(i)- and terrific, meaning‑wise.

If ‘morpheme’ was the minimal unit of sound-meaning correspondence, we’d find ourselves in the intuitively problematic position of saying that terrific is a morpheme (since its meaning is non‑compositional), while horrific is not. This is of course just an example of a much broader phenomenon, namely idiomaticity. Given the aforementioned criterion of “non‑compositional sound-meaning pairing,” we’d also find ourselves having to say that kick the bucket is a ‘morpheme’. The point of the example in (4) is that this is not restricted to what is classified as an ‘idiom’ traditionally (read: in a lexicalist view); the same patterns arise “word-internally” (whatever that means).

So we need a term for an expression having a non‑compositional meaning, and the term ‘morpheme’ is intuitively ill‑suited for this task. The term ‘idiom’ would do fine, I think, except that people also have intuitive unease (for whatever reason) with the idea that /dɔg/ is an ‘idiom’ (even though it, too, is an expression whose meaning is not the result of the composition of meanings of its parts). We can therefore use a different term for “expression having a non‑compositional meaning”: listeme. Thus, we could say that dog, horr(i)-, terr(i)-, terrific, and kick the bucket are all listemes.

This doesn’t yet specify precisely the theoretical nature of a listeme, though. It is a pairing of a(n idiosyncratic) meaning with something; but with what? A piece of syntax? A piece of phonology? In this regard, examples like kick the bucket (as well as terrific, once you’ve accepted the syntax-all-the-way-down view) are instructive: it would be misleading to say that what the idiosyncratic meaning is paired with is a piece of phonology, since this is demonstrably false – cf. kicked the bucket, kicking the bucket, and so on. At least in this example, then, it is obvious that what the idiosyncratic meaning is paired with is a piece of syntax. And since roots are necessarily individuated in the syntax (or whatever you want to call that-portion-of-the-derivation-before-the-PF-LF-split), there is no obstacle to adopting this view uniformly. I.e., dog, horr(i)-, terr(i)-, and terrific are also pairings of idiosyncratic meaning with a piece of syntactic structure; they end up (indirectly) associated with different pieces of phonology by virtue of the respective pieces of syntax including different roots, which are, in turn, associated with different pieces of phonology (more on this below).

What is the relation between listemes and morphemes? As the examples above show, listemes can sometimes be (what we’d intuitively classify as) morphemes. This is the case for /dɔg/, plural /z/, horr(i)-, and terr(i)-. But this is not the case for terrific and kick the bucket. The reason for this should, by this point, be evident: if the units of idiosyncratic meaning do not systematically align with the units of listed sound (the founding observation of non‑lexicalism), it is empirically impossible for listemes (units of idiosyncratic meaning) to systematically align with the units of listed sound. And the latter is what a morpheme really is.

Now, if you were raised in the traditional, Saussurian view of ‘morpheme’ as the minimal sound-meaning pairing (as I myself was), you might be asking yourself right about now something along the lines of, “What on earth does it mean for a unit of sound to be ‘listed’ if the criterion for listedness is not meaning-related?!”

Before giving an answer, I’d like to show some data (again, courtesy of Harley) showing that the move in this direction is mandated not only on the conceptual grounds just outlined, but on empirical grounds as well:

  (5) a) complete, completion
      b) compete, *competion (cf. competition)
  (6) in cahoots

Consider: what is the status of the element cahoot in (6), and of the ‑ti (or ‑it) piece that differentiates *competion from competition? These are pieces of sound that are not associated with any meaning. (Obviously, the expression in cahoots has a meaning; but cahoot does not seem to have a meaning outside of this context. The exact same thing seems to hold of the relation between this ‑ti/‑it piece and competition, with the possible exception that this ‑ti/‑it piece may not be a root, while cahoot almost certainly is.) This seems to underscore empirically the point that was made a moment ago on conceptual grounds: there is such a thing as ‘a piece of listed phonology’, where its idiosyncrasy is not determined in the traditional, lexicalist way (=having a meaning that is not composed of the meanings of its parts).

A related point can be made on the basis of data like the following:

  (7) a) receive, deceive, conceive, perceive
      b) reception, deception, conception, perception

Obviously, there is a piece of listed sound that undergoes the ‑ceive/‑cept alternation; we don’t want to treat the pattern in (7a‑b) as a coincidence. But there is no meaning associated with that piece of listed sound itself. (The same is true for several tri‑consonantal roots in Semitic, as Aronoff 2007 and Harley 2014 discuss.) So ‑ceive/‑cept cannot be a sound-meaning correspondence; what is it, then?

Here is an attempt at a set of working definitions. First:

  (8) Let the morphological terms of an expression be the parts of its phonological content that can productively participate in other complex expressions.

In an expression like /dɔgz/, the morphological terms are /dɔg/ and /z/. In an expression like competition, the morphological terms are compet-, ‑it, and ‑ion. (I’m being non-committal about the precise locus of morphological boundaries in the competition example, since all that’s relevant here is that there are these three units, not where they begin and end with perfect phonological precision.) Expressions like /dɔg/ or /ðə/ (the) are single morphological terms.

An immediate methodological question now arises, which is: how can we know that, e.g., /dɔg/ is not in fact composed of smaller morphological terms – say /dɔ/ and /g/ – without making reference to meaning? What I’d like to stress is that even if this is methodologically impossible (i.e., meaning is an indispensable heuristic in determining the decomposition of an expression into morphological terms), this cannot be the ontological content of morphological termhood. The reasons for this have already been given – see the discussion of in cahoots, competition, and ‑ceive/‑cept, above. These examples also show that meaning is not the only tool in the methodological toolbox for determining what’s a morphological term. The reason we know that cahoot and ‑ti/‑it are morphological terms is because, when we’re done peeling off the things that have been identified as morphological terms using a meaning-based methodology, what remains must – by (8) – be a morphological term as well.
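The residue-based reasoning in the last sentence can be sketched procedurally. In this toy illustration (the segmentation and the inventory of already-identified terms are assumptions for expository purposes, not analyses), we peel off the meaning-identified terms and classify whatever remains as a term by (8):

```python
# Toy sketch of the "peeling off" methodology: terms identified on
# meaning-based grounds are stripped from the edges of the expression,
# and the residue is itself classified as a morphological term by (8).
# The segmentation below is illustrative, not a worked-out analysis.

KNOWN_TERMS = {"compet", "ion"}  # identified via a meaning-based methodology

def residue_terms(expression, known):
    peeled, rest = [], expression
    changed = True
    while changed:
        changed = False
        for t in sorted(known, key=len, reverse=True):  # longest first
            if rest.startswith(t):
                peeled.append(t)
                rest = rest[len(t):]
                changed = True
            elif rest.endswith(t):
                peeled.append(t)
                rest = rest[:len(rest) - len(t)]
                changed = True
    return peeled, rest

peeled, residue = residue_terms("competition", KNOWN_TERMS)
print(peeled, residue)  # ['compet', 'ion'] it
```

The same logic run on in cahoots, after peeling off in and the plural ‑s, would leave cahoot as the residue.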

We can now give a definition of morpheme based on the above:

  (9) A morpheme is any expression that does not have morphological terms smaller than itself.

A morpheme, in contrast to a listeme, can be thought of as a piece of phonology (see (8)). And a vocabulary item (see (2)) can now be thought of as an association between a morphosyntactic context and a… morpheme. (The earlier discussion, involving ordering-by-specificity, applies equally here.)

We can now complete something of a parallelism between “spellout” to PF and to LF. Here’s what I mean:

  (10) a. the mapping of syntax to PF involves the insertion of morphemes based on the available vocabulary items, which are pairings between a morphosyntactic context and a morpheme.
       b. the mapping of syntax to LF involves the insertion of ≪X≫s based on the available listemes, which are pairings between a morphosyntactic context and an ≪X≫.
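The parallelism in (10) can be made concrete with a small sketch: one and the same context-sensitive, specificity-ordered insertion mechanism serves both branches. Everything below – the feature bundles, the exponents, and the semantic-term labels – is invented for illustration, not a claim about any particular language:

```python
# Sketch of the symmetric picture in (10): the same specificity-ordered,
# context-sensitive insertion mechanism runs on both the PF and LF branches.
# Feature bundles, exponents, and semantic-term labels are all invented.

def insert(features, inventory):
    """Pick the most specific entry whose context is satisfied by `features`."""
    matches = [(ctx, out) for ctx, out in inventory
               if all(features.get(k) == v for k, v in ctx.items())]
    # ordering-by-specificity: the entry matching the most features wins
    ctx, out = max(matches, key=lambda m: len(m[0]))
    return out

# (10a): vocabulary items pair a morphosyntactic context with a morpheme
VOCABULARY = [
    ({"cat": "N", "num": "pl"}, "z"),
    ({}, ""),  # elsewhere item
]

# (10b): listemes pair a morphosyntactic context with a semantic term
LISTEMES = [
    ({"cat": "N", "num": "pl"}, "PLURAL"),
    ({}, "IDENT"),  # elsewhere item
]

node = {"cat": "N", "num": "pl"}
pf = insert(node, VOCABULARY)  # phonology inserted in a morphosyntactic context
lf = insert(node, LISTEMES)    # meaning inserted in the same kind of context
print(pf, lf)
```

The design point is that `insert` is written once: nothing in the mechanism cares whether the output side is a morpheme or a semantic term, which is exactly the symmetry at issue.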

What are these ≪X≫s, then? Obviously, they would have to be something like the available semantic terms – the inventory of semantic primitives that the LF-side insertion rules can pair with morphosyntactic contexts. It is not the point of the current post to adjudicate the issue of semantic terms. Suffice it to say that there is a relatively common impression among outsiders that much of formal semantics lacks anything in the way of a restrictive metatheory, and so the theory of semantic terms is not where it should be. But it’s not like nobody’s thinking about this issue, and since I declared this to be outside the purview of the current post, you – dear reader – have full license to assume that the last few lines of prose are simply false, and there is a fully worked out restrictive metatheory of semantic terms out there that I just happen to not know about. (But do tell me about it in the comments!) The point is, (10a‑b) tells you exactly where such a theory would fit into the wider theory of grammar.

Whether we do or don’t currently have a theory of these ≪X≫s, I want to stress that we have arrived at the picture in (10) independent of that question. And so I’d like to close by pointing out something else about (10): it is, as far as I can tell, a fully symmetric conception of spellout to PF and LF. That is: meanings are inserted in a context-sensitive manner (where the context consists of some morphosyntax), and phonology is inserted in a context-sensitive manner (where the context consists of some morphosyntax). I think this parallelism, if it is indeed theoretically and empirically tenable, is a good thing.

Finally, while I used DM as a jumping-off point for my discussion of non‑lexicalist theories, I don’t think much of what I said here ends up depending on the choice between DM and, say, Nanosyntax. In fact, as far as I can tell, Nanosyntax has bitten this parallelism bullet (cf. (10a‑b)) from the get-go, and so, at least in that respect, has this part of the picture exactly right (in contrast to portions of the DM canon).

I’d like to thank Asia Pietraszko for help in the writing of this post.

New version of paper “The Anaphor Agreement Effect: further evidence against binding-as-agreement”

Jul 24, 2019
 

I’ve posted a new version of my paper “The Anaphor Agreement Effect: further evidence against binding-as-agreement.” It is somewhat unusual for a ‘new version’, in that the paper has been completely rewritten, top to bottom, following some feedback from reviewers! You can read more about this project on my research page.

The paper can be downloaded here.

(Backup link in case lingbuzz is down: here.)

Jun 08, 2019
 

There’s been a fair amount of generative linguistics work over the past 15 years or so that identifies itself as “morphosemantics.” There are several reasons why I don’t think morphosemantics is a coherent notion. In this post, I’d like to detail some of these reasons. You’ve probably heard ~1.5 of them before, though, so if that’s the case feel free to skip ahead as needed.

The first reason is conceptual. As already discussed on this blog, there is – definitionally – no direct “line of communication” between morphology and semantics. Syntactic structure is ‘interpreted’ (=morphologized) by the morphological component, and syntactic structure is ‘interpreted’ (=interpreted) by the semantic component. But the grain-size of the former mapping (syntax→morphology) doesn’t even align, in the general case, with the grain-size of the latter mapping (syntax→semantics). In other words, morphemes don’t have interpretations, and meanings don’t have a spellout. (Well, maybe some morphemes have meanings, and some meanings have spellouts; but it’s not generally true.) Talking about morphosemantics is therefore tantamount to syntax denialism. There is no “morphosemantics”; only morphosyntaxsemantics.

The second reason is empirical. In the one domain I know something about – phi‑features – it is manifestly the case that syntax involves a different representation & computation than morphology and semantics do.

The third reason, though it might be less decisive than the previous two, is potentially more interesting (and may find traction with a slightly broader audience). It basically amounts to this: I struggle to think of many empirical domains where the fundamental entities of morphology line up with the fundamental entities of syntax which line up with the fundamental entities of semantics. (Or, if you prefer, you can run this all in the other direction.) In a recent talk in Tromsø, at the much-blogged-about Thirty Million Theories of Features workshop, I recycled some observations that I originally collected for a talk at the Brussels Conference on Generative Linguistics (BCGL) in late 2017. These observations involve cases where semantics lines up imperfectly with syntax, which lines up imperfectly with morphology. They are, by and large, very familiar and mundane cases. But maybe that’s the point: people have gotten so used to the existence of these cases that they no longer internalize what these cases can teach us. Here’s the first one:

Another way to phrase what (4) is saying: there is neither a necessary nor a sufficient morphological condition for ‘verbhood’, nor is there a necessary or a sufficient semantic condition for ‘verbhood’. And this is fine! No one takes this as evidence that there is no such thing as a ‘verb’, or as some colossal failure of the theories of syntax and/or morphology and/or semantics. This is exactly what you’d expect, in fact, if the three were different modules.

And that’s the real lesson here. Of course there are significant portions of the relevant mappings that are systematic – what else would you expect from a system that, at the end of the day, is learnable and learned? – but the idea that reliable mappings are intrinsic to the very nature of the system is plainly false.

There’s plenty more where that came from, of course:

(The content of the footnote on ‘Agent role’ is: “It is likely that there is a syntactic correlate of Agenthood, of course (e.g. base-generation in [Spec,vP]). But tellingly: the latter, syntactic property has no consistent morphological correlate – not even in ergative languages (see Baker & Bobaljik 2017 for a recent review).”)

And more:

And more:


As I said in the handout from which these snippets are taken: the mapping between morphology, syntax, and semantics is certainly not random. If nothing else, the system in its entirety has to be learnable, after all. But the claim that it is a reliable, transparent mapping is clearly false. It amounts to taking what is a worthwhile methodological heuristic and elevating it, artificially and incorrectly, to the status of grammatical principle. That would be like seeing Newtonian physics and concluding, “Oh, gosh, I guess all objects really are points in a frictionless vacuum!”


The consequences of all this for the notion of “morphosemantics” should be clear: if the mapping from morphology to syntax to semantics is not a reliable, transparent mapping in the first place, there can be no “morphosemantics.” There’s the (imperfect) mapping of particular syntactic structures to morphology; and there is the (imperfect) mapping of potentially overlapping (but seldom identical) syntactic structures to semantics. Could you imagine anyone talking about the “morphosemantics” of open-class event predication? Well, I don’t think there’s any reason to believe that other empirical domains would be any different. And judging by the state of affairs with respect to phi‑features (see above), that doubt seems justified.

Apr 30, 2019
 

A while ago, I posted the following on facebook:

“the morning star” is to “the evening star” as
“my analysis resorts to expletive pro” is to “my analysis is wrong”

To illustrate what I had in mind, compare the following two claims – which, as far as I can tell, are extensionally equivalent:

  1. the EPP is not universal
  2. the EPP is universal and it can be satisfied with a null expletive

Since writing that facebook post, people have occasionally been pointing me to various attempts at arguments in favor of null expletives, i.e., arguments that the equivalence above does not hold. In this post, I’d like to discuss one such argument and why, in my opinion, it doesn’t work.

The argument comes from Sheehan’s 2010 “‘Free’ inversion in Romance and the Null Subject Parameter” (see also Sheehan 2016, “Subjects, null subjects, and expletives”). [Ed. note: Michelle Sheehan informs me that she herself no longer believes in the account I’m addressing in this post, either, and her newer thoughts on the matter – apparently, without recourse to null expletives! – can be found here: https://ling.auf.net/lingbuzz/004063.] The argument hinges on the Definiteness Effect arising in a particular set of environments in certain Romance languages (Spanish, Italian, and European Portuguese). The environments in question are VS word orders where V is unaccusative and an overt locative PP follows the subject. Here is a demonstration of the effect from European Portuguese (p. 242):

Why must one go to these V‑S‑PP word orders to see the effect? Because, as Sheehan notes, the languages in question have Locative Inversion. And in Locative Inversion, there is no Definiteness Effect associated with the post-verbal subject (I’m demonstrating here with English, but the facts are the same in Romance):

(97) Into the room walked a professor / the professor / Chris.

If null subject languages allow a null locative, then VS orders without an overt locative will be ambiguous between, on the one hand, run-of-the-mill VS, and, on the other hand, Locative Inversion (viz. PP‑V‑S) in which the locative just happens to be null. Thus, because these VS sentences have at least one parse (the Locative Inversion one) in which no Definiteness Effect is expected, they indeed do not give rise to the effect (pp. 239–240; see also Pinto 1997, “Licensing and interpretation of inverted subjects in Italian”):

(Note also the obligatory locative/directional interpretation of VS with a definite post-verbal subject; compare (18b′) vs. (18b′′).)

What does all this have to do with null expletives? Well, following Pinto (1997), Sheehan takes the manifestation of the Definiteness Effect in these cases to indicate the presence of an expletive pronoun, which, given that it is nowhere to be found in the string, must be null.

At the risk of sounding like a broken record, this is just incorrect; the Definiteness Effect has nothing to do with expletives (overt or otherwise), and everything to do with subjects remaining low. Even in a language with overt expletives, whose occurrence gives rise to the Definiteness Effect as expected, the effect arises just the same if something else (other than the expletive) occupies the canonical subject position. Compare these Icelandic sentences:

With these:

(Data from Vangsnes 2002, “Icelandic Expletive Constructions and the Distribution of Subject Types.”)

So what does the Romance data show us? Here’s what it looks like to me: if you can ensure that the post-verbal subject is indeed low (as the V‑S‑PP word order allows you to do), you get the Definiteness Effect (see (27), above). If you cannot – i.e., if the data has at least one parse where the subject is not low – you don’t get the effect. And that is the case with a string that is ambiguous between run-of-the-mill VS and Locative Inversion (PP‑V‑S) with a null locative: it has at least one parse, the Locative Inversion one, where the subject is not low.

Note that I take Locative Inversion to involve the subject raising to its canonical “pre-verbal” position, followed by raising of the verb to an even higher position, with the locative situated in a left-peripheral position. The raising of the subject to canonical “pre-verbal” position could be optional in null-subject Romance languages; it wouldn’t change anything here. And finally, note that this high-subject analysis of Locative Inversion is not circularly motivated by my desire to undermine the argument for null expletives; it is required independently – given the fact that the Definiteness Effect has nothing to do with expletives – to explain why subjects in Locative Inversion constructions do not exhibit the Definiteness Effect.

Now, how does the subject get to stay low in, e.g., V‑S‑PP sentences? Here are two proposals (stop me if you’ve heard this before…):

  1. the EPP is not universal
  2. the EPP is universal and it can be satisfied with a null expletive

And since expletives have nothing to do with the Definiteness Effect, option 1 and option 2 are extensionally equivalent. As far as I can see, then, there’s no argument for null expletives here (… or anywhere else).