
Mar 11, 2021
 

It’s been longer than usual since my last blog post. I’ve been busy with various research- and teaching-related activities, which I love dearly, but which have kept me away from blogging. But I’m back with a doozy, length-wise. So strap in…

(Also, it’s worth pointing out that this post is an extended and somewhat informal version of my WCCFL 39 abstract. So if this stuff interests you, come see my poster (shameless plug…), and tell me all the ways in which it is right, and/or all the ways in which it is wrong!)

·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·

If you follow this blog, you know that I have become very interested in the idea that the fundamental units of the mapping from syntax to semantics, and from syntax to morpho-phonology as well, are (contiguous) sets of syntactic terminals, rather than individual syntactic terminals. Since it will be important to what follows, I’ll point out that on the view I have in mind, there is nothing in the architecture of grammar that forces the sets relevant to the mapping to semantics to align with the sets relevant to morpho-phonology, in the general case. (See here for a representative example.) This is a departure from, e.g., the notion of “Spans” as it is used in some varieties of Nanosyntax.

As anyone familiar with Montague Grammar and its intellectual descendants (up to and including Heim & Kratzer 1998) knows, those frameworks are primarily concerned with the composition of meaning – that is, capturing the systematicity in the meanings of complex expressions relative to the meanings of their subparts. Now, everyone who has thought about meaning for more than a second acknowledges that not all meaning is compositional. The meaning of /dɔg/, for example, cannot be computed from the meanings of /dɔ/ and /g/, or of /d/ and /ɔg/. This is an exceedingly mundane observation; compositionality must bottom out at something. And so an essential component of any discussion of compositionality is the question of where compositionality stops: what are the smallest units that cannot be interpreted compositionally? In the Heim & Kratzer (1998) textbook, the answer given is the syntactic terminal (cf. their “Terminal Nodes” rule; p. 43, 48).

What I would like to show in this post is one particular source of evidence that this is the wrong answer.

·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·

Our story begins with Semitic languages. It is often remarked that some roots in Semitic have a downright dazzling array of different interpretations when placed in different nominal and verbal templates. Aronoff (2007) notes the root /k‑{b,v}‑ʃ/ in Hebrew, which, in combination with different templates, runs the gamut of meanings from ‘pickles’ to ‘road’ to ‘conquest’:

(1) a. /k‑{b,v}‑ʃ/ + CaCuC: kvuʃim (‘pickles’)
b. /k‑{b,v}‑ʃ/ + CCiC: kviʃ (‘road’)
c. /k‑{b,v}‑ʃ/ + Ci(C)CuC: kibuʃ (‘conquest’)

This way of framing things invites the conclusion that there is something unusual, and thus noteworthy, about roots like /k‑{b,v}‑ʃ/. But this is actually not the case. Consider the root /x‑ʃ‑{b,v}/ (etymologically, /ħ‑ʃ‑b/). This root is supposed to contrast with a root like /k‑{b,v}‑ʃ/, above, in that the meanings of all nouns and verbs derived from /x‑ʃ‑{b,v}/ have something to do with cognition or computation:

(2) a. /x‑ʃ‑{b,v}/ + CaCaC: xaʃav (‘think’)
b. /x‑ʃ‑{b,v}/ + CiC(C)eC: xiʃev (‘calculate’)
c. /x‑ʃ‑{b,v}/ + hiCCiC: hixʃiv (‘consider’)

Impressionistically, the examples in (2a‑c) may seem better behaved, in terms of their interpretations, than (1a‑c) are. Crucially, however, it is still the case that the meanings in (2a‑c) are not predictable from their respective root + template components. Particular verbal templates – like CaCaC, CiC(C)eC, and hiCCiC, in (2) – do have implications for the meanings of the verbs derived from them. But these implications concern voice (e.g. active vs. passive), causativity, and the like (see Kastner 2020 and references therein). Importantly, the examples in (2a‑c) are not distinguishable on these grounds. Each of the meanings in (2) could in fact have been associated with either of the other two templates on the list. That is: there is no way (that I am aware of) to predictively determine which of the encyclopedic meanings in (2a‑c) would be associated with which of the forms.

On the assumption that (i) consonantal roots are real entities in the mental grammar of Hebrew speakers, and (ii) all combinatorics that are not exclusive to the phonology or exclusive to the semantics take place in syntax (i.e., no multiplicity of generative engines; Marantz 1997, i.m.a.), it follows that both roots and templates are (separate) syntactic terminals, and that:

(3) Expressions like (1a‑c, 2a‑c) involve mappings from sets of syntactic terminals to listed meanings.

That’s because combining a root with a template affects both the phonology and the semantics of the resulting expression. Such combinatorics are syntactic by hypothesis, and thus both the root and the template must correspond to syntactic constituents. (Technically, this doesn’t entail that the root and template are themselves terminals; they could each be internally-complex bits of syntax. But that would make it even harder on the terminals-are-interpreted view, so let’s assume for now that roots and templates each correspond to a single syntactic terminal.)
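
To make (3) concrete, here is a minimal sketch (my own toy illustration, not anyone's formal proposal) of a lexicon whose keys are sets of terminals, here a root plus a template, rather than individual terminals. The labels, the ASCII rendering of the Hebrew segments, and the bare-bones glosses are all simplifying assumptions made for the sake of the example.

```python
# Toy lexicon keyed by sets of syntactic terminals (frozensets), not by
# individual terminals. Labels like "ROOT:k-b-S" and "TMPL:CaCuC" are ad-hoc
# stand-ins for the root and template terminals in (1)-(2).
LISTED_MEANINGS = {
    # Root /k-{b,v}-S/ from (1): unrelated meanings per root+template set.
    frozenset({"ROOT:k-b-S", "TMPL:CaCuC"}):    "pickles",
    frozenset({"ROOT:k-b-S", "TMPL:CCiC"}):     "road",
    frozenset({"ROOT:k-b-S", "TMPL:Ci(C)CuC"}): "conquest",
    # Root /x-S-{b,v}/ from (2): thematically related, but still listed per
    # root+template set; nothing predicts which meaning goes with which template.
    frozenset({"ROOT:x-S-b", "TMPL:CaCaC"}):    "think",
    frozenset({"ROOT:x-S-b", "TMPL:CiC(C)eC"}): "calculate",
    frozenset({"ROOT:x-S-b", "TMPL:hiCCiC"}):   "consider",
}

def listed_meaning(terminals):
    """Return the listed meaning for a set of terminals, or None if nothing is listed."""
    return LISTED_MEANINGS.get(frozenset(terminals))

# The root alone has no listed meaning; each root+template set does.
assert listed_meaning({"ROOT:x-S-b"}) is None
assert listed_meaning({"ROOT:x-S-b", "TMPL:CaCaC"}) == "think"
assert listed_meaning({"ROOT:k-b-S", "TMPL:CCiC"}) == "road"
```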

·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·

We could “conceal” (3) if we wanted to, by saying that listed meanings are associated with individual syntactic terminals but, e.g., /x‑ʃ‑{b,v}/ is many-ways polysemous, and its syntactic context (in this case, the verbal template it combines with) disambiguates among these various allosemes. This is essentially the approach taken in Distributed Morphology, as far as I can tell. But there are several reasons to reject such a move.[1]

[1] As I am reminded by Jeffrey Punske, there is a bit of a bibliographic curio here, in that Alec Marantz made some of the same observations 25 years ago in his paper Cat as a phrasal idiom – and yet his work in the years since has remained wedded to the very same terminal-centrism that this blog post argues against.

First, this DM treatment is equivalent, in its expressive power, to the theory outlined at the top of this post. In terms of descriptive adequacy, then, the two tie, and so the DM treatment is just a less perspicuous description of the same thing.

Second, the DM treatment is less explanatory. Here’s why. As I learned from Heidi Harley (and as I’ve noted elsewhere on this blog), there are cases where things that are unambiguously syntactic constituents have no interpretation by themselves, and only “gain” an interpretation by virtue of occurring in the context of other, very specific syntactic material. I’m talking about the underlined material in expressions like these:

(4) in _cahoots_, new_fangled_, short _shrift_, _dribs_ and _drabs_

This is crucially different from an “idiom” like kick the bucket, which, alongside its non-compositional interpretation, also has the interpretation where it’s about a foot making contact with a bucket. The underlined elements in (4), on the other hand, lack any identifiable meaning unless they are in the context of the appropriate syntactic material.

It is also important to note that, syntactically speaking, the expressions in (4) are all exceedingly ordinary: in cahoots has the structure of a PP containing a plural indefinite noun phrase (cf. in rows), short shrift has the structure of an English adjectivally-modified noun (cf. short film), and so forth. The only interesting thing about the expressions in (4), then, is their respective mappings to semantics. And in particular, that they contain syntactic parts that lack any meanings of their own.

Which brings us to the following point: from the terminal-centric, DM perspective, there would be nothing odd about a natural language that lacked cases like (4) entirely. Such a language would constitute an entirely unremarkable DMian entity. In contrast, consider the system outlined above, where the fundamental units of the mapping from syntax to semantics are sets of terminals. In such a system, nothing guarantees that for every available syntactic terminal δ, there will exist a mapping from the singleton set containing δ alone to a listed meaning. English, then, just happens to contain no mappings from {√FANGLE} alone, or from {v,√FANGLE}, to any listed meanings. There may perhaps be certain quantitative learning pressures that favor a system with many singleton mappings over one with fewer such mappings; but there is nothing in the architecture of grammar that affords singleton mappings any kind of privileged status. On this view, a language in which there exists a singleton mapping from {δ} to some listed meaning for every δ in the language represents an extreme edge case. It is an exceedingly unlikely natural object. And I’m willing to wager that there is in fact no natural language that is completely devoid of cases like (4). The sets-based rendition (of what are, again, expressively equivalent systems) gives us an explanatory handle on that fact.

·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·

Now back to Hebrew. There is absolutely nothing exceptional about the state of affairs shown in (2), as far as Hebrew is concerned. This is essentially the state of affairs for every root in the language that can combine with at least two distinct derivational templates. We can therefore assume, without loss of generality, that this is so even when it comes to those few roots that only ever combine with one derivational template (say, /ʔ-ʃ-l/ + CeCeC = ʔeʃel ‘tamarisk’). That is, even for such roots, we could still assume that the meaning arises from a joint mapping from root+template to a lexical meaning, as we must anyway assume for the cases in (1‑2). Essentially, then, every open-class item in Hebrew is an “idiom”, insofar as that term is taken to denote many-to-one mappings from syntactic terminals to lexical meanings.

Nevertheless, there is something fundamentally misleading about consigning these facts to the terminological bin of “idiomaticity”. The term “idiom” is typically understood to indicate a marked departure from the linguistic norm – an expression whose syntax-semantics mapping is unusual. But these many-to-one mappings in a language like Hebrew are not a departure from the norm; they are the norm, at least as far as open-class items are concerned. Under this usage of “idiom”, more or less the entire open-class lexicon of Hebrew would consist of idioms only.

It’s possible, of course, that Semitic languages are “special”, and that in other languages, like English, an expression like dog represents a one-to-one mapping of a syntactic terminal to a lexical meaning. However, given that English n0 is often phonologically null, the phonological spellout of the root √DOG alone would be string-identical to the spellout of this root and n0 together. English would therefore look the same even if it worked exactly like Hebrew does (i.e., if open-class items always involved, at minimum, a joint mapping from root+categorizer to lexical meaning, and never a mapping from the root alone). In fact we have already seen reason to believe that this is so. The assumption that the fundamental units of the mapping from syntax to semantics are sets of terminals (rather than individual terminals) provides a more explanatory handle on why semantically uninterpretable terminals would exist (see (4) above, and the surrounding discussion). And if sets of terminals are the fundamental unit of mapping, there is no reason not to adopt the working assumption that English is exactly like Hebrew – for example, that the English dog is a mapping from the set {n0,√DOG} to a listed meaning.
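
To see concretely why English surface forms are uninformative on this point, here is another toy sketch of my own (the labels, and the assumption that n0 spells out as silence, are simplifications made for illustration): a Hebrew-like lexicon that lists a meaning only for the set {n0,√DOG}, and a terminal-centric lexicon that lists a meaning for {√DOG} alone, both interpret the structure, and they spell out identically, so the pronounced string cannot distinguish them.

```python
# Toy illustration: because n0 spells out as null in English, a "Hebrew-like"
# lexicon (joint mapping from {n0, ROOT:DOG}) and a terminal-centric lexicon
# (singleton mapping from {ROOT:DOG}) yield the same surface string.
# All labels and the null spellout of n0 are assumptions for concreteness.
SPELLOUT = {
    "ROOT:DOG": "dog",
    "n0": "",  # phonologically null nominalizer
}

def spell_out(terminals):
    """Concatenate the exponents of the given terminals (in the order supplied)."""
    return "".join(SPELLOUT[t] for t in terminals)

hebrew_like      = {frozenset({"n0", "ROOT:DOG"}): "listed meaning: DOG"}
terminal_centric = {frozenset({"ROOT:DOG"}):       "listed meaning: DOG"}

def interpret(structure_terminals, lexicon):
    """Return a listed meaning for any listed subset of the structure's terminals."""
    for key, meaning in lexicon.items():
        if key <= frozenset(structure_terminals):
            return meaning
    return None

structure = ["ROOT:DOG", "n0"]  # the noun 'dog': root plus nominalizer

# Both lexicons assign the structure a meaning, but they differ in which set
# is the unit of interpretation; the pronounced string is "dog" either way.
assert interpret(structure, hebrew_like) == interpret(structure, terminal_centric)
assert spell_out(structure) == "dog"
```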

·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·   ·

So far, I have been talking almost exclusively about the interpretation of open-class items. And it is a fact that most contemporary work in formal semantics is concerned with the interpretation of closed-class items. Nevertheless, my understanding is that this focus on closed-class items is a heuristic choice. The idea – which seems exceedingly reasonable to me, for what that’s worth – is that whatever rules and principles are involved in the interpretation of an expression like beauty, and in how it composes semantically with other expressions, will also be involved in the interpretation of things like every and how they compose semantically with other expressions. Except that accounting for the interpretation of beauty will also involve the added challenge of developing a theory of the conceptual encyclopedia. The latter has bedeviled philosophers and linguists for centuries, and there is no sign of this bedeviling abating any time soon.[2]

[2] There has been lots of progress on theories of the distribution of open-class items (like dog and beauty), making use of sparse vector representations, for example. But if you ever feel tempted to conflate the distribution of an item with the conceptual content associated with that item, please grab your friendly neighborhood philosopher-of-language and have them disabuse you of that notion.

As noted above, this heuristic choice seems like a very reasonable one. But one should not lose sight of the fact that it is a choice of opportunity: it is a gamble, predicated on the assumption that what we learn from the semantics of one sub-class of vocabulary will apply to the other. And more to the point, there is no theoretical principle (that I am aware of) that says that closed-class vocabulary is guaranteed to provide a more direct window into the workings of semantic interpretation than open-class vocabulary is.

By parity of reasoning, then, in those instances where we do manage to learn something from the interpretation of open-class items, those lessons should be taken to be general, as well. And specifically, they should be taken to hold of closed-class vocabulary too, unless and until that is shown to be incorrect.

Here, I have provided you with an argument that context-free interpretation of individual terminals is not something that exists in the domain of open-class items. Not in Hebrew, and maybe not in English, either. The assumption that individual closed-class terminals (say, the determiner every) are ever submitted to semantic interpretation all by themselves is therefore suspect, on the same, widely-accepted methodological grounds. In other words: there’s every reason to think Heim & Kratzer’s (1998) “Terminal Nodes” rule never applies.

Paulina Lyskawa defends!

Jan 15, 2021
 

I am proud to announce that my student Paulina Lyskawa (co-advised with Maria Polinsky) has defended her PhD thesis!

The thesis, Coordination without grammar-internal feature resolution, presents an extended argument that so-called “resolved” agreement with coordinations (in those cases where agreement doesn’t target just the closest conjunct) is actually not a grammatical phenomenon at all (!). In particular, Paulina argues that when the agreement controller is a coordination, the grammar successfully links the finite verb with the coordination, but is unable to generate an actual agreeing form or feature-set for the finite verb to bear, and resources completely external to the grammar are recruited to fill the void. Under certain circumstances, this can give rise to the appearance of a systematic grammatical mechanism. But in other cases, it gives rise to: (i) inter- and intra-speaker variability, as well as speaker uncertainty and even ineffability, in people’s judgments concerning the appropriate agreement form to use with a given coordination; and (ii) a variety of strategies – some of which are form-based, others of which are meaning-based, and yet others of which are purely a matter of social convention – all of which are thrown “into the breach,” so to speak.

(UPDATE: The thesis is now available on lingbuzz.)

Congrats, Paulina!

Dec 26, 2020
 

I was reading some comments by Dan Milway about Chomsky’s recent UCLA lectures, and I realized something I hadn’t noticed before: committing oneself to the brand of minimalism that Chomsky has been preaching lately means committing oneself to a fairly strong version of the Sapir-Whorf Hypothesis.

Here’s why. Consider Chomsky’s “Strong Minimalist Thesis” (SMT), which states that the properties of natural-language syntax can be derived entirely from Merge, interface conditions (about which, see below), and so-called “third factors” (e.g. properties of efficient computation). In particular, the only part of this that is linguistically proprietary, from a cognitive standpoint, is Merge. As Dan points out at the end of his note, this actually entails that there cannot be any substance-based conditions on the application of any syntactic operations – well, of the one syntactic operation. If there were a feature that made Merge apply or not apply (in a way that wasn’t wholly reducible to Sensory-Motor or Conceptual-Intentional considerations), that feature would ipso facto be a linguistically-proprietary entity. And the SMT entails that there can be no such entities.

Now consider the issue of cross-linguistic variation in general, and syntactic variation in particular. Needless to say, if the only linguistically-proprietary element of natural language is Merge, then that doesn’t leave a lot of room for linguistically-proprietary variation. As pointed out ad nauseam by many, Merge is something of an all-or-nothing proposition. There isn’t really anything about Merge that is a candidate for varying cross-linguistically. And so the SMT commits one to a version of the Borer Conjecture, whereby all cross-linguistic variation is variation in the content of lexical items, and a very particular version at that: since there are (by hypothesis) no syntactically potent features, all variation must be located in interface-visible properties of lexical items. That is: properties that either the Sensory-Motor system or the Conceptual-Intentional system (or both) care about.

So let’s grab ourselves a nice example of cross-linguistic variation that looks syntactic: in Kaqchikel, the subject of a transitive clause cannot be targeted for wh-interrogation, relativization, or focalization. In English, it can. How could this variation arise, given the SMT and all that it entails? Well, there could certainly be differences between English and Kaqchikel in the contents of various lexical items, and in particular, the contents of functional vocabulary like wh-elements, interrogative complementizers, and functional elements in the verb phrase, to name a few. But to have any effect on the respective languages, these differences would by hypothesis have to be differences that the Sensory-Motor and/or Conceptual-Intentional systems cared about. Now, if you’ve ever done fieldwork on Kaqchikel, you know that the Sensory-Motor systems of speakers have no problem with sentences in which, e.g., the subject of a transitive clause has been focalized. That’s because, by and large, speakers are perfectly able to use these systems to say the offending sentences, before immediately commenting that those sentences are “wrong.” (Granted, there are of course speakers who refuse to even say the offending sentences. So for the sake of uniformity, let’s run our argument only on the sub-community of speakers who are willing to say these sentences and only then comment on their wrongness.) I can already imagine some people who are reading this rushing to say something like, “Just because they can say the relevant sentences doesn’t mean there’s nothing wrong with those sentences from the perspective of the Sensory-Motor system.” But that kind of retort would be specious; the only way to evaluate the SMT is to take Chomsky at his word and then see what that entails. And since he says “Sensory-Motor system,” I think the only way to proceed is to assume that what he means by that is Sensory Motor system. Indeed, if the idea is that nothing outside Merge is linguistically proprietary, he certainly can’t mean, by “Sensory-Motor system,” anything that is about language in particular. And so, the fact that speakers can say the sentences in question means that, ipso facto, they have no Sensory-Motor problems with those sentences.

And so what we’re left with is the Conceptual-Intentional system. Epistemologically speaking, we have much less direct access to what’s going on there. So for all we know, it may indeed be true that the Kaqchikel sentences in question (involving, e.g., focalization of the subject of a transitive clause) are bad for reasons having to do with this system. But here, again, it is important that we take Chomsky at his word: the Conceptual-Intentional system is not “LF” or “semantics” or anything linguistic in nature; it is, well, the system of concepts and intentions. And so, by way of elimination, we have arrived at the conclusion that the difference between sentence (1) and its ill-formed Kaqchikel counterpart is a difference located in the system of concepts and intentions.

(1) It was the dog who saw the child.

This does not (yet) amount to the claim that the Conceptual-Intentional system of Kaqchikel speakers is different from that of their English-speaking counterparts. The respective systems can be functionally identical, with the relevant difference lying only in the Conceptually-Intentionally potent part of the relevant lexical items (wh-elements, complementizers, and so on) in English vs. in Kaqchikel.

But it does amount to the claim that either the Conceptual-Intentional systems of English speakers and Kaqchikel speakers differ, or else sentences like (1) express different Conceptual-Intentional content than their Kaqchikel counterparts. Since the former plainly amounts to the Sapir-Whorf Hypothesis, let us choose the latter for now. This would mean that English speakers are able to construct Conceptual-Intentional content that their Kaqchikel-speaking counterparts are unable to construct. While the Sapir-Whorf hypothesis comes in many guises and varying strengths, I think most people would recognize the claim that speakers of one language can construct Conceptual-Intentional content that speakers of another language are categorically unable to construct as a claim that is itself decidedly Sapir-Whorfian. Remember, this is not the claim that speakers of one language can construct “LFs” that speakers of another language cannot construct, nor is it the claim that speakers of one language have lexical items that speakers of another language do not have. This is a claim about the ability (or inability) of speakers to construct a sentence that picks out a particular bit of language-external content, concepts and intentions that live wholly outside the linguistic system. A difference in the ability to pick out such content would be a quintessentially Sapir-Whorfian thing.

Now, regular readers of this blog are no doubt aware of my opinion of the Strong Minimalist Thesis as well as my opinion of the Sapir-Whorf Hypothesis. But you don’t need to agree with me on either of those things to appreciate that the two are linked in the fashion just described. Like it or not, if you buy into the SMT, you’ve bought into (a nontrivial version of) the Sapir-Whorf Hypothesis.

W-NYI 2021

Dec 15, 2020
 

After the success of V‑NYI 2020, John F. Bailyn and the spectacular NYI crew are reprising their efforts with a winter edition: W‑NYI 2021!

Asia Pietraszko and I will again be teaching Words and other things: what do you need to list in your head?

You can find the course description on my Teaching & Advising page.

· · · · · · · · · · · · · · · · · · · ·

[Image: W‑NYI 2021 advertisement poster]

Oct 28, 2020
 

From time to time, the term “ecological validity” is thrown around in connection with linguistic research. And you’d think I’d be calloused by now, but no: I’m astounded anew every time someone treats this as something that’s self-evidently desirable (and not, say, as anathema to how most science works).

The term “ecological validity”, which I think has its origins in experimental psychology and sociology, is used in linguistic research as an informal assessment of how well the experimental conditions in a given study reflect the conditions and factors involved in real-world, day-to-day language use. (And before we get too far in: acceptability judgments, including introspective ones, are very much an instance of robust, reliable experimentation.)

Now, if your scientific question is something about how language is used in real-life situations, then by all means, “ecological validity” might be something you should think about.

But suppose what you’re after is the structure of human language. That is, suppose you’re treating human language as a naturally-occurring phenomenon, and you’re interested in uncovering its inner workings. Reason dictates that you should probably steer as far away from “ecological validity” as you possibly can! When some naturally-occurring phenomenon is thought to be a massive interaction effect of many, many independent and interdependent factors, the way sciences typically approach things is by creating highly artificial experimental setups – sometimes strictly thought-experimental, other times carried out – in the hopes of isolating one (or at least a relatively small number) of these many factors. Ask yourself: could you imagine a critique of the Large Hadron Collider on the grounds that the conditions inside it are not “ecologically valid”?

And here’s the thing: linguistic behavior is self-evidently a massive interaction effect, involving working memory, attention, motivation, fatigue, etc. etc. This makes physical phenomena like Brownian motion (wherein one can’t predict the motion of an individual particle) – or, to cite one of Chomsky’s favorite examples, the paths of individual leaves blowing in the wind – look positively simple by comparison. It’s beyond me why anyone would seek to confront this undifferentiated mess head-on.

More concretely: we have every reason to suspect that humans throw all their cognitive resources (or at least those that they can spare in the moment) at whatever task they’re currently faced with. The task of using language is no exception. E.g. do we have a capacity for rote memorization? We sure do! (Once upon a time we used it to memorize phone numbers. Remember that??) Why not make use of it, in those circumstances where rote-memorization can be fruitfully applied to language?[1] But since rote-learning is not a linguistic capacity per se, it follows that research into the structure of language itself needs to abstract away from it. So there you go: in real language-use situations, you can probably lean on rote-learned information to some extent. Therefore, research into the structure of language needs to be “ecologically invalid” in at least this sense. (E.g. by using jabberwocky items, or unlikely-to-be-encountered-before combinations of more familiar items.) And rote-memorization is of course but one example of the many ways that “ecological validity” would undermine research into the structure of human language.

[1] This is why, as I never tire of telling my students, “One rote-learned construction does not a head-final language make.”

And so, the next time someone tells you something like, “That sentence is not the kind of thing anyone would ever say in regular speech!”, you should proudly respond, “Thank you! I too think this is a well-designed stimulus for testing what I’m after.”

Slides: “On the atoms of linguistic computation”

Oct 14, 2020
 

I’ve posted the slides for a guest seminar I gave recently as part of the More Advanced Syntax graduate course at MIT.

These slides represent my latest thinking (as of Oct 2020, anyway) about the question of how syntax interfaces with morpho-phonology and with semantics.

For those of you who are well-versed in some of these questions and are in a rush, here’s the tl;dr version: it’s “Nanosyntax-style spanning meets the ‘three lists’ architecture of Distributed Morphology.” But it’s not some arbitrary mix-and-match of these two pieces of grammatical architecture. Arguments are provided that this is actually the right way to proceed.

Relevant background reading: https://omer.lingsite.org/blogpost-architecture-and-blocking-revisited/

New paper: “Taxonomies of case and ontologies of case”

Sep 23, 2020
 

I’ve posted a pre-print of a paper of mine that’s set to appear in an edited volume. The paper is titled Taxonomies of case and ontologies of case. It is a theoretical review paper of sorts, and it has several intertwined goals:

  1. To show what a system of configurational case assignment would look like when formulated in current syntactic terms (rather than the GB terms in which it was originally proposed, e.g. in Marantz’s 1991 paper).
  2. To show that, given (1), the proposal in Baker’s (2015) book to add case-assignment-under-phi-agreement to a configurational case system is empirically vacuous. Everything it can account for can also be accounted for under a purely configurational system as construed in (1), with no appeal whatsoever to phi-features within the theory of case.
  3. To argue that the system in (1) is therefore sufficient to account for case, cross-linguistically. It is also necessary, in the sense that theories with no dependent-case component are unable to serve as general theories of case.
  4. To remind ourselves that one cannot argue against (3) by, e.g., presenting a language in which the-case-pretheoretically-identified-as-‘accusative’ doesn’t conform to the predictions of dependent case. That would only work if descriptive labels like ‘accusative’ were guaranteed to carve out a natural class of grammatical phenomena, but there is no reason to believe that they do.

The paper can be downloaded here.

(Backup link in case lingbuzz is down: here.)

Jul 25, 2020
 

This is not so much a blog post as it is a collection of things that I think deserve your attention. As you will see, it is quite a self-serving list, in that several of these works provide evidence in favor of claims that I have also been arguing for. But hey, it’s my blog, right? 😊

  1. Pavel Rudnev has a paper set to appear in Glossa arguing against approaches to anaphoric binding in terms of phi-Agree, and in favor of an encapsulation-based account of the Anaphor Agreement Effect, of the kind I have argued for as well. (More converging evidence, with a twist, comes from the work of Rafael Abramovitz on the AAE in Koryak.)
  2. Recent work by Susi Wurmbrand & Magdalena Lohninger on clausal complementation, showing (among other things) that the semantics of clausal complements cannot be read directly off the syntax. Instead, the syntax of a given language will determine which complementation options a given verb in that language will have (subject to an implicational hierarchy that Wurmbrand & Lohninger uncover, but, importantly, underdetermined by the semantics). The semantics then has to map the possible readings of a given complement onto what these syntactically-prescribed structural possibilities happen to be. As readers of this blog know, this is entirely in line with what we find in other empirical domains. My slogan for this has been: “Meaning contrasts are not generated by syntax, they are parasitic on the contrasts syntax happens to make available.” (Not so pithy, I know. But still, this flies in the face of standard wisdom in the Montagovian tradition, so I think it’s worth hammering this point home.)
  3. Pavel Rudnev again! This time, in a paper that’s already available for “early view” in Linguistic Inquiry. The paper provides an argument based on agreement in Avar in favor of restricting phi-agreement to Downward Agree (a.k.a. Upward Valuation; Diercks, Koppen & Putnam 2019, as well as various papers of mine, some of them co-authored with Maria Polinsky).