Paulina Lyskawa defends!

Posted on 01/15/2021

I am proud to announce that my student Paulina Lyskawa (co-advised with Maria Polinsky) has defended her PhD thesis!

The thesis, Coordination without grammar-internal feature resolution, presents an extended argument that so-called “resolved” agreement with coordinations (in those cases where agreement doesn’t target just the closest conjunct) is actually not a grammatical phenomenon at all (!). In particular, Paulina argues that when the agreement controller is a coordination, the grammar successfully links the finite verb with the coordination, but is unable to generate an actual agreeing form or feature-set for the finite verb to bear, and resources completely external to the grammar are recruited to fill the void. Under certain circumstances, this can give rise to the appearance of a systematic grammatical mechanism. But in other cases, it gives rise to: (i) inter- and intra-speaker variability, as well as speaker uncertainty and even ineffability, in people’s judgments concerning the appropriate agreement form to use with a given coordination; and (ii) a variety of strategies – some of which are form-based, others of which are meaning-based, and still others of which are purely a matter of social convention – all thrown “into the breach,” so to speak.

I hope Paulina makes the thesis available on lingbuzz once she has filed it. In the meantime, if you are interested, please get in touch with her!

Congrats, Paulina!

Dec 26, 2020

I was reading some comments by Dan Milway about Chomsky’s recent UCLA lectures, and I realized something I hadn’t noticed before: committing oneself to the brand of minimalism that Chomsky has been preaching lately means committing oneself to a fairly strong version of the Sapir-Whorf Hypothesis.

Here’s why. Consider Chomsky’s “Strong Minimalist Thesis” (SMT), which states that the properties of natural-language syntax can be derived entirely from Merge, interface conditions (about which, see below), and so-called “third factors” (e.g. properties of efficient computation). In particular, the only part of this that is linguistically proprietary, from a cognitive standpoint, is Merge. As Dan points out at the end of his note, this actually entails that there cannot be any substance-based conditions on the application of any syntactic operations – well, of the one syntactic operation. If there were a feature that made Merge apply or not apply (in a way that wasn’t wholly reducible to Sensory-Motor or Conceptual-Intentional considerations), that feature would ipso facto be a linguistically-proprietary entity. And the SMT entails that there can be no such entities.

Now consider the issue of cross-linguistic variation in general, and syntactic variation in particular. Needless to say, if the only linguistically-proprietary element of natural language is Merge, then that doesn’t leave a lot of room for linguistically-proprietary variation. As pointed out ad nauseam by many, Merge is something of an all-or-nothing proposition. There isn’t really anything about Merge that is a candidate for varying cross-linguistically. And so the SMT commits one to a version of the Borer Conjecture, whereby all cross-linguistic variation is variation in the content of lexical items, and a very particular version at that: since there are (by hypothesis) no syntactically potent features, all variation must be located in interface-visible properties of lexical items. That is: properties that either the Sensory-Motor system or the Conceptual-Intentional system (or both) care about.

So let’s grab ourselves a nice example of cross-linguistic variation that looks syntactic: in Kaqchikel, the subject of a transitive clause cannot be targeted for wh-interrogation, relativization, or focalization. In English, it can. How could this variation arise, given the SMT and all that it entails? Well, there could certainly be differences between English and Kaqchikel in the contents of various lexical items, and in particular, the contents of functional vocabulary like wh-elements, interrogative complementizers, and functional elements in the verb phrase, to name a few. But to have any effect on the respective languages, these differences would have to be differences that the Sensory-Motor systems and/or the Conceptual-Intentional systems cared about. Now, if you’ve ever done fieldwork on Kaqchikel, you know that the Sensory-Motor systems of speakers have no problem with sentences in which, e.g., the subject of a transitive clause has been focalized. That’s because, by and large, speakers are perfectly able to use these systems to say the offending sentences, before immediately commenting that those sentences are “wrong.” (Granted, there are of course speakers who refuse to even say the offending sentences. So for the sake of uniformity, let’s run our argument only on the sub-community of speakers who are willing to say these sentences and only then comment on their wrongness.) I can already imagine some people who are reading this rushing to say something like, “Just because they can say the relevant sentences doesn’t mean there’s nothing wrong with those sentences from the perspective of the Sensory-Motor system.” But that kind of retort would be specious; the only way to evaluate the SMT is to take Chomsky at his word and then see what that entails. And since he says “Sensory-Motor system,” I think the only way to proceed is to assume that what he means by that is the Sensory-Motor system.
Indeed, if the idea is that nothing outside Merge is linguistically proprietary, he certainly can’t mean, by “Sensory-Motor system,” anything that is about language in particular. And so, the fact that speakers can say the sentences in question means that, ipso facto, they have no Sensory-Motor problems with those sentences.

And so what we’re left with is the Conceptual-Intentional system. Epistemologically speaking, we have much less direct access to what’s going on there. So for all we know, it may indeed be true that the Kaqchikel sentences in question (involving, e.g., focalization of the subject of a transitive clause) are bad for reasons having to do with this system. But here, again, it is important that we take Chomsky at his word: the Conceptual-Intentional system is not “LF” or “semantics” or anything linguistic in nature; it is, well, the system of concepts and intentions. And so, by way of elimination, we have arrived at the conclusion that the difference between sentence (1) and its ill-formed Kaqchikel counterpart is a difference located in the system of concepts and intentions.

(1) It was the dog who saw the child.

This does not (yet) amount to the claim that the Conceptual-Intentional system of Kaqchikel speakers is different from that of their English-speaking counterparts. The respective systems can be functionally identical, with the relevant difference lying only in the Conceptually-Intentionally potent part of the relevant lexical items (wh-elements, complementizers, and so on) in English vs. in Kaqchikel.

But it does amount to the claim that either the Conceptual-Intentional systems of English speakers and Kaqchikel speakers differ, or else sentences like (1) express different Conceptual-Intentional content than their Kaqchikel counterparts. Since the former plainly amounts to the Sapir-Whorf Hypothesis, let us choose the latter for now. This would mean that English speakers are able to construct Conceptual-Intentional content that their Kaqchikel-speaking counterparts are unable to construct. While the Sapir-Whorf hypothesis comes in many guises and varying strengths, I think most people would recognize the claim that speakers of one language can construct Conceptual-Intentional content that speakers of another language are categorically unable to construct as a claim that is itself decidedly Sapir-Whorfian. Remember, this is not the claim that speakers of one language can construct “LFs” that speakers of another language cannot construct, nor is it the claim that speakers of one language have lexical items that speakers of another language do not have. This is a claim about the ability (or inability) of speakers to construct a sentence that picks out a particular bit of language-external content, concepts and intentions that live wholly outside the linguistic system. A difference in the ability to pick out such content would be a quintessentially Sapir-Whorfian thing.

Now, regular readers of this blog are no doubt aware of my opinion of the Strong Minimalist Thesis as well as my opinion of the Sapir-Whorf Hypothesis. But you don’t need to agree with me on either of those things to appreciate that the two are linked in the fashion just described. Like it or not, if you buy into the SMT, you’ve bought into (a nontrivial version of) the Sapir-Whorf Hypothesis.

W-NYI 2021

Posted on 12/15/2020

After the success of V‑NYI 2020, John F. Bailyn and the spectacular NYI crew are reprising their efforts with a winter edition: W‑NYI 2021!

Asia Pietraszko and I will again be teaching Words and other things: what do you need to list in your head?

You can find the course description on my Teaching & Advising page.

· · · · · · · · · · · · · · · · · · · ·

W‑NYI 2021 advertisement poster

Oct 28, 2020

From time to time, the term “ecological validity” is thrown around in connection with linguistic research. And you’d think I’d be calloused by now, but no: I’m astounded anew every time someone treats this as something that’s self-evidently desirable (and not, say, as anathema to how most science works).

The term “ecological validity”, which I think has its origins in experimental psychology and sociology, is used in linguistic research as an informal assessment of how well the experimental conditions in a given study reflect the conditions and factors involved in real-world, day-to-day language use. (And before we get too far in: acceptability judgments, including introspective ones, are very much an instance of robust, reliable experimentation.)

Now, if your scientific question is something about how language is used in real-life situations, then by all means, “ecological validity” might be something you should think about.

But suppose what you’re after is the structure of human language. That is, suppose you’re treating human language as a naturally-occurring phenomenon, and you’re interested in uncovering its inner workings. Reason dictates that you should probably steer as far away from “ecological validity” as you possibly can! When some naturally-occurring phenomenon is thought to be a massive interaction effect of many, many independent and interdependent factors, the way sciences typically approach things is by creating highly artificial experimental setups – sometimes strictly thought-experimental, other times carried out – in the hopes of isolating one (or at least a relatively small number) of these many factors. Ask yourself: could you imagine a critique of the Large Hadron Collider on the grounds that the conditions inside it are not “ecologically valid”?

And here’s the thing: linguistic behavior is self-evidently a massive interaction effect, involving working memory, attention, motivation, fatigue, etc. etc. This makes physical phenomena like Brownian motion (wherein one can’t predict the motion of an individual particle) – or, to cite one of Chomsky’s favorite examples, the paths of individual leaves blowing in the wind – look positively simple by comparison. It’s beyond me why anyone would seek to confront this undifferentiated mess head-on.

More concretely: we have every reason to suspect that humans throw all their cognitive resources (or at least those that they can spare in the moment) at whatever task they’re currently faced with. The task of using language is no exception. E.g. do we have a capacity for rote memorization? We sure do! (Once upon a time we used it to memorize phone numbers. Remember that??) Why not make use of it, in those circumstances where rote memorization can be fruitfully applied to language? (This is why, as I never tire of telling my students, “One rote-learned construction does not a head-final language make.”) But since rote learning is not a linguistic capacity per se, it follows that research into the structure of language itself needs to abstract away from it. So there you go: in real language-use situations, you can probably lean on rote-learned information to some extent. Therefore, research into the structure of language needs to be “ecologically invalid” in at least this sense – e.g. by using jabberwocky items, or unlikely-to-be-encountered-before combinations of more familiar items. And rote memorization is of course but one example of the many ways in which “ecological validity” would undermine research into the structure of human language.

And so, the next time someone tells you something like, “That sentence is not the kind of thing anyone would ever say in regular speech!”, you should proudly respond, “Thank you! I too think these are well-designed stimuli for testing what I’m after.”

Slides: “On the atoms of linguistic computation”

Posted on 10/14/2020

I’ve posted the slides for a guest seminar I gave recently as part of the More Advanced Syntax graduate course at MIT.

These slides represent my latest thinking (as of Oct 2020, anyway) about the question of how syntax interfaces with morpho-phonology and with semantics.

For those of you who are well-versed in some of these questions and are in a rush, here’s the tl;dr version: it’s “Nanosyntax-style spanning meets the ‘three lists’ architecture of Distributed Morphology.” But it’s not some arbitrary mix-and-match of these two pieces of grammatical architecture. Arguments are provided that this is actually the right way to proceed.

Relevant background reading:

New paper: “Taxonomies of case and ontologies of case”

Posted on 09/23/2020

I’ve posted a pre-print of a paper of mine that’s set to appear in an edited volume. The paper is titled Taxonomies of case and ontologies of case. It is a theoretical review paper of sorts, and it has several intertwined goals:

  1. To show what a system of configurational case assignment would look like when formulated in current syntactic terms (rather than the GB terms in which it was originally proposed, e.g. in Marantz’s 1991 paper).
  2. To show that given (1), the proposal in Baker’s (2015) book, to add case-assignment-under-phi-agreement to a configurational case system, is an empirically vacuous one. Everything it can account for can also be accounted for under a purely configurational system as construed in (1), with no appeal whatsoever to phi-features within the theory of case.
  3. To argue that the system in (1) is therefore sufficient to account for case, cross-linguistically. It is also necessary, in the sense that theories with no dependent-case component are unable to serve as general theories of case.
  4. To remind ourselves that one cannot argue against (3) by, e.g., presenting a language in which the-case-pretheoretically-identified-as-‘accusative’ doesn’t conform to the predictions of dependent case. That would only work if descriptive labels like ‘accusative’ were guaranteed to carve out a natural class of grammatical phenomena, but there is no reason to believe that they do.

The paper can be downloaded here.

(Backup link in case lingbuzz is down: here.)

Jul 25, 2020

This is not so much a blog post as it is a collection of things that I think deserve your attention. As you will see, it is quite a self-serving list, in that several of these works provide evidence in favor of claims that I have also been arguing for. But hey, it’s my blog, right? 😊

  1. Pavel Rudnev has a paper set to appear in Glossa arguing against approaches to anaphoric binding in terms of phi-Agree, and in favor of an encapsulation-based account of the Anaphor Agreement Effect, of the kind I have argued for as well. (More converging evidence, with a twist, comes from the work of Rafael Abramovitz on the AAE in Koryak.)
  2. Recent work by Susi Wurmbrand & Magdalena Lohninger on clausal complementation, showing (among other things) that the semantics of clausal complements cannot be read directly off the syntax. Instead, the syntax of a given language will determine which complementation options a given verb in that language will have (subject to an implicational hierarchy that Wurmbrand & Lohninger uncover, but, importantly, underdetermined by the semantics). The semantics then has to map the possible readings of a given complement onto what these syntactically-prescribed structural possibilities happen to be. As readers of this blog know, this is entirely in line with what we find in other empirical domains. My slogan for this has been: “Meaning contrasts are not generated by syntax, they are parasitic on the contrasts syntax happens to make available.” (Not so pithy, I know. But still, this flies in the face of standard wisdom in the Montagovian tradition, so I think it’s worth hammering this point home.)
  3. Pavel Rudnev again! This time, in a paper that’s already available for “early view” in Linguistic Inquiry. The paper provides an argument based on agreement in Avar in favor of restricting phi-agreement to Downward Agree (a.k.a. Upward Valuation; Diercks, Koppen & Putnam 2019, as well as various papers of mine, some of them co-authored with Maria Polinsky).

V-NYI 2020

Posted on 07/15/2020

The 2020 edition of the NY ‑ St. Petersburg Institute of Linguistics, Cognition, and Culture (NYI) will take place virtually, during the last two weeks of July!

Together with Asia Pietraszko (University of Rochester), I’ll be teaching a course called Words and other things: what do you need to list in your head?

For more information, including the course description, please see my Teaching & Advising page.

· · · · · · · · · · · · · · · · · · · ·

The course is now complete. Asia and I are happy to share the course materials with any interested individuals. If you are interested, drop me a line.

· · · · · · · · · · · · · · · · · · · ·

Suyoung Bae defends!

Posted on 07/03/2020

I am proud to announce that Suyoung Bae, my second-ever PhD advisee (co-advised with Howard Lasnik), has defended her thesis!

The thesis is an investigation of Korean amwu-. Suyoung shows that this negation-dependent expression is neither a Negative-Polarity Item (NPI) nor a Negative-Concord Item (NCI), but instead a third type of negation-dependent item, whose distribution is governed by purely syntactic factors: constituency, the restrictions on A-movement, and the restrictions on long head movement. Along the way, she makes novel observations about “radical reconstruction” in Korean (spoiler: it’s not always radical!) and, consequently, about long-distance scrambling in Korean (it can’t be “PF movement”), about how Cyclic Linearization constrains subextraction from complex noun phrases, and more.

I hope Suyoung makes the thesis available on lingbuzz once she has filed it – in the meantime, if you are interested, please get in touch with her!

Congrats, Suyoung!