There are several reasons why I started my own blog, after blogging over on Norbert Hornstein’s Faculty of Language (FoL) for a while. One of them is that I found the thematic content of my posts was actually not a terribly good fit for FoL. Norbert is very interested in big-picture cogsci and bio-cognitive themes. I love little morphemes and what makes them (syntactically) tick. These themes obviously make contact with one another, but I sometimes felt like an entomologist at a geophysicists’ convention.
But here I am, exactly three blogposts in (two, not counting this one), and I’m penning a biolinguistically-themed post. Womp womp.
(This post is actually an elaboration of a comment I left on FoL a little while ago.)
First, some background. Let’s take it as a given that no non-human animal has the kind of syntax that humans have. This is often taken as proof positive that the mental underpinnings of language are innate, and I tend to agree. HOWEVER, the latter is not a necessary part of what I have to say here. The following should be of interest even if you prefer a continuity-based, gradualist approach to the emergence of human syntax. (I personally don’t think such a thing is viable, but I’m just flagging that the rest of what I have to say in this post doesn’t depend on it not being viable.)
The issue at hand concerns the nature of the “linguistic atom” and, in particular, the nature of the units that syntax manipulates. A property of many conversations on the emergence of human language is that they assume (or take for granted) that the individual symbols that syntax manipulates are not different in kind from (some) pre-linguistic signs. If you can teach some primate several hundred sign-meaning pairings, with decreasing levels of iconicity, then – this logic goes – there’s nothing unique-to-humans about the symbols at the bottom of the syntactic structure; what is unique is the structure.
So, you take as atoms the “ugh”s and “pffft”s and “tsk tsk”s and pre-linguistic groans and grunts of the world (let’s use these as examples of atomic sound-meaning pairings, for the sake of argument), you execute syntactic computation over those atoms, and you get human language.
This is false.
Alec Marantz once said that “when morphologists talk, linguists nap.” But morphologists are linguists, of course, and my suspicion is that when linguists talk, (many) evolutionary psychologists chant “la la la I can’t hear you.”
But, paying even minimal attention to the contemporary morphological literature, one sees that the picture above cannot possibly be right. That’s because, in the general case, the things sitting at the bottom of the syntactic tree don’t align with morphemes or meanings, and morphemes and meanings don’t align with each other.
For example, “went” – a single morpheme – contains at the very least the syntactic terminals PAST and GO (and the latter is almost certainly internally complex), as well as, at the very least, one temporal meaning component and one non-temporal one. So it constitutes (conservatively) a 1 morpheme ↔︎ 2 syntactic terminals ↔︎ 2 meanings mapping.
“In cahoots” contains at least three individual morphemes. It contains some number – let’s say three – of syntactic terminals. But it certainly contains fewer than three units of meaning. (Or, if it does contain three meaning units, they definitely don’t each correspond to one of the three morphemes.) After all, what is a “cahoot”? (© Heidi Harley.) So this is something like a 3 morphemes ↔︎ 3 syntactic terminals ↔︎ 1 meaning mapping.
We could go on and on like this. You might think that these are marginal exceptions; that the vast majority of natural language is made up of atoms like “dog” and, therefore, obeys the 1 morpheme ↔︎ 1 syntactic terminal ↔︎ 1 meaning mapping that I’m casting as a foil. That kind of objection merits several responses. First, people clearly have no particular problem with “went” or “cahoot” (the latter being an instance of 1 morpheme ↔︎ 1 syntactic terminal ↔︎ zero meanings). So the linguistic system, as such, must be built to handle such cases, whatever their proportion in actual corpora. And it’s the properties of the linguistic system we are interested in here. Second, it’s not particularly clear that there are that many 1 morpheme ↔︎ 1 syntactic terminal ↔︎ 1 meaning mappings out there at all. Many current morphosyntacticians don’t actually think “dog” is a single syntactic terminal (it is a root, a nominal categorizer, and perhaps a null singular number node; English just happens not to expone some of these overtly, though other languages do).
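To make the foil concrete, here is a toy sketch in Python. Each expression is modeled as a triple of (morphemes, syntactic terminals, meaning units); the labels like PAST, GO, and “n” are illustrative placeholders of my own, not serious analyses. The point is just that the three counts come apart:

```python
# Toy model: each expression maps to a triple of (morphemes, syntactic
# terminals, meaning units). Labels are illustrative placeholders only.
expressions = {
    "went":       {"morphemes": ["went"],
                   "terminals": ["PAST", "GO"],
                   "meanings":  ["past-time", "motion"]},
    "in cahoots": {"morphemes": ["in", "cahoot", "-s"],
                   "terminals": ["P", "ROOT", "NUM"],
                   "meanings":  ["conspiring-together"]},
    "dog":        {"morphemes": ["dog"],
                   "terminals": ["ROOT", "n", "NUM"],  # root + categorizer + number
                   "meanings":  ["canine"]},
}

def is_one_to_one(expr):
    """True only if morphemes, terminals, and meanings all have count 1."""
    return {len(v) for v in expr.values()} == {1}

for name, expr in expressions.items():
    print(name, "1:1:1?", is_one_to_one(expr))
# None of the three comes out 1:1:1.
```

The naive “Saussurean” picture would predict `is_one_to_one` to return True across the board; on these analyses, it never does.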
What does all of this teach us?
Well, one thing it teaches us is that Saussure was kinda wrong about the whole “signifier↔︎signified” thing. It doesn’t work that way, not in human language, not even at the level of the individual morpheme. At least not in the general case.
But it also teaches us that even if the problem of getting from a primate without human syntax to a primate endowed with human syntax is solved by some deus ex machina maneuver, that doesn’t get you to human language. Human language is not just syntax carried out over pre-linguistic atoms.
Let me be more explicit. The properties of linguistic atoms cannot even be properly stated without reference to syntax: a morpheme is a piece of sound whose insertion is contextually conditioned, and that context is made up of syntactic atoms assembled by syntax. A meaning is a piece of semantics whose insertion is contextually conditioned, and that context is made up of syntactic atoms assembled by syntax. Maybe some morphemes and meanings have trivial insertion contexts, corresponding to a single syntactic atom. And maybe, moreover, some pairs of these morphemes and meanings are such that they share the same syntactic atom as their trivial insertion context. (“Pffft.”(?)) But even if this is so, that’s not the general design of the system.
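One way to picture “contextually conditioned insertion” is a vocabulary-insertion rule of the kind familiar from Distributed Morphology: an exponent is keyed to a configuration of syntactic terminals, not paired one-to-one with a sign. A deliberately crude Python sketch, with contexts and exponents invented for illustration (this is a toy, not anyone’s actual theory):

```python
# Crude vocabulary-insertion sketch: a morpheme is not an atomic sign; it is
# an exponent licensed by a syntactic context. More specific contexts win.
# Contexts and exponents below are invented for illustration.
vocabulary = [
    (("GO", "PAST"), "went"),   # suppletive: one morpheme spans two terminals
    (("WALK",),      "walk"),
    (("PAST",),      "-ed"),    # elsewhere realization of PAST
    (("GO",),        "go"),
]

def insert(terminals):
    """Realize a tuple of syntactic terminals, trying listed contexts in order."""
    output, i = [], 0
    while i < len(terminals):
        for context, exponent in vocabulary:
            if terminals[i:i + len(context)] == context:
                output.append(exponent)
                i += len(context)
                break
        else:
            raise ValueError(f"no exponent for {terminals[i]}")
    return output

print(insert(("GO", "PAST")))    # ['went'] — one morpheme, two terminals
print(insert(("WALK", "PAST")))  # ['walk', '-ed'] — the regular pattern
```

Even in this toy, the “atom” the sound system sees (“went”) and the atoms syntax manipulates (GO, PAST) are different kinds of object, which is the point of the paragraph above.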
So the answer to “Are pre-linguistic atoms anything like the atoms of human language?” is categorically “No.” (And, as a side note, the atoms of syntax seem to have no correlates in anything pre-linguistic, as they are neither sounds nor meanings.)
Any story about the emergence of human language, then, is on the hook not only to explain why human syntax is unlike anything seen outside of humans. It is also on the hook to explain the fundamentally different nature of the atoms involved.
I’d like to acknowledge that what occasioned me to revisit these thoughts and organize them in a blogpost was a recent invitation from Tess Wood, Julianne Garbarino, and Yu’an Yang, to talk to members of the UMD undergraduate language science program (“PULSAR”). As is often the case, one of the best ways for me to find out what I think about something is to have to tell other people what I think about that something. So thanks to the PULSAR folks!