How to Understand the Deep Structures of Language
In an alternative to Chomsky’s "Universal Grammar," scientists explore language’s fundamental design constraints
There are two striking features of language that any scientific theory of this quintessentially human behavior must account for. The first is that we do not all speak the same language. This would be a shocking observation were it not so commonplace. Communication systems in other animals tend to be universal, with any animal of the species able to communicate with any other. Likewise, many other fundamental human attributes show much less variation. Barring genetic or environmental mishap, we all have two eyes, one mouth, and four limbs. Around the world, we cry when we are sad, smile when we are happy, and laugh when something is funny, but the languages we use to describe these feelings differ.
The second striking feature of language is that, when you consider the space of possible languages, the languages that actually exist cluster in a few tiny bands. That is, most languages are much, much more similar to one another than random variation would predict.
Starting with pioneering work by Joseph Greenberg, scholars have cataloged over two thousand linguistic universals (facts true of all languages) and biases (facts true of most languages). For instance, in languages with fixed word order, the subject almost always comes before the object. If the verb describes a caused event, the entity that caused the event is the subject ("John broke the vase"), not the object (for example, "The vase broke John" meaning "John broke the vase"). In languages like English where the verb agrees with its subject or object, it typically agrees with the subject (compare "the child eats the carrots" with "the children eat the carrots") and not with the object (that would look like "the child eats the carrot" vs. "the child eat the carrots"), though in some languages, like Hungarian, the ending of the verb changes to match both the subject and the object.
When I point this out to my students, I usually get blank stares. How else could language work? The answer is: very differently. Scientists and engineers have created hundreds of artificial languages to do the work of mathematics (often called "the universal language"), logic, and computer programming. These languages show none of the features mentioned above for the simplest of reasons: the researchers who invented these languages never bothered to include verb agreement or even the subject/object distinction itself.
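To make the contrast concrete, here is a minimal sketch in Python (the function name break_event and its arguments are hypothetical, invented just for this illustration). In a programming language, who did what to whom is encoded purely by argument position; nothing about the "verb" changes to agree with its arguments, and nothing marks the arguments themselves.

```python
# In a programming language, thematic roles are purely positional:
# by convention here, the first argument is the breaker and the
# second is the thing broken. There is no verb agreement and no
# case-marking; the "verb" (the function name) is identical no
# matter what its arguments are.

def break_event(agent, patient):
    return f"{agent} broke {patient}"

print(break_event("John", "the vase"))   # John broke the vase
print(break_event("the vase", "John"))   # roles swap silently: the vase broke John
```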
Since we became aware of just how tightly constrained the variation in human language is, researchers have struggled to find an explanation. Perhaps the most famous account is Chomsky's Universal Grammar hypothesis, which argues that humans are born with innate knowledge about many of the features of language (e.g., languages distinguish subjects and objects), which would not only explain cross-linguistic universals but also perhaps how language learning gets off the ground in the first place. Over the years, Universal Grammar has become increasingly controversial for a number of reasons, one of which is the arbitrariness of the theory: The theory merely replaces the question of why we have the languages we have, and not others, with the question of why we have the Universal Grammar we have, and not another one.
As an alternative, a number of researchers have explored the possibility that some universals in language fall out of necessary design constraints. The basic idea is that some possible but nonexistent languages do not exist because they would simply be bad languages. There there are are no no languages languages in in which which you you repeat repeat every every word word. We don't need Universal Grammar to explain this; sheer laziness will suffice. Similarly, there are no languages that consist of a single, highly ambiguous word (sorry Hodor); such a language would be nearly useless for communication.
In an exciting recent paper, Ted Gibson and colleagues provide evidence for a design-constraint explanation of a well-known bias involving case endings and word order. Case-markers are special affixes stuck onto nouns that specify whether the noun is the subject or object (etc.) of the verb. In English, you can see this on pronouns (compare "she talked with her"), but otherwise English, like most SVO languages (languages where the typical word order is Subject, Verb, Object), does not mark case. In contrast, Japanese, like most SOV languages (languages where the typical word order is Subject, Object, Verb), does mark case, with -wa added to subjects and -o added to direct objects. "Yasu saw the bird" is translated as "Yasu-wa tori-o mita," and "The bird saw Yasu" is translated as "Tori-wa Yasu-o mita." The question is why this relationship between case-marking and SOV word order exists.
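As a rough sketch of why case-marking buys word-order freedom, consider the toy parser below. It is a simplification built on the article's -wa/-o example (real Japanese morphology is considerably richer). Because the role markers ride on the nouns themselves, the same interpretation falls out no matter what order the words arrive in.

```python
# Toy parser: recover who did what from case markers alone.
# Simplified from the article's example: "-wa" marks the subject
# and "-o" marks the object. Any unmarked word is taken as the verb.

MARKERS = {"wa": "subject", "o": "object"}

def parse_case_marked(sentence):
    roles = {}
    for word in sentence.split():
        stem, _, marker = word.partition("-")
        if marker in MARKERS:
            roles[MARKERS[marker]] = stem
        else:
            roles["verb"] = stem
    return roles

print(parse_case_marked("Yasu-wa tori-o mita"))  # {'subject': 'Yasu', 'object': 'tori', 'verb': 'mita'}
print(parse_case_marked("tori-o Yasu-wa mita"))  # same roles despite the scrambled order
```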
Gibson and colleagues provide the following explanation. To understand a sentence, you have to determine which character is doing what: was it Yasu who saw the bird, or was it the other way around? We know that it's the subject who does the seeing, so the problem reduces to identifying the subject. In both SOV and SVO languages, you can (usually) use word order to identify the subject, but in SOV languages the subject and object are much closer to one another, which makes it more likely that you will get confused about which actually came first (alternatively, the speaker may accidentally switch the order of the words).
Gibson and colleagues' focus is not on why you might become confused, but it is worth taking a moment to consider some possibilities. The most obvious one (to me) involves the binding problem, which is easiest to describe using an example from perception. Below, you will see a red R, a green X, and a blue I. If you look directly at the plus sign to the right, you will likely have difficulty not only recognizing the letters (they should appear as a jumble of lines and curves) but even figuring out which letter is which color. The effect depends on how close you are to the screen: the closer you are, the worse the problem becomes, and you may need to stare for a few seconds to get the full effect.
The binding problem, then, is determining which aspects of our perceptual experience all belong to the same object. This problem may be particularly pronounced because these different features (color, shape, etc.) are initially processed by different parts of the brain and must be bound together downstream. How this is done is one of the basic, unresolved problems in psychology and especially neuroscience.
Language processing faces similar challenges. We have different streams of information: which words were uttered and the order they were uttered in. It's easier to bind the order information to the right word in SVO languages like English because the subject and object are far apart (there is a verb in between), much as the crowding problem in the example above is ameliorated by spacing the letters out.
SOV languages don't have this trick available to them, which may explain why they often add case-markers as additional cues to subjecthood and objecthood.
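A back-of-the-envelope simulation makes the logic concrete. The setup below is my own illustrative sketch, not Gibson and colleagues' actual model: I assume each pair of adjacent words has a 10 percent chance of being swapped in transmission (an arbitrary figure standing in for speaker error or listener misbinding) and ask how often the subject and object end up reversed. Because the verb sits between them in SVO, no single adjacent swap can exchange them; in SOV they are neighbors, and one swap silently reverses the roles.

```python
# Illustrative noisy-channel sketch (not Gibson et al.'s model):
# each adjacent pair of words swaps with some small probability.
# We count how often the object arrives before the subject, which
# is when an order-only listener would assign the roles backwards.

import random

SWAP_PROB = 0.10  # illustrative assumption, not a measured value

def transmit(order):
    """Pass a word sequence through a channel with random adjacent swaps."""
    words = list(order)
    for i in range(len(words) - 1):
        if random.random() < SWAP_PROB:
            words[i], words[i + 1] = words[i + 1], words[i]
    return words

def confusion_rate(order, trials=100_000):
    """Fraction of transmissions in which O arrives before S."""
    confused = 0
    for _ in range(trials):
        received = transmit(order)
        if received.index("O") < received.index("S"):
            confused += 1
    return confused / trials

random.seed(0)
print(f"SVO role reversals: {confusion_rate('SVO'):.3f}")  # ~0.01: the verb buffers S and O
print(f"SOV role reversals: {confusion_rate('SOV'):.3f}")  # ~0.10: one swap reverses the roles
```

Under these assumptions, the order-only listener misassigns roles roughly ten times more often in SOV than in SVO, which is exactly the pressure that case-markers would relieve.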
Gibson and colleagues provide ingenious evidence for this account. They presented people with simple scenes, such as one in which a girl kicks a ball, and asked them to describe the scene in gestures (no speaking allowed). Most people described (in gesture) the girl first, then the ball, then the kicking action -- that is, they used an SOV order. Of course, when the kicking event involves a girl and a ball, there isn't much question about who did the kicking.
The researchers also asked people to describe in gestures an event in which a girl kicked a boy. Since both boys and girls are capable of kicking, it's very possible to be confused about who kicked whom. Now participants were much more likely to describe (in gesture) the girl, then the kicking event, and then the boy -- that is, they switched to an SVO order. This was true (with a few complications which you can read about in the paper) whether the participant was a native speaker of English (an SVO language) or a native speaker of Korean or Japanese (SOV languages).
Gibson and colleagues thus provide a nice explanation for why you might want to use SVO word order rather than SOV word order when case-marking isn't available to you, and they also show that people, left to their own devices, actually do this.
Much is still left to be done. You might wonder why SOV languages exist at all, particularly since they typically make you learn all those annoying word endings. Gibson and colleagues suggest that we may have a default bias for SOV order, as shown by the facts that (a) SOV languages (like Japanese) are actually more common than SVO languages (like English), and (b) participants in their study slightly preferred SOV order overall. The researchers also cite evidence that newly created languages may be more likely to be SOV. Still, none of that explains why SOV would be the default; as usual, a new question has hitched a ride along with the answer to an old one. We also still need an explanation of why some SVO languages have case marking and some SOV languages do not (the authors sketch a few possibilities).
Overall, though, this paper provides one of the clearest examples yet of how an important tendency in human language -- a bias you would not expect to exist through mere random chance -- can be explained by reference to universal principles of computation and information theory. This does not necessarily exclude Universal Grammar -- perhaps Universal Grammar smartly implements good computational principles -- but it does shed light on why human language -- and by extension, human nature -- is the way it is and not some other way.
Are you a scientist who specializes in neuroscience, cognitive science, or psychology? And have you read a recent peer-reviewed paper that you would like to write about? Please send suggestions to Mind Matters editor Gareth Cook, a Pulitzer Prize-winning journalist and regular contributor to NewYorker.com. He can be reached at garethideas AT gmail.com or Twitter @garethideas.
ABOUT THE AUTHOR(S)
Joshua K. Hartshorne is a post-doctoral fellow in the Computational Cognitive Science Group at MIT. He conducts experiments through his web laboratory at gameswithwords.org.