There’s a classic joke about second language acquisition that goes something like this: an English learner on vacation in the USA gets into a terrible traffic accident. The paramedics rush to him at the scene of the crash, yelling, “Are you OK?!” Bloodied and half-conscious, the English learner stammers, “I’m… fine… thank… you… and… you?”
This joke gets at something that most classroom language learners have probably run into: the massive disconnect between “grammar as access to meaning” vs. “grammar as muscle memory.” However, traditional frameworks from linguistic theory – whether they conceive of language as hierarchical syntactic trees with discrete branching nodes or as construction-style templates with embedded meaning such as “[VERB] the [TIME PERIOD] away” – don’t distinguish between these two styles of processing. If theoretical linguists are serious about relating their field of study to language learners’ lived experiences (rather than waving their hands and saying that manifestations of language in the real world fall under the category of “performance,” outside of their abstract idealized domain of “competence”), then they should stop ignoring the fact that grammar can be used in real life – and even used successfully – without necessarily engaging meaning.
This distinction between meaning-based vs. meaning-agnostic grammar processing can be examined by combining mathematical models from cognitive psychology with a classic artificial language paradigm that involves a covert rule wherein the pseudowords “gi” and “ul” tend to co-occur with nouns for living things whereas “ro” and “ne” tend to co-occur with non-living things. The trial structure for this experiment paradigm is shown below:
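As a toy illustration of the covert rule (not the actual experiment materials), one can sketch a stimulus generator in a few lines of Python. The nouns, the 90% rule-following ratio, and the trial fields are all illustrative assumptions; only the pseudoword–animacy pairings come from the paradigm described above:

```python
import random

# Pairings from the covert rule described above; everything else is hypothetical.
LIVING_MARKERS = ["gi", "ul"]       # tend to co-occur with living nouns
NONLIVING_MARKERS = ["ro", "ne"]    # tend to co-occur with non-living nouns
LIVING = ["dog", "tree", "bird", "fish"]
NONLIVING = ["rock", "chair", "lamp", "spoon"]

def make_trial(rng, p_follow=0.9):
    """Return (marker, noun, is_living, follows_rule) for one trial.

    With probability p_follow the pseudoword marker matches the noun's
    animacy category (a rule-following trial); otherwise it mismatches
    (a rule-violating trial).
    """
    is_living = rng.random() < 0.5
    noun = rng.choice(LIVING if is_living else NONLIVING)
    follows_rule = rng.random() < p_follow
    # The marker signals "living" exactly when animacy and rule-following agree.
    if is_living == follows_rule:
        marker = rng.choice(LIVING_MARKERS)
    else:
        marker = rng.choice(NONLIVING_MARKERS)
    return marker, noun, is_living, follows_rule
```

Generating many such trials yields a stream in which the marker is a statistically reliable (but not perfect) cue to the upcoming animacy judgment, which is what makes both meaning-based and button-press-based learning possible.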
Notice that there are two possible ways to apply the covert grammatical rule in this experiment: either by seeing “gi” and immediately activating the mental concept of “living” (thus, “grammar as meaning”), or by seeing “gi” and anticipating a button press for the answer choice “living,” without actually thinking of a living/non-living distinction (thus, “grammar as muscle memory”). These two kinds of processing can be pulled apart using mathematical models that describe cognition in two-choice reaction time tasks by differentiating between processes tied to “evidence accumulation” and time spent in non-decision-related processes (e.g., non-cognitive factors like motor speed or low-level perception). One such model, called the drift-diffusion model, is illustrated below:
In our results, participants who consciously noticed the hidden grammar rule showed the first kind of effect: their drift-diffusion modeling results showed that bias in evidence accumulation at the start of each trial (denoted z in the figure above) was affected on trials that violated the grammatical rule. By contrast, participants who did not consciously notice the rule showed the second kind of effect: they learned the rule subconsciously (as indicated by overall faster reaction times to rule-following vs. rule-violating trials), but their rule learning manifested as changes to non-decision times (denoted t in the figure above), such that their responses were affected by factors outside the evidence accumulation process (maybe their index fingers were too eager to hover above the predicted answer key?). This suggests that they had become subconsciously attuned to recurring, predictable button-press patterns in the experiment. These results present an interesting case study for second language educators about how successful task performance does not always require actual engagement with meaning.
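To make the two signatures concrete, here is a minimal sketch of a single drift-diffusion trial in Python. The parameter values are illustrative assumptions, not fitted estimates from our data; the point is only where z and the non-decision time enter the predicted reaction time:

```python
import random

def simulate_ddm_trial(drift=0.2, threshold=1.0, z=0.5,
                       t_nondecision=0.3, dt=0.001, noise=1.0, rng=None):
    """Simulate one drift-diffusion trial.

    Evidence starts at z * threshold (z = 0.5 is unbiased) and random-walks
    until it hits the upper boundary (threshold; say, the rule-consistent
    response) or the lower boundary (0). The reported RT is the decision
    time plus t_nondecision, which covers perception and motor execution.
    """
    rng = rng or random.Random()
    x = z * threshold              # starting point: bias shifts this
    t = 0.0
    while 0.0 < x < threshold:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    choice = "upper" if x >= threshold else "lower"
    return choice, t + t_nondecision   # non-decision time is simply added on
```

A shift in z changes how much evidence must accumulate before a response (the signature we observed in learners who consciously noticed the rule), whereas a shift in t_nondecision speeds or slows the response without touching the accumulation process at all (the “muscle memory” signature in the unaware learners).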
We are currently collecting data to determine whether subconscious grammar learning can occur even when button presses in the experiment aren’t predictable. For now, though, my challenge to linguists out there is: is meaning-based grammar use necessarily “better”? Doesn’t automatizing language use lead to faster and less metabolically costly processing? As an L2 learner, I can think of so many times when having a handy memorized phrase in my holster, ready to deploy “out of the box” with “no assembly required,” took me a lot further than a hyper-abstract metalinguistic rule that is more generalizable but kills the flow of conversation if I have to stop and mentally apply it mid-sentence (“If there is both a direct object clitic and an indirect object clitic, then the first clitic gets replaced with se…”). To illustrate this more tangibly for any Spanish teachers out there: hasn’t this catchy earworm of a song done much more to teach learners to say “I like ___” than a boring grammar explanation ever did? Manu Chao – Me gustas tú – YouTube
No, children do not get confused about languages. Bilingual children speak at least two languages. Instead of confusing the two, they have to learn which language(s) they can use with each person. They start learning fairly early on (before age 2), though this can be influenced by the language situation at home. If some children go through a period in which they mix languages, this is nothing to be worried about. Eventually all bilinguals end up with at least one native language, possibly two. If parents and/or siblings use both languages when communicating with the child, then the child will at first naturally assume that everybody is bilingual and that they can mix both languages when speaking with other people. It might take a little while for the child to figure out that the daycare teacher only speaks English. But eventually it will happen (sooner rather than later). Keep in mind that no healthy grown-up bilingual mixes up languages when speaking to monolinguals.
If you’ve ever tried to learn another language, you know how much time, energy, and brain power is required. Imagine a person who can pick up languages very easily. Someone who can navigate our world’s multilingual hullaballoo. Who can leap language barriers with a single bound. Who can learn without effort and remember indelibly. Such people aren’t parrots. They’re not computers. They’re language superlearners.
Michael Erard searched for these people, and when he found them — in history books and living among us — he tried to make sense of their linguistic feats and their mental powers. His book answers the age-old question, What are the upper limits of the human ability to learn, remember, and use languages?
It is common for researchers to use artificial languages to test certain aspects of language acquisition. Linguists at Northwestern University cleverly took it one step further by referencing the world of pop culture with their made-up language, naming it after satirist Stephen Colbert, a man known for humorously coining his own words, such as ‘truthiness’. Colbertian was used to test whether being bilingual aids in learning another language, which the researchers say it does. You can read more of the details in the Chicago Sun-Times write-up. Furthermore, you can even learn Colbertian yourself!
This presentation will report on a series of studies on the interaction between external pedagogical conditions and individual variables, especially those related to bilingualism and cognition. The studies have been conducted within The Latin Project (TLP) paradigm (Sanz, Bowden, & Stafford, N=400+), and have to date looked at a combination of different L1s (English, Spanish, Mandarin) and L2s (Japanese, Mandarin, Spanish, English) using a mini-version of Latin as the experimental language. Specifically, TLP operationalizes pedagogical conditions in terms of the timing and amount of provision of explicit grammar rules in conjunction with task-essential, input-based practice (i.e., +/- explicit conditions), and includes a battery of cognitive measures (sentence span test, PSTM (L1/L2), the MLAT, symbols/numbers test) to investigate the role of cognitive capacity in the interaction between conditions and variables associated with bilingualism, such as age, aging, proficiency, and strategy use.
In the presentation, I will focus on some of the patterns we have identified across studies: task-essential, input-based practice is enough to promote language development; feedback with grammar is more effective for immediate performance, but gains made via interaction with meaningful input and right/wrong feedback may be more stable over time; higher L2 proficiency enhances L3 development; the appearance of bilingual advantages depends on the complexity of the tasks performed, both in terms of testing and of condition; and aptitude is not a fixed trait and can be enhanced with experience in language learning.
These patterns will be discussed in light of what we know about language development under +/- explicit conditions (reviewed in meta-analyses such as Norris & Ortega, 2000; Sanz & Morgan-Short, 2005; Spada & Tomita, 2010), the few studies on the role of cognition in moderating the effects of pedagogical conditions (e.g., Mackey, Adams, & Stafford, 2010), and the cognitive advantages of bilingualism, especially Bialystok’s most recent publications.
MIT researcher Deb Roy wanted to understand how his infant son learned language — so he wired up his house with videocameras to catch every moment (with exceptions) of his son’s life, then parsed 90,000 hours of home video to watch “gaaaa” slowly turn into “water.” Astonishing, data-rich research with deep implications for how we learn. (Recorded at TED2011, March 2011, in Long Beach, California. Duration: 19:52)
The UIC Bilingualism Forum is dedicated to research in any area related to bilingualism: theoretical linguistics, codeswitching, SLA, psycholinguistics, sociolinguistics, neurolinguistics, cognitive sciences, heritage acquisition, bilingual acquisition, etc. Presentations will be 20 minutes each with 10 minutes for discussion.
Marcel den Dikken, City University of New York
Michael Ullman, Georgetown University
Call For Papers
Deadline for submission of abstract: 12/1/2010
Acceptance response by: 1/15/2011
2 page anonymous abstract including examples and references
Quantitative models of linguistic phenomena have increasingly informed linguistic theory by testing, confirming, and falsifying linguists’ hypotheses, and by translating their insights into language-based applications. Despite this, the divide between theoretical linguistics and empirical research remains substantial: many theories are expressed in terms that are not conducive to data-based testing, while, conversely, a variety of data-based studies and applications have appeared with no adequate theory to frame and explain them.
Quantitative Investigations in Theoretical Linguistics (QITL) offers a forum for researchers who aim to bridge this gap from any linguistic discipline or methodology, including, but not limited to:
– Quantitative corpus-based studies
– Computational linguistics / NLP
– Historical linguistics
– Second language acquisition / applied linguistics
– First language acquisition