1. Study suggests the bilingual brain calculates differently depending on the language used

    September 22, 2017 by Ashley

    From the University of Luxembourg press release:

    People can intuitively recognise small numbers up to four; for calculating beyond that, however, they depend on language. This raises a fascinating research question: how do multilingual people solve arithmetic tasks presented to them in different languages that they command very well? The question will only gain in importance, as an increasingly globalised job market and accelerated migration mean that ever more people work and study outside the linguistic area of their home countries.

    This question was investigated by a research team led by Dr Amandine Van Rinsveld and Professor Dr Christine Schiltz from the Cognitive Science and Assessment Institute (COSA) at the University of Luxembourg. For the study, the researchers recruited subjects whose mother tongue is Luxembourgish, who completed their schooling in the Grand Duchy of Luxembourg and then continued their academic studies at francophone universities in Belgium. The subjects therefore had an excellent command of both German and French: as students in Luxembourg, they were taught maths in German in primary school and in French in secondary school.

    In two separate test situations, the participants had to solve both very simple and somewhat more complex addition tasks, in German and in French. The subjects solved the simple addition tasks equally well in both languages. For complex additions in French, however, they required more time than for identical tasks in German, and they made more errors.

    During the tests, functional magnetic resonance imaging (fMRI) was used to measure the subjects’ brain activity. It showed that different brain regions were activated depending on the language used. For addition tasks in German, a small speech region in the left temporal lobe was activated. When solving complex calculation tasks in French, additional brain areas responsible for processing visual information were involved, indicating that the subjects additionally fell back on figurative (visual) thinking. The experiments provide no evidence that the subjects translated the French tasks into German in order to solve them. While the subjects could solve the German tasks using the classic, familiar numerical-verbal brain areas, this system proved not to be sufficiently viable in the second language of instruction, in this case French: to solve arithmetic tasks in French, the subjects had to systematically fall back on other thought processes not observed so far in monolingual speakers.

    Using brain-activity measurements and imaging techniques, the study documents for the first time the demonstrable cognitive “extra effort” required for solving arithmetic tasks in the second language of instruction. The results clearly show that calculation processes are directly affected by language.


  2. Study suggests conversation is faster when words are accompanied by gestures

    September 19, 2017 by Ashley

    From the Springer press release:

    When someone asks a question during a conversation, their conversation partner answers more quickly if the questioner also moves their hands or head to accompany their words. These are the findings of a study led by Judith Holler of the Max Planck Institute for Psycholinguistics and Radboud University Nijmegen in the Netherlands. The study is published in Springer’s journal Psychonomic Bulletin & Review and focusses on how gestures influence language processing.

    The transition between turns taken during a conversation is astonishingly fast, with a mere 200 milliseconds typically elapsing between the contribution of one speaker and the next. Such speed means that people must be able to comprehend, produce and coordinate their contributions to a conversation within a very tight time window.

    To study the role of gestures during conversation, Holler and her colleagues, Kobin Kendrick and Stephen Levinson, analyzed the interaction of seven groups of three participants. The groups were left alone in a recording suite for twenty minutes, during which their interaction was filmed with three high-definition video cameras. The researchers analyzed the question-response sequences in particular because these are so prevalent in conversations. Holler and her team found that there was a strong visual component to most questions being asked and answered during the conversations. These took the form of bodily signals such as communicative head or hand movements.

    “Bodily signals appear to profoundly influence language processing in interaction,” says Holler. “Questions accompanied by gestures lead to shorter turn transition times — that is, to faster responses — than questions without gestures, and responses come even earlier when gestures end before compared to after the question turn has ended.”

    This means that gestures that end early may give us an early visual cue that the speaker is about to end, thus helping us to respond faster. But, at least for those cases in which gestures did not end early, it also means that the additional information conveyed by head and hand gestures may help us process or predict what is being said in conversation.

    “The empirical findings presented here provide a first glimpse of the possible role of the body in the psycholinguistic processes underpinning human communication,” explains Holler. “They also provide a stepping stone for investigating these processes and mechanisms in much more depth in the future.”


  3. Is changing languages effortful for bilingual speakers? Depends on the situation

    September 15, 2017 by Ashley

    From the New York University press release:

    Research on the neurobiology of bilingualism has suggested that switching languages is inherently effortful, requiring executive control to manage cognitive functions, but a new study shows this is only the case when speakers are prompted, or forced, to do so.

    In fact, this latest work finds that switching languages when conversing with another bilingual individual — a circumstance when switches are typically voluntary — does not require any more executive control than when continuing to speak the same language.

    The findings appear in the Journal of Neuroscience.

    “For a bilingual human, every utterance requires a choice about which language to use,” observes senior author Liina Pylkkänen, a professor in New York University’s Department of Linguistics and Department of Psychology. “Our findings show that circumstances influence bilingual speakers’ brain activity when making language switches.”

    “Bilingualism is an inherently social phenomenon, with the nature of our interactions determining language choice,” adds lead author Esti Blanco-Elorrieta, an NYU doctoral candidate. “These results make clear that even though we may switch between languages in which we are fluent, our brains respond differently, depending on what spurs such changes.”

    Historically, research on the neuroscience of bilingualism has asked speakers to associate languages with a cue that bears no natural association to the language, such as a color, and to then name pictures in the language indicated by the color cue. However, this type of experiment doesn’t capture the real-life experience of a bilingual speaker — given experimental parameters, it artificially prompts, or forces, the speakers to speak a particular language. By contrast, in daily interactions, language choice is determined on the basis of social cues or ease of access to certain vocabulary items in one language vs. another.

    This distinction raises the possibility that our brains don’t have to work as hard when changing languages in more natural settings.

    In an effort to understand the neural activity of bilingual speakers in both circumstances, the researchers used magnetoencephalography (MEG), a technique that maps neural activity by recording the magnetic fields generated by the electrical currents the brain produces. They studied Arabic-English bilingual speakers in a variety of conversational situations, ranging from completely artificial scenarios — much like earlier experiments — to fully natural conversations. The latter were real conversations between undergraduates who had agreed to be mic’d for a portion of their day on campus.

    Their results showed marked distinctions between artificial and more natural settings. Specifically, the brain areas for executive, or cognitive, control — the anterior cingulate and prefrontal cortex — were less involved during language changes in the natural setting than they were in the artificial setting. In fact, when the study’s subjects were free to switch languages whenever they wanted, they did not engage these areas at all.

    Furthermore, in a listening mode, language switches in the artificial setting required an expansive tapping of the brain’s executive control areas; however, language switching while listening to a natural conversation engaged only the auditory cortices.

    In other words, the neural cost to switch languages was much lighter during a conversation — when speakers chose which language to speak — than in a classic laboratory task, in which language choice was dictated by artificial cues.

    “This work gets us closer to understanding the brain basis of bilingualism as opposed to language switching in artificial laboratory tasks,” observes Pylkkänen.

    The study shows that the role of executive control in language switching may be much smaller than previously thought.

    This is important, the researchers note, for theories about “bilingual advantage,” which posit that bilinguals have superior executive control because they switch language frequently. These latest results suggest that the advantage may only arise for bilinguals who need to control their languages according to external constraints (such as the person they are speaking to) and would not occur by virtue of a life experience in a bilingual community where switching is fully free.

    The research was supported by a grant from the NYU Abu Dhabi Institute.


  4. Communicating in a foreign language takes emotion out of decision-making

    September 1, 2017 by Ashley

    From the University of Chicago press release:

    If you could save the lives of five people by pushing another bystander in front of a train to his death, would you do it? And should it make any difference if that choice is presented in a language you speak, but isn’t your native tongue?

    Psychologists at the University of Chicago found in past research that people facing such a dilemma while communicating in a foreign language are far more willing to sacrifice the bystander than those using their native tongue. In a paper published Aug. 14 in Psychological Science, the UChicago researchers take a major step toward understanding why that happens.

    “Until now, we and others have described how using a foreign language affects the way that we think,” said Boaz Keysar, the UChicago psychology professor in whose lab the research was conducted. “We always had explanations, but they were not tested directly. This is really the first paper that explains why, with evidence.”

    Through a series of experiments, Keysar and his colleagues explore whether the decision people make in the train dilemma is due to a reduction in the emotional aversion to breaking an ingrained taboo, to an increase in deliberation (thought to be associated with a utilitarian focus on maximizing the greater good), or to some combination of the two.

    “We discovered that people using a foreign language were not any more concerned with maximizing the greater good,” said lead author Sayuri Hayakawa, a UChicago doctoral student in psychology. “But rather, were less averse to violating the taboos that can interfere with making utility-maximizing choices.”

    The researchers, including Albert Costa and Joanna Corey from Pompeu Fabra University in Barcelona, propose that using a foreign language gives people some emotional distance, and that this distance allowed them to take the more utilitarian action.

    “I thought it was very surprising,” Keysar said. “My prediction was that we’d find that the difference is in how much they care about the common good. But it’s not that at all.”

    Studies from around the world suggest that using a foreign language makes people more utilitarian. Speaking a foreign language slows you down and requires that you concentrate to understand. Scientists have hypothesized that the result is a more deliberative frame of mind that makes the utilitarian benefit of saving five lives outweigh the aversion to pushing a man to his death.

    But Keysar’s own experience speaking a foreign language — English — gave him the sense that emotion was important. English just didn’t have the same visceral resonance for him as his native Hebrew. It wasn’t as intimately connected to emotion, a feeling shared by many bilingual people and corroborated by numerous lab studies.

    “Your native language is acquired from your family, from your friends, from television,” Hayakawa said. “It becomes infused with all these emotions.”

    Foreign languages are often learned later in life in classrooms, and may not activate feelings, including aversive feelings, as strongly.

    The problem is that either the “more utilitarian” or the “less emotional” process would produce the same behavior. To help figure out which was actually responsible, the psychologists worked with David Tannenbaum, a postdoctoral research fellow at the University of Chicago Booth School of Business at the time of the research and now an assistant professor at the University of Utah.

    Tannenbaum is an expert at a technique called process dissociation, which allows researchers to tease out and measure the relative importance of different factors in a decision process. For the paper, the researchers did six separate studies with six different groups, including native speakers of English, German and Spanish. Each also spoke one of the other languages, so that all possible combinations were equally represented. Each person was randomly assigned to use either his or her native language or second language throughout the experiment.

    Participants read an array of paired scenarios that varied systematically in key ways. For example, instead of killing a man to save five people from death, they might be asked if they would kill him to save five people from minor injuries. The taboo act of killing the man is the same, but the consequences vary.

    “If you have enough of these paired scenarios, you can start gauging what are the factors that people are paying attention to,” Hayakawa said. “We found that people using a foreign language were not paying any more attention to the lives saved, but definitely were less averse to breaking these kinds of rules. So if you ask the classic question, ‘Is it the head or the heart?’ It seems that the foreign language gets to the heart.”
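    For readers curious how process dissociation separates the two tendencies, here is a minimal sketch of the standard formulation: a “congruent” dilemma is one where the harmful act does not maximize outcomes, an “incongruent” one where it does. The function name and the example proportions below are hypothetical, and the exact model used in the paper may differ.

    ```python
    # Minimal sketch of process dissociation for moral dilemmas
    # (standard processing-tree equations; the exact formulation in the
    # Psychological Science paper may differ).

    def process_dissociation(p_reject_congruent, p_reject_incongruent):
        """Estimate outcome sensitivity (U) and harm/taboo aversion (D) from
        the proportion of 'harm is unacceptable' responses in congruent
        dilemmas (harm does NOT maximize the greater good) and incongruent
        dilemmas (harm DOES maximize the greater good)."""
        # P(reject | congruent)   = U + (1 - U) * D
        # P(reject | incongruent) =     (1 - U) * D
        u = p_reject_congruent - p_reject_incongruent
        d = p_reject_incongruent / (1 - u) if u < 1 else float("nan")
        return u, d

    # Hypothetical example: a group rejects harm in 80% of congruent
    # dilemmas but only 50% of incongruent ones.
    u, d = process_dissociation(0.80, 0.50)
    print(f"U (outcome sensitivity) = {u:.2f}, D (harm aversion) = {d:.2f}")
    ```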

    The researchers are next looking at why that is. Does using a foreign language blunt people’s mental visualization of the consequences of their actions, contributing to their increased willingness to make the sacrifice? And do they create less mental imagery because of differences in how foreign language use affects which memories come to mind?

    The researchers are also starting to investigate whether their lab results apply in real-world situations where the stakes are high. A study Keysar’s team is initiating in Israel looks at whether the parties in a peace negotiation assess the same proposal differently if they see it in their own language or the language of their negotiating partner. And Keysar is interested in looking at whether language can be usefully considered in decisions made by doctors speaking a foreign language.

    “You might be able to predict differences in medical decision-making depending on the language that you use,” he said. “In some cases you might prefer a stronger emotional engagement, in some you might not.”


  5. What does music mean? Sign language may offer an answer

    August 28, 2017 by Ashley

    From the New York University press release:

    How do we detect the meaning of music? We may gain some insights by looking at an unlikely source, sign language, a newly released linguistic analysis concludes.

    “Musicians and music lovers intuitively know that music can convey information about an extra-musical reality,” explains author Philippe Schlenker, a senior researcher at Institut Jean-Nicod within France’s National Center for Scientific Research (CNRS) and a Global Distinguished Professor at New York University. “Music does so by way of abstract musical animations that are reminiscent of iconic, or pictorial-like, components of meaning that are common in sign language, but rare in spoken language.”

    The analysis, “Outline of Music Semantics,” appears in the journal Music Perception; it is available, with sound examples, here: http://ling.auf.net/lingbuzz/002942. A longer piece that discusses the connection with iconic semantics is forthcoming in the Review of Philosophy & Psychology (“Prolegomena to Music Semantics”).

    Schlenker acknowledges that spoken language also deploys iconic meanings–for example, saying that a lecture was ‘loooong’ gives a very different impression from just saying that it was ‘long.’ However, these meanings are relatively marginal in the spoken word; by contrast, he observes, they are pervasive in sign languages, which have the same general grammatical and logical rules as do spoken languages, but also far richer iconic rules.

    Drawing inspiration from sign language iconicity, Schlenker proposes that the diverse inferences drawn from musical sources are combined by way of abstract iconic rules. Here, music can mimic a reality, creating a “fictional source” for what is perceived to be real. As an example, he points to composer Camille Saint-Saëns’s “The Carnival of the Animals” (1886), which aims to capture the physical movement of tortoises.

    “When Saint-Saëns wanted to evoke tortoises in ‘The Carnival of the Animals,’ he not only used a radically slowed-down version of a high-energy dance, the Can-Can,” Schlenker notes. “He also introduced a dissonance to suggest that the hapless animals were tripping, an effect obtained due to the sheer instability of the jarring chord.”

    In his work, Schlenker broadly considers how we understand music–and, in doing so, how we derive meaning through the fictional sources that it creates.

    “We draw all sorts of inferences about fictional sources of the music when we are listening,” he explains. “Lower pitch is, for instance, associated with larger sound sources, a standard biological code in nature. So, a double bass will more easily evoke an elephant than a flute would. Or, if the music slows down or becomes softer, we naturally infer that a piece’s fictional source is losing energy, just as we would in our daily, real-world experiences. Similarly, a higher pitch may signify greater energy–a physical code–or greater arousal, which is a biological code.”

    Fictional sources may be animate or inanimate, Schlenker adds, and their behavior may be indicative of emotions, which play a prominent role in musical meaning.

    “More generally, it is no accident that one often signals the end of a classical piece by simultaneously playing more slowly, more softly, and with a musical movement toward more consonant chords,” he says. “These are natural ways to indicate that the fictional source is gradually losing energy and reaching greater repose.”

    In his research, Schlenker worked with composer Arthur Bonetto to create minimal modifications of well-known music snippets to understand the source of the meaning effects they produce. This analytical method of ‘minimal pairs,’ borrowed from linguistics and experimental psychology, Schlenker posits, could be applied to larger musical excerpts in the future.


  6. How pronouns can be used to build confidence in stressful situations

    August 24, 2017 by Ashley

    From the University at Buffalo press release:

    You’re preparing for a major presentation. Or maybe you have a job interview. You could even be getting ready to finally ask your secret crush out on a date.

    Before any potentially stressful event, people often engage in self-talk, an internal dialogue meant to moderate anxiety.

    This kind of self-reflection is common, according to Mark Seery, a University at Buffalo psychologist. His new study, which used cardiovascular measures to gauge participants’ reactions while giving a speech, suggests that taking a “distanced perspective” (seeing ourselves as though we were an outside observer) leads to a more confident and positive response to upcoming stressors than seeing the experience through our own eyes. The findings, published in the Journal of Experimental Social Psychology with co-authors Lindsey Streamer, Cheryl Kondrak, Veronica Lamarche and Thomas Saltsman, illustrate how the strategic use of language in the face of tension helps people feel more confident.

    “Being a fly on the wall might be the way to put our best foot forward,” says Seery, an associate professor in UB’s Department of Psychology and an expert on stress and coping. “And one way to do that is by not using first-person pronouns like ‘I’. For me, it’s saying to myself, ‘Mark is thinking this’ or ‘Here is what Mark is feeling’ rather than ‘I am thinking this’ or ‘Here is what I’m feeling.’

    “It’s a subtle difference in language, but previous work in other areas has shown this to make a difference — and that’s the case here, too.” Seery says almost everyone engages in self-talk, but it’s important to understand that not all self-talk is equally effective when contemplating future performance: we can either self-distance or self-immerse.

    For the study, researchers told 133 participants that a trained evaluator would assess a two-minute speech on why they were a good fit for their dream job. The participants were instructed to think about their upcoming presentation using either first-person pronouns (self-immersing) or third-person pronouns (self-distancing).

    While they delivered their speeches, the researchers measured a spectrum of physiological responses (how fast the heart beat, how hard it beat, how much blood it pumped, and the degree to which blood vessels dilated or constricted), which provided data on how important the speech was to the presenter and on the presenter’s level of confidence.

    “What this allows us to do is something that hasn’t been shown before in studies that relied on asking participants to tell researchers about their thoughts and feelings,” says Seery. “Previous work has suggested that inducing self-distancing can lead to less negative responses to stressful things, but that can be happening because self-distancing has reduced the importance of the event.

    “That seems positive on the face of it, but long-term that could have negative implications because people might not be giving their best effort,” says Seery. “We found that self-distancing did not lead to lower task engagement, which means there was no evidence that they cared less about giving a good speech. Instead, self-distancing led to greater challenge than self-immersion, which suggests people felt more confident after self-distancing.”
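    In this research tradition, task engagement is typically indexed by increases in heart rate and ventricular contractility, while challenge versus threat is scored from cardiac output and total peripheral resistance, with challenge reflected in relatively higher cardiac output and lower resistance. The sketch below shows one common way such an index can be computed; the variable names and numbers are hypothetical, and the paper’s exact analysis may differ.

    ```python
    # One common way a challenge/threat index is scored in this literature
    # (a sketch with hypothetical numbers; the paper's analysis may differ).
    import numpy as np

    def challenge_threat_index(cardiac_output_reactivity, tpr_reactivity):
        """Higher values = more challenge-like responding (relatively higher
        cardiac output and lower total peripheral resistance)."""
        co_z = (cardiac_output_reactivity - cardiac_output_reactivity.mean()) / cardiac_output_reactivity.std()
        tpr_z = (tpr_reactivity - tpr_reactivity.mean()) / tpr_reactivity.std()
        return co_z - tpr_z  # challenge pole: high cardiac output, low resistance

    # Hypothetical reactivity scores (change from baseline) for six speakers
    co = np.array([0.4, 0.9, -0.1, 0.6, 0.2, 0.8])   # cardiac output, L/min
    tpr = np.array([-50, -120, 80, -30, 40, -90])    # resistance, dyn*s/cm^5
    print(challenge_threat_index(co, tpr))
    ```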

    Seery points out that some of the most important moments in life involve goal pursuit, but these situations can be anxiety provoking or even overwhelming.

    “Self-distancing may promote approaching them with confidence and experiencing them with challenge rather than threat.”


  7. Study suggests bilingual babies can efficiently process both languages

    August 23, 2017 by Ashley

    From the Princeton University press release:

    Are two languages at a time too much for the mind? Caregivers and teachers should know that infants growing up bilingual have the learning capacities to make sense of the complexities of two languages just by listening. In a new study, an international team of researchers, including those from Princeton University, report that bilingual infants as young as 20 months of age efficiently and accurately process two languages.

    The study, published Aug. 7 in the journal Proceedings of the National Academy of Sciences, found that infants can differentiate between words in different languages. “By 20 months, bilingual babies already know something about the differences between words in their two languages,” said Casey Lew-Williams, an assistant professor of psychology and co-director of the Princeton Baby Lab, where researchers study how babies and young children learn to see, talk and understand the world. He is also a co-author of the paper.

    “They do not think that ‘dog’ and ‘chien’ [French] are just two versions of the same thing,” Lew-Williams said. “They implicitly know that these words belong to different languages.”

    To determine infants’ ability to monitor and control language, the researchers showed 24 French-English bilingual infants and 24 adults in Montreal pairs of photographs of familiar objects. Participants heard simple sentences in either a single language (“Look! Find the dog!”) or a mix of two languages (“Look! Find the chien!”). In another experiment, they heard a language switch that crossed sentences (“That one looks fun! Le chien!”). These types of language switches, called code switches, are regularly heard by children in bilingual communities.

    The researchers then used eye-tracking measures, such as how long an infant’s or an adult’s gaze remained fixed on a photograph after hearing a sentence, along with pupil dilation. Pupil diameter changes involuntarily with how hard the brain is “working,” and it is used as an indirect measure of cognitive effort.

    The researchers tested bilingual adults as a control group, using the same photographs and eye-tracking procedure as with the bilingual infants, to examine whether these language-control mechanisms are the same across a bilingual speaker’s life.

    They found that bilingual infants and adults incurred a processing “cost” when hearing switched-language sentences and, at the moment of the language switch, their pupils dilated. However, this switch cost was reduced or eliminated when the switch was from the non-dominant to the dominant language, and when the language switch crossed sentences.
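    As an illustration of how such a switching “cost” can be quantified from pupillometry, the sketch below compares mean baseline-corrected pupil dilation in a window after the switch point across switched-language and same-language trials. The data, window and function names are hypothetical; this is not the study’s actual analysis pipeline.

    ```python
    # Sketch of quantifying a language-switch "cost" from pupil data
    # (illustrative only; not the study's actual pipeline).
    import numpy as np

    def switch_cost(pupil_same, pupil_switch, window):
        """Mean pupil-dilation difference (switch minus same) in a window of
        samples after the target word. Inputs are trials x samples arrays of
        baseline-corrected pupil diameter."""
        return pupil_switch[:, window].mean() - pupil_same[:, window].mean()

    rng = np.random.default_rng(0)
    # Hypothetical baseline-corrected pupil traces: 40 trials x 300 samples
    same = rng.normal(0.0, 0.05, (40, 300))
    switch = rng.normal(0.0, 0.05, (40, 300))
    switch[:, 120:240] += 0.03      # simulated extra dilation after a switch
    print(f"switch cost: {switch_cost(same, switch, slice(120, 240)):.3f} mm")
    ```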

    “We identified convergent behavioral and physiological markers of there being a ‘cost’ associated with language switching,” Lew-Williams said. Rather than indicating barriers to comprehension, the study “shows an efficient processing strategy where there is an activation and prioritization of the currently heard language,” Lew-Williams said.

    The similar results in both the infant and adult subjects also imply that “bilinguals across the lifespan have important similarities in how they process their languages,” Lew-Williams said.

    “We have known for a long time that the language currently being spoken between two bilingual interlocutors — the base language — is more active than the language not being spoken, even when mixed speech is possible,” said François Grosjean, professor emeritus of psycholinguistics at Neuchâtel University in Switzerland, who is familiar with the research but was not involved with the study.

    “This creates a preference for the base language when listening, and hence processing a code-switch can take a bit more time, but momentarily,” added Grosjean. “When language switches occur frequently, or are situated at [sentence] boundaries, or listeners expect them, then no extra processing time is needed. The current study shows that many of these aspects are true in young bilingual infants, and this is quite remarkable.”

    “These findings advance our understanding of bilingual language use in exciting ways — both in toddlers in the initial stages of acquisition and in the proficient bilingual adult,” said Janet Werker, a professor of psychology at the University of British Columbia, who was not involved with the research. She noted that the findings may have implications for optimal teaching in bilingual settings. “One of the most obvious implications of these results is that we needn’t be concerned that children growing up bilingual will confuse their two languages. Indeed, rather than being confused as to which language to expect, the results indicate that even toddlers naturally activate the vocabulary of the language that is being used in any particular setting.”

    A bilingual advantage?

    Lew-Williams suggests that this study not only confirms that bilingual infants monitor and control their languages while listening to the simplest of sentences, but also provides a likely explanation of why bilinguals show cognitive advantages across the lifespan. Children and adults who have dual-language proficiency have been observed to perform better in “tasks that require switching or the inhibiting of a previously learned response,” Lew-Williams said.

    “Researchers used to think this ‘bilingual advantage’ was from bilinguals’ practice dealing with their two languages while speaking,” Lew-Williams said. “We believe that everyday listening experience in infancy — this back-and-forth processing of two languages — is likely to give rise to the cognitive advantages that have been documented in both bilingual children and adults.”


  8. Cultural activities may influence the way we think

    August 19, 2017 by Ashley

    From the American Friends of Tel Aviv University press release:

    A new Tel Aviv University study suggests that cultural activities, such as the use of language, influence our learning processes, affecting our ability to collect different kinds of data, make connections between them, and infer a desirable mode of behavior from them.

    “We believe that, over lengthy time scales, some aspects of the brain must have changed to better accommodate the learning parameters required by various cultural activities,” said Prof. Arnon Lotem, of TAU’s Department of Zoology, who led the research for the study. “The effect of culture on cognitive evolution is captured through small modifications of evolving learning and data acquisition mechanisms. Their coordinated action improves the brain network’s ability to support learning processes involved in such cultural phenomena as language or tool-making.”

    Prof. Lotem developed the new learning model in collaboration with Prof. Joseph Halpern and Prof. Shimon Edelman, both of Cornell University, and Dr. Oren Kolodny of Stanford University (formerly a PhD student at TAU). The research was recently published in PNAS.

    “Our new computational approach to studying human and animal cognition may explain how human culture shaped the evolution of human cognition and memory,” Prof. Lotem said. “The brain is not a rigid learning machine in which a particular event necessarily leads to another particular event. Instead, it functions according to coevolving mechanisms of learning and data acquisition, with certain memory parameters that jointly construct a complex network, capable of supporting a range of cognitive abilities.

    “Any change in these parameters may change the constructed network and thus the function of the brain,” Prof. Lotem said. “This is how small modifications can adapt our brain to ecological as well as to cultural changes. Our model reflects this.”

    To learn, the brain calculates statistics on the data it takes in from the environment, monitoring the distribution of data and determining the level of connections between them. The new learning model assumes a limited window of memory and constructs an associative network that represents the frequency of the connections between data items.

    “A computer remembers all the data it is fed. But our brain developed in a way that limits the quantity of data it can receive and remember,” said Prof. Lotem. “Our model hypothesizes that the brain does this ‘intentionally’ — that is, the mechanism of filtering the data from the surroundings is an integral element in the learning process. Moreover, a limited working memory may paradoxically be helpful in some cognitive tasks that require extensive computation. This may explain why our working memory is actually more limited than that of our closest relatives, chimpanzees.”

    Working with a large memory window imposes a far greater computational burden on the brain than working with a small window. Human language, for example, presents computational challenges. When we listen to a string of syllables, we need to scan a massive number of possible combinations to identify familiar words.

    But this is only a problem if the learner really needs to care about the exact order of data items, which is the case with language, according to Prof. Lotem. By contrast, to discriminate between two types of trees in a forest, a person only has to identify a small combination of typical features. The exact order of those features is not as important, so the computation is simpler and a larger working memory may be better.
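    As an illustration of the kind of mechanism described here, the sketch below counts co-occurrences within a limited memory window to build a small associative network from a toy syllable stream. It is not the authors’ actual model; the stream, names and window size are hypothetical.

    ```python
    # Toy associative network built from co-occurrence counts within a
    # limited memory window (illustrative; not the authors' actual model).
    from collections import defaultdict, deque

    def build_network(stream, window_size=3):
        """Count how often items co-occur within a sliding memory window."""
        network = defaultdict(lambda: defaultdict(int))
        window = deque(maxlen=window_size)     # the limited memory window
        for item in stream:
            for earlier in window:             # link the new item to items still in memory
                network[earlier][item] += 1
                network[item][earlier] += 1
            window.append(item)
        return network

    # Hypothetical syllable stream; a smaller window keeps the network sparse,
    # so fewer candidate combinations need to be scanned later.
    stream = "ba bi bu da ba bi bu di ba bi bu do".split()
    net = build_network(stream, window_size=3)
    print(dict(net["ba"]))   # strongest links go to the syllables heard near 'ba'
    ```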

    “Some of these principles that evolved in the biological brain may be useful in the development of AI someday,” Prof. Lotem said. “Currently the concept of limiting memory in order to improve computation is not something that people do in the field of AI, but perhaps they should try and see whether it can paradoxically be helpful in some cases, as in our human brain.”

    “Excluding very recent cultural innovations, the assumption that culture shaped the evolution of cognition is both more parsimonious and more productive than assuming the opposite,” the researchers concluded. They are currently examining how natural variations in learning and memory parameters may influence learning tasks that require extensive computation.


  9. Study suggests language development starts in the womb

    August 9, 2017 by Ashley

    From the University of Kansas press release:

    A month before they are born, fetuses carried by American mothers-to-be can distinguish between someone speaking to them in English and Japanese.

    Using non-invasive sensing technology from the University of Kansas Medical Center for the first time for this purpose, a group of researchers from KU’s Department of Linguistics has demonstrated this in-utero language discrimination. Their study, published in the journal NeuroReport, has implications for fetal research in other fields, the lead author says.

    “Research suggests that human language development may start really early — a few days after birth,” said Utako Minai, associate professor of linguistics and the team leader on the study. “Babies a few days old have been shown to be sensitive to the rhythmic differences between languages. Previous studies have demonstrated this by measuring changes in babies’ behavior; for example, by measuring whether babies change the rate of sucking on a pacifier when the speech changes from one language to a different language with different rhythmic properties,” Minai said.

    “This early discrimination led us to wonder when children’s sensitivity to the rhythmic properties of language emerges, including whether it may in fact emerge before birth,” Minai said. “Fetuses can hear things, including speech, in the womb. It’s muffled, like the adults talking in a ‘Peanuts’ cartoon, but the rhythm of the language should be preserved and available for the fetus to hear, even though the speech is muffled.”

    Minai said an earlier study had already suggested that fetuses can discriminate between different types of language based on rhythmic patterns, but none had used the more accurate device available at the Hoglund Brain Imaging Center at KU Medical Center: a magnetocardiogram (MCG).

    “The previous study used ultrasound to see whether fetuses recognized changes in language by measuring changes in fetal heart rate,” Minai explained. “The speech sounds that were presented to the fetus in the two different languages were spoken by two different people in that study. They found that the fetuses were sensitive to the change in speech sounds, but it was not clear if the fetuses were sensitive to the differences in language or the differences in speaker, so we wanted to control for that factor by having the speech sounds in the two languages spoken by the same person.”

    Two dozen women, averaging roughly eight months pregnant, were examined using the MCG.

    Kathleen Gustafson, a research associate professor in the Department of Neurology at the medical center’s Hoglund Brain Imaging Center, was part of the investigator team.

    “We have one of two dedicated fetal biomagnetometers in the United States,” Gustafson said. “It fits over the maternal abdomen and detects tiny magnetic fields that surround electrical currents from the maternal and fetal bodies.”

    That includes, Gustafson said, heartbeats, breathing and other body movements.

    “The biomagnetometer is more sensitive than ultrasound to the beat-to-beat changes in heart rate,” she said. “Obviously, the heart doesn’t hear, so if the baby responds to the language change by altering heart rate, the response would be directed by the brain.”

    Which is exactly what the recent study found.

    “The fetal brain is developing rapidly and forming networks,” Gustafson said. “The intrauterine environment is a noisy place. The fetus is exposed to maternal gut sounds, her heartbeats and voice, as well as external sounds. Without exposure to sound, the auditory cortex wouldn’t get enough stimulation to develop properly. This study gives evidence that some of that development is linked to language.”

    Minai had a bilingual speaker make two recordings, one each in English and Japanese, to be played in succession to the fetus. English and Japanese are argued to be rhythmically distinctive. English speech has a dynamic rhythmic structure resembling Morse code signals, while Japanese has a more regular-paced rhythmic structure.

    Sure enough, the fetuses’ heart rates changed when they heard the unfamiliar, rhythmically distinct language (Japanese) after a passage of English speech, but did not change when they were presented with a second passage of English instead of the passage in Japanese.
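    The logic behind such a result can be sketched as a comparison of heart-rate change across the two conditions. The numbers below are hypothetical and treated as paired for illustration; the study’s actual design and statistics may differ.

    ```python
    # Sketch of the comparison behind such a result (hypothetical numbers,
    # treated as paired for illustration; not the study's data or analysis).
    import numpy as np
    from scipy.stats import ttest_rel

    # Change in fetal heart rate (beats per minute) after the second passage
    change_language_switch = np.array([2.1, 3.4, 1.8, 2.9, 2.2, 3.0, 1.5, 2.6])   # English -> Japanese
    change_same_language = np.array([0.3, -0.2, 0.5, 0.1, -0.4, 0.6, 0.0, 0.2])   # English -> English

    t, p = ttest_rel(change_language_switch, change_same_language)
    print(f"paired t = {t:.2f}, p = {p:.4f}")
    ```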

    “The results came out nicely, with strong statistical support,” Minai said. “These results suggest that language development may indeed start in utero. Fetuses are tuning their ears to the language they are going to acquire even before they are born, based on the speech signals available to them in utero. Pre-natal sensitivity to the rhythmic properties of language may provide children with one of the very first building blocks in acquiring language.”

    “We think it is an extremely exciting finding for basic science research on language. We can also see the potential for this finding to apply to other fields.”


  10. Study suggests our brains synchronize during a conversation

    August 7, 2017 by Ashley

    From the SINC press release:

    The rhythms of the brainwaves of two people taking part in a conversation begin to match each other. This is the conclusion of a study published in the journal Scientific Reports and led by the Basque research centre BCBL. According to the scientists, this interbrain synchrony may be a key factor in understanding language and interpersonal communication.

    Something as simple as an everyday conversation causes the brains of the participants to begin to work in synchrony. This is the conclusion of a study carried out by the Basque Centre on Cognition, Brain and Language (BCBL), recently published in the journal Scientific Reports.

    Until now, most traditional research had suggested the hypothesis that the brain “synchronizes” according to what is heard, and correspondingly adjusts its rhythms to auditory stimuli.

    Now, the experts from this Donostia-based research centre have gone a step further and simultaneously analysed the complex neuronal activity of two strangers who hold a dialogue for the first time.

    The team, led by Alejandro Pérez, Manuel Carreiras and Jon Andoni Duñabeitia, has confirmed, by recording cerebral electrical activity, that the neuronal activity of two people involved in an act of communication “synchronizes” in order to allow a “connection” between the two subjects.

    “It involves interbrain communion that goes beyond language itself and may constitute a key factor in interpersonal relations and the understanding of language,” Jon Andoni Duñabeitia explains.

    Thus, the rhythms of the brainwaves corresponding to the speaker and the listener adjust according to the physical properties of the sound of the verbal messages expressed in a conversation. This creates a connection between the two brains, which begin to work together towards a common goal: communication.

    “The brains of the two people are brought together thanks to language, and communication creates links between people that go far beyond what we can perceive from the outside,” added the researcher from the Basque research centre. “We can find out if two people are having a conversation solely by analysing their brain waves.”

    What is neural synchrony?

    For the purposes of the study, the BCBL researchers used 15 dyads of same-sex participants, complete strangers to each other, separated by a folding screen. This ensured that any connection generated was genuinely due to the communication established.

    Following a script, the dyads held a general conversation and took turns playing the roles of speaker and listener.

    Through electroencephalography (EEG) — a non-invasive procedure that analyses electrical activity in the brain — the scientists measured the movement of their brainwaves simultaneously and confirmed that their oscillations took place at the same time.
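    One widely used measure of this kind of oscillatory alignment between two EEG signals is the phase-locking value. The sketch below computes it for a pair of simulated signals; the function, frequency band and signals are hypothetical, and the study’s actual synchrony metric may differ.

    ```python
    # Phase-locking value (PLV) between two signals in a frequency band
    # (a common synchrony measure; the BCBL study's exact metric may differ).
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def plv(x, y, fs, band=(8, 12)):
        """PLV between signals x and y in a band: 1 = perfectly phase-locked,
        0 = no consistent phase relation."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        phase_x = np.angle(hilbert(filtfilt(b, a, x)))
        phase_y = np.angle(hilbert(filtfilt(b, a, y)))
        return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

    # Hypothetical 10-second recordings from one speaker and one listener
    fs = 250
    t = np.arange(0, 10, 1 / fs)
    speaker = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
    listener = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * np.random.randn(t.size)
    print(f"alpha-band PLV: {plv(speaker, listener, fs):.2f}")
    ```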

    “To be able to know if two people are talking between themselves, and even what they are talking about, based solely on their brain activity is something truly marvellous. Now we can explore new applications, which are highly useful in special communicative contexts, such as the case of people who have difficulties with communication,” Duñabeitia pointed out.

    In the future, understanding this interaction between two brains could allow researchers to analyse very complex aspects of psychology, sociology, psychiatry or education using neuroimaging in an ecological, real-world context.

    “Demonstrating the existence of neural synchrony between two people involved in a conversation has only been the first step,” confirmed Alejandro Pérez. “There are many unanswered questions and challenges left to resolve.”

    Pérez further maintains that the practical potential of the study is enormous. “Problems with communication occur every day. We are planning to get the most out of this discovery of interbrain synchronization with the goal of improving communication,” he concluded.

    The next step for the researchers will be to learn, by applying the same technique and pair dynamic, if the brains of two people “synchronize” in the same way when the conversation takes place in their non-native language.