  1. Study looks at timbre shifting in “baby talk”

    October 17, 2017 by Ashley

    From the Cell Press press release:

    When talking with their young infants, parents instinctively use “baby talk,” a unique form of speech including exaggerated pitch contours and short, repetitive phrases. Now, researchers reporting in Current Biology on October 12 have found another unique feature of the way mothers talk to their babies: they shift the timbre of their voice in a rather specific way. The findings hold true regardless of a mother’s native language.

    “We use timbre, the tone color or unique quality of a sound, all the time to distinguish people, animals, and instruments,” says Elise Piazza from Princeton University. “We found that mothers alter this basic quality of their voices when speaking to infants, and they do so in a highly consistent way across many diverse languages.”

    Timbre is the reason it’s so easy to discern idiosyncratic voices — the famously velvety sound of Barry White, the nasal tone of Gilbert Gottfried, and the gravelly sound of Tom Waits — even if they’re all singing the same note, Piazza explains.

    Piazza and her colleagues at the Princeton Baby Lab, including Marius Catalin Iordan and Casey Lew-Williams, are generally interested in the way children learn to detect structure in the voices around them during early language acquisition. In the new study, they decided to focus on the vocal cues that parents adjust during baby talk without even realizing they’re doing it.

    The researchers recorded 12 English-speaking mothers while they played with and read to their 7- to 12-month-old infants. They also recorded those mothers while they spoke to another adult.

    After quantifying each mother’s unique vocal fingerprint using a concise measure of timbre, the researchers found that a computer could reliably tell the difference between infant- and adult-directed speech. In fact, using an approach called machine learning, the researchers found that a computer could learn to differentiate baby talk from normal speech based on just one second of speech data. The researchers verified that those differences couldn’t be explained by pitch or background noise.
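
    The release does not specify the timbre measure or the classifier the team used, so the following is only a rough sketch of the general recipe: summarize each one-second clip with MFCC statistics as a stand-in for the “vocal fingerprint” and train an off-the-shelf classifier on the infant- vs. adult-directed labels. The file names, the use of librosa and scikit-learn, and the MFCC choice are illustrative assumptions, not details from the study.

        # Rough sketch only: MFCC means as a stand-in timbre fingerprint, plus a
        # linear classifier. The measure and classifier actually used in the study
        # are not given in the release; file paths below are hypothetical.
        import numpy as np
        import librosa
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        def timbre_fingerprint(path, offset=0.0, duration=1.0):
            """Summarize ~1 s of audio as mean MFCCs (a crude proxy for vocal timbre)."""
            y, sr = librosa.load(path, sr=16000, offset=offset, duration=duration)
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
            return mfcc.mean(axis=1)  # one 13-dimensional vector per clip

        # Hypothetical labeled clips: 0 = adult-directed, 1 = infant-directed.
        clips = [("mother01_adult.wav", 0), ("mother01_infant.wav", 1)]  # ...many more in practice
        X = np.array([timbre_fingerprint(path) for path, _ in clips])
        y = np.array([label for _, label in clips])

        clf = SVC(kernel="linear")
        scores = cross_val_score(clf, X, y, cv=5)  # needs enough clips per class
        print("cross-validated accuracy:", scores.mean())

    The cross-language test described further down would then amount to fitting the classifier on the English recordings and scoring it on the non-English ones (and vice versa), i.e. a fit on one set and a score on the other.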

    The next question was whether those differences would hold true in mothers speaking other languages. The researchers enlisted another group of 12 mothers who together spoke nine other languages: Spanish, Russian, Polish, Hungarian, German, French, Hebrew, Mandarin, and Cantonese. Remarkably, they found that the timbre shift observed in English-speaking mothers was highly consistent across those languages from around the world.

    “The machine learning algorithm, when trained on English data alone, could immediately distinguish adult-directed from infant-directed speech in a test set of non-English recordings and vice versa when trained on non-English data, showing strong generalizability of this effect across languages,” Piazza says. “Thus, shifts in timbre between adult-directed and infant-directed speech may represent a universal form of communication that mothers implicitly use to engage their babies and support their language learning.”

    The researchers say the next step is to explore how the timbre shift supports infant learning. They suspect that this unique timbre fingerprint could help babies learn to pick out their mother’s voice and direct their attention to it from the time they are born.

    And don’t worry, dads. While the study was done in mothers to keep the pitches more consistent across study participants, the researchers say it’s likely the results will apply to fathers, too.


  2. Study suggests reading difficulties in children may sometimes be linked to hearing problems

    by Ashley

    From the Coventry University press release:

    Children with reading difficulties should be more thoroughly screened for hearing problems, a new report by Coventry University academics has said.

    The study, funded by the Nuffield Foundation, found 25 per cent of its young participants who had reading difficulties showed mild or moderate hearing impairment, of which their parents and teachers were unaware.

    The researchers believe that if there was more awareness of youngsters’ hearing problems — as well as an understanding of what particular aspects of literacy they struggled with — then the children might be able to receive more structured support that could help them improve their reading and writing skills.

    The study by academics at the university’s Centre for Advances in Behavioural Science compared children with dyslexia to youngsters who had a history of repeated ear infections to see if they had a similar pattern of literacy difficulties.

    A total of 195 children aged between eight and 10 — including 36 with dyslexia and 29 with a history of repeated ear infections — completed a series of tests to establish their reading and writing skills and how they used the structures of words based on their sounds and meanings, in speech and literacy.

    They were retested 18 months later, when a hearing screening was also carried out.

    None of the parents of the children with dyslexia reported any knowledge of hearing loss before the tests, but the screening showed that nine of the 36 children had some form of hearing loss.

    Around one third of the children who had repeated ear infections had problems with reading and writing, although the researchers suggest repeat ear infections will only result in reading difficulties when accompanied by weaknesses in other areas.

    The results showed that children with dyslexia have different patterns of literacy difficulties to children with a history of repeat ear infections, although there is some overlap between the groups.

    Children with dyslexia had difficulties with literacy activities involving the ability to manipulate speech sounds (known as phonology) and the knowledge of grammatical word structure (called morphology).

    The academics said these youngsters need to be taught how to use morphology in a highly structured, step-by-step way to help them improve their literacy skills.

    Children with a history of repeated ear infections mainly had problems with the phonology tasks, showing that they still had subtle difficulties with the perception of spoken language.

    The academics suggested that teachers should be made aware if youngsters have had a history of repeated ear infections, so they can consider the possibility of any hearing loss and understand how the consequences of these infections may impact on children as they learn about the sound structure of words and begin to read.

    Children currently have their hearing tested as babies and, in some areas of the UK, when they start school. Even so, later-onset deafness can occur at any time, and GPs can arrange for a child to have a hearing test at any age if a parent or teacher has concerns.

    But the academics believe that more regular, detailed tests might help youngsters with literacy problems.

    Report author Dr Helen Breadmore said:

    “Many children in school may have an undetected mild hearing loss, which makes it harder for them to access the curriculum.

    “Current hearing screening procedures are not picking up these children, and we would advise that children have their hearing tested in more detail and more often.

    “A mild-moderate hearing loss will make the perception of speech sounds difficult, particularly in a classroom environment with background noise and other distractions. Therefore, children who have suffered repeated ear infections and associated hearing problems have fluctuating access to different speech sounds precisely at the age when this information is crucial in the early stages of learning to read.”


  3. Study suggests it is easier for bilinguals to pick up new languages

    October 12, 2017 by Ashley

    From the Georgetown University Medical Center press release:

    It is often claimed that people who are bilingual are better than monolinguals at learning languages. Now, the first study to examine bilingual and monolingual brains as they learn an additional language offers new evidence that supports this hypothesis, researchers say.

    The study, conducted at Georgetown University Medical Center and published in the journal Bilingualism: Language and Cognition, suggests that early bilingualism helps with learning languages later in life.

    “The difference is readily seen in language learners’ brain patterns. When learning a new language, bilinguals rely more than monolinguals on the brain processes that people naturally use for their native language,” says the study’s senior researcher, Michael T. Ullman, PhD, professor of neuroscience at Georgetown.

    “We also find that bilinguals appear to learn the new language more quickly than monolinguals,” says lead author Sarah Grey, PhD, an assistant professor in the department of modern languages and literatures at Fordham University. Grey worked with Ullman and co-author Cristina Sanz, PhD, on this study for her PhD research at Georgetown. Sanz is a professor of applied linguistics at Georgetown.

    The 13 bilingual college students enrolled in this study grew up in the U.S. with Mandarin-speaking parents, and learned both English and Mandarin at an early age. The matched comparison group consisted of 16 monolingual college students, who spoke only English fluently.

    The researchers studied Mandarin-English bilinguals because both of these languages differ structurally from the new language being learned. The new language was a well-studied artificial version of a Romance language, Brocanto2, that participants learned to both speak and understand. Using an artificial language allowed the researchers to completely control the learners’ exposure to the language.

    The two groups were trained on Brocanto2 over the course of about a week. At both earlier and later points of training, learners’ brain patterns were examined with electroencephalogram (EEG) electrodes on their scalps, while they listened to Brocanto2 sentences. This captures the natural brain-wave activity as the brain processes language.

    They found clear bilingual/monolingual differences. By the end of the first day of training, the bilingual brains, but not the monolingual brains, showed a specific brain-wave pattern, termed the P600. P600s are commonly found when native speakers process their language. In contrast, the monolinguals only began to exhibit P600 effects much later during learning — by the last day of training. Moreover, on the last day, the monolinguals showed an additional brain-wave pattern not usually found in native speakers of a language.

    “There has been a lot of debate about the value of early bilingual language education,” says Grey. “Now, with this small study, we have novel brain-based data that points towards a distinct language-learning benefit for people who grow up bilingual.”


  4. Study suggests something universal occurs in the brain when it processes stories, regardless of language

    October 10, 2017 by Ashley

    From the University of Southern California press release:

    New brain research by USC scientists shows that reading stories is a universal experience that may result in people feeling greater empathy for each other, regardless of cultural origins and differences.

    And in what appears to be a first for neuroscience, USC researchers have found patterns of brain activation when people find meaning in stories, regardless of their language. Using functional MRI, the scientists mapped brain responses to narratives in three different languages — English, Farsi and Mandarin Chinese.

    The USC study opens up the possibility that exposure to narrative storytelling can have a widespread effect on triggering better self-awareness and empathy for others, regardless of the language or origin of the person being exposed to it.

    “Even given these fundamental differences in language, which can be read in a different direction or contain a completely different alphabet altogether, there is something universal about what occurs in the brain at the point when we are processing narratives,” said Morteza Dehghani, the study’s lead author and a researcher at the Brain and Creativity Institute at USC.

    Dehghani is also an assistant professor of psychology at the USC Dornsife College of Letters, Arts and Sciences, and an assistant professor of computer science at the USC Viterbi School of Engineering.

    The study was published on Sept. 20 in the journal Human Brain Mapping.

    Making sense of 20 million personal anecdotes

    The researchers sorted through more than 20 million blog posts of personal stories using software developed at the USC Institute for Creative Technologies. The posts were narrowed down to 40 stories about personal topics such as divorce or telling a lie.

    They were then translated into Mandarin Chinese and Farsi, and read by a total of 90 American, Chinese and Iranian participants in their native language while their brains were scanned by MRI. The participants also answered general questions about the stories while being scanned.

    Using state-of-the-art machine learning and text-analysis techniques, and an analysis involving over 44 billion classifications, the researchers were able to “reverse engineer” the data from these brain scans to determine the story the reader was processing in each of the three languages. In effect, the neuroscientists were able to read the participants’ minds as they were reading.

    The brain is not resting

    In each language, reading each story resulted in unique patterns of activation in the “default mode network” of the brain. This network engages interconnected brain regions such as the medial prefrontal cortex, the posterior cingulate cortex, the inferior parietal lobe, the lateral temporal cortex and the hippocampal formation.

    The default mode network was originally thought to be a sort of autopilot for the brain at rest, active only when someone is not engaged in externally directed thinking. Continued studies, including this one, suggest that the network is in fact working behind the scenes while the mind is ostensibly at rest, continually finding meaning in narrative and serving an autobiographical memory retrieval function that influences how we think about the past, the future, ourselves and our relationships to others.

    “One of the biggest mysteries of neuroscience is how we create meaning out of the world. Stories are deep-rooted in the core of our nature and help us create this meaning,” said Jonas Kaplan, corresponding author at the Brain and Creativity Institute and an assistant professor of psychology at USC Dornsife.


  5. Study finds that people can more easily communicate warmer colors than cool ones

    September 26, 2017 by Ashley

    From the Massachusetts Institute of Technology press release:

    The human eye can perceive millions of different colors, but the number of categories human languages use to group those colors is much smaller. Some languages use as few as three color categories (words corresponding to black, white, and red), while the languages of industrialized cultures use up to 10 or 12 categories.

    In a new study, MIT cognitive scientists have found that languages tend to divide the “warm” part of the color spectrum into more color words, such as orange, yellow, and red, compared to the “cooler” regions, which include blue and green. This pattern, which they found across more than 100 languages, may reflect the fact that most objects that stand out in a scene are warm-colored, while cooler colors such as green and blue tend to be found in backgrounds, the researchers say.

    This leads to more consistent labeling of warmer colors by different speakers of the same language, the researchers found.

    “When we look at it, it turns out it’s the same across every language that we studied. Every language has this amazing similar ordering of colors, so that reds are more consistently communicated than greens or blues,” says Edward Gibson, an MIT professor of brain and cognitive sciences and the first author of the study, which appears in the Proceedings of the National Academy of Sciences the week of Sept. 18.

    The paper’s other senior author is Bevil Conway, an investigator at the National Eye Institute (NEI). Other authors are MIT postdoc Richard Futrell, postdoc Julian Jara-Ettinger, former MIT graduate students Kyle Mahowald and Leon Bergen, NEI postdoc Sivalogeswaran Ratnasingam, MIT research assistant Mitchell Gibson, and University of Rochester Assistant Professor Steven Piantadosi.

    Color me surprised

    Gibson began this investigation of color after accidentally discovering during another study that there is a great deal of variation in the way colors are described by members of the Tsimane’, a tribe that lives in remote Amazonian regions of Bolivia. He found that most Tsimane’ consistently use words for white, black, and red, but there is less agreement among them when naming colors such as blue, green, and yellow.

    Working with Conway, who was then an associate professor studying visual perception at Wellesley College, Gibson decided to delve further into this variability. The researchers asked about 40 Tsimane’ speakers to name 80 color chips, which were evenly distributed across the visible spectrum of color.

    Once they had these data, the researchers applied an information theory technique that allowed them to calculate a feature they called “surprisal,” which is a measure of how consistently different people describe, for example, the same color chip with the same color word.

    When a particular word (such as “blue” or “green”) is applied to many different color chips, each of those chips carries higher surprisal, because the word does less to pin down which chip was meant. Chips that people consistently label with a single word therefore have low surprisal, while chips that different people label with different words have higher surprisal. The researchers found that the color chips labeled in Tsimane’, English, and Spanish were all ordered such that cool-colored chips had higher average surprisal than warm-colored chips (reds, yellows, and oranges).
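
    The release does not give the formula behind “surprisal,” but one natural reading of the description above — offered here only as an assumption — is the expected number of bits a listener needs to recover a chip from the word a speaker used for it. A toy sketch with made-up naming counts:

        # Toy sketch of a chip-level surprisal score from color-naming counts.
        # Assumed (not quoted from the study): a chip's surprisal is the expected
        # -log2 probability of recovering that chip given the word used for it, so
        # chips named consistently with a chip-specific word score low, and chips
        # covered by broad or inconsistent words score high. Counts are made up.
        import math
        from collections import Counter, defaultdict

        # counts[(chip, word)] = how many participants used `word` for `chip`
        counts = Counter({
            ("warm_chip_01", "red"): 38, ("warm_chip_01", "orange"): 2,
            ("cool_chip_07", "blue"): 20, ("cool_chip_07", "green"): 20,
            ("cool_chip_12", "blue"): 22, ("cool_chip_12", "green"): 18,
        })

        chip_totals = defaultdict(int)   # how often each chip was named
        word_totals = defaultdict(int)   # how often each word was used, over all chips
        for (chip, word), n in counts.items():
            chip_totals[chip] += n
            word_totals[word] += n

        def surprisal(chip):
            total = 0.0
            for (c, word), n in counts.items():
                if c != chip:
                    continue
                p_word_given_chip = n / chip_totals[chip]
                p_chip_given_word = n / word_totals[word]
                total += p_word_given_chip * -math.log2(p_chip_given_word)
            return total

        for chip in sorted(chip_totals):
            print(chip, round(surprisal(chip), 2))  # cool chips come out higher here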

    The researchers then compared their results to data from the World Color Survey, which performed essentially the same task for 110 languages around the world, all spoken by nonindustrialized societies. Across all of these languages, the researchers found the same pattern.

    This reflects the fact that while the warm colors and cool colors occupy a similar amount of space in a chart of the 80 colors used in the test, most languages divide the warmer regions into more color words than the cooler regions. Therefore, there are many more color chips that most people would call “blue” than there are chips that people would define as “yellow” or “red.”

    “What this means is that human languages divide that space in a skewed way,” Gibson says. “In all languages, people preferentially bring color words into the warmer parts of the space and they don’t bring them into the cooler colors.”

    Colors in the forefront

    To explore possible explanations for this trend, the researchers analyzed a database of 20,000 images collected and labeled by Microsoft, and they found that objects in the foreground of a scene are more likely to be a warm color, while cooler colors are more likely to be found in backgrounds.
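
    The release does not say how “warm” was operationalized or how the Microsoft image labels were used; a minimal sketch of the idea, with a crude hue threshold and a hypothetical image-plus-foreground-mask layout standing in for the real dataset, might look like this:

        # Sketch: do foreground pixels skew toward warm hues? The hue cutoff, the
        # saturation filter, and the file layout are illustrative assumptions only.
        import numpy as np
        from PIL import Image

        def warm_fraction(rgb, mask):
            """Fraction of masked pixels whose hue falls in a rough red-to-yellow band."""
            hsv = np.array(Image.fromarray(rgb).convert("HSV"), dtype=float)
            hue = hsv[..., 0] * 360.0 / 255.0        # PIL stores hue as 0..255
            sat = hsv[..., 1] / 255.0
            warm = ((hue < 90.0) | (hue > 330.0)) & (sat > 0.2)   # skip near-gray pixels
            return float(warm[mask].mean()) if mask.any() else float("nan")

        rgb = np.array(Image.open("scene_0001.jpg").convert("RGB"))    # hypothetical paths
        mask = np.array(Image.open("scene_0001_mask.png")) > 0         # binary foreground mask
        print("foreground warm fraction:", warm_fraction(rgb, mask))
        print("background warm fraction:", warm_fraction(rgb, ~mask))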

    “Warm colors are in the foreground, they’re all the stuff that we interact with and want to talk about,” Gibson says. “We need to be able to talk about things which are identical except for their color: objects.”

    Gibson now hopes to study languages spoken by societies found in snowy or desert climates, where background colors are different, to see if their color naming system is different from what he found in this study.


  6. Study suggests the bilingual brain calculates differently depending on the language used

    September 22, 2017 by Ashley

    From the University of Luxembourg press release:

    People can intuitively recognise small numbers up to four; when calculating, however, they depend on the assistance of language. This raises a fascinating research question: how do multilingual people solve arithmetic tasks presented to them in different languages of which they have a very good command? The question will only gain in importance, as an increasingly globalised job market and accelerated migration mean that ever more people seek work and study outside the linguistic area of their home countries.

    This question was investigated by a research team led by Dr Amandine Van Rinsveld and Professor Dr Christine Schiltz from the Cognitive Science and Assessment Institute (COSA) at the University of Luxembourg. For the study, the researchers recruited subjects whose mother tongue is Luxembourgish, who had completed their schooling in the Grand Duchy of Luxembourg and then continued their academic studies at francophone universities in Belgium. The subjects therefore had an excellent command of both German and French: as students in Luxembourg, they had taken maths classes in German in primary school and in French in secondary school.

    In two separate test situations, the study participants had to solve both very simple and somewhat more complex addition tasks, in German and in French. The tests showed that the subjects solved the simple addition tasks equally well in both languages. For the complex additions in French, however, they needed more time than for identical tasks in German, and they also made more errors when solving the tasks in French.

    During the tests, functional magnetic resonance imaging (fMRI) was used to measure the subjects’ brain activity. It showed that different brain regions were activated depending on the language used. For addition tasks in German, a small speech region in the left temporal lobe was activated. For complex calculation tasks in French, additional brain areas responsible for processing visual information were also involved, indicating that the subjects fell back on figurative, visual thinking during the complex calculations in French. The experiments provided no evidence that the subjects translated the French tasks into German in order to solve them. While the subjects could solve the German tasks using the classic, familiar numerical-verbal brain areas, this system proved insufficient in the second language of instruction, in this case French: to solve the arithmetic tasks in French, the subjects had to fall back systematically on other thought processes not previously observed in monolingual people.

    The study documents for the first time, with the help of brain activity measurements and imaging techniques, the demonstrable cognitive “extra effort” required to solve arithmetic tasks in the second language of instruction. The results clearly show that calculation processes are directly affected by language.


  7. Study suggests conversation is faster when words are accompanied by gestures

    September 19, 2017 by Ashley

    From the Springer press release:

    When someone asks a question during a conversation, their conversation partner answers more quickly if the questioner also moves their hands or head to accompany their words. These are the findings of a study led by Judith Holler of the Max Planck Institute for Psycholinguistics and Radboud University Nijmegen in the Netherlands. The study is published in Springer’s journal Psychonomic Bulletin & Review and focusses on how gestures influence language processing.

    The transition between turns taken during a conversation is astonishingly fast, with a mere 200 milliseconds typically elapsing between one speaker’s contribution and the next. Such speed means that people must be able to comprehend, produce and coordinate their contributions to a conversation in good time.

    To study the role of gestures during conversation, Holler and her colleagues, Kobin Kendrick and Stephen Levinson, analyzed the interaction of seven groups of three participants. The groups were left alone in a recording suite for twenty minutes, during which their interaction was filmed with three high-definition video cameras. The researchers analyzed the question-response sequences in particular because these are so prevalent in conversations. Holler and her team found that there was a strong visual component to most questions being asked and answered during the conversations. These took the form of bodily signals such as communicative head or hand movements.

    “Bodily signals appear to profoundly influence language processing in interaction,” says Holler. “Questions accompanied by gestures lead to shorter turn transition times — that is, to faster responses — than questions without gestures, and responses come even earlier when gestures end before compared to after the question turn has ended.”

    This means that gestures that end early may give us an early visual cue that the speaker is about to end, thus helping us to respond faster. But, at least for those cases in which gestures did not end early, it also means that the additional information conveyed by head and hand gestures may help us process or predict what is being said in conversation.

    “The empirical findings presented here provide a first glimpse of the possible role of the body in the psycholinguistic processes underpinning human communication,” explains Holler. “They also provide a stepping stone for investigating these processes and mechanisms in much more depth in the future.”


  8. Is changing languages effortful for bilingual speakers? Depends on the situation

    September 15, 2017 by Ashley

    From the New York University press release:

    Research on the neurobiology of bilingualism has suggested that switching languages is inherently effortful, requiring executive control to manage cognitive functions, but a new study shows this is only the case when speakers are prompted, or forced, to do so.

    In fact, this latest work finds that switching languages when conversing with another bilingual individual — a circumstance when switches are typically voluntary — does not require any more executive control than when continuing to speak the same language.

    The findings appear in the Journal of Neuroscience.

    “For a bilingual human, every utterance requires a choice about which language to use,” observes senior author Liina Pylkkänen, a professor in New York University’s Department of Linguistics and Department of Psychology. “Our findings show that circumstances influence bilingual speakers’ brain activity when making language switches.”

    “Bilingualism is an inherently social phenomenon, with the nature of our interactions determining language choice,” adds lead author Esti Blanco-Elorrieta, an NYU doctoral candidate. “These results make clear that even though we may switch between languages in which we are fluent, our brains respond differently, depending on what spurs such changes.”

    Historically, research on the neuroscience of bilingualism has asked speakers to associate languages with a cue that bears no natural association to the language, such as a color, and to then name pictures in the language indicated by the color cue. However, this type of experiment doesn’t capture the real-life experience of a bilingual speaker — given experimental parameters, it artificially prompts, or forces, the speakers to speak a particular language. By contrast, in daily interactions, language choice is determined on the basis of social cues or ease of access to certain vocabulary items in one language vs. another.

    This distinction raises the possibility that our brains don’t have to work as hard when changing languages in more natural settings.

    In an effort to understand the neural activity of bilingual speakers in both kinds of circumstance, the researchers used magnetoencephalography (MEG), a technique that maps neural activity by recording the magnetic fields generated by the electrical currents our brains produce. They studied Arabic-English bilingual speakers in a variety of conversational situations, ranging from completely artificial scenarios, much like earlier experiments, to fully natural conversations: real exchanges between undergraduates who had agreed to be mic’d for a portion of their day on campus.

    Their results showed marked distinctions between artificial and more natural settings. Specifically, the brain areas for executive, or cognitive, control — the anterior cingulate and prefrontal cortex — were less involved during language changes in the natural setting than they were in the artificial setting. In fact, when the study’s subjects were free to switch languages whenever they wanted, they did not engage these areas at all.

    Furthermore, in a listening mode, language switches in the artificial setting required an expansive tapping of the brain’s executive control areas; however, language switching while listening to a natural conversation engaged only the auditory cortices.

    In other words, the neural cost to switch languages was much lighter during a conversation — when speakers chose which language to speak — than in a classic laboratory task, in which language choice was dictated by artificial cues.

    “This work gets us closer to understanding the brain basis of bilingualism as opposed to language switching in artificial laboratory tasks,” observes Pylkkänen.

    The study shows that the role of executive control in language switching may be much smaller than previously thought.

    This is important, the researchers note, for theories about “bilingual advantage,” which posit that bilinguals have superior executive control because they switch language frequently. These latest results suggest that the advantage may only arise for bilinguals who need to control their languages according to external constraints (such as the person they are speaking to) and would not occur by virtue of a life experience in a bilingual community where switching is fully free.

    The research was supported by a grant from the NYU Abu Dhabi Institute.


  9. Communicating in a foreign language takes emotion out of decision-making

    September 1, 2017 by Ashley

    From the University of Chicago press release:

    If you could save the lives of five people by pushing another bystander in front of a train to his death, would you do it? And should it make any difference if that choice is presented in a language you speak, but isn’t your native tongue?

    Psychologists at the University of Chicago found in past research that people facing such a dilemma while communicating in a foreign language are far more willing to sacrifice the bystander than those using their native tongue. In a paper published Aug. 14 in Psychological Science, the UChicago researchers take a major step toward understanding why that happens.

    “Until now, we and others have described how using a foreign language affects the way that we think,” said Boaz Keysar, the UChicago psychology professor in whose lab the research was conducted. “We always had explanations, but they were not tested directly. This is really the first paper that explains why, with evidence.”

    Through a series of experiments, Keysar and his colleagues explore whether the decision people make in the train dilemma is due to a reduction in the emotional aversion to breaking an ingrained taboo, an increase in the deliberation thought to be associated with a utilitarian sense of maximizing the greater good, or some combination of the two.

    “We discovered that people using a foreign language were not any more concerned with maximizing the greater good,” said lead author Sayuri Hayakawa, a UChicago doctoral student in psychology, “but rather were less averse to violating the taboos that can interfere with making utility-maximizing choices.”

    The researchers, including Albert Costa and Joanna Corey from Pompeu Fabra University in Barcelona, propose that using a foreign language gives people some emotional distance, which allows them to take the more utilitarian action.

    “I thought it was very surprising,” Keysar said. “My prediction was that we’d find that the difference is in how much they care about the common good. But it’s not that at all.”

    Studies from around the world suggest that using a foreign language makes people more utilitarian. Speaking a foreign language slows you down and requires that you concentrate to understand. Scientists have hypothesized that the result is a more deliberative frame of mind that makes the utilitarian benefit of saving five lives outweigh the aversion to pushing a man to his death.

    But Keysar’s own experience speaking a foreign language — English — gave him the sense that emotion was important. English just didn’t have the same visceral resonance for him as his native Hebrew. It wasn’t as intimately connected to emotion, a feeling shared by many bilingual people and corroborated by numerous lab studies.

    “Your native language is acquired from your family, from your friends, from television,” Hayakawa said. “It becomes infused with all these emotions.”

    Foreign languages are often learned later in life in classrooms, and may not activate feelings, including aversive feelings, as strongly.

    The problem is that either the “more utilitarian” or the “less emotional” process would produce the same behavior. To help figure out which was actually responsible, the psychologists worked with David Tannenbaum, a postdoctoral research fellow at the University of Chicago Booth School of Business at the time of the research and now an assistant professor at the University of Utah.

    Tannenbaum is an expert at a technique called process dissociation, which allows researchers to tease out and measure the relative importance of different factors in a decision process. For the paper, the researchers did six separate studies with six different groups, including native speakers of English, German and Spanish. Each also spoke one of the other languages, so that all possible combinations were equally represented. Each person was randomly assigned to use either his or her native language or second language throughout the experiment.

    Participants read an array of paired scenarios that varied systematically in key ways. For example, instead of killing a man to save five people from death, they might be asked if they would kill him to save five people from minor injuries. The taboo act of killing the man is the same, but the consequences vary.

    “If you have enough of these paired scenarios, you can start gauging what are the factors that people are paying attention to,” Hayakawa said. “We found that people using a foreign language were not paying any more attention to the lives saved, but definitely were less averse to breaking these kinds of rules. So if you ask the classic question, ‘Is it the head or the heart?’ It seems that the foreign language gets to the heart.”
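
    The release does not spell out the process-dissociation equations. Under the standard two-parameter model commonly used for moral dilemmas — an assumption here, not a detail from the paper — the arithmetic behind separating “the head” from “the heart” is short. The rejection rates below are made up, chosen only to mirror the reported pattern (similar concern for outcomes, weaker taboo aversion in the foreign language):

        # Sketch of the process-dissociation arithmetic, under the standard
        # two-parameter model (an assumption; the release gives no formulas).
        # "Congruent" dilemmas: the taboo act brings only small benefits, so both
        # outcome-based and taboo-averse reasoning say "reject it".
        # "Incongruent" dilemmas: the act saves lives, so the two pull apart.
        def process_dissociation(p_reject_congruent, p_reject_incongruent):
            # U: sensitivity to consequences (rejects harm when benefits are small,
            #    accepts it when benefits are large)
            u = p_reject_congruent - p_reject_incongruent
            # D: aversion to the taboo act itself, regardless of consequences
            d = p_reject_incongruent / (1.0 - u) if u < 1.0 else float("nan")
            return u, d

        # Made-up rejection rates, for illustration only:
        print(process_dissociation(0.90, 0.40))  # native language:  U = 0.50, D = 0.80
        print(process_dissociation(0.75, 0.25))  # foreign language: U = 0.50, D = 0.50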

    The researchers are next looking at why that is. Does using a foreign language blunt people’s mental visualization of the consequences of their actions, contributing to their increased willingness to make the sacrifice? And do they create less mental imagery because of differences in how foreign language use affects which memories come to mind?

    The researchers are also starting to investigate whether their lab results apply in real-world situations where the stakes are high. A study Keysar’s team is initiating in Israel looks at whether the parties in a peace negotiation assess the same proposal differently if they see it in their own language or the language of their negotiating partner. And Keysar is interested in looking at whether language can be usefully considered in decisions made by doctors speaking a foreign language.

    “You might be able to predict differences in medical decision-making depending on the language that you use,” he said. “In some cases you might prefer a stronger emotional engagement, in some you might not.”


  10. What does music mean? Sign language may offer an answer

    August 28, 2017 by Ashley

    From the New York University press release:

    How do we detect the meaning of music? We may gain some insights by looking at an unlikely source, sign language, a newly released linguistic analysis concludes.

    “Musicians and music lovers intuitively know that music can convey information about an extra-musical reality,” explains author Philippe Schlenker, a senior researcher at Institut Jean-Nicod within France’s National Center for Scientific Research (CNRS) and a Global Distinguished Professor at New York University. “Music does so by way of abstract musical animations that are reminiscent of iconic, or pictorial-like, components of meaning that are common in sign language, but rare in spoken language.”

    The analysis, “Outline of Music Semantics,” appears in the journal Music Perception; it is available, with sound examples, here: http://ling.auf.net/lingbuzz/002942. A longer piece that discusses the connection with iconic semantics is forthcoming in the Review of Philosophy & Psychology (“Prolegomena to Music Semantics”).

    Schlenker acknowledges that spoken language also deploys iconic meanings; for example, saying that a lecture was ‘loooong’ gives a very different impression from just saying that it was ‘long.’ However, these meanings are relatively marginal in the spoken word; by contrast, he observes, they are pervasive in sign languages, which have the same general grammatical and logical rules as do spoken languages, but also far richer iconic rules.

    Drawing inspiration from sign language iconicity, Schlenker proposes that the diverse inferences drawn from musical sources are combined by way of abstract iconic rules. Here, music can mimic a reality, creating a “fictional source” for what is perceived to be real. As an example, he points to composer Camille Saint-Saëns’s “The Carnival of the Animals” (1886), which aims to capture the physical movement of tortoises.

    “When Saint-Saëns wanted to evoke tortoises in ‘The Carnival of the Animals,’ he not only used a radically slowed-down version of a high-energy dance, the Can-Can,” Schlenker notes. “He also introduced a dissonance to suggest that the hapless animals were tripping, an effect obtained due to the sheer instability of the jarring chord.”

    In his work, Schlenker broadly considers how we understand music and, in doing so, how we derive meaning through the fictional sources that it creates.

    “We draw all sorts of inferences about fictional sources of the music when we are listening,” he explains. “Lower pitch is, for instance, associated with larger sound sources, a standard biological code in nature. So, a double bass will more easily evoke an elephant than a flute would. Or, if the music slows down or becomes softer, we naturally infer that a piece’s fictional source is losing energy, just as we would in our daily, real-world experiences. Similarly, a higher pitch may signify greater energy (a physical code) or greater arousal, which is a biological code.”

    Fictional sources may be animate or inanimate, Schlenker adds, and their behavior may be indicative of emotions, which play a prominent role in musical meaning.

    “More generally, it is no accident that one often signals the end of a classical piece by simultaneously playing more slowly, more softly, and with a musical movement toward more consonant chords,” he says. “These are natural ways to indicate that the fictional source is gradually losing energy and reaching greater repose.”

    In his research, Schlenker worked with composer Arthur Bonetto to create minimal modifications of well-known music snippets to understand the source of the meaning effects they produce. This analytical method of ‘minimal pairs,’ borrowed from linguistics and experimental psychology, Schlenker posits, could be applied to larger musical excerpts in the future.