1. Study suggests bilingualism may increase cognitive flexibility in kids with Autism Spectrum Disorders (ASD)

    January 21, 2018 by Ashley

    From the McGill University press release:

    Children with Autism Spectrum Disorders (ASD) often have a hard time switching gears from one task to another. But being bilingual may actually make it a bit easier for them to do so, according to a new study which was recently published in Child Development.

    “This is a novel and surprising finding,” says Prof. Aparna Nadig, the senior author of the paper, from the School of Communication Sciences and Disorders at McGill University. “Over the past 15 years there has been a significant debate in the field about whether there is a ‘bilingual advantage’ in terms of executive functions. Some researchers have argued convincingly that living as a bilingual person and having to switch languages unconsciously to respond to the linguistic context in which the communication is taking place increases cognitive flexibility. But no one has yet published research that clearly demonstrates that this advantage may also extend to children on the autism spectrum. And so it’s very exciting to find that it does.”

    The researchers arrived at this conclusion after comparing how easily 40 children between the ages of six and nine, with or without ASD and either monolingual or bilingual, were able to shift tasks in a computer-generated test. There were ten children in each of the four groups.

    Blue rabbits or red boats

    The children were initially asked to sort a single object appearing on a computer screen by colour (i.e. sort blue rabbits and red boats as being either red or blue) and were then asked to switch and sort the same objects by shape instead (i.e. sort blue rabbits and red boats as rabbits or boats, regardless of their colour).
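
    As a rough illustration, the sketch below (in Python) mimics the colour-then-shape sorting task described above. The trial structure, stimulus set and block length are invented for illustration; the actual computer-generated test may have differed.

        # Minimal sketch of a colour/shape task-shifting test. Trial structure,
        # stimuli and block lengths are invented for illustration only.
        import random

        STIMULI = [("blue", "rabbit"), ("red", "boat")]

        def run_block(rule, n_trials=8):
            """Return (stimulus, correct_answer) pairs under the given sorting rule."""
            trials = []
            for _ in range(n_trials):
                colour, shape = random.choice(STIMULI)
                answer = colour if rule == "colour" else shape
                trials.append(((colour, shape), answer))
            return trials

        # Children first sort by colour, then switch and sort the same objects by shape.
        pre_switch = run_block("colour")
        post_switch = run_block("shape")   # the switch is the harder, "shifting" part

        for (colour, shape), answer in post_switch[:3]:
            print(f"{colour} {shape} -> sort as '{answer}'")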

    The researchers found that bilingual children with ASD performed significantly better when it came to the more complex part of the task-shifting test relative to children with ASD who were unilingual. It is a finding which has potentially far-reaching implications for the families of children with ASD.

    “It is critical to have more sound evidence for families to use when making important educational and child-rearing decisions, since they are often advised that exposing a child with ASD to more than one language will just worsen their language difficulties,” says Ana Maria Gonzalez-Barrero, the paper’s first author, and a recent McGill PhD graduate. “But there are an increasing number of families with children with ASD for whom using two or more languages is a common and valued practice and, as we know, in bilingual societies such as ours in Montreal, speaking only one language can be a significant obstacle in adulthood for employment, educational, and community opportunities.”

    Despite the small sample size, the researchers believe that the ‘bilingual advantage’ that they saw in children with ASD has highly significant implications and should be studied further. They plan to follow the children with ASD that they tested in this study over the next three to five years to see how they develop. The researchers want to see whether the bilingual advantage they observed in the lab may also be observed in daily life as the children age.


  2. Study examines neurological causes of stuttering

    December 30, 2017 by Ashley

    From the Max Planck Institute for Human Cognitive and Brain Sciences press release:

    One per cent of adults and five per cent of children are unable to achieve what most of us take for granted — speaking fluently. Instead, they struggle with words, often repeating the beginning of a word, for example “G-g-g-g-g-ood morning”, or getting stuck on single sounds, such as “Ja” for “January”, although they know exactly what they want to say.

    What processes in the brain cause people to stutter? Previous studies showed imbalanced activity of the two brain hemispheres in people who stutter compared to fluent speakers: a region in the left frontal brain is hypoactive, whereas the corresponding region in the right hemisphere is hyperactive. However, the cause of this imbalance is unclear. Does the less active left hemisphere reflect a dysfunction, causing the right side to compensate for this failure? Or is it the other way around: does the hyperactive right hemisphere suppress activity in the left hemisphere, making it the real cause of stuttering?

    Scientists at the Max Planck Institute for Human Cognitive and Brain Sciences (MPI CBS) in Leipzig and at the University Medical Center Göttingen have now gained crucial insights: The hyperactivity in regions of the right hemisphere seems to be central for stuttering: “Parts of the right inferior frontal gyrus (IFG) are particularly active when we stop actions, such as hand or speech movements,” says Nicole Neef, neuroscientist at MPI CBS and first author of the new study. “If this region is overactive, it hinders other brain areas that are involved in the initiation and termination of movements. In people who stutter, the brain regions that are responsible for speech movements are particularly affected.”

    Two of these areas are the left inferior frontal gyrus (IFG), which processes the planning of speech movements, and the left motor cortex, which controls the actual speech movements. “If these two processes are sporadically inhibited, the affected person is unable to speak fluently,” explains Neef.

    The scientists investigated these relations using magnetic resonance imaging (MRI) in adults who have stuttered since childhood. In the study, the participants imagined themselves saying the names of the months. They used this method of imagined speech to ensure that real speech movements did not interfere with the sensitive MRI signals. The neuroscientists then analysed the scans for modified fibre tracts in the overactive right-hemisphere regions of participants who stutter.

    Indeed, they found a fibre tract in the hyperactive right network that was much stronger in affected persons than in those without speech disorders. “The stronger the frontal aslant tract (FAT), the more severe the stuttering. From previous studies we know that this fibre tract plays a crucial role in fine-tuning signals that inhibit movements,” the neuroscientist states. “The hyperactivity in this network and its stronger connections could suggest that one cause of stuttering lies in the neural inhibition of speech movements.”


  3. Study suggests certain books can increase infant learning during shared reading

    December 21, 2017 by Ashley

    From the University of Florida press release:

    Parents and pediatricians know that reading to infants is a good thing, but new research shows reading books that clearly name and label people and objects is even better.

    That’s because doing so helps infants retain information and attend better.

    “When parents label people or characters with names, infants learn quite a bit,” said Lisa Scott, a University of Florida psychology professor and co-author of the study published Dec. 8 in the journal Child Development. “Books with individual-level names may lead parents to talk to infants more, which is particularly important for the first year of life.”

    Scott and colleagues from the University of Massachusetts-Amherst studied infants in Scott’s Brain, Cognition, and Development Lab. Babies came into the lab twice: once at 6 months old and again at age 9 months. While in the lab, eye-tracking and electroencephalogram, or EEG, methods were used to measure attention and learning at both ages.

    In between visits, parents were asked to read with their infants at home according to a schedule that included 10 minutes of parent-infant shared book reading every day for the first two weeks, every other day for the second two weeks and then continued to decrease until infants returned at 9 months. Twenty-three families were randomly assigned storybooks. One set contained individual-level names, and the other contained category-level labels. Both sets of books were identical except for the labeling. Each of the training books’ eight pages presented an individual image and a two-sentence story.

    The individual-level books clearly identified and labeled all eight individuals, with names such as “Jamar,” “Boris,” “Anice,” and “Fiona.” The category-level books included two made-up labels (“hitchel,” “wadgen”) for all images. The control group included 11 additional 9-month-old infants who did not receive books.

    The infants whose parents read the individual-level names spent more time focusing on and attending to the images, and their brain activity clearly differentiated the individual characters after book reading. This was not found at 6 months (before book reading), for the control group, or for the group of infants who were given books with category-level labels.

    Scott has been studying how the specificity of labels affects infant learning and brain development since 2006. This longitudinal study is the third in a series. The eye tracking and EEG results are consistent with her other studies showing that name specificity improves cognition in infants.

    “There are lots of recommendations about reading books to babies, but our work provides a scientific basis for these recommendations and suggests that the type of book is also important,” she said. “Shared reading is a good way to support development in the first year of life. It creates an enjoyable and comforting environment for both the parents and the infant and encourages parents to talk to their infants.”


  4. Study suggests infant brain responses predict reading speed in secondary school

    December 20, 2017 by Ashley

    From the University of Jyväskylä press release:

    A study conducted at the Department of Psychology at the University of Jyväskylä, Finland and Jyväskylä Centre for Interdisciplinary Brain Research (CIBR) has found that the brain responses of infants with an inherited risk for dyslexia, a specific reading disability, predict their future reading speed in secondary school.

    The longitudinal study looked at the electrical brain responses of six-month-old infants to speech and the correlation between those brain responses and the children’s pre-literacy skills at preschool age, as well as their literacy in the eighth grade, at 14 years of age.

    The study discovered that the brain responses of the infants with an inherited dyslexia risk differed from those of the control infants and predicted their reading speed in secondary school. Larger brain responses were related to more fluent naming of familiar objects, better phonological skills, and faster reading.

    “The predictive effect of the infant’s brain response on reading speed in secondary school is mediated by the preschool-age naming speed of familiar objects, suggesting that if retrieval of words from the mental lexicon is hindered before school age, reading is still slow in secondary school,” states researcher Kaisa Lohvansuu, PhD.

    Atypical brain activation to speech in infants with an inherited risk for dyslexia impedes the development of effective connections to the mental lexicon, and thus slows naming and reading performance. Efficient, effortless retrieval from the mental lexicon is therefore necessary for both fluent reading and naming.
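
    To make the statistical idea of mediation described above concrete, here is a minimal Python sketch. The variable names, effect sizes and simulated data are hypothetical; the sketch only illustrates the logic of a simple mediation check, not the study’s actual measures or analysis.

        # Minimal mediation sketch: does naming speed carry part of the effect of
        # the infant brain response on later reading speed? All data are simulated.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 100

        # Hypothetical data: brain response -> naming speed -> reading speed.
        brain_response = rng.normal(size=n)
        naming_speed = 0.6 * brain_response + rng.normal(scale=0.8, size=n)
        reading_speed = 0.5 * naming_speed + rng.normal(scale=0.8, size=n)

        def ols(y, *predictors):
            """Ordinary least squares; returns coefficients (intercept first)."""
            X = np.column_stack([np.ones(len(y)), *predictors])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return beta

        # Total effect of the brain response on reading speed (path c).
        c_total = ols(reading_speed, brain_response)[1]

        # Direct effect once the mediator, naming speed, is included (path c').
        c_direct = ols(reading_speed, brain_response, naming_speed)[1]

        print(f"total effect  c  = {c_total:.2f}")
        print(f"direct effect c' = {c_direct:.2f}  (shrinks toward 0 under mediation)")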

    Developmental dyslexia, a specific reading disability, is the most common of the learning disabilities. It has a strong genetic basis: children of dyslexic parents are at great risk of encountering reading and/or writing difficulties at school.

    As speech stimuli, the current study used pseudo-words, i.e. words without meaning. The pseudo-words contained either a short or a long consonant (a double consonant). Phonemic length distinguishes word meanings in Finnish, so differentiating it correctly is essential. However, reading and writing these contrasts correctly has previously been found to be particularly difficult for Finnish dyslexics.

    The research was carried out as part of the Jyväskylä Longitudinal Study of Dyslexia (JLD). Half of the children who participated in the project had an inherited risk of dyslexia.


  5. Study suggests similarities across languages may stem from brain’s preference for efficient information processing

    December 13, 2017 by Ashley

    From the University of Arizona press release:

    An estimated 7,099 languages are spoken throughout the world today. Almost a third of them are endangered — spoken by dwindling numbers — while just 23 languages represent more than half of the global population.

    For years, researchers have been interested in the similarities seen across human languages. A new study led by University of Arizona researcher Masha Fedzechkina suggests that some of those similarities may be based on the human brain’s preference for efficient information processing.

    “If we look at languages of the world, they are very different on the surface, but they also share a lot of underlying commonalities, often called linguistic universals or cross-linguistic generalizations,” said Fedzechkina, an assistant professor in the UA Department of Linguistics and lead author of the study, published in the journal Psychological Science.

    “Most theories assume the reasons why languages have these cross-linguistic universals is because they’re in some way constrained by the human brain,” Fedzechkina said. “If these linguistic universals are indeed real, and if we understand their causes, then it can tell us something about how language is acquired or processed by the human brain, which is one of the central questions in language sciences.”

    Fedzechkina and her collaborators conducted a study in which two groups of English-speaking-only individuals were each taught, over a three-day period, a different miniature artificial language designed by the experimenters. The two languages were structured differently from each other, and, importantly, neither was structured like the participants’ native English.

    In both groups, participants were taught two ways to express the same ideas. When later tested verbally — asked to describe actions in a video — participants, in the phrasing of their answers, showed an overwhelming preference for word orders that resulted in short “dependency length,” which refers to the distance between words that depend on each other for interpretation.

    The finding suggests that language universals might be explained, at least in part, by what appears to be the human brain’s innate preference for “short dependencies.”

    “The longer the dependencies are, the harder they are to process in comprehension, presumably because of memory constraints,” Fedzechkina said. “If we look cross-linguistically, we find that word orders of languages, overall, tend to have shorter dependencies than would be expected by chance, suggesting that there is a correlation between constraints on human information processing and the structures of natural languages. We wanted to do this research to provide the first behavioral evidence for a causal link between the two, and we did. We found that when learners have two options in the input grammar, they tend to prefer the option that reduces dependency lengths and thus makes sentences in the language easier for the human brain to process.”
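
    To make the notion of dependency length concrete, here is a minimal Python sketch. The example sentence, its head-dependent links and the alternative word order are invented for illustration; they are not the study’s artificial-language stimuli.

        # Minimal sketch: total dependency length is the sum of linear distances
        # between each word and the word it depends on. Links are invented examples.

        def total_dependency_length(links):
            """Sum of |head_position - dependent_position| over all links."""
            return sum(abs(head - dep) for head, dep in links)

        # "the girl threw the red ball" -- (head, dependent) pairs as word positions.
        links_a = [(1, 0), (2, 1), (2, 5), (5, 3), (5, 4)]

        # Hypothetical alternative order, "the girl the red ball threw",
        # which pushes the verb further from its subject.
        links_b = [(1, 0), (5, 1), (5, 4), (4, 2), (4, 3)]

        print(total_dependency_length(links_a))  # 1 + 1 + 3 + 2 + 1 = 8
        print(total_dependency_length(links_b))  # 1 + 4 + 1 + 2 + 1 = 9 (longer, harder to process)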

    What Artificial Languages Can Teach Us

    The researchers’ decision to use artificial languages for their study was strategic.

    “Traditionally, linguists have studied cross-linguistic universals by going to different cultures and documenting the structures of different languages, and then they looked for cross-linguistic commonalities,” Fedzechkina said. “That research has been transformative in identifying a large number of potential linguistic universals and has generated a lot of theories about why these universals exist, but it also has its drawbacks.”

    Among those drawbacks: It has been challenging to tease out the role of the brain.

    “If you look, for example, at languages like Spanish and Italian, they share a lot of structural commonalities, but many of these commonalities are there because both languages originated from Latin,” Fedzechkina said. “Also, languages that are related to each other geographically often share structures, too — for example, due to population movement — even if they are not related historically. Once we take into account these historical and geographic dependencies, we might not have enough independent data points to convincingly test hypotheses about language universals.”

    By teaching naïve study participants an artificial language that has certain structures that are not present in their native language, and then looking at what kind of structures they prefer after they have learned these languages, researchers can draw inferences about the causality underlying language universals, Fedzechkina said.

    “If the pattern is not present in the input artificial language and if it’s not present in the participants’ native language, but they still introduce this pattern, it likely reveals more general cognitive biases humans have,” she said.

    The finding gives researchers a better understanding of the role of human cognition in language structure and acquisition.

    “We know from work on second language acquisition that learners’ native language influences the way they learn a second or third language, and the fact that our participants relied so strongly on the deeper underlying principle of human information processing rather than on surface word order of their native language was very surprising and very impressive for us,” Fedzechkina said. “We provide the first behavioral evidence for the hypothesized connection between human information processing and language structure, and we suggest that processing constraints do play a role in language acquisition, language structure and the way language changes over time.”


  6. Study suggests hearing different accents at home impacts language processing in infants

    December 12, 2017 by Ashley

    From the University at Buffalo press release:

    Infants raised in homes where they hear a single language, but spoken with different accents, recognize words dramatically differently at about 12 months of age than their age-matched peers exposed to little variation in accent, according to a recent study by a University at Buffalo expert in language development.

    The findings, published in the Journal of the Acoustical Society of America, point to the importance of considering the effects of multiple accents when studying speech development and suggest that monolingual infants should not be viewed as a single group.

    “This is important if you think about clinical settings where children are tested,” says Marieke van Heugten, an assistant professor in UB’s Department of Psychology and lead author of the study with Elizabeth K. Johnson, an associate professor of psychology at the University of Toronto. “Speech language pathologists [in most of North America] typically work with the local variant of English, but if you have a child growing up in an environment with more than one accent then they might recognize words differently than a child who hears only one accent.”

    Although extensive research exists on bilingualism, few studies have taken accents into account when looking at early word recognition in monolingualism, and van Heugten says none have explored the issue of accents in children younger than 18 months, the age at which they typically develop the ability to recognize pronunciation differences that can occur across identical words.

    “Variability in children’s language input, what they hear and how they hear it, can have important consequences on word recognition in young, monolingual children,” she says. For instance, an American-English-speaking parent might call the yellow vehicle that takes children to school a “bus,” while the pronunciation of the same word by an Irish-English-speaking parent might sound more like “boss.” The parents are referencing the same object, but because the child hears the word pronounced two ways, she needs to learn how to map those different pronunciations to the same object.

    For the study, researchers tested children who were seated on the lap of a parent. The children heard words typically known to 12-1/2-month-olds, like daddy, mommy, ball, dog and bath, along with nonsense words, such as dimma, mitty and guttle. Head turns by the children, a common measure of recognition in infant speech perception research, were used to determine preference for particular words. A second experiment included children aged 14-1/2 and 18 months.

    Van Heugten says children show a preference for the known words when they recognize words that occur in their language. If they fail to recognize words there’s no reason for them to express a preference, as on the surface, the known and the nonsense words sound equally exciting with an infant-directed intonation.

    The results indicate that children who hear just a single accent prefer to listen to real words in the lab, although those who hear multiple accents don’t have that preference at 12-1/2 months. This difference between preference patterns was found even though the two groups were matched on socioeconomic status as well as the number of words children understand and produce. This suggests that both groups of children are learning words at about the same rate, but that hearing multiple accents at home might change how children recognize these words around their first birthday, at least in lab settings. Perhaps children raised in multi-accent environments need more contextual information to recognize words because they do not assume all words will be spoken in the regional accent.

    But by the time they are 18 months old, children no longer have difficulty recognizing these words in this challenging task, according to van Heugten.

    “What we’re concluding is that children who hear multiple accents process language differently than those who hear a single accent,” she says. “We should be aware of this difference, and keep it in mind as a factor predicting behavior in test settings, especially when testing children from diverse areas of the world.”

    “We are excited to carry out additional work in this area, comparing the development of young children growing up in more or less linguistically diverse environments,” adds Johnson. “[This] will help us better understand language acquisition in general, and perhaps help us better diagnose and treat language delays in children growing up in different types of environments.”


  7. Study suggests music and native language interact in the brain

    December 11, 2017 by Ashley

    From the University of Helsinki press release:

    The brain’s auditory system can be shaped by exposure to different auditory environments, such as native language and musical training.

    A recent doctoral study by Caitlin Dawson from the University of Helsinki focuses on the interacting effects of native language patterns and musical experience on early auditory processing of basic sound features. Methods included electrophysiological brainstem recordings as well as a set of behavioral auditory discrimination tasks.

    The auditory tasks were designed to find discrimination thresholds for intensity, frequency, and duration. A self-report questionnaire on musical sophistication was also used in the analyses.

    “We found that Finnish speakers showed an advantage in duration processing in the brainstem, compared to German speakers. The reason for this may be that Finnish language includes long and short sounds that determine the meaning of words, which trains Finnish speakers’ brains to be very sensitive to the timing of sounds,” Dawson states.

    For Finnish speakers, musical expertise was associated with enhanced behavioral frequency discrimination. Mandarin-speaking musicians showed enhanced behavioral discrimination in both frequency and duration. The Mandarin Chinese language has tones that determine the meaning of words.

    “The perceptual effects of musical expertise were not reflected in brainstem responses in either Finnish or Mandarin speakers. This might be because language is an earlier and more essential skill than music, and native speakers are experts at their own language,” Dawson says.

    The results suggest that musical expertise does not enhance all auditory features equally for all language speakers; native language phonological patterns may modulate the enhancing effects of musical expertise on processing of specific features.


  8. Study suggests socioeconomic status may be linked to differences in vocabulary growth

    December 10, 2017 by Ashley

    From the University of Texas at Dallas press release:

    The nation’s 31 million children growing up in homes with low socioeconomic status have, on average, significantly smaller vocabularies compared with their peers.

    A new study from the Callier Center for Communication Disorders at The University of Texas at Dallas found these differences in vocabulary growth among grade school children of different socioeconomic statuses are likely related to differences in the process of word learning.

    Dr. Mandy Maguire, associate professor in the School of Behavioral and Brain Sciences (BBS), said that in her study, children from lower-income homes learned 10 percent fewer words than their peers from higher-income homes. When entering kindergarten, children from low-income homes generally score about two years behind their higher-income peers on language and vocabulary measures.

    The vocabulary gap between the two groups of children gets larger throughout their schooling and has long-term academic implications, Maguire said.

    The primary reason for the differences in infancy and preschool is related to different quantity and quality of language exposure at home. But why the gap increases as the children get older is less studied.

    “We might assume that it’s the same reason that the gap is large when they’re young: that their environment is different,” Maguire said. “Another possibility is that all of this time spent in low-income situations has led to differences in their ability to learn a word. If that’s the case, there’s a problem in the mechanism of learning, which is something we can fix.”

    The study, recently published in the Journal of Experimental Child Psychology, aimed to determine whether socioeconomic status is related to word learning in grade school and to what degree vocabulary, reading and working memory might mediate that relationship.

    For the study, 68 children ages 8 to 15 performed a task that required using the surrounding text to identify the meaning of an unknown word. One exercise included three sentences, each with a made-up word at the end — for example, “Mom piled the pillows on the thuv.”

    “You have to understand all of the language in each sentence leading up to the made-up word, remember it and decide systematically across all three sentences what the made-up word must mean,” Maguire said. “In this case, the three sentences all indicated ‘thuv’ meant ‘bed.’ This isn’t quite the same as real word learning, where we have to create a new concept, but this is what we think kids — and adults — do as they initially learn a word.”
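
    A toy way to picture that “decide across all three sentences” step is to intersect the meanings consistent with each sentence, as in the Python sketch below. The candidate sets (and the second and third sentences they stand for) are invented for illustration; only the pillows example and ‘thuv’ meaning ‘bed’ come from the passage above.

        # Toy sketch: keep only the meanings of "thuv" consistent with every sentence.
        candidates_per_sentence = [
            {"bed", "couch", "chair"},   # "Mom piled the pillows on the thuv."
            {"bed", "crib"},             # hypothetical second sentence
            {"bed", "hammock"},          # hypothetical third sentence
        ]

        meaning = set.intersection(*candidates_per_sentence)
        print(meaning)   # {'bed'}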

    Specifically, the study found that children of lower socioeconomic status are not as effective at using known vocabulary to build a robust picture or concept of the incoming language and use that to identify the meaning of an unknown word.

    Reading and working memory — also known to be problematic for children from low-income homes — were not found to be related to these word-learning differences.

    The study also provides potential strategies that may be effective for intervention. For children ages 8 to 15, schools may focus too much on reading and not enough on increasing vocabulary through oral methods, Maguire said.

    Maguire said parents and teachers can help children identify relationships between words in sentences, such as assigning a word like “bakery,” and having the child list as many related words as possible in one minute. Visualizing the sentences as they read also can help.

    “Instead of trying to fit more vocabulary in a child’s head, we might be able to work on their depth of knowledge of the individual words and linking known meanings together in a way that they can use to learn new information,” Maguire said.

    This study was funded by a three-year grant from the National Science Foundation, which was awarded in April 2016.

    Three co-authors of the paper are BBS doctoral students who work in Maguire’s Developmental Neurolinguistics Laboratory: Julie M. Schneider, Anna E. Middleton and Yvonne Ralph. Lab coordinator Michael Lopez and Dr. Robert Ackerman, an associate professor, also are co-authors, along with Dr. Alyson Abel, a recent Callier Center postdoctoral fellow who is an assistant professor at San Diego State University.


  9. Study suggests babies understand when words are related

    December 1, 2017 by Ashley

    From the Duke University press release:

    The meaning behind infants’ screeches, squeals and wails may frustrate and confound sleep-deprived new parents. But at an age when babies cannot yet speak to us in words, they are already avid students of language.

    “Even though there aren’t many overt signals of language knowledge in babies, language is definitely developing furiously under the surface,” said Elika Bergelson, assistant professor of psychology and neuroscience at Duke University.

    Bergelson is the author of a surprising 2012 study showing that six- to nine-month-olds already have a basic understanding of words for food and body parts. In a new report, her team used eye-tracking software to show that babies also recognize that the meanings of some words, like car and stroller, are more alike than others, like car and juice.

    By analyzing home recordings, the team found that babies’ word knowledge correlated with the proportion of time they heard people talking about objects in their immediate surroundings.

    “Even in the very early stages of comprehension, babies seem to know something about how words relate to each other,” Bergelson said. “And already by six months, measurable aspects of their home environment predict how much of this early level of knowledge they have. There are clear follow-ups for potential intervention work with children who might be at-risk for language delays or deficits.”

    The study appears the week of Nov. 20 in the Proceedings of the National Academy of Sciences.

    To gauge word comprehension, Bergelson invited babies and their caregivers into a lab equipped with a computer screen and few other infant distractions. The babies were shown pairs of images that were related, like a foot and a hand, or unrelated, like a foot and a carton of milk. For each pair, the caregiver (who couldn’t see the screen) was prompted to name one of the images while an eye-tracking device followed the baby’s gaze.

    Bergelson found that babies spent more time looking at the image that was named when the two images were unrelated than when they were related.
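
    The comparison behind that result can be pictured with the minimal sketch below; the per-trial proportions are invented for illustration and are not the study’s data.

        # Toy looking-time comparison: proportion of time spent on the named image
        # when the paired image is unrelated versus related. Numbers are invented.
        import statistics

        target_looking = {
            "unrelated pair": [0.68, 0.72, 0.61, 0.70, 0.65],   # e.g. foot vs. milk
            "related pair":   [0.55, 0.52, 0.58, 0.50, 0.54],   # e.g. foot vs. hand
        }

        for condition, proportions in target_looking.items():
            print(f"{condition}: mean looking to named image = {statistics.mean(proportions):.2f}")

        # A clear gap between conditions suggests infants treat related words
        # (foot/hand) as more similar in meaning than unrelated ones (foot/milk).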

    “They may not know the full-fledged adult meaning of a word, but they seem to recognize that there is something more similar about the meaning of these words than those words,” Bergelson said.

    Bergelson then wanted to investigate how babies’ performance in the lab might be linked to the speech they hear at home. To peek into the daily life of the infants, she sent each caregiver home with a colorful baby vest rigged with a small audio recorder and asked them to use the vest to record day-long audio of the infant. She also used tiny hats fitted with lipstick-sized video recorders to collect hour-long video of each baby interacting with his or her caregivers.

    Combing through the recordings, Bergelson and her team categorized and tabulated different aspects of speech the babies were exposed to, including the objects named, what kinds of phrases they occurred in, who said them, and whether or not objects named were present and attended to.

    “It turned out that the proportion of the time that parents talked about something when it was actually there to be seen and learned from correlated with the babies’ overall comprehension,” Bergelson said.

    For instance, Bergelson said, if a parent says, “here is my favorite pen,” while holding up a pen, the baby might learn something about pens based on what they can see. In contrast, if a parent says, “tomorrow we are going to see the lions at the zoo,” the baby might not have any immediate clues to help them understand what lion means.

    “This study is an exciting first step in identifying how early infants learn words, how their initial lexicon is organized, and how it is shaped or influenced by the language that they hear in the world that surrounds them,” said Sandra Waxman, a professor of psychology at Northwestern University who was not involved in the study.

    But, Waxman cautions, it is too early in the research to draw any conclusions about how caregivers should be speaking to their infants.

    “Before anyone says ‘this is what parents need to be doing,’ we need further studies to tease apart how culture, context and the age of the infant can affect their learning,” Waxman said.

    “My take-home to parents always is, the more you can talk to your kid, the better,” Bergelson said. “Because they are listening and learning from what you say, even if it doesn’t appear to be so.”

     


  10. Study suggests punctuation in text messages helps replace cues found in face-to-face conversations

    November 25, 2017 by Ashley

    From the Binghamton University press release:

    Emoticons, irregular spellings and exclamation points in text messages aren’t sloppy or a sign that written language is going down the tubes — these “textisms” help convey meaning and intent in the absence of spoken conversation, according to newly published research from Binghamton University, State University of New York.

    “In contrast with face-to-face conversation, texters can’t rely on extra-linguistic cues such as tone of voice and pauses, or non-linguistic cues such as facial expressions and hand gestures,” said Binghamton University Professor of Psychology Celia Klin. “In a spoken conversation, the cues aren’t simply add-ons to our words; they convey critical information. A facial expression or a rise in the pitch of our voices can entirely change the meaning of our words.”

    “It’s been suggested that one way that texters add meaning to their words is by using ‘textisms’ — things like emoticons, irregular spellings (sooooo) and irregular use of punctuation (!!!).”

    A 2016 study led by Klin found that text messages that end with a period are seen as less sincere than text messages that do not end with a period. Klin pursued this subject further, conducting experiments to see if people reading texts understand textisms, asking how people’s understanding of a single-word text (e.g., yeah, nope, maybe) as a response to an invitation is influenced by the inclusion, or absence, of a period.

    “In formal writing, such as what you’d find in a novel or an essay, the period is almost always used grammatically to indicate that a sentence is complete. With texts, we found that the period can also be used rhetorically to add meaning,” said Klin. “Specifically, when one texter asked a question (e.g., I got a new dog. Wanna come over?), and it was answered with a single word (e.g., yeah), readers understood the response somewhat differently depending if it ended with a period (yeah.) or did not end with a period (yeah). This was true if the response was positive (yeah, yup), negative (nope, nah) or more ambiguous (maybe, alright). We concluded that although periods no doubt can serve a grammatical function in texts just as they can with more formal writing — for example, when a period is at the end of a sentence — periods can also serve as textisms, changing the meaning of the text.”

    Klin said that this research is motivated by an interest in taking advantage of a unique moment in time when scientists can observe language evolving in real time.

    “What we are seeing with electronic communication is that, as with any unmet language need, new language constructions are emerging to fill the gap between what people want to express and what they are able to express with the tools they have available to them,” said Klin. “The findings indicate that our understanding of written language varies across contexts. We read text messages in a slightly different way than we read a novel or an essay. Further, all the elements of our texts — the punctuation we choose, the way that words are spelled, a smiley face — can change the meaning. The hope, of course, is that the meaning that is understood is the one we intended. Certainly, it’s not uncommon for those of us in the lab to take an extra second or two before we send texts. We wonder: How might this be interpreted? ‘Hmmm, period or no period? That sounds a little harsh; maybe I should soften it with a “lol” or a winky-face-tongue-out emoji.'”

    With trillions of text messages sent each year, we can expect the evolution of textisms, and of the language of texting more generally, to continue at a rapid rate, wrote the researchers. Texters are likely to continue to rely on current textisms, as well as to create new textisms, to take the place of the extra-linguistic and nonverbal cues available in spoken conversations. The rate of change for “talk-writing” is likely to continue to outpace the changes in other forms of English.

    “The results of the current experiments reinforce the claim that the divergence from formal written English that is found in digital communication is neither arbitrary nor sloppy,” said Klin. “It wasn’t too long ago that people began using email, instant messaging and text messaging on a regular basis. Because these forms of communication provide limited ways to communicate nuanced meaning, especially compared to face-to-face conversations, people have found other tools.”

    Klin believes that this subject could be studied further.

    “An important extension would be to have a situation that more closely mimics actual texting, but in the lab where we can observe how different cues, such as punctuation, abbreviations and emojis, contribute to texters’ understanding,” she said. “We might also examine some of the factors that should influence texters’ understanding, such as the social relationship between the two texters or the topic of the text exchange, as their level of formality should influence the role of things like punctuation.”