1. Researchers discover regions of the brain we use to categorize emotions that are communicated vocally

    December 29, 2017 by Ashley

    From the Université de Genève press release:

    Gestures and facial expressions betray our emotional state but what about our voices? How does simple intonation allow us to decode emotions — on the telephone, for example? By observing neuronal activity in the brain, researchers at the University of Geneva (UNIGE), Switzerland, have been able to map the cerebral regions we use to interpret and categorise vocal emotional representations. The results, which are presented in the journal Scientific Reports, underline the essential role played by the frontal regions in interpreting emotions communicated orally. When the process does not function correctly — following a brain injury, for instance — an individual will lack the ability to interpret another person’s emotions and intentions properly. The researchers also noted the intense network of connections that links this area to the amygdala, the key organ for processing emotions.

    The upper part of the temporal lobe in mammals is linked to hearing in particular. A specific area is dedicated to the vocalisations of their congeners, making it possible to distinguish them from (for example) environmental noises. But the voice is more than a sound to which we are especially sensitive: it is also a vector of emotions.

    Categorising and discriminating

    “When someone speaks to us, we use the acoustic information that we perceive in him or her and classify it according to various categories, such as anger, fear or pleasure,” explains Didier Grandjean, professor in UNIGE’s Faculty of Psychology and Educational Sciences and at the Swiss Centre for Affective Sciences (CISA). This way of classifying emotions is called categorisation, and we use it (inter alia) to establish that a person is sad or happy during a social interaction.

    Categorisation differs from discrimination, which consists of focusing attention on a particular state: detecting or looking for someone happy in a crowd, for example. But how does the brain categorise these emotions and determine what the other person is expressing? In an attempt to answer this question, Grandjean’s team analysed the cerebral regions that are mobilised when constructing vocal emotional representations.

    The sixteen adults who took part in the experiment were exposed to a vocalisation database consisting of six men’s voices and six women’s, all saying pseudo-words that were meaningless but uttered with emotion. The participants first had to classify each voice as angry, neutral or happy so that the researchers could observe which area of the brain was being used for categorisation. Next, the subjects simply had to decide whether or not a voice was angry (or, separately, whether or not it was happy), so that the scientists could look at the area solicited by discrimination. “Functional magnetic resonance imaging meant we could observe which areas were being activated in each case. We found that categorisation and discrimination did not use exactly the same region of the inferior frontal cortex,” says Sascha Frühholz, a researcher at UNIGE’s CISA at the time of the experiment and now a professor at the University of Zurich.

    The crucial role of the frontal lobe

    Unlike the voice-versus-background-noise distinction, which is handled in the temporal lobe, the actions of categorising and discriminating call on the frontal lobe, in particular the inferior frontal gyri (down the sides of the forehead). “We expected the frontal lobe to be involved, and had predicted the observation of two different sub-regions that would be activated depending on the action of categorising or discriminating,” says Grandjean. In the first instance, it was the pars opercularis sub-region that corresponded to the categorisation of voices, while in the second case — discrimination — it was the pars triangularis. “This distinction is linked not just to brain activations selective to the processes studied but also to differences in the connections with other cerebral regions that these two operations require,” continues Grandjean. “When we categorise, we have to be more precise than when we discriminate. That’s why the temporal region, the amygdala and the orbito-frontal cortex — crucial areas for emotion — are used to a much higher degree and are functionally connected to the pars opercularis rather than the pars triangularis.”

    The research, which emphasises the difference between functional sub-territories in perceiving emotions through vocal communication, shows that the more complex and precise the processes related to emotions are, the more the frontal lobe and its connections with other cerebral regions are solicited. There is a difference between the processing of basic sound information (a distinction between surrounding noise and voices) carried out by the upper part of the temporal lobe, and the processing of high-level information (perceived emotions and contextual meanings) carried out by the frontal lobe. It is the latter that enables social interaction by decoding the intention of the speaker. “Without this area, it is not possible to represent the emotions of the other person through his or her voice, and we no longer understand his or her expectations and have difficulty integrating contextual information, as in sarcasm,” concludes Grandjean. “We now know why an individual with a brain injury affecting the inferior frontal gyrus and the orbito-frontal regions can no longer interpret the emotions related to what his or her peers are saying, and may, as a result, adopt socially inappropriate behaviour.”



  2. Study suggests noise sensitivity is visible in brain structures

    December 26, 2017 by Ashley

    From the University of Helsinki press release:

    Recent functional studies conducted at the University of Helsinki and Aarhus University suggest that noise sensitivity, a trait describing attitudes towards noise and predicting noise annoyance, is associated with altered processing in the central auditory system. Now, the researchers have found that noise sensitivity is associated with the grey matter volume in selected brain structures previously linked to auditory perceptual, emotional and interoceptive processing.

    An increased amount of grey matter in these areas may mean that, in noise-sensitive individuals, more neural resources are involved in dealing with sound.

    “We found greater grey matter volume in people with high noise sensitivity in the brain temporal regions, as well as the hippocampus and the right insula. These cortical and subcortical areas are parts of brain networks supporting listening experience,” says researcher Marina Kliuchko, the first author of the research article published in NeuroImage journal.

    The research included brain images of 80 subjects from which grey matter volume, cortical thickness, and other anatomical parameters were measured and correlated with noise sensitivity. The work brings new insight into the physiological mechanisms of noise sensitivity.

    “Noise sensitivity may be related to self-awareness in noise-sensitive individuals about the sensations that noise induces in them. That is suggested by the increased volume of the anterior part of the right insular cortex, which is known to be important for matching external sensory information with the internal state of the body and bringing it to one’s conscious awareness,” Kliuchko says.


  3. Study suggests disorders of the voice can affect a politician’s success

    December 14, 2017 by Ashley

    From the Acoustical Society of America press release:

    The acoustics of a political speech delivery are known to be a powerful influencer of voter preferences, perhaps giving some credence to the saying, “It’s not what you say, but how you say it.” Vocal disorders change the qualities of a person’s speech, and voice scientists Rosario Signorello and Didier Demolin at the Université Sorbonne Nouvelle, Paris, have found that this alters politicians’ perceived charisma and listeners’ voting preferences.

    The researchers examined two cases of politicians with vocal disorders: Umberto Bossi, former leader of the Italian Lega Nord party, whose vocal cords were partially paralyzed by a stroke, and Luiz Inácio Lula da Silva, former president of Brazil, whose larynx has disturbed functionality due to throat cancer. Signorello will present the findings at the 174th Meeting of the Acoustical Society of America, being held Dec. 4-8, 2017, in New Orleans, Louisiana.

    In both vocal pathologies, the vocal range was narrowed and pitch lowered. The disordered voices were characterized by hoarseness, a slower speech rate and a restriction in the ability to modulate pitch. “We use pitch manipulation to be ironic and sarcastic, to change the meaning of a sentence,” said Signorello, emphasizing the limited speech capabilities of the politicians after their pathology.

    “Before the stroke, people perceived Bossi as positive, enthusiastic, a very charming speaker, and when listening to his post-stroke voice, everything changed,” said Signorello. “After the stroke, he had a flat pitch contour, a lack of modulation, and this was perceived as a wise and competent charisma.”

    Multiple charismatic adjectives were assessed on a Likert scale of agreement by a French audience. Using an audience who didn’t understand the languages of the vocal stimuli was important. “[W]henever you listen to a voice you assess the acoustics, but also what they say, and we didn’t want the verbal, semantic content to influence our results,” said Signorello.

    The French listeners were asked which vocal stimuli they would vote for and, perhaps surprisingly, there was a preference for the leaders’ post-disorder voices. “French people didn’t want to vote for someone who was strong and authoritarian, or perceived as a younger version of the leader,” said Signorello. However, this was a variable trend. “In each example the vocal patterns are so diverse you never find the same answers; all trigger different emotional states and convey different personality traits.”

    Emphasizing that there is no “best” voice, Signorello said, “Charisma is a social phenomenon, difficult to assess because it is subject to social trends. It’s impossible to give a recipe of what is more or less charismatic — it’s like fashion, it changes drastically with time.”

    The researchers found it intriguing that the leadership charismas identified from post-vocal disorder vocal stimuli were characterized by personality traits that are also used to describe an older person, for example, as wise. “We are interested in how age and the perception of age from voice influences the social status of a speaker in a given society,” said Signorello, who plans to investigate this further.

    He plans to extend the study to vocal disorders of female politicians, aiming to use the findings to improve and focus speech rehabilitation of public speakers, from teachers, to CEOs and politicians. He is also interested in applying these findings to smart device voice recognition technology.


  4. Study suggests repetition can make sounds into music

    December 12, 2017 by Ashley

    From the University of Arkansas, Fayetteville press release:

    Water dripping. A shovel scraping across rock. These sounds don’t seem very musical. Yet new research at the University of Arkansas shows that repeating snippets of environmental sounds can make them sound like music.

    The new findings from the Music Cognition Lab at the University of Arkansas build upon research by Diana Deutsch and colleagues. These researchers showed that repetition can musicalize speech, a phenomenon they called “speech-to-song” illusion. Researchers at the University of Arkansas Music Cognition Lab have now demonstrated that repetition can also musicalize non-speech sounds. They call this a “sound-to-music” illusion.

    This underscores the role of repetition in generating a musical mode of listening and shows that the effect does not depend on the special relationship between music and speech, but can occur for broader categories of sound.

    The findings will be published in the journal Music & Science in early 2018.

    In their article, “The sound-to-music illusion: Repetition can musicalize nonspeech sounds,” doctoral student Rhimmon Simchy-Gross and music professor Elizabeth Hellmuth Margulis demonstrate that repetition can musicalize environmental sound, whether the clips are presented in their original sequence or in a jumbled version. This contrasts with previous research on speech, where the effect occurred only for exact replications.

    “This difference suggests that what works as a repetition depends not just on the acoustic characteristics of the sound, but also on its function,” Margulis said. “Jumbling speech sounds disrupts the words’ meaning, but jumbling the components of a string of environmental sounds doesn’t change the fact that it sounds like water dripping or a shovel scraping across rock.”

    “Composers and performers have been playing with repeated sound samples and speech for more than 50 years,” Margulis said. “Like so much else in the cognitive science of music, this research is inspired by actual musical practice. It uses new experimental methods to pursue some of the ideas about repetition’s special role in musicalization outlined in my 2014 book On Repeat: How Music Plays the Mind.”

    Researchers used digitally excised clips of 20 environmental sounds, ranging from a bee buzzing to machine noise. They played each clip a total of 10 times to measure the reaction of participants, who rated them along a spectrum from “sounded exactly like environmental sound” to “sounded exactly like music.” The degree of musicality participants heard in the clips rose with repeated exposure.
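
    The release reports only that rated musicality rose with repeated exposure. As a minimal sketch of how such ratings might be summarised (assuming a simple table of per-trial ratings; the file name, column names and 1-to-7 scale are hypothetical, and this is not the authors’ analysis), one could average the rating at each repetition and look for an upward trend:

    ```python
    # Minimal sketch (not the authors' analysis): average musicality ratings by
    # repetition number to look for an upward trend across the 10 exposures.
    # File name, column names and the 1-7 rating scale are assumptions.
    import csv
    from collections import defaultdict

    ratings_by_repetition = defaultdict(list)  # repetition index -> ratings

    with open("musicality_ratings.csv", newline="") as f:
        for row in csv.DictReader(f):
            # rating: 1 = "sounded exactly like environmental sound",
            #         7 = "sounded exactly like music"
            ratings_by_repetition[int(row["repetition"])].append(float(row["rating"]))

    for repetition in sorted(ratings_by_repetition):
        values = ratings_by_repetition[repetition]
        print(f"repetition {repetition}: mean rating = {sum(values) / len(values):.2f}")
    ```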

    “In other words, sound that initially seemed unambiguously like environmental noise, through the simple act of repetition, came to sound like music,” Margulis said. “The sounds themselves didn’t change, but something changed in the minds of the listeners to make them seem like music. This finding can help future studies investigate the characteristics that define musical listening.”


  5. Study suggests music and native language interact in the brain

    December 11, 2017 by Ashley

    From the University of Helsinki press release:

    The brain’s auditory system can be shaped by exposure to different auditory environments, such as native language and musical training.

    A recent doctoral study by Caitlin Dawson from the University of Helsinki focuses on the interacting effects of native language patterns and musical experience on early auditory processing of basic sound features. Methods included electrophysiological brainstem recordings as well as a set of behavioral auditory discrimination tasks.

    The auditory tasks were designed to find discrimination thresholds for intensity, frequency, and duration. A self-report questionnaire on musical sophistication was also used in the analyses.
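
    The release does not describe how the thresholds were estimated. Purely as an illustration of one common way to measure a discrimination threshold (not necessarily the procedure used in the study), the sketch below runs a two-down/one-up adaptive staircase against a simulated listener; all parameters are invented:

    ```python
    # Illustrative two-down/one-up adaptive staircase for estimating a
    # discrimination threshold (e.g., the smallest detectable frequency
    # difference, in Hz). Generic sketch with invented parameters, not the
    # procedure reported in the study.
    import random

    def simulated_listener(delta, true_threshold=5.0):
        """Toy listener: more likely to answer correctly for larger differences."""
        p_correct = 0.5 + 0.5 * min(delta / (2 * true_threshold), 1.0)
        return random.random() < p_correct

    def staircase(start_delta=20.0, step=1.0, reversals_needed=8):
        delta, correct_streak, direction, reversals = start_delta, 0, -1, []
        while len(reversals) < reversals_needed:
            if simulated_listener(delta):
                correct_streak += 1
                if correct_streak == 2:            # two correct in a row: make it harder
                    correct_streak = 0
                    if direction == +1:            # direction change counts as a reversal
                        reversals.append(delta)
                    direction = -1
                    delta = max(delta - step, step)
            else:                                  # one error: make it easier
                correct_streak = 0
                if direction == -1:
                    reversals.append(delta)
                direction = +1
                delta += step
        return sum(reversals[-6:]) / 6             # average the last reversals

    print(f"estimated threshold: {staircase():.1f} Hz")
    ```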

    “We found that Finnish speakers showed an advantage in duration processing in the brainstem, compared to German speakers. The reason for this may be that the Finnish language includes long and short sounds that determine the meaning of words, which trains Finnish speakers’ brains to be very sensitive to the timing of sounds,” Dawson states.

    For Finnish speakers, musical expertise was associated with enhanced behavioral frequency discrimination. Mandarin-speaking musicians showed enhanced behavioral discrimination in both frequency and duration. The Mandarin Chinese language has tones that determine the meaning of words.

    “The perceptual effects of musical expertise were not reflected in brainstem responses in either Finnish or Mandarin speakers. This might be because language is an earlier and more essential skill than music, and native speakers are experts at their own language,” Dawson says.

    The results suggest that musical expertise does not enhance all auditory features equally for all language speakers; native language phonological patterns may modulate the enhancing effects of musical expertise on processing of specific features.


  6. Engineer finds how brain encodes sounds

    November 16, 2017 by Ashley

    From the Washington University in St. Louis press release:

    When you are out in the woods and hear a cracking sound, your brain needs to process quickly whether the sound is coming from, say, a bear or a chipmunk. In new research published in PLoS Biology, a biomedical engineer at Washington University in St. Louis has a new interpretation for an old observation, debunking an established theory in the process.

    Dennis Barbour, MD, PhD, associate professor of biomedical engineering in the School of Engineering & Applied Science who studies neurophysiology, found in an animal model that auditory cortex neurons may be encoding sounds differently than previously thought. Sensory neurons, such as those in auditory cortex, on average respond relatively indiscriminately at the beginning of a new stimulus, but rapidly become much more selective. The few neurons responding for the duration of a stimulus were generally thought to encode the identity of a stimulus, while the many neurons responding at the beginning were thought to encode only its presence. This theory makes a prediction that had never been tested — that the indiscriminate initial responses would encode stimulus identity less accurately than the selective responses that persist over the sound’s duration.

    “At the beginning of a sound transition, things are diffusely encoded across the neuron population, but sound identity turns out to be more accurately encoded,” Barbour said. “As a result, you can more rapidly identify sounds and act on that information. If you get about the same amount of information for each action potential spike of neural activity, as we found, then the more spikes you can put toward a problem, the faster you can decide what to do. Neural populations spike most and encode most accurately at the beginning of stimuli.”
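
    To make the comparison Barbour describes concrete, the sketch below decodes stimulus identity from synthetic spike counts in an “onset” window (many weakly tuned neurons) and a “sustained” window (a few strongly tuned neurons). The numbers, tuning scheme and classifier are assumptions; it illustrates the kind of analysis, not the study’s actual one:

    ```python
    # Toy comparison of decoding accuracy from an "onset" window (many weakly
    # tuned neurons firing) vs. a "sustained" window (few strongly tuned
    # neurons firing). Spike counts are synthetic and all numbers are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_neurons, n_stimuli = 200, 50, 4
    stimulus = rng.integers(0, n_stimuli, size=n_trials)
    preferred = np.arange(n_neurons) % n_stimuli          # each neuron's preferred stimulus
    tuned = preferred[None, :] == stimulus[:, None]       # trials x neurons tuning mask

    # Onset: every neuron fires a lot, with only mild stimulus preference.
    onset_counts = rng.poisson(5.0 + 0.5 * tuned)
    # Sustained: only the first 10 neurons fire, but they are strongly tuned.
    sustained_counts = rng.poisson((0.5 + 3.0 * tuned) * (np.arange(n_neurons) < 10))

    clf = LogisticRegression(max_iter=1000)
    for name, counts in [("onset", onset_counts), ("sustained", sustained_counts)]:
        accuracy = cross_val_score(clf, counts, stimulus, cv=5).mean()
        print(f"{name:9s} decoding accuracy: {accuracy:.2f}")
    ```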

    Barbour’s study involved recording individual neurons. To make similar kinds of measurements of brain activity in humans, researchers must use noninvasive techniques that average many neurons together. Event-related potential (ERP) techniques record brain signals through electrodes on the scalp and reflect neural activity synchronized to the onset of a stimulus. Functional MRI (fMRI), on the other hand, reflects activity averaged over several seconds. If the brain were using fundamentally different encoding schemes for onsets versus sustained stimulus presence, these two methods might be expected to diverge in their findings. Both reveal the neural encoding of stimulus identity, however.

    “There has been a lot of debate for a very long time, but especially in the past couple of decades, about whether information representation in the brain is distributed or local,” Barbour said.

    “If function is localized, with small numbers of neurons bunched together doing similar things, that’s consistent with sparse coding, high selectivity, and low population spiking rates. But if you have distributed activity, or lots of neurons contributing all over the place, that’s consistent with dense coding, low selectivity and high population spiking rates. Depending on how the experiment is conducted, neuroscientists see both. Our evidence suggests that it might just be both, depending on which data you look at and how you analyze it.”

    Barbour said the research is the most fundamental work to build a theory for how information might be encoded for sound processing, yet it implies a novel sensory encoding principle potentially applicable to other sensory systems, such as how smells are processed and encoded.

    Earlier this year, Barbour worked with Barani Raman, associate professor of biomedical engineering, to investigate how the presence and absence of an odor or a sound is processed. While the response times between the olfactory and auditory systems are different, the neurons are responding in the same ways. The results of that research also gave strong evidence that there may exist a stored set of signal processing motifs that is potentially shared by different sensory systems and even different species.


  7. Study examines brain patterns underlying mothers’ responses to infant cries

    October 28, 2017 by Ashley

    From the NIH/Eunice Kennedy Shriver National Institute of Child Health and Human Development press release:

    Infant cries activate specific brain regions related to movement and speech, according to a National Institutes of Health study of mothers in 11 countries. The findings, led by researchers at NIH’s Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), identify behaviors and underlying brain activities that are consistent among mothers from different cultures. Understanding these reactions may help in identifying and treating caregivers at risk for child maltreatment and other problematic behaviors.

    The study team conducted a series of behavioral and brain imaging studies using functional magnetic resonance imaging (fMRI). In a group of 684 new mothers in Argentina, Belgium, Brazil, Cameroon, France, Israel, Italy, Japan, Kenya, South Korea and the United States, researchers observed and recorded one hour of interaction between the mothers and their 5-month-old babies at home. The team analyzed whether mothers responded to their baby’s cries by showing affection, distracting, nurturing (like feeding or diapering), picking up and holding, or talking. Regardless of which country they came from, mothers were likely to pick up and hold or talk to their crying infant.

    Through fMRI studies of other groups of women, the team found that infant cries activated similar brain regions in new and experienced mothers: the supplementary motor area, which is associated with the intention to move and speak; the inferior frontal regions, which are involved in the production of speech; and the superior temporal regions that are linked to sound processing.

    Overall, the findings suggest that mothers’ responses to infant cries are hard-wired and generalizable across cultures. The study also builds upon earlier work showing that women’s and men’s brains respond differently to infant cries.


  8. Predicting when a sound will occur relies on the brain’s motor system

    October 10, 2017 by Ashley

    From the McGill University press release:

    Whether it is dancing or just tapping one foot to the beat, we all experience how auditory signals like music can induce movement. Now new research suggests that motor signals in the brain actually sharpen sound perception, and this effect is increased when we move in rhythm with the sound.

    It is already known that the motor system, the part of the brain that controls our movements, communicates with the sensory regions of the brain. The motor system controls eye and other body movements to orient our gaze and limbs as we explore our spatial environment. However, because the ears are immobile, it was less clear what role the motor system plays in distinguishing sounds.

    Benjamin Morillon, a researcher at the Montreal Neurological Institute of McGill University, designed a study based on the hypothesis that signals coming from the sensorimotor cortex could prepare the auditory cortex to process sound, and by doing so improve its ability to decipher complex sound flows like speech and music.

    Working in the lab of MNI researcher Sylvain Baillet, he recruited 21 participants who listened to complex tone sequences and had to indicate whether a target melody was on average higher or lower-pitched compared to a reference. The researchers also played an intertwined distracting melody to measure the participants’ ability to focus on the target melody.

    The exercise was done in two stages, one in which the participants were completely still, and another in which they tapped on a touchpad in rhythm with the target melody. The participants performed this task while their brain oscillations, a form of neural signaling brain regions use to communicate with each other, were recorded with magnetoencephalography (MEG).

    MEG millisecond imaging revealed that bursts of fast neural oscillations coming from the left sensorimotor cortex were directed at the auditory regions of the brain. These oscillations occurred in anticipation of the next tone of interest. This finding revealed that the motor system can predict when a sound will occur and send this information to the auditory regions so they can prepare to interpret the sound.

    One striking aspect of this discovery is that timed brain motor signaling anticipated the incoming tones of the target melody, even when participants remained completely still. Hand tapping to the beat of interest further improved performance, confirming the important role of motor activity in the accuracy of auditory perception.

    “A realistic example of this is the cocktail party concept: when you try to listen to someone while many people are speaking around you at the same time,” says Morillon. “In real life, you have many ways to help you focus on the individual of interest: pay attention to the timbre and pitch of the voice, focus spatially toward the person, look at the mouth, use linguistic cues, use the beginning of the sentence to predict the end of it, but also pay attention to the rhythm of the speech. This latter case is what we isolated in this study to highlight how it happens in the brain.”

    A better understanding of the link between movement and auditory processing could one day mean better therapies for people with hearing or speech comprehension problems.

    “It has implications for clinical research and rehabilitation strategies, notably on dyslexic children and hearing-impaired patients,” says Morillon. “Teaching them to better rely on their motor system by at first overtly moving in synchrony with a speaker’s pace could help them to better understand speech.”


  9. How the human brain detects the ‘music’ of speech

    September 9, 2017 by Ashley

    From the University of California – San Francisco press release:

    Researchers at UC San Francisco have identified neurons in the human brain that respond to pitch changes in spoken language, which are essential to clearly conveying both meaning and emotion.

    The study was published online August 24, 2017 in Science by the lab of Edward Chang, MD, a professor of neurological surgery at the UCSF Weill Institute for Neurosciences, and led by Claire Tang, a fourth-year graduate student in the Chang lab.

    “One of the lab’s missions is to understand how the brain converts sounds into meaning,” Tang said. “What we’re seeing here is that there are neurons in the brain’s neocortex that are processing not just what words are being said, but how those words are said.”

    Changes in vocal pitch during speech — part of what linguists call speech prosody — are a fundamental part of human communication, nearly as fundamental as melody to music. In tonal languages such as Mandarin Chinese, pitch changes can completely alter the meaning of a word, but even in a non-tonal language like English, differences in pitch can significantly change the meaning of a spoken sentence.

    For instance, “Sarah plays soccer,” in which “Sarah” is spoken with a descending pitch, can be used by a speaker to communicate that Sarah, rather than some other person, plays soccer; in contrast, “Sarah plays soccer,” with the emphasis on “soccer,” indicates that Sarah plays soccer, rather than some other game. And adding a rising tone at the end of a sentence (“Sarah plays soccer?”) indicates that the sentence is a question.

    The brain’s ability to interpret these changes in tone on the fly is particularly remarkable, given that each speaker also has their own typical vocal pitch and style (that is, some people have low voices, others have high voices, and others seem to end even statements as if they were questions). Moreover, the brain must track and interpret these pitch changes while simultaneously parsing which consonants and vowels are being uttered, what words they form, and how those words are being combined into phrases and sentences — with all of this happening on a millisecond scale.
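
    To make the distinction between a speaker’s typical pitch and moment-to-moment pitch changes concrete, here is a rough sketch of extracting a fundamental-frequency (f0) contour from a recording and expressing it relative to the speaker’s own average. The file name is hypothetical and librosa’s pitch tracker is used only for illustration; this is not the pipeline used in the study:

    ```python
    # Sketch: absolute vs. relative pitch for one recording. The file name is
    # hypothetical; librosa's pYIN tracker is used purely for illustration.
    import numpy as np
    import librosa

    audio, sr = librosa.load("sarah_plays_soccer.wav", sr=None)
    f0, voiced, _ = librosa.pyin(audio,
                                 fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C6"),
                                 sr=sr)

    absolute_pitch = f0[voiced]                              # Hz; depends on the speaker's voice
    relative_pitch = absolute_pitch / absolute_pitch.mean()  # contour normalised to the speaker
    print(f"speaker's mean f0: {absolute_pitch.mean():.0f} Hz")
    ```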

    Previous studies in both humans and non-human primates have identified areas of the brain’s frontal and temporal cortices that are sensitive to vocal pitch and intonation, but none have answered the question of how neurons in these regions detect and represent changes in pitch to inform the brain’s interpretation of a speaker’s meaning.

    Distinct groups of neurons in the brain’s temporal cortex distinguish speaker, phonetics, and intonation

    Chang, a neurosurgeon at the UCSF Epilepsy Center, specializes in surgeries to remove brain tissue that causes seizures in patients with epilepsy. In some cases, to prepare for these operations, he places high-density arrays of tiny electrodes onto the surface of the patients’ brains, both to help identify the location triggering the patients’ seizures and to map out other important areas, such as those involved in language, to make sure the surgery avoids damaging them.

    In the new study, Tang asked 10 volunteers awaiting surgery with these electrodes in place to listen to recordings of four sentences as spoken by three different synthesized voices:

    “Humans value genuine behavior”

    “Movies demand minimal energy”

    “Reindeer are a visual animal”

    “Lawyers give a relevant opinion”

    The sentences were designed to have the same length and construction, and could be played with four different intonations: neutral, emphasizing the first word, emphasizing the third word, or as a question. You can see how these intonation changes alter the meaning of the sentence: “Humans [unlike Klingons] value genuine behavior;” “Humans value genuine [not insincere] behavior;” and “Humans value genuine behavior?” [Do they really?]

    Tang and her colleagues monitored the electrical activity of neurons in a part of the volunteers’ auditory cortices called the superior temporal gyrus (STG), which previous research had shown might play some role in processing speech prosody.

    They found that some neurons in the STG could distinguish between the three synthesized speakers, primarily based on differences in their average vocal pitch range. Other neurons could distinguish between the four sentences, no matter which speaker was saying them, based on the different kinds of sounds (or phonemes) that made up the sentences (“reindeer” sounds different from “lawyers” no matter who’s talking). And yet another group of neurons could distinguish between the four different intonation patterns. These neurons changed their activity depending on where the emphasis fell in the sentence, but didn’t care which sentence it was or who was saying it.

    To prove to themselves that they had cracked the brain’s system for pulling intonation information from sentences, the team designed an algorithm to predict how neurons’ response to any sentence should change based on speaker, phonetics, and intonation and then used this model to predict how the volunteers’ neurons would respond to hundreds of recorded sentences by different speakers. They showed that while the neurons responsive to the different speakers were focused on absolute pitch of the speaker’s voice, the ones responsive to intonation were more focused on relative pitch: how the pitch of the speaker’s voice changed from moment to moment during the recording.
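
    The press release describes this model only at a high level. As a rough illustration of the general idea (an encoding model that predicts a neural response from speaker, phonetic and intonation features), the sketch below fits a ridge regression to made-up data; the feature definitions, dimensions and fitting choices are assumptions, not the published method:

    ```python
    # Sketch of an encoding model in the spirit described above: predict a
    # (synthetic) neural response from speaker, phonetic and intonation
    # features using ridge regression. All features and numbers are invented.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n_samples = 1000

    speaker_features = rng.normal(size=(n_samples, 3))      # e.g., codes for the 3 speakers
    phonetic_features = rng.normal(size=(n_samples, 12))    # e.g., phonetic feature values
    intonation_features = rng.normal(size=(n_samples, 4))   # e.g., relative-pitch descriptors

    X = np.hstack([speaker_features, phonetic_features, intonation_features])
    true_weights = rng.normal(size=X.shape[1])
    response = X @ true_weights + rng.normal(scale=0.5, size=n_samples)  # stand-in for electrode data

    X_train, X_test, y_train, y_test = train_test_split(X, response, random_state=0)
    model = Ridge(alpha=1.0).fit(X_train, y_train)
    print(f"held-out R^2: {model.score(X_test, y_test):.2f}")
    print("fitted intonation weights:", np.round(model.coef_[-4:], 2))
    ```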

    “To me this was one of the most exciting aspects of our study,” Tang said. “We were able to show not just where prosody is encoded in the brain, but also how, by explaining the activity in terms of specific changes in vocal pitch.”

    These findings reveal how the brain begins to take apart the complex stream of sounds that make up speech and identify important cues about the meaning of what we’re hearing, Tang says. Who is talking, what are they saying, and just as importantly, how are they saying it?

    “Now, a major unanswered question is how the brain controls our vocal tracts to make these intonational speech sounds,” said Chang, the paper’s senior author. “We hope we can solve this mystery soon.”

    Volunteers from the UCSF epilepsy center enable deeper look into workings of human brain

    The patients involved in the study were all at UCSF undergoing surgery for severe, untreatable epilepsy. Brain surgery is a powerful way to halt epilepsy in its tracks, potentially completely stopping seizures overnight, and its success is directly related to the accuracy with which a medical team can map the brain, identifying the exact pieces of tissue responsible for an individual’s seizures and removing them.

    The UCSF Comprehensive Epilepsy Center is a leader in the use of advanced intracranial monitoring to map out elusive seizure-causing brain regions. The mapping is done by surgically placing a flexible electrode array under the skull on the brain’s outer surface, or cortex, and recording the brain’s activity in order to pinpoint the parts of the brain responsible for triggering seizures. In a second surgery a few weeks later, the electrodes are taken out and the unhealthy brain tissue that causes the seizures is removed.

    This setting also permits a rare opportunity to ask basic questions about how the human brain works, such as how it controls speaking. The neurological basis of speech motor control has remained unknown until now because scientists cannot study speech mechanisms in animals and because non-invasive imaging methods lack the ability to track the very rapid time course of the brain signals that drive the muscles that create speech, which change in hundredths of seconds.

    But presurgical brain mapping can record neural activity directly, and can detect changes in electrical activity on the order of a few milliseconds.


  10. People who ‘hear voices’ can detect hidden speech in unusual sounds

    September 4, 2017 by Ashley

    From the Durham University press release:

    People who hear voices that other people can’t hear may use unusual skills when their brains process new sounds, according to research led by Durham University and University College London (UCL).

    The study, published in the academic journal Brain, found that voice-hearers could detect disguised speech-like sounds more quickly and easily than people who had never had a voice-hearing experience.

    The findings suggest that voice-hearers have an enhanced tendency to detect meaningful speech patterns in ambiguous sounds.

    The researchers say this insight into the brain mechanisms of voice-hearers tells us more about how these experiences occur in voice-hearers without a mental health problem, and could ultimately help scientists and clinicians find more effective ways to help people who find their voices disturbing.

    The study involved people who regularly hear voices, also known as auditory verbal hallucinations, but do not have a mental health problem.

    Participants listened to a set of disguised speech sounds known as sine-wave speech while they were having an MRI brain scan. Usually these sounds can only be understood once people are either told to listen out for speech, or have been trained to decode the disguised sounds.

    Sine-wave speech is often described as sounding a bit like birdsong or alien-like noises. However, after training people can understand the simple sentences hidden underneath (such as “The boy ran down the path” or “The clown had a funny face”).
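
    For readers unfamiliar with the technique, the sketch below illustrates the principle behind sine-wave speech: the utterance is replaced by a few sinusoids that follow its formant tracks. The formant tracks here are invented for illustration, so the output is not one of the study’s stimuli:

    ```python
    # Sketch of the principle behind sine-wave speech: the signal is replaced
    # by a few time-varying sinusoids that follow the formant tracks of the
    # original utterance. The formant tracks below are invented; real stimuli
    # are built from formants measured in recorded sentences.
    import numpy as np

    sample_rate = 16000
    t = np.arange(int(sample_rate * 1.0)) / sample_rate   # one second of signal

    # Hypothetical, slowly varying tracks for the first three formants (Hz).
    f1 = 500 + 150 * np.sin(2 * np.pi * 2.0 * t)
    f2 = 1500 + 300 * np.sin(2 * np.pi * 1.3 * t)
    f3 = 2500 + 200 * np.sin(2 * np.pi * 0.7 * t)

    def sinusoid_from_track(freq_track):
        # Integrate instantaneous frequency to get phase, then take the sine.
        phase = 2 * np.pi * np.cumsum(freq_track) / sample_rate
        return np.sin(phase)

    sine_wave_speech = sum(sinusoid_from_track(f) for f in (f1, f2, f3)) / 3.0
    # The result can be written to a WAV file (e.g., with scipy.io.wavfile.write)
    # and sounds more like whistling or birdsong than like a voice.
    ```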

    In the experiment, many of the voice-hearers recognised the hidden speech before being told it was there, and on average they tended to notice it earlier than other participants who had no history of hearing voices.

    The brains of the voice-hearers responded automatically to sounds that contained hidden speech, as compared with sounds that were meaningless, in regions of the brain linked to attention and monitoring skills.

    The small-scale study was conducted with 12 voice-hearers and 17 non voice-hearers. Nine out of 12 (75 per cent) voice-hearers reported hearing the hidden speech compared to eight out of 17 (47 per cent) non voice-hearers.
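
    As a quick worked check of those proportions (the release does not say which statistical test the authors used), one standard way to compare them is Fisher’s exact test on the 2×2 table:

    ```python
    # Quick check of the reported detection rates: 9 of 12 voice-hearers vs.
    # 8 of 17 non voice-hearers. Fisher's exact test is one standard way to
    # compare two such proportions; not necessarily the test used in the paper.
    from scipy.stats import fisher_exact

    table = [[9, 12 - 9],    # voice-hearers: detected, did not detect
             [8, 17 - 8]]    # non voice-hearers: detected, did not detect
    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
    ```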

    Lead author Dr Ben Alderson-Day, Research Fellow from Durham University’s Hearing the Voice project, said: “These findings are a demonstration of what we can learn from people who hear voices that are not distressing or problematic.

    “It suggests that the brains of people who hear voices are particularly tuned to meaning in sounds, and shows how unusual experiences might be influenced by people’s individual perceptual and cognitive processes.”

    People who hear voices often have a diagnosis of a mental health condition such as schizophrenia or bipolar disorder. However, not all voice-hearers have a mental health problem.

    Research suggests that between five and 15 per cent of the general population have had an occasional experience of hearing voices, with as many as one per cent having more complex and regular voice-hearing experiences in the absence of any need for psychiatric care.

    Co-author Dr Cesar Lima from UCL’s Speech Communication Lab commented: “We did not tell the participants that the ambiguous sounds could contain speech before they were scanned, or ask them to try to understand the sounds. Nonetheless, these participants showed distinct neural responses to sounds containing disguised speech, as compared to sounds that were meaningless.

    “This was interesting to us because it suggests that their brains can automatically detect meaning in sounds that people typically struggle to understand unless they are trained.”

    The research is part of a collaboration between Durham University’s Hearing the Voice project, a large interdisciplinary study of voice-hearing funded by the Wellcome Trust, and UCL’s Speech Communication lab.

    Durham’s Hearing the Voice project aims to develop a better understanding of the experience of hearing a voice when no one is speaking. The researchers want to increase understanding of voice-hearing by examining it from different academic perspectives, working with clinicians and other mental health professionals, and listening to people who have heard voices themselves.

    In the long term, it is hoped that the research will inform mental health policy and improve therapeutic practice in cases where people find their voices distressing and clinical help is sought.

    Professor Charles Fernyhough, Director of Hearing the Voice at Durham University, said: “This study brings the expertise of UCL’s Speech Communication lab together with Durham’s Hearing the Voice project to explore what is a frequently troubling and widely misunderstood experience.”

    Professor Sophie Scott from UCL Speech Communication Lab added: “This is a really exciting demonstration of the ways that unusual experiences with voices can be linked to — and may have their basis in — everyday perceptual processes.”

    The study involved researchers from Durham University, University College London, University of Porto (Portugal), University of Westminster and University of Oxford.