1. Study suggests brain waves reflect different types of learning

    October 20, 2017 by Ashley

    From the Massachusetts Institute of Technology press release:

    Figuring out how to pedal a bike and memorizing the rules of chess require two different types of learning, and now for the first time, researchers have been able to distinguish each type of learning by the brain-wave patterns it produces.

    These distinct neural signatures could guide scientists as they study the underlying neurobiology of how we both learn motor skills and work through complex cognitive tasks, says Earl K. Miller, the Picower Professor of Neuroscience at the Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences, and senior author of a paper describing the findings in the Oct. 11 edition of Neuron.

    When neurons fire, they produce electrical signals that combine to form brain waves that oscillate at different frequencies. “Our ultimate goal is to help people with learning and memory deficits,” notes Miller. “We might find a way to stimulate the human brain or optimize training techniques to mitigate those deficits.”

    The neural signatures could help identify changes in learning strategies that occur in diseases such as Alzheimer’s, with an eye to diagnosing these diseases earlier or enhancing certain types of learning to help patients cope with the disorder, says Roman F. Loonis, a graduate student in the Miller Lab and first author of the paper. Picower Institute research scientist Scott L. Brincat and former MIT postdoc Evan G. Antzoulatos, now at the University of California at Davis, are co-authors.

    Explicit versus implicit learning

    Scientists used to think all learning was the same, Miller explains, until they learned about patients such as the famous Henry Molaison, or “H.M.,” who developed severe amnesia in 1953 after having part of his brain removed in an operation to control his epileptic seizures. Molaison couldn’t remember eating breakfast a few minutes after the meal, but he was able to learn and retain new motor skills, such as tracing a shape like a five-pointed star while viewing it in a mirror.

    “H.M. and other amnesiacs got better at these skills over time, even though they had no memory of doing these things before,” Miller says.

    The divide revealed that the brain engages in two types of learning and memory — explicit and implicit.

    Explicit learning “is learning that you have conscious awareness of, when you think about what you’re learning and you can articulate what you’ve learned, like memorizing a long passage in a book or learning the steps of a complex game like chess,” Miller explains.

    “Implicit learning is the opposite. You might call it motor skill learning or muscle memory, the kind of learning that you don’t have conscious access to, like learning to ride a bike or to juggle,” he adds. “By doing it you get better and better at it, but you can’t really articulate what you’re learning.”

    Many tasks, like learning to play a new piece of music, require both kinds of learning, he notes.

    Brain waves from earlier studies

    When the MIT researchers studied the behavior of animals learning different tasks, they found signs that different tasks might require either explicit or implicit learning. In tasks that required comparing and matching two things, for instance, the animals appeared to use both correct and incorrect answers to improve their next matches, indicating an explicit form of learning. But in a task where the animals learned to move their gaze one direction or another in response to different visual patterns, they only improved their performance in response to correct answers, suggesting implicit learning.

    What’s more, the researchers found, these different types of behavior are accompanied by different patterns of brain waves.

    During explicit learning tasks, there was an increase in alpha2-beta brain waves (oscillating at 10-30 hertz) following a correct choice, and an increase in delta-theta waves (3-7 hertz) after an incorrect choice. The alpha2-beta waves increased during learning on explicit tasks, then dropped off once the task was learned. The researchers also saw signs of a spike in neural activity that occurs in response to behavioral errors, called event-related negativity, only in the tasks thought to require explicit learning.

    The increase in alpha2-beta brain waves during explicit learning “could reflect the building of a model of the task,” Miller explains. “And then after the animal learns the task, the alpha-beta rhythms then drop off, because the model is already built.”

    By contrast, delta-theta rhythms only increased with correct answers during an implicit learning task, and they decreased during learning. Miller says this pattern could reflect neural “rewiring” that encodes the motor skill during learning.

    “This showed us that there are different mechanisms at play during explicit versus implicit learning,” he notes.
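    The two signatures above are defined by ordinary frequency bands, so the kind of analysis described can be illustrated with a simple spectral power calculation. The sketch below is not the study’s actual pipeline — the function name, the synthetic signal, and the FFT-based approach are all illustrative; only the band edges (3-7 Hz and 10-30 Hz) come from the press release:

```python
import numpy as np

def bandpower(signal, fs, low_hz, high_hz):
    """Mean power of `signal` within the [low_hz, high_hz] band, via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= low_hz) & (freqs <= high_hz)
    return spectrum[band].sum() / len(signal)

# Synthetic recording: a 5 Hz (delta-theta) plus a weaker 20 Hz (alpha2-beta) component.
fs = 1000                       # samples per second
t = np.arange(0, 2.0, 1 / fs)  # two seconds of signal
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

delta_theta = bandpower(signal, fs, 3, 7)    # 3-7 Hz band from the study
alpha2_beta = bandpower(signal, fs, 10, 30)  # 10-30 Hz band from the study
```

    Tracking how these two band powers rise and fall over the course of training is, in spirit, how the waxing and waning of the alpha2-beta and delta-theta signatures described above would be observed.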

    Future Boost to Learning

    Loonis says the brain wave signatures might be especially useful in shaping how we teach or train a person as they learn a specific task. “If we can detect the kind of learning that’s going on, then we may be able to enhance or provide better feedback for that individual,” he says. “For instance, if they are using implicit learning more, that means they’re more likely relying on positive feedback, and we could modify their learning to take advantage of that.”

    The neural signatures could also help detect disorders such as Alzheimer’s disease at an earlier stage, Loonis says. “In Alzheimer’s, a kind of explicit fact learning disappears with dementia, and there can be a reversion to a different kind of implicit learning,” he explains. “Because the one learning system is down, you have to rely on another one.”

    Earlier studies have shown that certain parts of the brain such as the hippocampus are more closely related to explicit learning, while areas such as the basal ganglia are more involved in implicit learning. But Miller says that the brain wave study indicates “a lot of overlap in these two systems. They share a lot of the same neural networks.”


  2. Study suggests learning during development is regulated by an unexpected brain region

    October 19, 2017 by Ashley

    From the Netherlands Institute for Neuroscience – KNAW press release:

    Half a century of research on how the brain learns to integrate visual inputs from the two eyes has provided important insights into critical period regulation, leading to the conclusion that it occurs within the cortex. Scientists have now made the surprising discovery that a brain region that passes input from the eyes on to the cortex also plays a crucial role in opening the critical period of binocular vision.

    During childhood, the brain goes through critical periods in which its learning ability for specific skills and functions is strongly increased. It is assumed that the beginning and ending of these critical periods are regulated in the cortex, the outermost layer of the brain. However, scientists from the Netherlands Institute for Neuroscience discovered that a structure deep in the brain also plays a crucial role in the regulation of these critical periods. These findings, published today in the leading journal Nature Neuroscience, have important implications for understanding developmental problems ranging from a lazy eye to intellectual disability.

    Critical periods

    Skills and functions such as speaking a language or seeing in 3D through binocular vision can only be learned flawlessly during critical periods of development. When these developmental forms of learning fail, lifelong problems arise.

    Scientists have been investigating the mechanisms by which critical periods are switched on and off, in the hope of extending or reopening them to treat developmental problems. Although decades of work on binocular vision had placed this regulation in the cortex, neuroscientist Christiaan Levelt and his team have now discovered that a brain region that relays input from the eyes to the cortex also plays a crucial role in opening the critical period of binocular vision.

    Using electrophysiological recordings in genetically modified mice, they showed that this brain region, known as the thalamus, contains inhibitory neurons that regulate how efficiently the brain learns to integrate binocular inputs. Levelt: “To treat developmental problems that cause learning difficulties during critical periods, reinstating flexibility in the visual cortex may not be sufficient. Scientists and clinicians should not limit themselves to studying cortical deficits alone. They should also focus on the thalamus and the way it preprocesses information before it enters the cortex.”

    Albinism

    The study may also provide some hope for people with albinism, who often have limited binocular vision due to misrouting of inputs from the eyes to the thalamus. Levelt’s team found that in contrast to what is generally assumed, plasticity of binocular vision also occurs in the thalamus itself, suggesting that this might be improved in children with albinism through training.


  3. Study looks at relationship between flexibility and modularity in brain

    October 18, 2017 by Ashley

    From the Rice University press release:

    A new study by Rice University researchers takes a step toward what they see as key to the advance of neuroscience: a better understanding of the relationship between the brain’s flexibility and its modularity.

    Their open-access study appears in Frontiers in Human Neuroscience.

    Scientists are only beginning to comprehend how brains are wired, both structurally and functionally. The latest in a series of studies by Rice scientists shows that the brain’s flexibility and modularity — which researchers often study independently — are strongly related. The new study also presents a theoretical framework to explain the two processes.

    They found that flexibility, which relates to how much brain networks change over time, and modularity, which defines the degree of interconnectivity between parts of the brain responsible for specific tasks, are highly negatively correlated. In other words, people with highly modular brains that constrain tasks to the modules also show low flexibility, while people with high-flexibility brains that share tasks across the network show low modularity.

    These properties explain how participants perform in experiments, according to co-author and Rice psychologist Simon Fischer-Baum. When someone is presented with a complex task, flexibility of the brain network better explains performance than the modularity of the network. For a simple task, he said, the reverse is true, which indicates that these ways of thinking about brain organization map onto different cognitive abilities.

    But even that sliding scale only hints at the relationship between flexibility and modularity.

    “There have been a bunch of papers about the flexibility of the network and how that relates to cognition and a bunch of other papers about modularity and its relation to cognition, but nobody’s really explored whether these things are tapping into different aspects of cognition or if they’re two sides of the same coin,” said Fischer-Baum, who was the lead faculty member on the study in collaboration with Rice psychologist Randi Martin and biophysicist Michael Deem.

    “People can be described in terms of how flexible their brain networks are: how frequently brain regions change the modules they’re assigned to or how stable the modules stay over time,” Fischer-Baum said. Regions of the brain with higher flexibility are those typically associated with cognitive control and executive function, the processes that control behavior. Regions with lower flexibility are those involved in motor, taste, vision and hearing processes.

    The researchers took two paths to map the modules and thought processes of 52 subjects ranging in age from 18 to 26; 31 percent of the participants were male and 69 percent were female.

    First, the subjects underwent resting-state functional magnetic resonance imaging (fMRI), which detected blood oxygen levels to track activity and measure modularity (the likelihood of connections within modules of the brain) and flexibility (how frequently regions switch allegiance from one module to another).

    Second, the subjects took a battery of six cognitive tests to assess their performance on simple and complex tasks.

    Data from both paths had previously been used to show that brain modularity correlates positively with performance on simple tasks and negatively with performance on complex tasks. The question that remained was how modularity relates to other measures of network organization, like flexibility, and how flexibility relates to cognitive processing as a whole. The goal of the Rice study was to analyze the entire brain network rather than individual components.
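    The flexibility measure described above — how frequently a region switches its module allegiance from one time window to the next — lends itself to a compact illustration. The sketch below is hypothetical: the function and the module labels are invented for demonstration, whereas the study derived its assignments from the resting-state fMRI data:

```python
import numpy as np

def flexibility(assignments):
    """Fraction of time steps at which each region changes module.

    `assignments` is a (regions x time_windows) array of module labels;
    the result is one flexibility score per region, between 0 and 1.
    """
    assignments = np.asarray(assignments)
    changes = assignments[:, 1:] != assignments[:, :-1]
    return changes.mean(axis=1)

# Hypothetical module labels for 3 brain regions across 5 time windows.
labels = [
    [0, 0, 0, 0, 0],  # stable region: never switches (flexibility 0.0)
    [0, 1, 0, 1, 0],  # switches at every window (flexibility 1.0)
    [0, 0, 1, 1, 1],  # one switch out of four transitions (flexibility 0.25)
]
scores = flexibility(labels)  # -> [0.0, 1.0, 0.25]
```

    Averaging such per-region scores gives a whole-brain flexibility value that could then be correlated against modularity and test performance, in the manner the study describes.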

    “Cognition is complex,” Fischer-Baum said. “Any task relies on multiple areas of the brain doing different functions and communicating with each other. We have to be able to articulate why, for a given task, it’s good to have a particular type of brain organization.”

    Participants were given a series of common cognitive psychology experiments, which yielded composite scores for simple and complex behaviors that aligned with the fMRI results. Participants who showed high flexibility fared best on complex tests that required the ability to quickly switch between tasks and draw upon working memory. One test, for example, presented them with a square or a triangle in blue or yellow and required them to respond to a cue about either the color or the shape of the object.

    Those who scored higher on the modular scale excelled on tasks with more limited behavioral requirements, like a traffic-light test that required them to hit a button as quickly as possible when they saw a light turn from red to green and measured their response times.

    The researchers noted it would be incorrect to think that modularity and flexibility simply measure the same property. Because they make independent contributions to performance, the researchers theorize they are “likely to link to different cognitive processes.”

    They also theorized that learning a skill may account for variation within individuals between modularity and flexibility. The researchers wrote that during the initial stages of learning, even a simple skill may employ cognitive controls usually required for complex tasks. But as the task is learned and becomes simpler, flexibility decreases and modularity increases.

    Fischer-Baum said it’s best to think of flexibility and modularity as continuously variable rather than fixed traits. “I don’t think we know how this changes over time, but it seems likely that it changes with experience, and even over the course of a day.

    “This broad division between complex and simple tasks is a first pass at the problem,” he said. “It’s not just that having a modular brain is good, or having a flexible brain is good. We want to know what they’re good for and the timescales at which these variables have an impact.”


  4. Study looks at timbre shifting in “baby talk”

    October 17, 2017 by Ashley

    From the Cell Press press release:

    When talking with their young infants, parents instinctively use “baby talk,” a unique form of speech including exaggerated pitch contours and short, repetitive phrases. Now, researchers reporting in Current Biology on October 12 have found another unique feature of the way mothers talk to their babies: they shift the timbre of their voice in a rather specific way. The findings hold true regardless of a mother’s native language.

    “We use timbre, the tone color or unique quality of a sound, all the time to distinguish people, animals, and instruments,” says Elise Piazza from Princeton University. “We found that mothers alter this basic quality of their voices when speaking to infants, and they do so in a highly consistent way across many diverse languages.”

    Timbre is the reason it’s so easy to discern idiosyncratic voices — the famously velvety sound of Barry White, the nasal tone of Gilbert Gottfried, and the gravelly sound of Tom Waits — even if they’re all singing the same note, Piazza explains.

    Piazza and her colleagues at the Princeton Baby Lab, including Marius Catalin Iordan and Casey Lew-Williams, are generally interested in the way children learn to detect structure in the voices around them during early language acquisition. In the new study, they decided to focus on the vocal cues that parents adjust during baby talk without even realizing they’re doing it.

    The researchers recorded 12 English-speaking mothers while they played with and read to their 7- to 12-month-old infants. They also recorded those mothers while they spoke to another adult.

    After quantifying each mother’s unique vocal fingerprint using a concise measure of timbre, the researchers found that a computer could reliably tell the difference between infant- and adult-directed speech. In fact, using an approach called machine learning, the researchers found that a computer could learn to differentiate baby talk from normal speech based on just one second of speech data. The researchers verified that those differences couldn’t be explained by pitch or background noise.
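    The press release does not specify the classifier, so the sketch below shows one simple way a computer could separate the two speech classes from per-second timbre feature vectors: a nearest-centroid rule, where each class is summarized by its mean “vocal fingerprint.” All feature values, names, and the 2-D feature space here are invented for illustration:

```python
import numpy as np

def train_centroids(features, labels):
    """Mean feature vector (a class-level "vocal fingerprint") per speech class."""
    features, labels = np.asarray(features), np.asarray(labels)
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(centroids, sample):
    """Assign a one-second timbre feature vector to the nearest class centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(sample - centroids[c]))

# Toy 2-D "timbre" features: adult-directed (label 0) vs. infant-directed (1).
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.0, 0.9]]
y = [0, 0, 1, 1]
centroids = train_centroids(X, y)
classify(centroids, np.array([0.95, 0.95]))  # -> 1 (infant-directed)
```

    A held-out one-second sample is assigned to whichever class fingerprint it sits closest to — which is conceptually how a single second of speech data could suffice, as the study reports.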

    The next question was whether those differences would hold true in mothers speaking other languages. The researchers enlisted another group of 12 mothers who spoke nine different languages, including Spanish, Russian, Polish, Hungarian, German, French, Hebrew, Mandarin, and Cantonese. Remarkably, they found that the timbre shift observed in English-speaking mothers was highly consistent across those languages from around the world.

    “The machine learning algorithm, when trained on English data alone, could immediately distinguish adult-directed from infant-directed speech in a test set of non-English recordings and vice versa when trained on non-English data, showing strong generalizability of this effect across languages,” Piazza says. “Thus, shifts in timbre between adult-directed and infant-directed speech may represent a universal form of communication that mothers implicitly use to engage their babies and support their language learning.”

    The researchers say the next step is to explore how the timbre shift helps infants in learning. They suspect that the unique timbre fingerprint could help babies learn to differentiate and direct their attention to their mother’s voice from the time they are born.

    And don’t worry, dads. While the study was done in mothers to keep the pitches more consistent across study participants, the researchers say it’s likely the results will apply to fathers, too.


  5. Study suggests brain development and plasticity share similar signalling pathways

    by Ashley

    From the Goethe-Universität Frankfurt am Main press release:

    Learning and memory are two important functions of the brain that are based on the brain’s plasticity. Scientists from Goethe University Frankfurt report in the latest issue of the scientific journal Cell Reports how a trio of key molecules directs these processes. Their findings provide new leads for the therapy of Alzheimer’s disease.

    The brain is able to adapt to new situations by changing, building or reducing the contact points between nerve cells (synapses). In particular, signal strength is regulated by constantly altering the abundance of receptors in the membrane of nerve cells. This explains why it is easier to remember information that we use frequently than information that we learned years ago and have not used since.

    Amparo Acker-Palmer’s research group at the Institute of Cell Biology and Neuroscience of the Goethe University focused in their study on AMPA receptors, which are the main transmitters of stimulating signals. Nerve cells in the hippocampus, the brain region responsible for learning and memory, are able to alter the number of their “switched-on” receptors by extending or retracting them like antennae, thereby regulating the strength of a signal. The Frankfurt scientists have now discovered that three key molecules are involved in this regulation: GRIP1, ephrinB2 and ApoER2, the latter being a receptor for the signalling molecule Reelin.

    “These results are fascinating, since it has been known for years that ephrinB2 as well as Reelin are essential for the development of the brain,” explains Amparo Acker-Palmer. “Furthermore, earlier work in my lab has shown that there is an interaction between the Reelin signalling pathway and ephrinBs when neurons migrate during brain maturation.”

    Interestingly, a single mechanism can fulfill very different functions within a cell. An earlier study by Amparo Acker-Palmer’s team already showed that macromolecular complexes consisting of ephrinB2 and ApoER2 regulate processes involved in neuronal migration. In the present study, the scientists selectively inhibited the interaction between the two proteins and could thereby demonstrate that these proteins, together with GRIP1, also influence brain plasticity in adults. When the interaction between these proteins was inhibited, neurons were unable to react to changes in the activity of their network. They also showed defects in long-term plasticity, which is the cellular basis for learning and memory.

    “Both ApoER2 and ephrinB2 have been linked to the development of Alzheimer’s, although the mechanisms of action are not clear yet,” says Amparo Acker-Palmer. “With our research we not only discovered new interactions of key molecules for the regulation of learning and memory but also shed light on potential new therapeutic targets for the treatment of Alzheimer’s disease.”


  6. Study suggests reading difficulties in children may sometimes be linked to hearing problems

    by Ashley

    From the Coventry University press release:

    Children with reading difficulties should be more thoroughly screened for hearing problems, a new report by Coventry University academics has said.

    The study, funded by the Nuffield Foundation, found 25 per cent of its young participants who had reading difficulties showed mild or moderate hearing impairment, of which their parents and teachers were unaware.

    The researchers believe that if there was more awareness of youngsters’ hearing problems — as well as an understanding of what particular aspects of literacy they struggled with — then the children might be able to receive more structured support that could help them improve their reading and writing skills.

    The study by academics at the university’s Centre for Advances in Behavioural Science compared children with dyslexia to youngsters who had a history of repeated ear infections to see if they had a similar pattern of literacy difficulties.

    A total of 195 children aged between eight and 10 — including 36 with dyslexia and 29 with a history of repeated ear infections — completed a series of tests to establish their reading and writing skills and how they used the structures of words based on their sounds and meanings, in speech and literacy.

    They were retested 18 months later, when a hearing screening was also carried out.

    None of the parents of the children with dyslexia reported any knowledge of hearing loss before the tests, but the screening showed that nine out of 36 of these children had some form of hearing loss.

    Around one third of the children who had repeated ear infections had problems with reading and writing, although the researchers suggest repeat ear infections will only result in reading difficulties when accompanied by weaknesses in other areas.

    The results showed that children with dyslexia have different patterns of literacy difficulties to children with a history of repeat ear infections, although there is some overlap between the groups.

    Children with dyslexia had difficulties with literacy activities involving the ability to manipulate speech sounds (known as phonology) and the knowledge of grammatical word structure (called morphology).

    The academics said these youngsters need to be taught how to use morphology in a highly structured, step-by-step way to help them improve their literacy skills.

    Children with a history of repeated ear infections mainly had problems with the phonology tasks, showing that they still had subtle difficulties with the perception of spoken language.

    The academics suggested that teachers should be made aware if youngsters have had a history of repeated ear infections, so they can consider the possibility of any hearing loss and understand how the consequences of these infections may impact on children as they learn about the sound structure of words and begin to read.

    Children currently have their hearing tested as babies and, in some areas of the UK, when they start school. Even so, later onset deafness can occur at any age and GPs can arrange for a child to have a hearing test at any age if a parent or teacher has concerns.

    But the academics believe that more regular, detailed tests might help youngsters with literacy problems.

    Report author Dr Helen Breadmore said:

    “Many children in school may have an undetected mild hearing loss, which makes it harder for them to access the curriculum.

    “Current hearing screening procedures are not picking up these children, and we would advise that children have their hearing tested in more detail and more often.

    “A mild-moderate hearing loss will make the perception of speech sounds difficult, particularly in a classroom environment with background noise and other distractions. Therefore, children who have suffered repeated ear infections and associated hearing problems have fluctuating access to different speech sounds precisely at the age when this information is crucial in the early stages of learning to read.”


  7. New insights into how sleep helps the brain to reorganize itself

    October 15, 2017 by Ashley

    From the University of Surrey press release:

    A study has given new insights into how sleep contributes to brain plasticity — the ability for our brain to change and reorganize itself — and could pave the way for new ways to help people with learning and memory disorders.

    Researchers at the Humboldt and Charité Universities in Berlin, led by Dr Julie Seibt from the University of Surrey, used cutting-edge techniques to record activity in a particular part of brain cells that is responsible for holding new information — the dendrites.

    The study, published in Nature Communications, found that activity in dendrites increases when we sleep, and that this increase is linked to specific brain waves that are seen to be key to how we form memories.

    Dr Julie Seibt, Lecturer in Sleep and Plasticity at the University of Surrey and lead author of the study, said: “Our brains are amazing and fascinating organs — they have the ability to change and adapt based on our experiences. It is becoming increasingly clear that sleep plays an important role in these adaptive changes. Our study tells us that a large proportion of these changes may occur during very short and repetitive brain waves called spindles.

    “Sleep spindles have been associated with memory formation in humans for quite some time but nobody knew what they were actually doing in the brain. Now we know that during spindles, specific pathways are activated in dendrites, maybe allowing our memories to be reinforced during sleep.

    “In the near future, techniques that allow brain stimulation, such as transcranial magnetic stimulation (TMS), could be used to stimulate dendrites with the same frequency range as spindles. This could lead to enhanced cognitive functions in patients with learning and memory disorders, such as dementia.”


  8. Study looks at how disliked classes affect incidence of college student cheating

    October 13, 2017 by Ashley

    From the Ohio State University press release:

    One of the tactics that discourages student cheating may not work as well in courses that college students particularly dislike, a new study has found.

    Previous research suggests instructors who emphasize mastering the content in their classes encounter less student cheating than those who push students to get good grades.

    But this new study found that emphasizing mastery isn’t related as strongly to lower rates of cheating in classes that students list as their most disliked. Students in disliked classes were equally likely to cheat, regardless of whether the instructors emphasized mastery or good grades.

    The factor that best predicted whether a student would cheat in a disliked class was a personality trait: a high need for sensation, said Eric Anderman, co-author of the study and professor of educational psychology at The Ohio State University.

    People with a high need for sensation are risk-takers, Anderman said.

    “If you enjoy taking risks, and you don’t like the class, you may think ‘Why not cheat?’ You don’t feel you have as much to lose,” he said.

    Anderman conducted the study with Sungjun Won, a graduate student in educational psychology at Ohio State. It appears online in the journal Ethics & Behavior and will be published in a future print edition.

    The study is the first to look at how academic misconduct might differ in classes that students particularly dislike.

    “You could understand why students might be less motivated in classes they don’t like and that could affect whether they were willing to cheat,” Anderman said.

    The researchers surveyed 409 students from two large research universities in different parts of the country.

    The students were asked to answer questions about the class in college that they liked the least.

    Participants were asked if they took part in any of 22 cheating behaviors in that class, including plagiarism and copying test answers from another student. The survey also asked students their beliefs about the ethics of cheating, their perceptions of how much the instructor emphasized mastery and test scores, and a variety of demographic questions, as well as a measure of sensation-seeking.

    A majority of the students (57 percent) reported a math or science course as their most disliked. Large classes were heavily represented: nearly half (45 percent) said their least favorite class had more than 50 students enrolled, and about two-thirds (65 percent) said the course they disliked was required for their major.

    The most interesting finding was that an emphasis on mastery or on test scores did not predict cheating in disliked classes, Anderman said.

    In 20 years of research on cheating, Anderman said he and his colleagues have consistently found that students cheated less — and believed cheating was less acceptable — in classes where the goals were intrinsic: learning and mastering the content. They were more likely to cheat in classes where they felt the emphasis was on extrinsic goals, such as successful test-taking and getting good grades.

    This study was different, Anderman said.

    In classes that emphasized mastery, some students still believed cheating was wrong, even in their most-disliked class. But when classes are disliked, the new findings suggest a focus on mastery no longer directly protects against cheating behaviors. Nevertheless, there is still a positive relation between actual cheating and the belief that cheating is morally acceptable in those classes.

    “When you have students who are risk-takers in classes that they dislike, the benefits of a class that emphasizes learning over grades seem to disappear,” he said.

    But Anderman noted that this study reinforced results from earlier studies that refute many of the common beliefs about student cheating.

    “All of the things that people think are linked to cheating don’t really matter,” he said.

    “We examined gender, age, the size of classes, whether it was a required class, whether it was graded on a curve — and none of those were related to cheating once you took into account the need for sensation in this study,” he said. “And in other studies, the classroom goals were also important.”

    The good news is that the factors that cause cheating are controllable in some measure, Anderman said. Classes can be designed to emphasize mastery and interventions could be developed to help risk-taking students.

    “We can find ways to help minimize cheating,” he said.


  9. Study suggests stress diminishes our capacity to sense new dangers

    October 12, 2017 by Ashley

    From the New York University press release:

    Being under stress diminishes our ability to predict new dangers that we face, a team of psychology researchers finds. Their work runs counter to the conventional view that stress enhances our ability to detect and adjust to these changing sources of threat.

    “Stress does not always increase perceptions of danger in the environment, as is often assumed,” explains Candace Raio, a postdoctoral researcher at New York University and the study’s lead author. “In fact, our study shows that when we are under stress, we pay less attention to changes in the environment, potentially putting us at increased risk for ignoring new sources of threat. As a result, stress can reduce the flexibility of our responses to threats by impairing how well we track and update predictions of potentially dangerous circumstances.”

    The research, conducted in collaboration with Jian Li, a scientist at Peking University, appears in the latest issue of the journal Proceedings of the National Academy of Sciences.

    Although learning to predict threats in our environment is critical to survival, note the study’s authors, who also include NYU Professor Elizabeth Phelps and Assistant Professor Catherine Hartley, it is equally important to be flexible in order to control these responses when new sources of threat change — for instance, from an oncoming car to an out-of-control skateboarder.

    To test our ability to learn to flexibly update threat responses under stressful conditions, the researchers conducted a series of experiments centered on “Pavlovian threat-conditioning.” Here, the subjects viewed images on a computer screen. The appearance of some images was coupled with a mild electric wrist-shock, serving as a “threat cue,” while other images were never paired with a shock (“safe cue”).

    A day later, half of the participants underwent a laboratory procedure designed to induce stress — this “stress group” placed their arm in an ice-water bath for a few minutes, which elevated two known stress hormones (alpha-amylase and cortisol). Later, all of the study’s subjects repeated the threat-conditioning procedure. However, this time the cue outcomes switched: the earlier threatening cue no longer predicted shock, but the formerly safe cue did.

    While the subjects viewed the images, the scientists collected physiological arousal responses in order to measure how individuals anticipated the outcome of each cue.

    On the second day of the experiment, the stress group was less likely than the control group to change its responses to threats (the formerly safe visuals that were now paired with shocks), an indication that stress impaired participants’ ability to flexibly detect new threats. Specifically, stressed participants showed a reduced physiological response to the new threat cue, suggesting that they did not fully switch their association with this cue from safe to threatening.

    The researchers then applied a computational learning model to further understand how stress affects flexibility in decision making. This analysis revealed a learning deficit in the stressed subjects: stress blunted an attentional signal (“associability”) that participants use to update cue associations, resulting in a slower rate of learning.
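    The attention-weighted update an “associability” signal provides can be illustrated with a Pearce-Hall-style learning rule, in which recent surprise scales the learning rate. This is a minimal sketch of that class of model, not the authors’ code; the function name and all parameter values are illustrative assumptions.

    ```python
    # Sketch of a Pearce-Hall-style learning rule: an "associability" term
    # tracks recent surprise and scales how fast cue-outcome associations
    # are updated. All parameter values are illustrative assumptions.

    def simulate(outcomes, eta=0.3, kappa=1.0, alpha0=1.0):
        """Return per-trial threat predictions for a sequence of outcomes (0 or 1)."""
        v, alpha = 0.0, alpha0                    # value estimate and associability
        predictions = []
        for o in outcomes:
            predictions.append(v)
            pe = o - v                            # prediction error (surprise)
            v += kappa * alpha * pe               # associability scales the update
            alpha = eta * abs(pe) + (1 - eta) * alpha  # track recent surprise
        return predictions

    # A cue that is safe for 10 trials, then starts predicting shock:
    preds = simulate([0] * 10 + [1] * 10)
    # A blunted associability signal (smaller alpha0), as the model attributes
    # to stress, slows how quickly predictions rise after the switch.
    preds_blunted = simulate([0] * 10 + [1] * 10, alpha0=0.1)
    ```

    In this framework, the stressed group’s slower learning corresponds to a dampened associability term: the same prediction errors occur, but they move the threat estimate less on each trial.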


  10. Study suggests synchronizing specific brain oscillations enhances executive function

    October 9, 2017 by Ashley

    From the Boston University press release:

    Two brain regions — the medial frontal and lateral prefrontal cortices — control most executive function. Researchers used high-definition transcranial alternating current stimulation (HD-tACS) to synchronize oscillations between them, improving brain processing. De-synchronizing did the opposite.

    Robert Reinhart calls the medial frontal cortex the “alarm bell of the brain.”

    “If you make an error, this brain area fires,” says Reinhart, an assistant professor of psychological and brain sciences at Boston University. “If I tell you that you make an error, it also fires. If something surprises you, it fires.” Hit a sour note on the piano and the medial frontal cortex lights up, helping you correct your mistake as fast as possible. In healthy people, this region of the brain works hand in hand (or perhaps lobe in lobe) with a nearby region, the lateral prefrontal cortex, an area that stores rules and goals and also plays an important role in changing our decisions and actions.

    “These are maybe the two most fundamental brain areas involved with executive function and self-control,” says Reinhart, who used a new technique called high-definition transcranial alternating current stimulation (HD-tACS) to stimulate these two regions with electrodes placed on a participant’s scalp. Using this new technology, he found that improving the synchronization of brain waves, or oscillations, between these two regions enhanced their communication with each other, allowing participants to perform better on laboratory tasks related to learning and self-control. Conversely, de-synchronizing or disrupting the timing of the brain waves in these regions impaired participants’ ability to learn and control their behavior, an effect that Reinhart could quickly fix by changing how he delivered the electrical stimulation. The work, published October 9, 2017, in the journal Proceedings of the National Academy of Sciences (PNAS), suggests that electrical stimulation can quickly — and reversibly — increase or decrease executive function in healthy people and change their behavior. These findings may someday lead to tools that can enhance normal brain function, possibly helping treat disorders from anxiety to autism.

    “We’re always looking for a link between brain activity and behavior — it’s not enough to have just one of those things. That’s part of what makes this finding so exciting,” says David Somers, a BU professor of psychological and brain sciences, who was not involved with the study. Somers likens the stimulation to a “turbo charge” for your brain. “It’s really easy to mess things up in the brain but much harder to actually improve function.”

    Research has recently suggested that populations of millions of cells in the medial frontal cortex and the lateral prefrontal cortex may communicate with each other through the precise timing of their synchronized oscillations, and these brain rhythms appear to occur at a relatively low frequency (about four to eight cycles per second). While scientists have studied these waves before, Reinhart is the first to use HD-tACS to test how these populations of cells interact and whether their interactions are behaviorally useful for learning and decision-making. In his work, funded by the National Institutes of Health, Reinhart is able to use HD-tACS to isolate and alter these two specific brain regions, while also recording participants’ electrical brain activity via electroencephalogram (EEG).
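    Synchrony between two oscillating regions is commonly quantified with a phase-locking measure: if the phase difference between the signals is consistent from trial to trial, the regions are synchronized. The sketch below illustrates that measure in general terms; it is not the study’s analysis, and the trial counts and phase lag are assumptions.

    ```python
    import numpy as np

    # Phase-locking value (PLV): the magnitude of the trial-averaged unit
    # vector of phase differences. Near 1 = synchronized, near 0 = not.

    def plv(phase_a, phase_b):
        """Phase-locking value across trials for two arrays of phases (radians)."""
        return abs(np.mean(np.exp(1j * (phase_a - phase_b))))

    rng = np.random.default_rng(0)
    n_trials = 200
    phase_a = rng.uniform(0, 2 * np.pi, n_trials)   # region A phase, per trial
    locked = phase_a + 0.3                          # fixed lag: synchronized
    unrelated = rng.uniform(0, 2 * np.pi, n_trials) # no consistent relation

    print(plv(phase_a, locked))     # 1.0: perfectly phase-locked
    print(plv(phase_a, unrelated))  # near 0: desynchronized
    ```

    In these terms, in-phase stimulation of the two regions pushes the phase-locking value up, while the disruptive stimulation condition pushes it toward zero.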

    “The science is much stronger, much more precise than what’s been done earlier,” says Somers.

    In his first round of studies, Reinhart tested 30 healthy participants. Each subject wore a soft cap fitted with electrodes that stimulated brain activity, while additional electrodes monitored brain waves. (The procedure is safe, noninvasive, and doesn’t hurt, says Reinhart. “There’s a slight tingling for the first 30 seconds,” he says, “and then people habituate to it.”) Then, for 40 minutes, participants performed a time-estimation learning task, pressing a button when they thought 1.7 seconds had passed. Each time, the computer gave them feedback: too fast, too slow, or just right.
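    The task’s feedback rule is simple to state: compare the produced interval to the 1.7-second target. The 1.7 s target comes from the study; the tolerance window in this sketch is an assumption, since the press release does not specify one.

    ```python
    # Sketch of the time-estimation task's feedback rule. The 1.7 s target
    # is from the study; the "just right" tolerance is a hypothetical value.

    TARGET = 1.7       # seconds
    TOLERANCE = 0.05   # hypothetical window for "just right"

    def feedback(response_time):
        """Classify a button press relative to the target interval."""
        if response_time < TARGET - TOLERANCE:
            return "too fast"
        if response_time > TARGET + TOLERANCE:
            return "too slow"
        return "just right"

    print(feedback(1.50))   # too fast
    print(feedback(1.71))   # just right
    print(feedback(1.90))   # too slow
    ```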

    Reinhart tested each of the 30 participants three times: once up-regulating the oscillations, once disrupting them, and once doing nothing. In tests where Reinhart cranked up the synchrony between the two brain regions, people learned faster, made fewer errors, and, when they did make an error, adjusted their performance more accurately. When he instead disrupted the oscillations and decreased the synchrony, in a very rough sense flicking the switch from “smart” to “dumb,” subjects made more errors and learned more slowly. The effects were so subtle that the people themselves did not notice any improvement or impairment in the task, but the results were statistically significant.

    Reinhart then replicated the experiment in 30 new participants, adding another study parameter by looking at only one side of the brain at a time. In all cases, he found that the right hemisphere of the brain was more relevant to changing behavior.

    Then came the most intriguing part of the study. Thirty more participants came in and tried the task. First, Reinhart temporarily disrupted each subject’s brain activity, watching as their brain waves de-synchronized and their performance on the task declined. But this time, in the middle of the task, Reinhart switched the timing of the stimulation — again, turning the knob from “dumb” to “smart.” Participants recovered their original levels of brain synchrony and learning behavior within minutes.

    “We were shocked by the results and how quickly the effects of the stimulation could be reversed,” says Reinhart.

    Though Reinhart cautions that these results are very preliminary, he notes that many psychiatric and neurological disorders — including anxiety, Parkinson’s, autism, schizophrenia, ADHD, and Alzheimer’s — demonstrate disrupted oscillations. Currently, most of these disorders are treated with drugs that act on receptors throughout the brain. “Drugs are really messy,” says Reinhart. “They often affect very large regions of brain.” He imagines, instead, a future with precisely targeted brain stimulation that acts only on one critical node of a brain network, “like a finer scalpel.” Reinhart’s next line of research will test the technology on people with anxiety disorders.

    There is also, of course, the promise of what the technology might offer to healthy brains. Several companies already market brain stimulation devices that claim to both enhance learning and decrease anxiety. YouTube videos show how to make your own, with double-A batteries and off-the-shelf electronics, a practice Reinhart discourages. “You can hurt yourself,” he says. “You can get burned and have current ringing around your head for days.”

    He does, however, see the appeal. “I had volunteers in previous research who came back and said, ‘Hey, where can I get one of these? I’d love to have it prior to an exam,’” he says. “That was after we debriefed them and they were reading the papers about it.”

    Somers notes that there are still many questions to answer about the technology before it goes mainstream: How long can the effect last? How big can you make it? Can you generalize from a simple laboratory task to much more complicated endeavors? “But the biggest question,” says Somers, “is how far you can go with this technology.”

    “Think about any given workday,” says Somers. “You need to be really ‘on’ for one meeting, so you set aside some time on your lunch break for some brain stimulation. I think a lot of people would be really into that — it would be like three cups of coffee without the jitters.”