1. Study suggests dementia risk is increased in 40-something women with high blood pressure

    October 16, 2017 by Ashley

    From the American Academy of Neurology press release:

    Women who develop high blood pressure in their 40s may be more likely to develop dementia years later, according to a study published in the October 4, 2017, online issue of Neurology®, the medical journal of the American Academy of Neurology.

    “High blood pressure in midlife is a known risk factor for dementia, but these results may help us better understand when this association starts, how changes in blood pressure affect the risk of dementia and what the differences are between men and women,” said study author Rachel A. Whitmer, PhD, of Kaiser Permanente Division of Research in Oakland, Calif.

    The study involved 7,238 people who were part of the Kaiser Permanente Northern California health care system. They all had blood pressure checks and other tests from 1964 to 1973, first when they were an average age of 33 and again when they were an average age of 44. About 22 percent of the participants had high blood pressure in their 30s (31 percent of men and 14 percent of women). In their 40s, 22 percent overall had high blood pressure (25 percent of men and 18 percent of women).

    Next the researchers identified the 5,646 participants who were still alive and part of the Kaiser Permanente system in 1996 and followed them for an average of 15 years to see who developed dementia. During that time, 532 people were diagnosed with dementia.

    Having high blood pressure in early adulthood, or in one’s 30s, was not associated with any increased risk of dementia. But having high blood pressure in mid-adulthood, or in one’s 40s, was associated with a 65 percent increased risk of dementia for women. Women who first developed high blood pressure in their 40s were 73 percent more likely to develop dementia than women whose blood pressure stayed stable and normal throughout their 30s and 40s.

    The results were the same when researchers adjusted for other factors that could affect risk of dementia, such as smoking, diabetes and body mass index.

    “Even though high blood pressure was more common in men, there was no evidence that having high blood pressure in one’s 30s or 40s increased the risk of dementia for men,” Whitmer said. “More research is needed to identify the possible sex-specific pathways through which the elevated blood pressure accelerates brain aging.”

    For women who made it to age 60 without dementia, the cumulative 25-year risk of dementia was 21 percent for those with high blood pressure in their 40s compared to 18 percent for those who had normal blood pressure in their 40s.
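    As a quick aside on the arithmetic, the sketch below (illustrative only; the variable names and calculation are mine, not the study's analysis) shows how these absolute cumulative risks relate to a relative increase, which is a different kind of quantity from the adjusted risk increases quoted earlier:

    ```python
    # Illustrative arithmetic only, using the cumulative risks quoted
    # above; the 65 and 73 percent figures come from the study's
    # adjusted models and cannot be derived from these two numbers.
    cumulative_risk_hbp = 0.21     # 25-year risk, high blood pressure in the 40s
    cumulative_risk_normal = 0.18  # 25-year risk, normal blood pressure

    risk_difference = cumulative_risk_hbp - cumulative_risk_normal
    relative_increase = risk_difference / cumulative_risk_normal

    print(f"absolute difference: {risk_difference * 100:.0f} percentage points")
    print(f"relative increase: {relative_increase:.0%}")  # about 17%
    ```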

    One limitation of this study is that screening for high blood pressure, and the use and effectiveness of drugs to treat it, have advanced considerably since the study began, limiting the ability to generalize the results to today’s population.


  2. New insights into how sleep helps the brain to reorganize itself

    October 15, 2017 by Ashley

    From the University of Surrey press release:

    A study has given new insights into how sleep contributes to brain plasticity — the ability for our brain to change and reorganize itself — and could pave the way for new ways to help people with learning and memory disorders.

    Researchers at the Humboldt and Charité Universities in Berlin, led by Dr Julie Seibt from the University of Surrey, used cutting-edge techniques to record activity in the part of brain cells that is responsible for holding new information — the dendrites.

    The study, published in Nature Communications, found that activity in dendrites increases when we sleep, and that this increase is linked to specific brain waves that are seen to be key to how we form memories.

    Dr Julie Seibt, Lecturer in Sleep and Plasticity at the University of Surrey and lead author of the study, said: “Our brains are amazing and fascinating organs — they have the ability to change and adapt based on our experiences. It is becoming increasingly clear that sleep plays an important role in these adaptive changes. Our study tells us that a large proportion of these changes may occur during very short and repetitive brain waves called spindles.

    “Sleep spindles have been associated with memory formation in humans for quite some time but nobody knew what they were actually doing in the brain. Now we know that during spindles, specific pathways are activated in dendrites, maybe allowing our memories to be reinforced during sleep.

    “In the near future, techniques that allow brain stimulation, such as transcranial magnetic stimulation (TMS), could be used to stimulate dendrites with the same frequency range as spindles. This could lead to enhanced cognitive functions in patients with learning and memory disorders, such as dementia.”


  3. When the brain’s wiring breaks

    by Ashley

    From the University of North Carolina Health Care press release:

    Among all the bad things that can happen to the brain when it is severely jolted — in a car accident, for example — one of the most common and worrisome is axon damage. Axons are the long stalks that grow out of the bodies of neurons and carry signals to other neurons. They are part of the brain’s “wiring,” and they sometimes grow to amazing lengths — from the brain all the way down to the spinal cord. But axons are thin and fragile. When the brain receives a strong blow, axons are often stressed past their structural limits. They either break or swiftly degenerate.

    That much we know. But scientists haven’t understood what happens next. What happens to the neuron when its axon goes south?

    “Getting at the precise mechanisms of what happens after axon damage has been really challenging,” says Anne Marion Taylor, PhD, an assistant professor in the UNC/NC State Joint Department of Biomedical Engineering. “But we think we’ve finally figured out a key part of what happens and why.”

    In a Nature Communications paper, Taylor and colleagues have revealed new molecular details of axotomy — what happens when an axon is damaged or completely severed.

    Shrinking dendritic spines, rising excitability

    Scientists do know that a severed axon will cause a neuron to quickly lose some of its incoming connections from other neurons. These connections occur at short, root-like tendrils called dendrites, which sprout from the neuron’s cell body, or soma. Dendrites themselves grow tiny protrusions called spines to create actual connections, or synapses, with incoming axons. It’s these dendritic spines that shrink in number following axotomy.

    As it loses input connections, the wounded neuron also becomes more excitable: the neuron becomes more likely to fire signals down its truncated axon when stimulated to do so by other neurons. Neurons normally have a mix of inputs. Some are excitatory, pushing the neuron to fire; others are inhibitory, restraining the neuron from firing. Neurons with axons that have been truncated show a disruption of the normal excitatory/inhibitory balance in favor of excitability.

    This enhanced excitability in the weeks and months following injury is thought to be largely an adaptive, beneficial response — a switch to a neuronal “seeking mode” like that seen in developing brains. This beneficial switch increases the chance that the neuron with the truncated axon can hook up with a new partner and continue to be a productive member of neural society.

    “Neurologists know this,” said Taylor, a member of the UNC Neuroscience Center. “It’s why they promote physical therapy and retraining for people who suffer head injury. During this extended period of excitability, PT and retraining can help guide injured neurons along beneficial pathways.”

    But the injury-induced excitability in a neuron may cause problems too. A neuron can die from overexcitement (neuroscientists call this excitotoxicity). Neuronal hyperactivity after injury also may lead to intractable pain, muscle spasms, or agitation in the patient. In the days immediately after injury, doctors often treat brain-injury patients with drugs such as gabapentin designed specifically to suppress neuronal hyper-excitability.

    What scientists haven’t understood very well are the biological details, the hows and whys of dendrite spine loss and hyper-excitability. Those details have been elusive because of the spaghetti-like complexity of the brain, which makes it extremely difficult for a scientist to isolate a neuron and its axon for manipulation and analysis, either in a lab dish or a lab animal.

    Several years ago, as a biomedical engineering graduate student at the University of California-Irvine, Taylor invented a device to help solve this problem. It’s a microfluidic chamber with tiny grooves that trap individual axons from cultured neurons as they grow longer.

    “The axons aren’t able to turn around, so they just keep growing straight until they reach a separate compartment,” Taylor says. “We can cut an axon in its compartment and then look at responses in the associated soma or dendrites without affecting axons in other compartments.”

    A loss of inhibition

    Taylor and her colleagues used the device in the new study to analyze what happens when an axon is severed. They found that events within the neuron itself drive the resulting dendrite spine loss and hyper-excitability. Signals originating at the site of injury move rapidly back along the remaining portion of the axon to the neuronal soma and nucleus, triggering a new pattern of gene activity. Taylor’s team managed to block the neuron’s gene activity to prevent the dendritic spine loss and hyper-excitability.

    Taylor and colleagues analyzed how gene activity changed before and after axotomy. Multiple genes were altered following axotomy. The activity for one of these genes, encoding a protein called netrin-1, turned out to be sharply reduced. A separate analysis showed a similar drop in netrin-1 in affected neurons in rats whose axons from the brain to the spinal cord had been cut. Together, these results hinted that netrin-1’s absence might be a major factor driving neuronal changes after axotomy.

    When Taylor and colleagues added netrin-1 to axotomized neurons to restore the protein to normal levels — even two full days after severing the axon — they found that the treatment quickly reversed all of the dendritic spine loss and most of the hyper-excitability.

    “The treated neurons more closely resembled uninjured controls,” Taylor says. “This was a striking finding and we were surprised to find that netrin-1 normalized both the number of synapses and excitability, even when applied days after injury.”

    She added, “We’re a long way off, but we really do hope to translate this netrin-1 finding into a new therapy. Ideally, it would do what gabapentin and related head-injury drugs aim to do, only better and more precisely.”


  4. Study identifies new mechanism for the development of schizophrenia

    October 14, 2017 by Ashley

    From the Trinity College Dublin press release:

    Scientists from Trinity College Dublin and the Royal College of Surgeons in Ireland (RCSI) have discovered that abnormalities of blood vessels in the brain may play a major role in the development of schizophrenia, a debilitating condition that affects around 1% of people in Ireland.

    The network of blood vessels in the brain regulates the transport of energy and materials in and out of the brain — forming what is known as the blood-brain barrier (BBB). Scientists working in the Smurfit Institute of Genetics at Trinity College Dublin and the Department of Psychiatry, RCSI, have discovered that abnormalities in the integrity of the BBB may be a critical component in the development of schizophrenia and other brain disorders.

    The research, published today in the leading international journal Molecular Psychiatry, was supported by the Health Research Board (HRB), Science Foundation Ireland (SFI) and the US-based charity, Brightfocus Foundation.

    People living with a chromosomal abnormality termed ‘22q11 deletion syndrome’ (22q11DS) are 20 times more likely to develop schizophrenia. These people lack approximately 40-60 genes within a small region of one copy of chromosome 22. A gene termed “Claudin-5” lies within this region, and changes in the levels of this component of the BBB are associated with the presence of schizophrenia.

    Assistant Professor in Neurovascular Genetics at Trinity, Dr Matthew Campbell, said: “Our recent findings have, for the first time, suggested that schizophrenia is a brain disorder associated with abnormalities of brain blood vessels. The concept of tailoring drugs to regulate and treat abnormal brain blood vessels is a novel treatment strategy and offers great potential to complement existing treatments of this debilitating disease.”

    “While it is very well accepted that improving cardiovascular health can reduce the risk of stroke and heart attacks, we now believe that drugs aimed at improving cerebrovascular health may be an additional strategy to treating brain diseases in the future.”

    Working with an international group of scientists from Cardiff University, Stanford University and Duke University, in addition to screening post-mortem brain samples from the Stanley Medical Research Institute, the scientists are the first to associate a molecular genetic component of the blood-brain barrier with the development of schizophrenia.

    Professor Kieran Murphy, Head of Department of Psychiatry, RCSI and Consultant Psychiatrist at Beaumont Hospital, said: “We have shown for the first time that dysfunction of the blood-brain barrier may be an important factor in the development of schizophrenia. These findings greatly add to our understanding of this debilitating and socially isolating condition.”

    Scientists in the laboratories of Dr Matthew Campbell and Professor Kieran Murphy collaborated on this study.


  5. Study suggests being unaware of memory loss predicts Alzheimer’s disease

    by Ashley

    From the Centre for Addiction and Mental Health press release:

    While memory loss is an early symptom of Alzheimer’s disease, its presence doesn’t mean a person will develop dementia. A new study at the Centre for Addiction and Mental Health (CAMH) has found a clinically useful way to predict who won’t develop Alzheimer’s disease, based on patients’ awareness of their memory problems.

    People who were unaware of their memory loss, a condition called anosognosia, were more likely to progress to Alzheimer’s disease, according to the study, published today in the Journal of Clinical Psychiatry. Those who were aware of memory problems were unlikely to develop dementia.

    “If patients complain of memory problems, but their partner or caregiver isn’t overly concerned, it’s likely that the memory loss is due to other factors, possibly depression or anxiety,” says lead author Dr. Philip Gerretsen, Clinician Scientist in CAMH’s Geriatric Division and Campbell Family Mental Health Research Institute. “They can be reassured that they are unlikely to develop dementia, and the other causes of memory loss should be addressed.”

    In other cases, the partner or caregiver is more likely to be distressed while patients don’t feel they have any memory problems. In Alzheimer’s disease, lack of awareness is linked to more burden on caregivers. Both unawareness of illness (anosognosia) and memory loss (known as mild cognitive impairment) can be objectively assessed using questionnaires.

    The study, believed to be the largest of its kind on illness awareness, had data on 1,062 people aged 55 to 90 from the Alzheimer’s Disease Neuroimaging Initiative (ADNI). This included 191 people with Alzheimer’s disease, 499 with mild cognitive impairment and 372 as part of the healthy comparison group.

    The researchers also wanted to identify which parts of the brain were affected in impaired illness awareness. They examined the brain’s uptake of glucose, a type of sugar. Brain cells need glucose to function, but glucose uptake is impaired in Alzheimer’s disease.

    Using PET brain scans, they showed that those with impaired illness awareness also had reduced glucose uptake in specific brain regions, even when accounting for other factors linked to reduced glucose uptake, such as age and degree of memory loss.

    As the next stage of this research, Dr. Gerretsen will be tracking older adults with mild cognitive impairment who are receiving an intervention to prevent Alzheimer’s dementia. This ongoing study, the PACt-MD study, combines brain training exercises and brain stimulation, using a mild electrical current to stimulate brain cells and improve learning and memory. While the main study is focused on dementia prevention, Dr. Gerretsen will be looking at whether the intervention improves illness awareness in conjunction with preventing progression to dementia.


  6. Study suggests brain wiring affects how people perform specific tasks

    October 13, 2017 by Ashley

    From the Rice University press release:

    The way a person’s brain is “wired” directly impacts how well they perform simple and complex tasks, according to a new study from researchers at Rice University.

    The brain is organized into different subnetworks, or “modules,” that support distinct functions for different tasks, such as speaking, memorizing and expressing emotion. The researchers examined how high or low brain modularity — the degree to which the modules communicate with one another — impacts performance of simple and complex tasks.

    “Think of your brain as you would think of a university,” said Simon Fischer-Baum, an assistant professor of psychology in Rice’s School of Social Sciences and one of the study’s authors. “Individuals organize themselves into densely interconnected communities, like the dormitories and sports teams, though individuals within these groups also have connections with people outside of those groups. Brains are the same way: Brain regions are organized into communities with lots of connections between regions in the community and fewer connections to regions outside of the community. But people’s brains are different. Some people have brains that are better described as having rigid community structure — or higher modularity — while other people have brains without such rigid community structure — or lower modularity.”

    In the study, modularity was measured on a scale from zero to one. Zero represented low modularity — brains in which every region is equally likely to communicate with any other region; one represented high modularity — brains that can be divided into communities of brain regions whose members communicate only with each other.

    In the study, the researchers had 52 participants (16 men, 36 women) between the ages of 18 and 26 undergo functional magnetic resonance imaging (fMRI), a process that measures brain neural activity by detecting changes associated with blood oxygen levels. The neural activity of each participant was studied by fMRI for 21 minutes while they were at rest. If neural activity increased and decreased in two areas at the same time over the course of the scan, it was an indicator that the two areas were connected. Using these data to measure which brain areas were connected to each other, the researchers determined the extent to which participants’ brains could be described as having communities of brain regions that communicate only with each other.
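    As a rough illustration of the measure involved, here is a minimal sketch, assuming a generic connectivity analysis with made-up data rather than the authors' actual methods, of how such a modularity score can be computed from regional activity:

    ```python
    # A generic sketch with synthetic data (not the authors' pipeline):
    # correlate regional time series, threshold them into a graph, and
    # score a candidate community assignment with Newman's modularity Q.
    import numpy as np

    rng = np.random.default_rng(0)
    n_regions, n_timepoints, n_communities = 90, 630, 5  # assumed sizes

    # Hypothetical fMRI data: five shared signals induce five communities.
    labels = np.repeat(np.arange(n_communities), n_regions // n_communities)
    shared = rng.standard_normal((n_communities, n_timepoints))
    timeseries = shared[labels] + rng.standard_normal((n_regions, n_timepoints))

    corr = np.corrcoef(timeseries)          # region-by-region correlation
    np.fill_diagonal(corr, 0.0)
    adjacency = (corr > 0.3).astype(float)  # keep strong positive links

    def modularity(adjacency, labels):
        """Newman's Q: fraction of edges within communities minus the
        fraction expected if edges were placed at random."""
        degrees = adjacency.sum(axis=1)
        two_m = degrees.sum()               # twice the number of edges
        same = labels[:, None] == labels[None, :]
        expected = np.outer(degrees, degrees) / two_m
        return ((adjacency - expected) * same).sum() / two_m

    print(f"Q = {modularity(adjacency, labels):.2f}")  # ~0.8: highly modular
    ```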

    The researchers then took the participants through a series of behavioral tasks, including complex tasks that tested their memory while they simultaneously did simple arithmetic, and simple tasks such as indicating the direction an arrow pointed when their attention had already been drawn to the location where the arrow would appear.

    The researchers found that participants with high-modularity brains were more successful at performing simple tasks than individuals with low-modularity brains. In the experiment measuring reaction time to the arrows, individuals with high modularity benefited nearly twice as much from knowing where the target would appear (a 58-millisecond reaction-time advantage) as individuals with low modularity (a 34-millisecond advantage).

    However, participants with low-modularity brains had greater success with complex tasks than participants with high-modularity brains. For example, those with low modularity correctly recalled 86 percent of the items in the memory task, while individuals with high modularity correctly recalled only 76 percent.

    Fischer-Baum said that this effect can be considered relative to the decline in working memory with age, which is a hallmark of the cognitive effects of aging. Based on previous research, this difference in memory recall between the high- and low-modularity subgroups of highly educated, healthy young adults is roughly equivalent to the difference between memory recall at age 20 and at age 70.

    Randi Martin, the Elma Schneider Professor of Psychology in Rice’s School of Social Sciences and the lead faculty author on the study, said that one of the major strengths of the study is that the relationship between simple and complex tasks and high and low modularity was predicted by a very general theory of biology proposed by co-author Michael Deem, the John W. Cox Professor of Biochemical and Genetic Engineering and a professor of physics and astronomy at Rice University. According to this theory, high-modular systems in general should perform better on simpler tasks that take less time to perform, while low-modular systems in general should perform better on more complex tasks that take more time to perform. This study demonstrates that this general principle of biology applies to cognitive neuroscience.

    The authors said the research has important implications for understanding the brain as a network.

    “There is an increasing focus in cognitive neuroscience on thinking of cognitive function as emerging from interconnected regions of the brain, rather than existing in a single brain region,” Fischer-Baum said. “While other groups have found correlations between brain network properties and performance on different tasks, our study is the first to show that these relationships can be understood by a more fundamental theory of modularity in biological systems.”


  7. Human brain recalls visual features in the reverse order it detects them

    by Ashley

    From the Zuckerman Institute at Columbia University press release:

    Scientists at Columbia’s Zuckerman Institute have contributed to solving a paradox of perception, upending models of how the brain constructs interpretations of the outside world. When observing a scene, the brain first processes details — spots, lines and simple shapes — and uses that information to build internal representations of more complex objects, like cars and people. But when recalling that information, the brain remembers those larger concepts first and then reconstructs the details — a reverse order of processing. The research, which involved people and employed mathematical modeling, could shed light on phenomena ranging from eyewitness testimony to stereotyping to autism.

    This study was published in Proceedings of the National Academy of Sciences.

    “The order by which the brain reacts to, or encodes, information about the outside world is very well understood,” said Ning Qian, PhD, a neuroscientist and a principal investigator at Columbia’s Mortimer B. Zuckerman Mind Brain Behavior Institute. “Encoding always goes from simple things to the more complex. But recalling, or decoding, that information is trickier to understand, in large part because there was no method — aside from mathematical modeling — to relate the activity of brain cells to a person’s perceptual judgment.”

    Without any direct evidence, researchers have long assumed that decoding follows the same hierarchy as encoding: you start from the ground up, building up from the details. The main contribution of this work with Misha Tsodyks, PhD, the paper’s co-senior author, who performed this work while at Columbia and is now at the Weizmann Institute of Science in Israel, “is to show that this standard notion is wrong,” Dr. Qian said. “Decoding actually goes backward, from high levels to low.”

    As an analogy for this reversed decoding, Dr. Qian cites last year’s presidential election.

    “As you observed the things one candidate said and did over time, you may have formed a categorical negative or positive impression of that person. From that moment forward, the way in which you recalled the candidate’s words and actions are colored by that overall impression,” said Dr. Qian. “Our findings revealed that higher-level categorical decisions — ‘this candidate is trustworthy’ — tend to be stable. But lower-level memories — ‘this candidate said this or that’ — are not as reliable. Consequently, high-level decoding constrains low-level decoding.”

    To explore this decoding hierarchy, Drs. Qian and Tsodyks and their team conducted an experiment that was simple in design in order to have a clear interpretation of the results. They asked 12 people to perform a series of similar tasks. In the first, they viewed a line angled at 50 degrees on a computer screen for half a second. Once it disappeared, the participants repositioned two dots on the screen to match what they remembered to be the angle of the line. They then repeated this task 50 more times. In a second task, the researchers changed the angle of the line to 53 degrees. And in a third task, the participants were shown both lines at the same time, and then had to orient pairs of dots to match each angle.

    Previously held models of decoding predicted that in the two-line task, people would first decode the individual angle of each line (a lower-level feature) and then use that information to decode the two lines’ relationship (a higher-level feature).

    “Memories of exact angles are usually imprecise, which we confirmed during the first set of one-line tasks. So, in the two-line task, traditional models predicted that the angle of the 50-degree line would frequently be reported as greater than the angle of the 53-degree line,” said Dr. Qian.

    But that is not what happened. Traditional models also failed to explain several other aspects of the data, which revealed bi-directional interactions between the ways participants recalled the angles of the two lines. The brain appeared to encode one line, then the other, and finally encode their relative orientation. But during decoding, when participants were asked to report the individual angle of each line, their brains used the lines’ relationship — which angle is greater — to estimate the two individual angles.

    “This was striking evidence of participants employing this reverse decoding method,” said Dr. Qian.

    The authors argue that reverse decoding makes sense, because context is more important than details. Looking at a face, you want to assess quickly if someone is frowning, and only later, if need be, estimate the exact angles of the eyebrows. “Even your daily experience shows that perception seems to go from high to low levels,” Dr. Qian added.

    To lend further support, the authors then constructed a mathematical model of what they think happens in the brain. They used something called Bayesian inference, a statistical method of estimating probability based on prior assumptions. Unlike typical Bayesian models, however, this new model used the higher-level features as the prior information for decoding lower-level features. Going back to the visual line task, they developed an equation to estimate individual lines’ angles based on the lines’ relationship. The model’s predictions fit the behavioral data well.
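    As a toy illustration of this idea, the following simulation (my own construction under simple Gaussian noise assumptions, not the authors' published model) conditions noisy low-level angle memories on the reliably remembered high-level relation and shows how that constraint reshapes the individual estimates:

    ```python
    # A toy simulation (my construction, not the published model): the
    # remembered high-level relation "line 2 is steeper" constrains the
    # decoding of the individual angles.
    import numpy as np

    rng = np.random.default_rng(1)
    true_angles = np.array([50.0, 53.0])  # degrees, as in the experiment
    noise_sd = 5.0                        # imprecise low-level memory (assumed)

    # Many trials of noisy low-level memories of the two angles.
    memories = true_angles + noise_sd * rng.standard_normal((100_000, 2))

    # Feed-forward decoding reports the raw memories, so the true
    # ordering is frequently violated.
    violations = np.mean(memories[:, 0] > memories[:, 1])

    # Reverse decoding conditions on the reliably remembered relation
    # (angle 2 > angle 1) before estimating the individual angles.
    consistent = memories[memories[:, 1] > memories[:, 0]]
    estimates = consistent.mean(axis=0)

    print(f"ordering violated on {violations:.0%} of feed-forward trials")
    print(f"relation-constrained estimates: {estimates.round(1)} degrees")
    ```

    In this simulation the constrained estimates are pushed slightly apart and the reported ordering is never violated, qualitatively matching the pattern described above in which the high-level relation governs the low-level reports.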

    In the future, the researchers plan to extend their work beyond these simple tasks of perception and into studies of long-term memory, which could have broad implications — from how we assess a presidential candidate to whether a witness is offering reliable testimony.

    “The work will help to explain the brain’s underlying cognitive processes that we employ every day,” said Dr. Qian. “It might also help to explain complex disorders of cognition, such as autism, where people tend to overly focus on details while missing important context.”


  8. Mental training changes brain structure and reduces social stress

    by Ashley

    From the Max Planck Institute for Human Cognitive and Brain Sciences press release:

    Meditation is beneficial for our well-being. This ancient wisdom has been supported by scientific studies focusing on the practice of mindfulness. However, the words “mindfulness” and “meditation” denote a variety of mental training techniques that aim at the cultivation of various different competencies. In other words, despite growing interest in meditation research, it remains unclear which type of mental practice is particularly useful for improving either attention and mindfulness or social competencies, such as compassion and perspective-taking.

    Other open questions are, for example, whether such practices can induce structural brain plasticity and alter brain networks underlying the processing of such competencies, and which training methods are most effective in reducing social stress. To answer these questions, researchers from the Department of Social Neuroscience at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, Germany, conducted the large-scale ReSource Project, which aimed to tease apart the unique effects of different methods of mental training on the brain, the body, and social behaviour.

    The ReSource Project consisted of three 3-month training modules, each focusing on a different competency. The first module trained mindfulness-based attention and interoception. Participants were instructed in classical meditation techniques similar to those taught in the 8-week Mindfulness-Based Stress Reduction Program (MBSR), in which one focuses attention on the breath (Breathing Meditation), on sensations in different parts of the body (Body Scan), or on visual or auditory cues in the environment. These exercises were practised in solitude.

    Training in the second module focused on socio-affective competencies, such as compassion, gratitude, and dealing with difficult emotions. In addition to classical meditation exercises, participants learnt a new technique requiring them to practise each day for 10 minutes in pairs. These partner exercises, or so-called “contemplative dyads,” were characterised by a focused exchange of everyday affective experiences, aiming to train gratitude, dealing with difficult emotions, and empathic listening.

    In the third module, participants trained socio-cognitive abilities, such as metacognition and perspective-taking on aspects of themselves and on the minds of others. Again, besides classical meditation exercises, this module also offered dyadic practices focusing on improving perspective-taking abilities. In pairs, participants learnt to mentally take the perspective of an “inner part” or aspect of their personality. Examples of inner parts were the “worried mother,” the “curious child,” or the “inner judge.”

    By reflecting on a recent experience from this perspective, the speaker in the dyadic exercise trained perspective-taking on the self, thus gaining a more comprehensive understanding of his or her inner world. By trying to infer which inner part is speaking, the listener practises taking the perspective of the other.

    All exercises were practised six days a week for a total of 30 minutes a day. Researchers assessed a variety of measures, such as psychological behavioural tests, brain measures by means of magnetic resonance imaging (MRI), and stress markers such as cortisol release, before and after each of the three three-month training modules.

    Depending on which mental training technique was practised over a period of three months, specific brain structures and related behavioural markers changed significantly in the participants. “For example, after the training of mindfulness-based attention for three months, we observed changes in the cortex in areas previously shown to be related to attention and executive functioning.

    Simultaneously, attention increased in computer-based tasks measuring executive aspects of attention, while performance in measures of compassion or perspective-taking did not increase significantly. These social abilities were only impacted in our participants during the other two, more intersubjective modules,” states Sofie Valk, first author of the publication, which has just been released in the journal Science Advances.

    “In the two social modules, focusing either on socio-affective or socio-cognitive competencies, we were able to show selective behavioural improvements with regard to compassion and perspective-taking. These changes in behaviour corresponded with the degree of structural brain plasticity in specific regions in the cortex which support these capacities,” according to Valk.

    “Even though brain plasticity in general has long been studied in neuroscience, until now little was known about the plasticity of the social brain. Our results provide impressive evidence for brain plasticity in adults through brief and concentrated daily mental practice, leading to an increase in social intelligence. As empathy, compassion, and perspective-taking are crucial competencies for successful social interactions, conflict resolution, and cooperation, these findings are highly relevant to our educational systems as well as for clinical application,” explains Prof. Tania Singer, principal investigator of the ReSource Project.

    Besides differentially affecting brain plasticity, the different types of mental training also differentially affected the stress response. “We discovered that in participants subjected to a psychosocial stress test, the secretion of the stress hormone cortisol was diminished by up to 51%. However, this reduced stress sensitivity was dependent on the types of previously trained mental practice,” says Dr Veronika Engert, first author of another publication from the ReSource Project, which describes the connection between mental training and the acute psychosocial stress response, also recently published in Science Advances. “Only the two modules focusing on social competencies significantly reduced cortisol release after a social stressor. We speculate that the cortisol stress response was affected particularly by the dyadic exercises practised in the social modules. The daily disclosure of personal information to a stranger coupled with the non-judgmental, empathic listening experience in the dyads may have ‘immunised’ participants against the fear of social shame and judgment by others — typically a salient trigger of social stress. The concentrated training of mindfulness-based attention and interoceptive awareness, on the other hand, had no dampening effect on the release of cortisol after experiencing a social stressor.”

    Interestingly, despite these differences on the level of stress physiology, each of the 3-month training modules reduced the subjective perception of stress. This means that although objective, physiological changes in social stress reactivity were only seen when participants engaged with others and trained their inter-subjective abilities, participants felt subjectively less stressed after all mental training modules.

    “The current results highlight not only that crucial social competencies necessary for successful social interaction and cooperation can still be improved in healthy adults and that such mental training leads to structural brain changes and to social stress reduction, but also that different methods of mental training have differential effects on the brain, on health, and behaviour. It matters what you train,” suggests Prof. Singer. “Once we have understood which mental training techniques have which effects, we will be able to employ these techniques in a targeted way to support mental and physical health.”

    For example, many currently popular mindfulness programmes may be a valid method to foster attention and strengthen cognitive efficiency. However, if we as a society want to become less vulnerable to social stress or train social competencies, such as empathy, compassion, and perspective-taking, mental training techniques focusing more on the “we” and social connectedness among people may be a better choice.


  9. Study suggests it is easier for bilinguals to pick up new languages

    October 12, 2017 by Ashley

    From the Georgetown University Medical Center press release:

    It is often claimed that people who are bilingual are better than monolinguals at learning languages. Now, the first study to examine bilingual and monolingual brains as they learn an additional language offers new evidence that supports this hypothesis, researchers say.

    The study, conducted at Georgetown University Medical Center and published in the journal Bilingualism: Language and Cognition, suggests that early bilingualism helps with learning languages later in life.

    “The difference is readily seen in language learners’ brain patterns. When learning a new language, bilinguals rely more than monolinguals on the brain processes that people naturally use for their native language,” says the study’s senior researcher, Michael T. Ullman, PhD, professor of neuroscience at Georgetown.

    “We also find that bilinguals appear to learn the new language more quickly than monolinguals,” says lead author Sarah Grey, PhD, an assistant professor in the department of modern languages and literatures at Fordham University. Grey worked with Ullman and co-author Cristina Sanz, PhD, on this study for her PhD research at Georgetown. Sanz is a professor of applied linguistics at Georgetown.

    The 13 bilingual college students enrolled in this study grew up in the U.S. with Mandarin-speaking parents, and learned both English and Mandarin at an early age. The matched comparison group consisted of 16 monolingual college students, who spoke only English fluently.

    The researchers studied Mandarin-English bilinguals because both of these languages differ structurally from the new language being learned. The new language was a well-studied artificial version of a Romance language, Brocanto2, that participants learned to both speak and understand. Using an artificial language allowed the researchers to completely control the learners’ exposure to the language.

    The two groups were trained on Brocanto2 over the course of about a week. At both earlier and later points of training, learners’ brain patterns were examined with electroencephalogram (EEG) electrodes on their scalps while they listened to Brocanto2 sentences. This captured the natural brain-wave activity as the brain processed the language.

    They found clear bilingual/monolingual differences. By the end of the first day of training, the bilingual brains, but not the monolingual brains, showed a specific brain-wave pattern, termed the P600. P600s are commonly found when native speakers process their language. In contrast, the monolinguals only began to exhibit P600 effects much later in learning — by the last day of training. Moreover, on the last day, the monolinguals showed an additional brain-wave pattern not usually found in native speakers.
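    For readers unfamiliar with how such brain-wave patterns are quantified, here is a minimal sketch using synthetic data, assuming a standard event-related-potential analysis rather than the study's own pipeline:

    ```python
    # A minimal sketch with synthetic data (not the study's recordings
    # or analysis code): event-related potentials such as the P600 are
    # extracted by averaging many EEG epochs time-locked to a stimulus,
    # so random activity cancels and the time-locked response remains;
    # the component is then quantified in a late time window.
    import numpy as np

    fs = 250                                  # sampling rate in Hz (assumed)
    times = np.arange(-0.2, 1.0, 1 / fs)      # epoch: -200 ms to 1000 ms

    rng = np.random.default_rng(2)
    n_trials = 80                             # hypothetical trial count

    # Single-channel epochs: noise plus a positivity peaking near 600 ms.
    noise = 5.0 * rng.standard_normal((n_trials, times.size))
    p600 = 3.0 * np.exp(-((times - 0.6) ** 2) / (2 * 0.1 ** 2))
    epochs = noise + p600                     # microvolts

    erp = epochs.mean(axis=0)                 # averaging reveals the ERP

    window = (times >= 0.5) & (times <= 0.8)  # a typical P600 window
    print(f"mean amplitude, 500-800 ms: {erp[window].mean():.2f} uV")
    ```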

    “There has been a lot of debate about the value of early bilingual language education,” says Grey. “Now, with this small study, we have novel brain-based data that points towards a distinct language-learning benefit for people who grow up bilingual.”


  10. Study suggests role of inflammation in Alzheimer’s is complicated

    by Ashley

    From the Washington University School of Medicine press release:

    Scientists drilling down to the molecular roots of Alzheimer’s disease have encountered a good news/bad news scenario. A major player is a gene called TREM2, mutations of which can substantially raise a person’s risk of the disease. The bad news is that in the early stages of the disease, high-risk TREM2 variants can hobble the immune system’s ability to protect the brain from amyloid beta, a key protein associated with Alzheimer’s.

    The good news, however, according to researchers at Washington University School of Medicine in St. Louis, is that later in the disease, when the brain is dotted with toxic tangles of another Alzheimer’s protein known as tau, the absence of TREM2 protein seems to protect the brain from damage. Mice without TREM2 suffer much less brain damage than those with it.

    The findings potentially make targeting the TREM2 protein as a means of preventing or treating the devastating neurodegenerative disease a little more complicated, and suggest that doctors may want to activate TREM2 early in the disease and tamp it down later.

    “People in the Alzheimer’s field have already been trying to develop ways to target TREM2,” said senior author David Holtzman, MD, the Andrew B. and Gretchen P. Jones Professor and head of the Department of Neurology. “Now that we have this data, the question is, ‘What does one really want to do? Stimulate it or inhibit it?'”

    The study is published online the week of Oct. 9 in Proceedings of the National Academy of Sciences.

    Amyloid beta plaques start forming in the brains of Alzheimer’s patients years before the characteristic symptoms of memory loss and confusion appear. The plaques themselves appear to do minimal damage — many older people remain mentally sharp despite plentiful plaques — but their presence raises the risk of developing tau tangles, the real engine of destruction. The brain starts to die specifically in the areas where tau tangles are found.

    In the brain, the protein TREM2 is found only on immune cells known as microglia. Holtzman, along with Marco Colonna, MD, the Robert Rock Belliveau, MD, Professor of Pathology and Immunology, and others have shown that when TREM2 is absent, the immune cells can’t generate the energy they need to limit the spread of amyloid beta plaques.

    Knowing that tau also plays a key role in the development of Alzheimer’s, Holtzman, graduate student Cheryl Leyns and Jason Ulrich, PhD, an assistant professor of neurology, decided to investigate the effect of TREM2 on tau.

    The researchers turned to genetically modified mice that carry a mutant form of human tau prone to forming toxic tangles. They snipped out the TREM2 gene in some of the mice so that all the mice developed tau tangles but only some of them also had the TREM2 protein in their microglia.

    At 9 months of age, the brains of mice with tau tangles and TREM2 had visibly shrunk, particularly in areas important for memory. There was significantly less damage in the mice without TREM2.

    To their surprise, the researchers discovered there was no significant difference in the amount of tau tangles in the two groups of mice. Instead, the key difference seemed to lie in how their immune cells responded to the tau tangles. The microglia in mice with TREM2 were active, releasing compounds that in some circumstances help fight disease, but in this case primarily injured and killed nearby neurons. The microglia in mice without TREM2 were much less active, and their neurons were relatively spared.

    “Once we started seeing the difference in the microglial activation, we started to understand that this damage doesn’t have to do with tau aggregation necessarily, but with the immune system’s response to the aggregation,” said Leyns, the study’s co-first author.

    The findings suggest that the same immune cells can be involved in both protecting against and promoting neurological damage in Alzheimer’s.

    “It looks like microglial activation might have very different roles in different settings,” Holtzman said. “Amyloid seems to set off the disease. You need these amyloid-related changes to even get the disease in the first place. Maybe that’s why when you have less TREM2 function — and therefore microglial function — you’re at higher risk of developing Alzheimer’s. Less TREM2 function exacerbates amyloid-related injury. But then once the disease progresses and you start to have tau aggregation, it seems that activated microglia become harmful.”

    For years, doctors have tried to prevent or treat Alzheimer’s disease by targeting amyloid plaques or tau tangles, but no therapy has yet been proven effective. When TREM2 was identified as a major risk factor four years ago, scientists seized on it as a fresh way of thinking about — and tackling — the disease.

    These findings indicate that attempting to treat Alzheimer’s by targeting TREM2 function and microglial activation may be a thornier problem than anyone had suspected.

    “You might want to activate microglia early on, when people are just beginning to collect amyloid,” Holtzman said. “If they’re already developing symptoms, then they’re later in the disease process, so you’d probably want to suppress microglia. However, these ideas would need to be thoroughly tested in animal models before we start talking about taking it into people.”