1. Injury from contact sport has a harmful, though temporary, effect on memory

    November 22, 2017 by Ashley

    From the McMaster University press release:

    McMaster University neuroscientists studying sports-related head injuries have found that it takes less than a full concussion to cause memory loss, possibly because even mild trauma can interrupt the production of new neurons in a region of the brain responsible for memory.

    Though such losses are temporary, the findings raise questions about the long-term effects of repeated injuries and the academic performance of student athletes.

    The researchers spent months following dozens of athletes involved in high-contact sports such as rugby and football, and believe that concussions and repetitive impact can interrupt neurogenesis — or the creation of new neurons — in the hippocampus, a vulnerable region of the brain critical to memory.

    The findings were presented on Tuesday, November 14, at the Society for Neuroscience’s annual conference, Neuroscience 2017, in Washington D.C.

    “Not only are newborn neurons critical for memory, but they are also involved in mood and anxiety,” explains Melissa McCradden, a neuroscience postdoctoral fellow at McMaster University who conducted the work. “We believe these results may help explain why so many athletes experience difficulties with mood and anxiety in addition to memory problems.”

    For the study, researchers administered memory tests to different types of athletes in two blocks over the course of two years. In the first block, they compared athletes who had suffered a concussion, uninjured athletes who played the same sport, same-sport athletes with musculoskeletal injuries, and healthy athletes who acted as a control group.

    Concussed athletes performed worse on the memory assessment called a mnemonic similarity test (MST), which evaluates a person’s ability to distinguish between images that are new, previously presented, or very similar to images previously presented.
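
    As a rough illustration of how MST responses are typically summarised, the Python sketch below computes a lure discrimination index from hypothetical trial data; the press release does not specify the exact scoring used in this study, so both the index and the data here are assumptions.

        # Minimal sketch of scoring a mnemonic similarity test (MST).
        # The lure discrimination index (LDI) is a common summary measure;
        # the trial data below are hypothetical, not the McMaster results.

        trials = [
            # (true item type, participant's response)
            ("old", "old"), ("old", "old"), ("new", "new"), ("new", "similar"),
            ("lure", "similar"), ("lure", "old"), ("lure", "similar"), ("new", "new"),
        ]

        def rate(trials, item_type, response):
            """Proportion of trials of a given item type that drew a given response."""
            of_type = [resp for kind, resp in trials if kind == item_type]
            return of_type.count(response) / len(of_type) if of_type else 0.0

        # LDI: correctly calling lures "similar", corrected for a bias to say "similar"
        ldi = rate(trials, "lure", "similar") - rate(trials, "new", "similar")
        recognition = rate(trials, "old", "old") - rate(trials, "new", "old")

        print(f"Lure discrimination index: {ldi:.2f}")
        print(f"Traditional recognition score: {recognition:.2f}")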

    In the second study, rugby players were given the MST before the season started, halfway through the season, and one month after their last game. Scores for injured and uninjured athletes alike dropped midseason, compared to preseason scores, but recovered by the postseason assessment.

    Both concussed and non-concussed players showed a significant improvement in their performance on the test after a reprieve from their sport.

    For the concussed athletes, this occurred after they were medically cleared to return to full practice and competition. The rugby players improved after approximately a month away from the sport.

    If neurogenesis is negatively affected by concussion, researchers say, exercise could be an important tool in the recovery process, since it is known to promote the production of neurons. A growing body of research suggests that gentle exercise introduced before a concussed patient is fully symptom-free is beneficial.

    “The important message here is that the brain does recover from injury after a period of reprieve,” says McCradden. “There is a tremendous potential for the brain to heal itself.”


  2. Study finds people can recognise images seen briefly a decade ago

    November 21, 2017 by Ashley

    From the CNRS press release:

    Recalling the names of old classmates 50 years after graduation or of favorite childhood television series illustrates the amazing abilities of human memory. Emotion and repeated exposure are both known to play a role in long-term memorization, but why do we remember things that are not emotionally charged and have only been seen or experienced a few times in the past? To answer this question, scientists from the Centre de recherche cerveau & cognition research unit (CNRS/Université Toulouse III — Paul Sabatier)1 decided to challenge the memory of individuals they had tested in the laboratory a decade previously. They discovered that participants recognized images seen for only a few seconds ten years earlier. These findings were published online on November 5, 2017, in Cognition.

    When conducting laboratory tests, it is difficult to account for key factors involved in memorization. Yet it is known that frequent exposure to sensory data translates into durable memories, and that something seen or experienced only once might never be forgotten when strong emotions are involved.

    The researchers in this study were able to control for these variables — i.e., emotional context and number of exposures — and evaluate another type of memorization. They asked 24 people (having no memory disorders) tested in the laboratory ten years previously to return for new tests. A decade earlier, the same individuals had been shown a sequence of simple clipart images, each for only a few seconds, without being given any particular instructions to memorize them. When they returned to the lab in 2016, participants were asked to identify these pictures presented in pairs alongside new images.

    On average, participants gave 55% correct answers, compared with 57% for images already seen at least three times, and up to 70% for some participants (one third of whom scored between 60% and 70%).
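
    Because each test pairing presented one previously seen image against one new image, chance performance is 50%. The short Python sketch below shows how one might check whether an average of 55% correct reliably exceeds chance; the number of trials is an assumption for illustration, since the press release does not report it.

        # Is 55% correct on a two-alternative forced choice reliably above the
        # 50% chance level? The trial count here is assumed for illustration.
        from scipy.stats import binomtest

        n_trials = 200                      # assumed number of image pairs
        n_correct = round(0.55 * n_trials)  # 55% correct, as reported on average

        result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
        print(f"{n_correct}/{n_trials} correct, one-sided p = {result.pvalue:.3f}")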

    Under these experimental conditions, it seems that three exposures are sufficient to memorize an image for 10 years. Although scientists have known for several years that memories can be retained implicitly — that is, without being able to consciously access them — this new study shows that they can directly influence participants’ choices and may sometimes even provoke a strong feeling of familiarity.

    The researchers are now seeking to elucidate the biological basis for this memorization. They hypothesize that such memories rely on a small group of ultraspecialized neurons rather than a wide and diffuse neuronal network.

    Note:

    1The study team also included a researcher from the Institut de neurosciences des systèmes (AMU/INSERM).


  3. Study examines prevalence of Subjective Cognitive Decline (SCD)

    November 20, 2017 by Ashley

    From the IOS Press press release:

    A memory complaint, also called Subjective Cognitive Decline (SCD), is a subjective disorder that appears to be relatively common, especially in elderly persons. Reports of its prevalence in various populations range from approximately 10% to as high as 88%, although it is generally thought that the prevalence of everyday memory problems lies within the range of 25% to 50%. It has been suggested that SCD may be an indication of cognitive decline at a very early stage of a neurodegenerative disease (i.e. the preclinical stage of Alzheimer’s disease) that is undetectable by standard testing instruments. SCD may represent the first symptomatic manifestation of Alzheimer’s disease in individuals with unimpaired performance on cognitive tests.

    The McNair and Kahn Scale, or Cognitive Difficulties Scale, was employed to define and characterize cognitive complaints in the GuidAge study, which involved a population of more than 2800 individuals aged 70 years or older who had voluntarily complained of memory problems to their general practitioners (GPs). It contains items related to difficulties in attention, concentration, orientation, memory, praxis, domestic activities and errands, facial recognition, task efficiency, and name finding.

    The results of the GuidAge study suggest that assessing cognitive complaints voluntarily reported to primary-care physicians with the McNair and Kahn scale can predict a decline in cognitive performance, as 5 items out of 20 were statistically significant predictors.

    These 5 items are:

    • item 1, “I hardly remember usual phone numbers”,
    • item 5, “I forget appointments, dates, where I store things”,
    • item 6, “I forget to call people back when they called me”,
    • item 10, “I forget the day of the week”,
    • item 13, “I need to have people repeat instructions several times.”

    Thanks to this short scale, GPs in clinical practice can identify which patients with memory complaints should be referred to a memory center for assessment of their cognitive functions.
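
    As a minimal illustration of how such a short screen could be tallied in practice, the Python sketch below sums self-ratings on the five predictive items; the 0-4 response scale and the referral cut-off are assumptions for illustration, since the press release does not publish a scoring rule.

        # Tally the five McNair and Kahn items that predicted decline in GuidAge.
        # Response scale (0 = never ... 4 = very often) and cut-off are assumed.
        PREDICTIVE_ITEMS = {
            1: "I hardly remember usual phone numbers",
            5: "I forget appointments, dates, where I store things",
            6: "I forget to call people back when they called me",
            10: "I forget the day of the week",
            13: "I need to have people repeat instructions several times",
        }

        def short_screen(responses, cutoff=6):
            """responses: dict of item number -> self-rating (0-4)."""
            total = sum(responses.get(item, 0) for item in PREDICTIVE_ITEMS)
            return total, total >= cutoff

        patient = {1: 2, 5: 3, 6: 1, 10: 0, 13: 2}   # example self-ratings
        score, refer = short_screen(patient)
        print(f"Short-scale score: {score} -> refer to memory center: {refer}")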


  4. Study suggests customers who pay for their purchases by card are less likely to remember the precise amount paid

    November 19, 2017 by Ashley

    From the Alpen-Adria-Universität Klagenfurt | Graz | Wien press release:

    The transparency of spending money depends on the mode of payment used: cash, single-function cards that offer only a payment function, or multifunctional cards which may also include bonus programmes, user identification or other functions. A recent study has shown that the recall accuracy associated with the act of paying is lower for both card formats than it is for cash transactions.

    According to current estimates, approximately 3 billion new so-called smart cards will be issued across the globe in 2017. Smart cards conveniently combine the payment function with additional types of functionality. Since 2000, the number of smart cards that are carried in users’ wallets has grown by around 20 per cent annually. It is anticipated that these types of functions will be made available directly through smartphones or smart watches in the future.

    Intrigued by this boom, researchers at the University of Cologne and the Alpen-Adria-Universität Klagenfurt have recently examined the effect these shifting payment methods are having on customers. Rufina Gafeeva, Erik Hoelzl and Holger Roschk carried out a field study, as part of Rufina Gafeeva’s PhD thesis, to determine recall accuracy in relation to recently made payments. Data were gathered at two separate time points in cafeterias at a German university: the first during the summer of 2015 (prior to the introduction of a multifunctional card on campus), and the second during the summer of 2016 (following its introduction). The researchers analysed guided interviews conducted with a total of 496 students immediately after the act of paying.

    “We were able to show that individuals who pay by card have a less accurate recall of the amount paid than individuals who settle their bill with cash,” the research team summarises. In addition, for the multifunctional card, individual patterns of use play a critical role: “Individuals who use the non-payment functions of the multifunctional card are less likely to remember the transaction details accurately.”
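
    To make the comparison concrete, the Python sketch below contrasts absolute recall error between cash and card payers using simulated data and a nonparametric test; the numbers and the test choice are illustrative assumptions, not the study’s measurements or its statistical method.

        # Compare recall accuracy across payment modes on simulated data.
        import numpy as np
        from scipy.stats import mannwhitneyu

        rng = np.random.default_rng(0)

        # Absolute recall error in euros: |amount recalled - amount actually paid|
        cash_error = np.abs(rng.normal(0.10, 0.20, size=120))
        card_error = np.abs(rng.normal(0.35, 0.40, size=120))

        stat, p = mannwhitneyu(card_error, cash_error, alternative="greater")
        print(f"Median error  cash: {np.median(cash_error):.2f} EUR, "
              f"card: {np.median(card_error):.2f} EUR  (one-sided p = {p:.3g})")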

    According to the authors of the study, these results are relevant to everyone’s financial wellbeing. After all: “A precise recollection of past spending has an effect on the willingness to spend money in the future,” the researchers explain. Efforts to encourage customers to adopt financially healthy behaviour require increased transparency. “To heighten our awareness we need designs that separate the payment function from other functions, or that visualise the act of spending money, such as immediate payment information or transaction summaries.”


  5. How challenges change the way you think

    November 17, 2017 by Ashley

    From the Frontiers press release:

    Research published today in Frontiers in Behavioral Neuroscience shows that challenging situations make it harder to understand where you are and what’s happening around you. A team of researchers showed participants video clips of a positive, a negative and a neutral situation. After watching the challenging clips — whether positive or negative — the participants performed worse on tests measuring their unconscious ability to acquire information about where and when things happen. This suggests that challenging situations cause the brain to drop nuanced, context-based cognition in favor of reflexive action.

    Previous research suggests that long-term memories formed under stress lack the context and peripheral details encoded by the hippocampus, making false alarms and reflexive reactions more likely. These context details are necessary for situating yourself in space and time, so struggling to acquire them has implications for decision-making in the moment as well as in memory formation.

    The research team, led by Thomas Maran, Marco Furtner and Pierre Sachse, investigated the short-term effects of challenging experiences on acquiring these context details. The team also investigated whether experiences coded as positive produced the same response as those coded as negative.

    “We aimed to make this change measurable on a behavioral level, to draw conclusions on how behavior in everyday life and challenging situations is affected by variations in arousal,” Thomas Maran explains.

    The researchers predicted that study participants would be less able to acquire spatial and sequential context after watching challenging clips, and that their performance would worsen in the same way whether the clip was positive or negative. To test this, they used clips of film footage used previously to elicit reactions in stress studies: one violent scene (which participants experienced as negative), one sex scene (which participants experienced as positive), and one neutral control scene.

    Immediately after watching the clips, two groups of participants performed tasks designed to test their ability to acquire either spatial or sequential context. Both the sex scene and violent scene disrupted participants’ ability to memorize where objects had been and notice patterns in two different tasks, compared to the neutral scene. This supports the hypothesis that challenging situations — positive or negative — cause the brain to drop nuanced, context-based cognition in favor of reflexive action.

    So if challenging situations decrease the ability to pick up on context cues, how does this happen? The researchers suggest that the answer may lie in the hippocampus region of the brain — although they caution that since no neurophysiological techniques were applied in this study, this can’t be proven. Since existing evidence supports the idea that the hippocampus is deeply involved in retrieving and reconstructing spatial and temporal details, downgrading this function when faced with a potentially dangerous situation could stop this context acquisition and achieve the effect seen in this behavioral study. Reflexive reactions are less complex and demanding, and might stop individuals from making decisions based on unreliable information from unpredictable surroundings.

    “Changes in cognition during high arousal states play an important role in psychopathology,” Thomas Maran explains, outlining his hopes for the future use of this research. He considers that the evidence provided by this study may have important therapeutic and forensic applications. It also gives a better basis for understanding reactions to challenging situations — from witnessing a crime to fighting on a battlefield — and the changes in the brain that make those reactions happen.


  6. Study suggests maintaining strong social networks linked to slower cognitive decline

    November 12, 2017 by Ashley

    From the Northwestern University press release:

    Maintaining positive, warm and trusting friendships might be the key to a slower decline in memory and cognitive functioning, according to a new Northwestern Medicine study.

    SuperAgers — people 80 years of age and older whose cognitive ability is at least as good as that of people in their 50s or 60s — reported having more satisfying, high-quality relationships compared to their cognitively average, same-age peers, the study reports.

    Previous SuperAger research at the Cognitive Neurology and Alzheimer’s Disease Center (CNADC) at Northwestern University Feinberg School of Medicine has focused on the biological differences in SuperAgers, such as the discovery that the cortex in their brain is actually larger than that of their cognitively average, same-age peers. This study, published Oct. 23 in the journal PLOS ONE, was the first to examine the social side of SuperAgers.

    “You don’t have to be the life of the party, but this study supports the theory that maintaining strong social networks seems to be linked to slower cognitive decline,” said senior author Emily Rogalski, associate professor at Northwestern’s CNADC.

    Participants answered a 42-item questionnaire called the Ryff Psychological Well-Being Scale, a widely used measure of psychological well-being. The scale examines six aspects of psychological well-being: autonomy, positive relations with others, environmental mastery, personal growth, purpose in life and self-acceptance. SuperAgers had a median score of 40 on the positive relations with others subscale, while the control group scored 36 — a significant difference, Rogalski said.
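
    For readers unfamiliar with the instrument, the Python sketch below shows how a 42-item Ryff questionnaire can be scored into its six subscales of seven items each; the item-to-subscale mapping and the example ratings are placeholders, not the scoring key used in the study.

        # Score a 42-item Ryff Psychological Well-Being questionnaire into six
        # subscales (7 items each). The item mapping below is a placeholder;
        # published versions also reverse-score some items, which is omitted here.
        SUBSCALES = ["autonomy", "positive_relations", "environmental_mastery",
                     "personal_growth", "purpose_in_life", "self_acceptance"]

        def score_ryff(item_responses, item_map):
            """item_responses: dict item number -> rating (1-6 Likert)."""
            totals = {name: 0 for name in SUBSCALES}
            for item, rating in item_responses.items():
                totals[item_map[item]] += rating
            return totals

        # Placeholder mapping: items 1-7 -> autonomy, 8-14 -> positive relations, ...
        item_map = {i + 1: SUBSCALES[i // 7] for i in range(42)}
        responses = {i + 1: 5 for i in range(42)}    # example: every item rated 5
        print(score_ryff(responses, item_map))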

    “This finding is particularly exciting as a step toward understanding what factors underlie the preservation of cognitive ability in advanced age, particularly those that may be modifiable,” said first author Amanda Cook, a clinical neuropsychology doctoral student in the laboratory of Rogalski and Sandra Weintraub.

    Other research studies have reported a decline in social networks in people with Alzheimer’s disease and Mild Cognitive Impairment (MCI), and previous literature has shown psychological well-being in older age to be associated with reduced risk of developing Alzheimer’s dementia.

    “It’s not as simple as saying if you have a strong social network, you’ll never get Alzheimer’s disease,” Rogalski said. “But if there is a list of healthy choices one can make, such as eating a certain diet and not smoking, maintaining strong social networks may be an important one on that list. None of these things by itself guarantees you don’t get the disease, but they may still have health benefits.”


  7. How memories ripple through the brain

    November 11, 2017 by Ashley

    From the NIH/National Institute of Neurological Disorders and Stroke press release:

    Using an innovative “NeuroGrid” technology, scientists showed that sleep boosts communication between two brain regions whose connection is critical for the formation of memories. The work, published in Science, was partially funded by the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, a project of the National Institutes of Health devoted to accelerating the development of new approaches to probing the workings of the brain.

    “Using new technologies advanced by the BRAIN Initiative, these researchers made a fundamental discovery about how the brain creates and stores new memories,” said Nick Langhals, Ph.D., program director at NIH’s National Institute of Neurological Disorders and Stroke.

    A brain structure called the hippocampus is widely thought to turn new information into permanent memories while we sleep. Previous work by the new study’s senior author, New York University professor György Buzsáki, M.D., Ph.D., revealed high-frequency bursts of neural firing called ripples in the hippocampus during sleep and suggested they play a role in memory storage. The current study confirmed the presence of ripples in the hippocampus during sleep and found them in certain parts of association neocortex, an area on the brain’s surface involved in processing complex sensory information.

    “When we first observed this, we thought it was incorrect because it had never been observed before,” said Dion Khodagholy, Ph.D., the study’s co-first author and assistant professor at Columbia University in New York.

    Using a cutting-edge NeuroGrid system they invented, along with recording electrodes placed deeper into the brain, the researchers examined activity in several parts of rats’ brains during non-rapid eye movement (NREM) sleep, the longest stage of sleep. Their NeuroGrid consists of a collection of tiny electrodes linked together like the threads of a blanket, which is then laid across an area of the brain so that each electrode can continuously monitor the activity of a different set of neurons.

    “This particular device allows us to look at multiple areas of the brain at the same time,” said Jennifer Gelinas, M.D., Ph.D., the study’s co-first author and assistant professor at Columbia University.

    The team was also surprised to find that the ripples in the association neocortex and hippocampus occurred at the same time, suggesting the two regions were communicating as the rats slept. Because the association neocortex is thought to be a storage location for memories, the researchers theorized that this neural dialogue could help the brain retain information.

    To test that idea, they examined brain activity during NREM sleep in rats trained to locate rewards in a maze and in rats that explored the maze in a random fashion. In the latter group of animals, the ripples in the hippocampus and cortex were no more synchronized after exploring the maze than before. In the trained rats, the learning task increased the cross-talk between those areas, and a second training session boosted it even more, further suggesting that such communication is important for the creation and storage of memories.
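
    For a sense of how such cross-talk can be quantified, the Python sketch below detects ripple-band events in two signals and counts how often they co-occur; the filter band, threshold, and synthetic test signals are generic assumptions, not the parameters or data of the Science study.

        # Detect ripple-band events and count hippocampal-cortical co-occurrence.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        FS = 1250  # Hz, a typical LFP sampling rate

        def detect_ripples(lfp, fs=FS, band=(100, 250), n_sd=3.0):
            """Return sample indices where the ripple-band envelope first crosses threshold."""
            b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
            envelope = np.abs(hilbert(filtfilt(b, a, lfp)))
            above = envelope > envelope.mean() + n_sd * envelope.std()
            return np.flatnonzero(above & ~np.roll(above, 1))   # keep rising edges only

        def coincidence_rate(events_a, events_b, fs=FS, window_s=0.05):
            """Fraction of events in A with an event in B within +/- window_s seconds."""
            if len(events_a) == 0:
                return 0.0
            window = window_s * fs
            hits = sum(np.any(np.abs(events_b - t) <= window) for t in events_a)
            return hits / len(events_a)

        # Synthetic example: two noisy channels sharing a few 150 Hz bursts
        t = np.arange(0, 10, 1 / FS)
        burst = np.sin(2 * np.pi * 150 * t) * (np.sin(2 * np.pi * 0.3 * t) > 0.99)
        hpc = np.random.randn(len(t)) * 0.2 + burst
        ctx = np.random.randn(len(t)) * 0.2 + burst

        print("co-occurrence:", coincidence_rate(detect_ripples(hpc), detect_ripples(ctx)))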

    The group hopes to use the NeuroGrid in people undergoing brain surgery for other reasons to determine if the same ripples occur in the human brain. The researchers also plan to investigate if manipulating that neural firing in animals can boost or suppress memory formation in order to confirm that ripples are important for that process.

    “Identifying the specific neural patterns that go along with memory formation provides a way to better understand memory and potentially even address disorders of memory,” said Dr. Gelinas.


  8. Study examines how adult brain circuits regulate new neuron production

    November 9, 2017 by Ashley

    From the University of North Carolina Health Care press release:

    Before we are born, the developing brain creates an incredible number of neurons, which migrate to specific parts of the brain to ready us for life. Contrary to popular belief, genesis of new neurons does not stop at birth or even in childhood. In a few select areas of the brain, it can continue throughout adulthood, and is believed to be vitally important for certain forms of learning and memory, and in mood regulation. How neurogenesis is switched on and off is still not well understood, but UNC School of Medicine researchers led by Juan Song, PhD, assistant professor in the department of pharmacology, have just discovered a major clue.

    Reported as the cover story in Cell Stem Cell, the researchers identified a neurogenesis-controlling brain circuit that runs from near the front of the brain back to the hippocampus, a learning- and memory-related structure. The hippocampus is one of the major sites of neurogenesis in the adult human brain, and the circuit that Song’s team has identified regulates this neuron-producing process.

    “This circuit controls the activity of stem cells in the part of the hippocampus where neurogenesis occurs,” said Song, a member of the UNC Neuroscience Center. “Our finding ultimately could have implications for understanding and treating many brain disorders arising from aberrant hippocampal neurogenesis, including epilepsy, schizophrenia, depression, and Alzheimer’s disease.”

    Neural stem cells are like stem cells in other tissues and organs — they give birth, if needed, to new cells that replace dead or dying ones. Most of the neurons in the adult brain are wired tightly into complex circuits and are not replaced.

    The chief exception is the dentate gyrus (DG) region of the hippocampus. Neurogenesis in the DG occurs throughout adult life and supports the hippocampus’s crucial functions in storing and retrieving memories. DG neurogenesis has been linked to mood as well. In fact, scientists suspect that the mood-improving effects of antidepressant drugs and physical exercise arise at least in part from the boost they give to DG neurogenesis.

    How the brain controls DG neurogenesis, dialing it up and down when needed, is a mystery that Song and her team have been trying to solve since Song started her lab at UNC in 2013. In a study published in the journal Nature Neuroscience, for example, they found that special local hippocampal neurons called PV interneurons provide signals to DG newborn progeny that appear to be crucial for healthy neurogenesis.

    In the new study, Song and colleagues discovered that this hippocampal PV interneuron-signaling is regulated by a GABA circuit coming from the medial septum, a cluster of neurons near the front of the brain.

    “This medial septum GABA circuit works through the local PV interneurons in the hippocampus to instruct stem cells to become activated or to stay quiet,” Song said. “This GABA circuit is unique, because local PV interneurons are excited by GABA, a brain neurotransmitter that normally inhibits neuronal activity.”

    When a neural stem cell becomes activated, it begins a process of cell division that ultimately yields new neurons that connect to existing brain circuits. In a healthy hippocampus over a normal life span, neurogenesis proceeds at only a low level. Resident stem cells remain mostly in a “quiescent” state, and the population of stem cells is maintained indefinitely.

    Song and her team found that in mice, the medial septum-to-hippocampus circuit works to keep DG stem cells in this normal, low-activity state. It acts like a brake on DG stem cell activation, and thus helps maintain a healthy DG stem cell population.

    By contrast, interfering with this circuit takes off the brake completely, allowing DG stem cells to become not just active but overactive. Specifically, Song’s team found that in mice, this DG stem cell over-activation caused a burst of newly made neurons and a massive depletion of the resident DG stem cell population. Moreover, the new neurons produced in this excessive burst of neurogenesis seemed less healthy.
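
    The logic of a burst now followed by depletion later can be seen in a toy model. The Python sketch below tracks a quiescent stem-cell pool with a per-step activation probability and partial self-renewal; all parameter values are arbitrary illustrations, not measurements from the Cell Stem Cell study.

        # Toy model of the "brake": low activation sustains the pool, removing the
        # brake produces an early burst of new neurons and then depletes the pool.
        def simulate(pool=10_000.0, activation=0.002, self_renewal=0.5, steps=500):
            early_output = 0.0
            for step in range(steps):
                activated = activation * pool                  # stem cells leaving quiescence
                if step < 50:
                    early_output += activated                  # early "burst" of new neurons
                pool += self_renewal * activated - activated   # some activated cells return
            return early_output, pool

        for label, act in [("brake intact (low activation) ", 0.002),
                           ("brake removed (high activation)", 0.05)]:
            burst, remaining = simulate(activation=act)
            print(f"{label}  early burst: {burst:8.0f}   remaining pool: {remaining:8.0f}")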

    “Their appearance was abnormal,” Song said. “Their dendrites — the root-like stalks that receive inputs from other neurons — were too long and had too many crossings, suggesting impaired functions. It’s likely that the production of these abnormal neurons in the hippocampus would lead to learning and memory deficits.”

    She and her team now want to determine whether the medial septum-to-hippocampus circuit can be targeted with therapies to protect DG stem cells and restore normal DG neurogenesis in cases where neurogenesis is abnormal. Alzheimer’s, schizophrenia, depression, and certain forms of epilepsy all have been linked to deficits in DG neurogenesis. There have been hints too that Alzheimer’s specifically involves losses of the medial septum neurons that connect to the hippocampus to control neurogenesis.

    “In principle, restoring the normal signals from the circuit linking the medial septum to the hippocampus may offer therapeutic potential to treat disorders that involve abnormal DG neurogenesis,” Song said.

    She and her laboratory are currently studying the function of the medial septum-to-hippocampus circuit in the context of Alzheimer’s mouse models.


  9. Study suggests babies can use context to look for things

    November 7, 2017 by Ashley

    From the Brown University press release:

    Just six months into the world, babies already have the capacity to learn, remember and use contextual cues in a scene to guide their search for objects of interest, such as faces, a new Brown University study shows.

    “It was pretty surprising to find that 6-month-olds were capable of this memory-guided attention,” said lead author Kristen Tummeltshammer, a postdoctoral scholar at Brown. “We didn’t expect them to be so successful so young.”

    In the experiment described in Developmental Science, babies showed steady improvement in finding faces in repeated scenes, but didn’t get any quicker or more accurate in finding faces in new scenes. Senior author Dima Amso, an associate professor in Brown’s Department of Cognitive, Linguistic and Psychological Sciences, said the finding that infants can recognize and exploit patterns of context provides important new insights into typical and possibly atypical brain development.

    “What that means is that they are efficient in using the structure in their environment to maximize attentional resources on the one hand and to reduce uncertainty and distraction on the other,” Amso said. “A critical question in our lab has been whether infants at risk for neurodevelopmental disorders, especially autism spectrum disorders, have differences in the way that they process visual information, and whether this would impact future learning and attention. These data lay the developmental groundwork for asking whether there are differences in using previously learned visual information to guide future learning and attention across various neurodevelopmental populations.”

    Find the face

    For the study, Tummeltshammer and Amso invited 46 healthy, full-term infants, either 6 or 10 months old, to their lab to play a little game of finding faces. Seated on a parent’s lap, the babies simply had to watch a screen as they were presented with a series of arrangements of four colored shapes. In each arrangement, the shapes would turn around, with one revealing a face. An eye-tracking system measured where each baby looked.

    Eventually the babies would always look at the face, especially because after two seconds, the face would become animate and say words like “peekaboo.” In all, each baby saw 48 arrangements over eight minutes, with little breaks to watch clips of Elmo from “Sesame Street.” That, Tummeltshammer said, was to help keep them (and maybe their parents) engaged and happy.

    The trick of the experiment is that while half the time the shape arrangements were randomly scrambled and the face could be revealed anywhere, the other half of the time the same arrangements were repeated, meaning a baby could learn from that context to predict where to look for the face. In this way, the babies beheld faces both in novel and repeated contexts. If babies could notice the repeated context pattern, remember it and put it to use, they should be quicker and more accurate in finding the face when it came up in that kind of scene again.

    By several measures reported in the study, the babies demonstrated that capacity clearly. For example, as they saw more scenes, babies consistently reduced the amount of time it took to find the face in repeated-context scenes, but not in new-context scenes. They also became better at ignoring non-face shapes in repeated-context scenes as they went along, but didn’t show the same improvement in new-context scenes.
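
    The core contrast can be expressed compactly. The Python sketch below compares simulated search times across blocks for repeated versus novel scenes; the numbers are invented for illustration and are not the infants’ eye-tracking data.

        # Contextual-cueing contrast: search time should shrink across blocks for
        # repeated scenes but stay flat for novel ones (simulated data).
        import numpy as np

        rng = np.random.default_rng(1)
        n_blocks = 6

        repeated = 1.8 - 0.15 * np.arange(n_blocks) + rng.normal(0, 0.05, n_blocks)
        novel    = 1.8 + rng.normal(0, 0.05, n_blocks)

        cueing_effect = novel - repeated     # positive = faster search in repeated scenes
        for block, effect in enumerate(cueing_effect, start=1):
            print(f"block {block}: contextual-cueing benefit = {effect:+.2f} s")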

    Babies even learned to anticipate where the faces would be on the screen based on their experiences in the experiment.

    Tummeltshammer said there was little difference between the 6-month-olds and the 10-month-olds, suggesting that the skill is already developed at the younger age.

    In new research, Tummeltshammer said, she and Amso plan to experiment with more realistic scenes. After all, babies rarely need to look for faces among cleanly defined abstract shapes. A more real-world challenge for a baby, for instance, might be finding a parent’s familiar and comforting face across a holiday dinner table.

    But even from this simpler experimental setting, the ability is clearly established.

    “We think of babies as being quite reactive in how they spread their attention,” Tummeltshammer said. “This helps us recognize that they are actually quite proactive. They are able to use recent memory and to extract what’s common in an environment as a shortcut to be able to locate things quickly.”

    A James S. McDonnell Scholar Award and the National Institutes of Health (1-F32-MH108278-01) funded the research.


  10. Study examines how the brain beats distractions to retain memories

    November 6, 2017 by Ashley

    From the National University of Singapore press release:

    Researchers from the National University of Singapore (NUS) have recently discovered a mechanism that could explain how the brain retains working memory when faced with distractions. These findings could endow cognitive flexibility to neural networks used for artificial intelligence.

    Working memory is a form of short-term memory responsible for storing and managing the information required to carry out everyday cognitive tasks such as reasoning and language comprehension. It operates over short time scales and allows us to focus our attention, resist distractions and guide our decision-making. For instance, when we remember a telephone number that we want to dial or perform a mental calculation, we are using our working memory. Importantly, we are able to carry on with these tasks even in the presence of temporary distractions, for instance, when the task is being interrupted or when we receive new information.

    The prefrontal cortex plays a central role in the maintenance of working memory and the suppression of distractors. Past studies have suggested that working memory is maintained through stable, unchanging activity in populations of prefrontal cortex neurons, implying that this activity is unaffected by distractions and that information is thereby retained.

    However, studies conducted by the NUS team suggest that distractions do change the activity of the neurons, but they are able to retain information by reorganising the information within the same population of neurons. In other words, the “code” used by the neurons to maintain memory information morphs to a different code after a distractor is presented. This unexpected finding has strong implications for our understanding of how the brain processes information, which in turn may lead to inspiration for research in artificial intelligence, as well as neuropsychiatric research, where deficits in memory and attention are common.
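
    One standard way to demonstrate such code morphing is cross-temporal decoding: train a decoder on population activity recorded before the distractor and test it on activity recorded afterwards. The Python sketch below illustrates the logic on simulated activity; it is a toy under stated assumptions, not the NUS team’s recordings or analysis pipeline.

        # Code morphing, illustrated: a decoder trained on pre-distractor activity
        # fails to generalise to post-distractor activity, although the memory is
        # still decodable within each epoch (all data simulated).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        n_trials, n_neurons = 200, 50
        memory = rng.integers(0, 2, n_trials)      # which of two items is held in memory

        signal_pre = rng.normal(size=n_neurons)    # population pattern before the distractor
        signal_post = rng.normal(size=n_neurons)   # a different pattern after the distractor
        pre  = np.outer(memory - 0.5, signal_pre)  + rng.normal(0, 1.0, (n_trials, n_neurons))
        post = np.outer(memory - 0.5, signal_post) + rng.normal(0, 1.0, (n_trials, n_neurons))

        train, test = slice(0, 100), slice(100, 200)
        clf = LogisticRegression(max_iter=1000).fit(pre[train], memory[train])

        print("within-epoch (pre -> pre):  ", clf.score(pre[test], memory[test]))
        print("cross-epoch  (pre -> post): ", clf.score(post[test], memory[test]))
        print("within-epoch (post -> post):",
              LogisticRegression(max_iter=1000).fit(post[train], memory[train])
                                               .score(post[test], memory[test]))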

    The research team behind this novel discovery was led by Assistant Professor Yen Shih-Cheng from the Department of Electrical and Computer Engineering at NUS Faculty of Engineering and Assistant Professor Camilo Libedinsky from the Department of Psychology at NUS Faculty of Arts and Social Sciences.

    The team’s findings were published online in the journal Nature Neuroscience on 9 October 2017.

    Inspiration for novel and more efficient neural networks

    The brain uses incredibly sophisticated mechanisms to store information very efficiently and reliably. The findings of this study provide new insights into how the brain works, and suggest a mechanism for small populations of neurons to flexibly store different types of information.

    “Our study could potentially provide inspiration for new types of computer architectures and learning rules used in artificial neural networks modelled after the brain. This could potentially enhance the neural network’s ability to store information flexibly using fewer resources, and to exhibit greater resilience in retaining information in these networks — for instance, in the presence of new incoming information or disruption to the activity,” said Asst Prof Yen.

    “Another interesting aspect of this study is to consider the results within the context of abnormal brain function. We found that when memory is disturbed, the code fails to reorganise appropriately. Patients with Parkinson’s disease, schizophrenia and dementia show declines in working memory. We would like to explore whether the mechanisms we uncovered could help explain these memory deficits,” said Asst Prof Libedinsky.

    Moving forward, the team plans to conduct further studies to understand the conditions that trigger the reorganisation of information in the populations of neurons in the prefrontal cortex. The team is also interested to study how the information is reorganised, and whether this affects activity in other parts of the brain. Furthermore, the researchers hope to use the findings of the study to develop novel neural network architectures.