1. Brain cells show teamwork in short-term memory

    March 23, 2017 by Ashley

    From the University of Western Ontario press release:

    Nerve cells in our brains work together in harmony to store and retrieve short-term memory, and are not solo artists as previously thought, Western-led brain research has determined.

    The research turns on its head decades of studies assuming that single neurons independently encode information in our working memories.

    “These findings suggest that even neurons we previously thought were ‘useless’ because they didn’t individually encode information have a purpose when working in concert with other neurons,” said researcher Julio Martinez-Trujillo, based at the Robarts Research Institute and the Brain and Mind Institute at Western University.

    “Knowing they work together helps us better understand the circuits in the brain that can either improve or hamper executive function. And that in turn may have implications for how we work through brain-health issues where short-term memory is a problem, including Alzheimer disease, schizophrenia, autism, depression and attention deficit disorder.”

    Working memory is the ability to learn, retain and retrieve bits of information we all need in the short term: items on a grocery list or driving directions, for example. Working memory deteriorates faster in people with dementia or other disorders of the brain and mind.

    In the past, researchers believed this executive function was the job of single neurons acting independently from one another — the brain’s version of a crowd of people in a large room all singing different songs in different rhythms and different keys. An outsider trying to decipher any tune in all that noise would have an extraordinarily difficult task.

    This research, however, suggests many in the neuron throng are singing from the same songbook, in essence creating chords to strengthen the collective voice of memory. With neural prosthetic technology — microchips that can “listen” to many neurons at the same time — researchers are able to find correlations between the activity of many nerve cells. “Using that same choir analogy, you can start perceiving some sounds that have a rhythm, a tune and chords that are related to each other: in sum, short-term memories,” said Martinez-Trujillo, who is also an associate professor at Western’s Schulich School of Medicine & Dentistry.
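
    To make the “choir” intuition concrete, here is a minimal, hypothetical Python sketch of how information spread across many weakly tuned neurons can be read out collectively: each simulated cell barely distinguishes two remembered items on its own, but a simple linear readout over the whole population does much better. The firing rates, tuning strengths and decoder are illustrative assumptions, not the study’s analysis.

    ```python
    # A toy illustration, not the study's analysis: many weakly tuned neurons,
    # pooled by a simple linear readout, carry a working-memory signal that no
    # single cell shows clearly. All numbers are made-up assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_neurons, n_trials = 50, 200

    tuning = rng.normal(0.0, 0.2, n_neurons)         # each cell's weak preference
    labels = rng.integers(0, 2, n_trials)            # which of two items is remembered
    rates = 5.0 + np.outer(2 * labels - 1, tuning)   # mean firing rate per trial and neuron
    spikes = rng.poisson(np.clip(rates, 0.1, None))  # noisy spike counts

    # Single-neuron decoding: threshold each cell's spike count at its median.
    single_acc = np.array([
        max(np.mean((spikes[:, i] > np.median(spikes[:, i])) == labels),
            np.mean((spikes[:, i] <= np.median(spikes[:, i])) == labels))
        for i in range(n_neurons)
    ])

    # Population decoding: a least-squares linear readout over all cells
    # (accuracy is in-sample, to keep the sketch short).
    X = spikes - spikes.mean(axis=0)
    w = np.linalg.pinv(X) @ (2 * labels - 1)
    pop_acc = np.mean(((X @ w) > 0) == labels)

    print(f"best single neuron: {single_acc.max():.2f}, population readout: {pop_acc:.2f}")
    ```

    The toy’s only point is that pooling many noisy cells recovers a signal no single “singer” carries clearly on its own, which is the intuition behind recording from many neurons at once.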

    And while the ramifications of this discovery are still being explored, “this gives us good material to work with as we move forward in brain research. It provides us with the necessary knowledge to find ways to manipulate brain circuits and improve short term memory in affected individuals,” Martinez-Trujillo said.

    “The microchip technology also allows us to extract signals from the brain in order to reverse-engineer brain circuitry and decode the information that is in the subject’s mind. In the near future, we could use this information to allow cognitive control of neural prosthetics in patients with ALS or severe cervical spinal cord injury,” said Adam Sachs, neurosurgeon and associate scientist at The Ottawa Hospital and assistant professor at the University of Ottawa Brain and Mind Research Institute.


  2. Sound waves boost older adults’ memory, deep sleep

    by Ashley

    From the Northwestern University press release:

    Gentle sound stimulation — such as the rush of a waterfall — synchronized to the rhythm of brain waves significantly enhanced deep sleep in older adults and improved their ability to recall words, reports a new Northwestern Medicine study.

    Deep sleep is critical for memory consolidation. But beginning in middle age, deep sleep decreases substantially, which scientists believe contributes to memory loss in aging.

    The sound stimulation significantly enhanced deep sleep in participants and their scores on a memory test.

    “This is an innovative, simple and safe non-medication approach that may help improve brain health,” said senior author Dr. Phyllis Zee, professor of neurology at Northwestern University Feinberg School of Medicine and a Northwestern Medicine sleep specialist. “This is a potential tool for enhancing memory in older populations and attenuating normal age-related memory decline.”

    The study will be published March 8 in Frontiers in Human Neuroscience.

    In the study, 13 participants 60 and older received one night of acoustic stimulation and one night of sham stimulation. The sham stimulation procedure was identical to the acoustic one, but participants did not hear any noise during sleep. For both the sham and acoustic stimulation sessions, the individuals took a memory test at night and again the next morning. Recall ability after the sham stimulation generally improved on the morning test by a few percent. However, the average improvement was three times larger after pink-noise stimulation.

    The older adults were recruited from the Cognitive Neurology and Alzheimer’s Disease Center at Northwestern.

    The degree of slow wave sleep enhancement was related to the degree of memory improvement, suggesting slow wave sleep remains important for memory, even in old age.

    Although the Northwestern scientists have not yet studied the effect of repeated nights of stimulation, this method could be a viable intervention for longer-term use in the home, Zee said.

    Previous research showed acoustic stimulation played during deep sleep could improve memory consolidation in young people. But it had not been tested in older adults.

    The new study targeted older individuals — who have much more to gain memory-wise from enhanced deep sleep — and used a novel sound system that increased the effectiveness of the sound stimulation in older populations.

    The study used a new approach that reads an individual’s brain waves in real time and locks in the gentle sound stimulation at a precise moment of neuron communication during deep sleep, a moment that varies for each person.

    During deep sleep, each brain wave or oscillation slows to about one per second compared to 10 oscillations per second during wakefulness.

    Giovanni Santostasi, a study coauthor, developed an algorithm that delivers the sound during the rising portion of slow wave oscillations. This stimulation enhances synchronization of the neurons’ activity.
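
    As a rough illustration of the timing idea, the sketch below (an offline approximation, not the study’s real-time phase-locking algorithm) filters a simulated EEG trace to the slow-wave band, estimates the oscillation’s instantaneous phase, and flags the rising portion where a pink-noise pulse would be delivered. The sampling rate, filter settings and phase window are assumptions chosen only for illustration.

    ```python
    # Offline sketch of up-phase targeting; a real system needs a causal,
    # sample-by-sample phase estimator rather than filtfilt/hilbert.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 250                                  # sampling rate in Hz (assumed)
    t = np.arange(0, 30, 1 / fs)              # 30 s of simulated sleep EEG
    eeg = np.sin(2 * np.pi * 0.9 * t) + 0.3 * np.random.default_rng(1).normal(size=t.size)

    b, a = butter(2, [0.5, 4.0], btype="band", fs=fs)   # isolate the slow-wave band
    slow = filtfilt(b, a, eeg)
    phase = np.angle(hilbert(slow))                     # instantaneous phase in radians

    # With this convention the peak sits at phase 0 and the upward zero-crossing
    # at -pi/2, so the window below falls on the rising portion of the wave.
    rising = (phase > -np.pi / 2) & (phase < -np.pi / 4)
    triggers = t[np.flatnonzero(np.diff(rising.astype(int)) == 1)]
    print(f"{triggers.size} stimulation triggers in 30 s of simulated sleep EEG")
    ```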

    After the sound stimulation, the older participants’ slow waves increased during sleep.

    Larger studies are needed to confirm the efficacy of this method and then “the idea is to be able to offer this for people to use at home,” said first author Nelly Papalambros, a Ph.D. student in neuroscience working in Zee’s lab. “We want to move this to long-term, at-home studies.”

    Northwestern scientists, under the direction of Dr. Roneil Malkani, assistant professor of neurology at Feinberg and a Northwestern Medicine sleep specialist, are currently testing the acoustic stimulation in overnight sleep studies in patients with memory complaints. The goal is to determine whether acoustic stimulation can enhance memory in adults with mild cognitive impairment.

    Previous studies conducted in individuals with mild cognitive impairment in collaboration with Ken Paller, professor of psychology at the Weinberg College of Arts and Sciences at Northwestern, have demonstrated a possible link between their sleep and their memory impairments.


  3. It’s a bird, it’s a plane, it’s – a key discovery about human memory

    March 22, 2017 by Ashley

    From the Johns Hopkins University press release:

    As Superman flies over the city, people on the ground famously suppose they see a bird, then a plane, and then finally realize it’s a superhero. But they haven’t just spotted the Man of Steel — they’ve experienced the ideal conditions to create a very strong memory of him.

    Johns Hopkins University cognitive psychologists are the first to link humans’ long-term visual memory with how things move. The key, they found, lies in whether we can visually track an object. When people see Superman, they don’t think they’re seeing a bird, a plane and a superhero. They know it’s just one thing — even though the distance, lighting and angle change how he looks.

    People’s memory improves significantly with rich details about how an object’s appearance changes as it moves through space and time, the researchers concluded. The findings, which shed light on long-term memory and could advance machine learning technology, appear in this month’s Journal of Experimental Psychology: General.

    “The way I look is only a small part of how you know who I am,” said co-author Jonathan Flombaum, an assistant professor in the Department of Psychological and Brain Sciences. “If you see me move across a room, you’re getting data about how I look from different distances and in different lighting and from different angles. Will this help you recognize me later? No one has ever asked that question. We find that the answer is yes.”

    Humans have a remarkable memory for objects, says co-author Mark Schurgin, a graduate student in Flombaum’s Visual Thinking Lab. We recognize things we haven’t seen in decades — like eight-track tapes and subway tokens. We know the faces of neighbors we’ve never even met. And very small children will often point to a toy in a store after seeing it just once on TV.

    Though we almost never encounter an object in exactly the same way twice, we recognize it anyway.

    Schurgin and Flombaum wondered if people’s vast ability for recall, a skill machines and computers cannot come close to matching, had something to do with our “core knowledge” of the world, the innate understanding of basic physics that all humans, and many animals, are born with. Specifically, everyone knows something can’t be in two places at once. So if we see one thing moving from place to place, our brain has a chance to see it in varying circumstances — and a chance to form a stronger memory of it.

    Likewise, if something is behaving erratically and we can’t be sure we’re seeing just one thing, those memories won’t form.

    “With visual memory, what matters to our brain is that an object is the same,” Flombaum said. “People are more likely to recognize an object if they see it at least twice, moving in the same path.”

    The researchers tested the theory in a series of experiments where people were shown very short video clips of moving objects, then given memory tests. Sometimes the objects appeared to move across the screen as a single object would. Other times they moved in ways we wouldn’t expect a single object to move, such as popping out from one side of the screen and then the other.

    In every experiment, subjects had significantly better memories — as much as nearly 20 percent better — of trackable objects that moved according to our expectations, the researchers found.

    “Your brain has certain automatic rules for how it expects things in the world to behave,” Schurgin said. “It turns out, these rules affect your memory for what you see.”

    The researchers expect the findings to help computer scientists build smarter machines that can recognize objects. Learning more about how humans do it, Flombaum said, will help us build systems that can do it.


  4. Skilled workers more prone to mistakes when interrupted

    March 20, 2017 by Ashley

    From the Michigan State University press release:

    Expertise is clearly beneficial in the workplace, yet highly trained workers in some occupations could actually be at risk for making errors when interrupted, indicates a new study by two Michigan State University psychology researchers.

    The reason: Experienced workers are generally faster at performing procedural tasks, meaning their actions are more closely spaced in time and thus more confusable when they attempt to recall where to resume a task after being interrupted.
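
    One way to picture that argument, purely as an illustrative assumption rather than the authors’ model, is a toy “temporal distinctiveness” calculation: after an interruption, the most recent step has to be told apart from the one before it, and the faster the steps were performed, the more similar their ages in memory are.

    ```python
    # Toy illustration (an assumption, not the authors' model): faster pacing makes
    # the last two steps less distinct in time, hence more confusable after a pause.
    def distinctiveness(step_interval_s, interruption_s=10.0):
        """Ratio of elapsed times since the last two steps; closer to 1 = more confusable."""
        t_last = interruption_s                      # time since the most recent step
        t_prev = interruption_s + step_interval_s    # time since the step before it
        return t_prev / t_last

    for label, interval in [("experienced, 1 s/step", 1.0), ("less practiced, 3 s/step", 3.0)]:
        print(f"{label}: distinctiveness {distinctiveness(interval):.2f}")
    ```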

    “Suppose a nurse is interrupted while preparing to give a dose of medication and then must remember whether he or she administered the dose,” said Erik Altmann, lead investigator on the project. “The more experienced nurse will remember less accurately than a less-practiced nurse, other things being equal, if the more experienced nurse performs the steps involved in administering medication more quickly.”

    That’s not to say skilled nurses should avoid giving medication, but only that high skill levels could be a risk factor for increased errors after interruptions and that experts who perform a task quickly and accurately have probably figured out strategies for keeping their place in a task, said Altmann, who collaborated with fellow professor Zach Hambrick.

    Their study, funded by the Office of Naval Research, is published online in the Journal of Experimental Psychology: General.

    For the experiment, 224 people performed two sessions of a computer-based procedural task on separate days. Participants were interrupted randomly by a simple typing task, after which they had to remember the last step they performed to select the correct step to perform next.

    In the second session, people became faster, and on most measures, more accurate, Altmann said. After interruptions, however, they became less accurate, making more errors by resuming the task at the wrong spot.

    “The faster things happen, the worse we remember them,” Altmann said, adding that when workers are interrupted in the middle of critical procedures, as in emergency rooms or intensive care units, they may benefit from training and equipment design that helps them remember where they left off.


  5. Might smartphones help to maintain memory in patients with mild Alzheimer’s disease?

    by Ashley

    From the IOS Press press release:

    The patient is a retired teacher who had reported memory difficulties 12 months prior to the study. These difficulties included trouble remembering names and the groceries she wanted to purchase, as well as frequently losing her papers and keys. According to the patient and her husband, her main difficulties were related to prospective memory (e.g., forgetting medical appointments or forgetting to take her medication).

    To help her with her symptoms, Mohamad El Haj, a psychologist and assistant professor at the University of Lille, proposed Google Calendar, a time-management and scheduling calendar service developed by Google. The patient accepted, as she was already comfortable using her smartphone. She also said she preferred the application because it offers more discreet assistance than a paper-based calendar.

    With the patient and her husband, Dr. El Haj and his colleagues identified several prospective memory omissions, such as forgetting her weekly medical appointment, forgetting her weekly bridge game at the community club, and forgetting to go to weekly mass at church. These omissions were targeted by sending automatic alerts, prompted by Google Calendar, at different times before each event (e.g., the medical appointment).
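
    For readers curious what such an alert might look like in practice, here is a hypothetical sketch using the Google Calendar API v3 Python client; the event details, lead times, time zone and credentials handling are assumptions made for illustration, since the study does not describe how the reminders were configured.

    ```python
    # Hypothetical sketch: a weekly recurring event with popup alerts at several
    # lead times, via the Google Calendar API v3 (google-api-python-client).
    from googleapiclient.discovery import build

    def add_weekly_reminder(creds, summary, start_iso, end_iso, timezone="Europe/Paris"):
        service = build("calendar", "v3", credentials=creds)
        event = {
            "summary": summary,                      # e.g. "Weekly medical appointment"
            "start": {"dateTime": start_iso, "timeZone": timezone},
            "end": {"dateTime": end_iso, "timeZone": timezone},
            "recurrence": ["RRULE:FREQ=WEEKLY"],     # repeats every week
            "reminders": {                           # alerts at several lead times
                "useDefault": False,
                "overrides": [
                    {"method": "popup", "minutes": 24 * 60},  # one day before
                    {"method": "popup", "minutes": 60},       # one hour before
                    {"method": "popup", "minutes": 10},       # ten minutes before
                ],
            },
        }
        return service.events().insert(calendarId="primary", body=event).execute()
    ```

    The overrides list is what produces alerts at different times before each weekly occurrence, mirroring the setup described above.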

    The researchers compared omissions before and after the use of Google Calendar and observed fewer omissions after the application was implemented.

    The study is the first to suggest positive effects of smartphone applications on everyday prospective memory in Alzheimer’s disease. The findings, published in the Journal of Alzheimer’s Disease, are encouraging; however, Dr. El Haj notes that this is a case study and therefore has limitations, including the generalizability of the results. The current, anecdotal findings require a larger study, not only to confirm or refute the findings reported here, but also to address open questions such as the long-term benefits of Google Calendar.

    Regardless of its potential limitations, Dr. El Haj notes that this study addresses memory loss, the main cognitive hallmark of Alzheimer’s disease and the major concern of patients and their families. By demonstrating a positive effect of Google Calendar on prospective memory in this patient, Dr. El Haj hopes that his study paves the way for exploring the potential of smartphone-integrated memory aids in Alzheimer’s disease. The future generation of patients may be particularly receptive to using smartphones as a tool to alleviate their memory difficulties.


  6. Precise technique tracks dopamine in the brain

    March 17, 2017 by Ashley

    From the MIT press release:

    MIT researchers have devised a way to measure dopamine in the brain much more precisely than previously possible, which should allow scientists to gain insight into dopamine’s roles in learning, memory, and emotion.

    Dopamine is one of the many neurotransmitters that neurons in the brain use to communicate with each other. Previous systems for measuring these neurotransmitters have been limited in how long they provide accurate readings and how much of the brain they can cover. The new MIT device, an array of tiny carbon electrodes, overcomes both of those obstacles.

    “Nobody has really measured neurotransmitter behavior at this spatial scale and timescale. Having a tool like this will allow us to explore potentially any neurotransmitter-related disease,” says Michael Cima, the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering, a member of MIT’s Koch Institute for Integrative Cancer Research, and the senior author of the study.

    Furthermore, because the array is so tiny, it has the potential to eventually be adapted for use in humans, to monitor whether therapies aimed at boosting dopamine levels are succeeding. Many human brain disorders, most notably Parkinson’s disease, are linked to dysregulation of dopamine.

    “Right now deep brain stimulation is being used to treat Parkinson’s disease, and we assume that that stimulation is somehow resupplying the brain with dopamine, but no one’s really measured that,” says Helen Schwerdt, a Koch Institute postdoc and the lead author of the paper, which appears in the journal Lab on a Chip.

    Studying the striatum

    For this project, Cima’s lab teamed up with David H. Koch Institute Professor Robert Langer, who has a long history of drug delivery research, and Institute Professor Ann Graybiel, who has been studying dopamine’s role in the brain for decades with a particular focus on a brain region called the striatum. Dopamine-producing cells within the striatum are critical for habit formation and reward-reinforced learning.

    Until now, neuroscientists have used carbon electrodes with a shaft diameter of about 100 microns to measure dopamine in the brain. However, these can only be used reliably for about a day because they produce scar tissue that interferes with the electrodes’ ability to interact with dopamine, and other types of interfering films can also form on the electrode surface over time. Furthermore, there is only about a 50 percent chance that a single electrode will end up in a spot where there is any measurable dopamine, Schwerdt says.

    The MIT team designed electrodes that are only 10 microns in diameter and combined them into arrays of eight electrodes. These delicate electrodes are then wrapped in a rigid polymer called PEG, which protects them and keeps them from deflecting as they enter the brain tissue. However, the PEG is dissolved during the insertion so it does not enter the brain.

    These tiny electrodes measure dopamine in the same way that the larger versions do. The researchers apply an oscillating voltage through the electrodes, and when the voltage is at a certain point, any dopamine in the vicinity undergoes an electrochemical reaction that produces a measurable electric current. Using this technique, dopamine’s presence can be monitored at millisecond timescales.
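
    A simplified sketch of that measurement logic, with made-up numbers and none of the MIT team’s actual signal processing, might look like the following: sweep the voltage, find the extra current near dopamine’s oxidation potential, and convert the peak height to a concentration using an assumed calibration factor.

    ```python
    # Simplified, illustrative sketch; the oxidation potential, calibration factor
    # and background handling are assumptions, not the study's pipeline.
    import numpy as np

    def dopamine_from_sweep(voltage, current, ox_potential=0.6, window=0.05,
                            cal_nA_per_uM=10.0):
        """Estimate dopamine concentration (uM) from one voltage sweep (assumed calibration)."""
        near_peak = np.abs(voltage - ox_potential) < window
        baseline = np.median(current[~near_peak])      # rough background current
        peak = current[near_peak].max() - baseline     # extra current above background
        return max(peak, 0.0) / cal_nA_per_uM

    # Simulated sweep: a drifting background plus an oxidation peak near 0.6 V.
    v = np.linspace(-0.4, 1.3, 500)
    i = 2.0 * v + 30.0 * np.exp(-((v - 0.6) ** 2) / (2 * 0.03 ** 2))
    print(f"estimated dopamine: {dopamine_from_sweep(v, i):.1f} uM")
    ```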

    Using these arrays, the researchers demonstrated that they could monitor dopamine levels in many parts of the striatum at once.

    “What motivated us to pursue this high-density array was the fact that now we have a better chance to measure dopamine in the striatum, because now we have eight or 16 probes in the striatum, rather than just one,” Schwerdt says.

    The researchers found that dopamine levels vary greatly across the striatum. This was not surprising, because they did not expect the entire region to be continuously bathed in dopamine, but this variation has been difficult to demonstrate because previous methods measured only one area at a time.

    How learning happens

    The researchers are now conducting tests to see how long these electrodes can continue giving a measurable signal, and so far the device has kept working for up to two months. With this kind of long-term sensing, scientists should be able to track dopamine changes over long periods of time, as habits are formed or new skills are learned.

    “We and other people have struggled with getting good long-term readings,” says Graybiel, who is a member of MIT’s McGovern Institute for Brain Research. “We need to be able to find out what happens to dopamine in mouse models of brain disorders, for example, or what happens to dopamine when animals learn something.”

    She also hopes to learn more about the roles of structures in the striatum known as striosomes. These clusters of cells, discovered by Graybiel many years ago, are distributed throughout the striatum. Recent work from her lab suggests that striosomes are involved in making decisions that induce anxiety.

    This study is part of a larger collaboration between Cima’s and Graybiel’s labs that also includes efforts to develop injectable drug-delivery devices to treat brain disorders.

    “What links all these studies together is we’re trying to find a way to chemically interface with the brain,” Schwerdt says. “If we can communicate chemically with the brain, it makes our treatment or our measurement a lot more focused and selective, and we can better understand what’s going on.”


  7. People can match names to faces of strangers with surprising accuracy

    by Ashley

    From the American Psychological Association press release:

    If your name is Fred, do you look like a Fred? You might — and others might think so, too. New research published by the American Psychological Association has found that people appear to be better than chance at correctly matching people’s names to their faces, and it may have something to do with cultural stereotypes we attach to names.

    In the study, published in the Journal of Personality and Social Psychology, lead author Yonat Zwebner, a PhD candidate at The Hebrew University of Jerusalem at the time of the research, and colleagues conducted a series of experiments involving hundreds of participants in Israel and France. In each experiment, participants were shown a photograph and asked to select the given name that corresponded to the face from a list of four or five names. In every experiment, the participants were significantly better (25 to 40 percent accurate) at matching the name to the face than random chance (20 or 25 percent accurate depending on the experiment) even when ethnicity, age and other socioeconomic variables were controlled for.

    The researchers theorize the effect may be, in part, due to cultural stereotypes associated with names as they found the effect to be culture-specific. In one experiment conducted with students in both France and Israel, participants were given a mix of French and Israeli faces and names. The French students were better than random chance at matching only French names and faces and Israeli students were better at matching only Hebrew names and Israeli faces.

    In another experiment, the researchers trained a computer, using a learning algorithm, to match names to faces. In this experiment, which included over 94,000 facial images, the computer was also significantly more accurate (54 to 64 percent) than random chance (50 percent).
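
    To illustrate the shape of that machine experiment, and only its shape, here is a toy sketch on synthetic data: a classifier sees “face features” and must decide between two names, so chance is 50 percent. The features, effect size and model are invented assumptions, not the researchers’ pipeline or their 94,000 real images.

    ```python
    # Toy two-name verification on synthetic "face features"; all numbers are
    # invented assumptions, not the study's data or method.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_per_name, n_features = 500, 20

    # Pretend faces of people named "Bob" vs. "Tim" differ slightly on some
    # features (e.g., face roundness), as the name-stereotype account suggests.
    bob = rng.normal(0.0, 1.0, (n_per_name, n_features)) + 0.15
    tim = rng.normal(0.0, 1.0, (n_per_name, n_features)) - 0.15
    X = np.vstack([bob, tim])
    y = np.array([0] * n_per_name + [1] * n_per_name)   # 0 = Bob, 1 = Tim

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.2f} (chance = 0.50)")
    ```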

    This manifestation of the name in a face might be due to people subconsciously altering their appearance to conform to cultural norms and cues associated with their names, according to Zwebner.

    “We are familiar with such a process from other stereotypes, like ethnicity and gender where sometimes the stereotypical expectations of others affect who we become,” said Zwebner. “Prior research has shown there are cultural stereotypes attached to names, including how someone should look. For instance, people are more likely to imagine a person named Bob to have a rounder face than a person named Tim. We believe these stereotypes can, over time, affect people’s facial appearance.”

    This was supported by findings of one experiment showing that areas of the face that can be controlled by the individual, such as hairstyle, were sufficient to produce the effect.

    “Together, these findings suggest that facial appearance represents social expectations of how a person with a particular name should look. In this way, a social tag may influence one’s facial appearance,” said co-author Ruth Mayo, PhD, also from The Hebrew University of Jerusalem. “We are subject to social structuring from the minute we are born, not only by gender, ethnicity and socioeconomic status, but by the simple choice others make in giving us our name.”


  8. Study notes how musical cues trigger different autobiographical memories

    March 16, 2017 by Ashley

    From the Springer press release:

    Happy memories spring to mind much faster than sad, scary or peaceful ones. Moreover, if you listen to happy or peaceful music, you recall positive memories, whereas if you listen to emotionally scary or sad music, you recall largely negative memories from your past. Those are two of the findings from an experiment in which study participants accessed autobiographical memories after listening to unknown pieces of music varying in intensity or emotional content. It was conducted by Signy Sheldon and Julia Donahue of McGill University in Canada, and is reported in the journal Memory & Cognition, published by Springer.

    The experiment tested how musical retrieval cues that differ on two dimensions of emotion — valence (positive and negative) and arousal (high and low) — influence the way that people recall autobiographical memories. A total of 48 participants had 30 seconds to listen to 32 newly composed piano pieces not known to them. The pieces were grouped into four retrieval cues of music: happy (positive, high arousal), peaceful (positive, low arousal), scary (negative, high arousal) and sad (negative, low arousal).

    Participants had to recall events in which they were personally involved, that were specific in place and time, and that lasted less than a day. As soon as a memory came to mind, participants pressed a computer key and typed in the memory they had accessed. The researchers noted how long it took participants to access a memory, how vivid it was, and the emotions associated with it. The type of event coming to mind was also considered, for instance whether it was unique or connected with an energetic or social setting.
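
    A minimal sketch of how the two-by-two design maps onto the four cue categories, and of how retrieval times might be summarized per category, is shown below; the trials are made up for illustration and are not the study’s data.

    ```python
    # Illustrative only: map (valence, arousal) to a cue category and average
    # made-up retrieval times per category.
    from statistics import mean

    def cue_category(valence, arousal):
        return {("positive", "high"): "happy", ("positive", "low"): "peaceful",
                ("negative", "high"): "scary", ("negative", "low"): "sad"}[(valence, arousal)]

    # (valence, arousal, seconds to access a memory) -- invented example trials
    trials = [("positive", "high", 6.2), ("positive", "high", 5.8),
              ("positive", "low", 8.1), ("negative", "high", 9.4),
              ("negative", "low", 10.2), ("negative", "low", 9.7)]

    times = {}
    for valence, arousal, rt in trials:
        times.setdefault(cue_category(valence, arousal), []).append(rt)
    for category, rts in times.items():
        print(f"{category}: mean retrieval time {mean(rts):.1f} s")
    ```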

    Memories were found to be accessed most quickly based on musical cues that were highly arousing and positive in emotion, and could therefore be classified as happy. A relationship between the type of musical cue and whether it triggered the remembrance of a positive or a negative memory was also noted. The nature of the event recalled was influenced by whether the cue was positive or negative and whether it was high or low in arousal.

    “High cue arousal led to lower memory vividness and uniqueness ratings, but both high arousal and positive cues were associated with memories rated as more social and energetic,” explains Sheldon.

    During the experiment, the piano pieces were played to one half of the participants in no particular order, while for the rest the music was grouped together based on whether these were peaceful, happy, sad or scary pieces. This led to the finding that the way in which cues are presented influences how quickly and specifically memories are accessed. Cue valence also affects the vividness of a memory.

    More specifically, the researchers found that a greater proportion of clear memories were recalled when highly arousing positive cues were played in a blocked fashion. Positive cues also elicited more vivid memories than negative cues. In the randomized condition, by contrast, negative cues were associated with more vivid memories than positive cues.

    “It is possible that when cues were presented in a random fashion, the emotional content of the cue directed retrieval to a similar memory via shared emotional information,” notes Donahue.


  9. Poor sleep in early childhood may lead to cognitive, behavioral problems in later years

    March 15, 2017 by Ashley

    From the Massachusetts General Hospital press release:

    A study led by a Massachusetts General Hospital pediatrician finds that children ages 3 to 7 who don’t get enough sleep are more likely to have problems with attention, emotional control and peer relationships in mid-childhood. Reported online in the journal Academic Pediatrics, the study found significant differences in the responses of parents and teachers to surveys regarding executive function — which includes attention, working memory, reasoning and problem solving — and behavioral problems in 7-year-old children depending on how much sleep they regularly received at younger ages.

    “We found that children who get an insufficient amount of sleep in their preschool and early school-age years have a higher risk of poor neurobehavioral function at around age 7,” says Elsie Taveras, MD, MPH, chief of General Pediatrics at MassGeneral Hospital for Children, who led the study. “The associations between insufficient sleep and poorer functioning persisted even after adjusting for several factors that could influence the relationship.”

    As in previous studies from this group examining the role of sleep in several areas of child health, the current study analyzed data from Project Viva, a long-term investigation of the health impacts of several factors during pregnancy and after birth. Information used in this study was gathered from mothers at in-person interviews when their children were around 6 months, 3 years and 7 years old, and from questionnaires completed when the children were ages 1, 2, 4, 5 and 6. In addition, when the children were around 7, mothers and teachers were sent survey instruments evaluating each child’s executive function and behavioral issues, including emotional symptoms and problems with conduct or peer relationships.

    Among 1,046 children enrolled in Project Viva, the study team determined which children were not receiving the recommended amount of sleep at specific age categories — 12 hours or longer at ages 6 months to 2 years, 11 hours or longer at ages 3 to 4 years, and 10 hours or longer at 5 to 7 years. Children living in homes with lower household incomes and whose mothers had lower education levels were more likely to sleep less than nine hours at ages 5 to 7. Other factors associated with insufficient sleep include more television viewing, a higher body mass index, and being African American.
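
    The age-specific cutoffs amount to a simple rule; the sketch below encodes the thresholds quoted above purely as an illustration, not as the study’s actual classification code.

    ```python
    # Thresholds from the press release; the function itself is only illustrative.
    def recommended_minimum_hours(age_years):
        if 0.5 <= age_years < 3:      # 6 months to 2 years
            return 12.0
        if 3 <= age_years < 5:        # 3 to 4 years
            return 11.0
        if 5 <= age_years <= 7:       # 5 to 7 years
            return 10.0
        raise ValueError("outside the ages covered by the study")

    def sleeps_enough(age_years, hours_per_day):
        return hours_per_day >= recommended_minimum_hours(age_years)

    print(sleeps_enough(4, 10.5))   # False: a 4-year-old needs 11 or more hours
    print(sleeps_enough(6, 10.5))   # True: a 6-year-old needs 10 or more hours
    ```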

    The reports from both mothers and teachers regarding the neurobehavioral function of enrolled children found similar associations between poor functioning and not receiving sufficient sleep, with teachers reporting even greater problems. Although no association was observed between insufficient sleep during infancy — ages 6 months to 2 years — and reduced neurobehavioral functioning in mid-childhood, Taveras notes that sleep levels during infancy often predict levels at later ages, supporting the importance of promoting a good quantity and quality of sleep from the youngest ages.

    “Our previous studies have examined the role of insufficient sleep on chronic health problems — including obesity — in both mothers and children,” explains Taveras, who is a professor of Pediatrics at Harvard Medical School (HMS). “The results of this new study indicate that one way in which poor sleep may lead to these chronic disease outcomes is by its effects on inhibition, impulsivity and other behaviors that may lead to excess consumption of high-calorie foods. It will be important to study the longer-term effects of poor sleep on health and development as children enter adolescence, which is already underway through Project Viva.”


  10. Protein called GRASP1 is needed to strengthen brain circuits

    by Ashley

    From the Johns Hopkins University press release:

    Learning and memory depend on cells’ ability to strengthen and weaken circuits in the brain. Now, researchers at Johns Hopkins Medicine report that a protein involved in recycling other cell proteins plays an important role in this process.

    Removing this protein reduced mice’s ability to learn and recall information. “We see deficits in learning tasks,” says Richard Huganir, Ph.D., professor and director of the neuroscience department at the Johns Hopkins University School of Medicine.

    The team also found mutations in the gene that produces the recycling protein in a few patients with intellectual disability, and those genetic errors affected neural connections when introduced into mouse brain cells. The results, reported in the March 22 issue of Neuron, suggest that the protein could be a potential target for drugs to treat cognitive disorders such as intellectual disability and autism, Huganir says.

    The protein, known as GRASP1, short for GRIP-associated protein 1, was previously shown to help recycle certain protein complexes that act as chemical signal receptors in the brain. These receptors sit on the edges of neurons, and each cell continually shuttles them between its interior and its surface. By adjusting the balance between adding and removing available receptors, the cell fortifies or weakens the neural connections required for learning and memory.

    Huganir says most previous research on GRASP1 was conducted in laboratory-grown cells, not in animals, while the new study was designed to find out what the protein does at the behavioral level in a living animal.

    To investigate, his team genetically engineered so-called knockout mice that lacked GRASP1 and recorded electrical currents from the animals’ synapses, the interfaces between neurons across which brain chemical signals are transmitted. In mice without GRASP1, neurons appeared to spontaneously fire an average of 28 percent less frequently than in normal mice, suggesting that they had fewer synaptic connections.

    Next, Huganir’s team counted protrusions on the mice’s brain cells called spines, which have synapses at their tips. The average density of spines in knockout mice was 15 percent lower than in normal mice, perhaps because defects in receptor recycling had caused spines to be “pruned” or retracted. Neurons from mice without GRASP1 also showed weaker long-term potentiation, a measure of synapse strengthening, in response to electrical stimulation.

    The team then tested the mice’s learning and memory. First, the animals were placed in a tub of milky water and trained to locate a hidden platform. The normal mice needed five training sessions to quickly find the platform in the opaque water, while the knockout mice required seven; the next day, the normal mice spent more time swimming in that location than in other parts of the tub, but the knockout mice seemed to swim around randomly.

    Second, the mice were put in a box with light and dark chambers and given a slight shock when they entered the dark area. The next day, the normal mice hesitated for an average of about four minutes before crossing into the dark chamber, while the knockout mice paused for less than two minutes. “Their memory was not quite as robust,” Huganir says.

    To assess the importance of GRASP1 in humans, the team identified two mutations in the gene that produces the protein in three young male patients with intellectual disabilities, who had IQs below 70 and were diagnosed at an early age. When the researchers replaced the normal GRASP1 gene in mouse brain cells with the two mutated versions, spine density decreased by 11 to 16 percent, and the long-term potentiation response disappeared.

    Huganir speculates that defects in GRASP1 might cause learning and memory problems because the cells aren’t efficiently recycling receptors back to the surface. Normally, GRASP1 attaches to traveling cellular compartments called vesicles, which carry the receptors, and somehow helps receptors get transferred from ingoing to outgoing vesicles.

    When Huganir’s team introduced GRASP1 mutations into mouse cells, receptors accumulated inside recycling compartments instead of being shuttled to the surface.

    Huganir cautions that the results don’t prove that the GRASP1 mutations caused the patients’ intellectual disability. But the study may encourage geneticists to start testing other patients for mutations in this gene, he says. If more cases are found, researchers may be able to design drugs that target the pathway. Huganir’s team is now studying GRASP1’s role in the receptor recycling process in more detail.