1. Study suggests mammal brains identify type of scent faster than once thought

    November 23, 2017 by Ashley

    From the NYU Langone Health / NYU School of Medicine press release:

    It takes less than one-tenth of a second — a fraction of the time previously thought — for the sense of smell to distinguish between one odor and another, new experiments in mice show.

    In a study to be published in the journal Nature Communications online Nov. 14, researchers at NYU School of Medicine found that odorants — chemical particles that trigger the sense of smell — need only reach a few signaling proteins on the inside lining of the nose for the mice to identify a familiar aroma. Just as significantly, researchers say they also found that the animals’ ability to tell odors apart was the same no matter how strong the scent (regardless of odorant concentration).

    “Our study lays the groundwork for a new theory about how mammals, including humans, smell: one that is more streamlined than previously thought,” says senior study investigator and neurobiologist Dmitry Rinberg, PhD. His team is planning further animal experiments to look for patterns of brain cell activation linked to smell detection and interpretation that could also apply to people.

    “Much like human brains only need a few musical notes to name a particular song once a memory of it is formed, our findings demonstrate that a mouse’s sense of smell needs only a few nerve signals to determine the kind of scent,” says Rinberg, an associate professor at NYU Langone Health and its Neuroscience Institute.

    When an odorant initially docks into its olfactory receptor protein on a nerve cell in the nose, the cell sends a signal to the part of the brain that assigns the odor, identifying the smell, says Rinberg.

    Key among his team’s latest findings was that mice recognize a scent right after activation of the first few olfactory brain receptors, and typically within the first 100 milliseconds of inhaling any odorant.

    Previous research in animals had shown that it takes as long as 600 milliseconds for almost all olfactory brain receptors involved in their sense of smell to become fully activated, says Rinberg. However, earlier experiments in mice, which inhale through the nose faster than humans and have a faster sense of smell, showed that the number of activated receptors in their brains peaks after approximately 300 milliseconds.

    Earlier scientific investigations had also shown that highly concentrated scents activated more receptors. But Rinberg says that until his team’s latest experiments, researchers had not yet outlined the role of concentration in the odor identification process.

    For the new study, mice were trained to lick a straw to get a water reward based on whether they smelled orange- or pine-like scents.

    Using light-activated fibers inserted into the mouse nose, researchers could turn on individual brain receptors or groups of receptors involved in olfaction to control and track how many receptors were available to smell at any time. The optical technique was developed at NYU Langone.

    The team then tested how well the mice performed on water rewards when challenged by different concentrations of each smell, and with more or fewer receptors available for activation. Early activation of too many receptors, the researchers found, impaired odor identification, increasing the number of errors made by trained mice in getting their reward.

    Researchers found that early interruptions in sensing smell, less than 50 milliseconds from inhalation, reduced odor identification scores nearly to chance. By contrast, reward scores greatly improved when the mouse sense of smell was interrupted at any point after 50 milliseconds, but these gains leveled off after 100 milliseconds.


  2. Study suggests reasons why head and face pain causes more suffering

    November 22, 2017 by Ashley

    From the Duke University press release:

    Hate headaches? The distress you feel is not all in your — well, head. People consistently rate pain of the head, face, eyeballs, ears and teeth as more disruptive, and more emotionally draining, than pain elsewhere in the body.

    Duke University scientists have discovered how the brain’s wiring makes us suffer more from head and face pain. The answer may lie not just in what is reported to us by the five senses, but in how that sensation makes us feel emotionally.

    The team found that sensory neurons that serve the head and face are wired directly into one of the brain’s principal emotional signaling hubs. Sensory neurons elsewhere in the body are also connected to this hub, but only indirectly.

    The results may pave the way toward more effective treatments for pain mediated by the craniofacial nerve, such as chronic headaches and neuropathic face pain.

    “Usually doctors focus on treating the sensation of pain, but this shows that we really need to treat the emotional aspects of pain as well,” said Fan Wang, a professor of neurobiology and cell biology at Duke, and senior author of the study. The results appear online Nov. 13 in Nature Neuroscience.

    Pain signals from the head versus those from the body are carried to the brain through two different groups of sensory neurons, and it is possible that neurons from the head are simply more sensitive to pain than neurons from the body.

    But differences in sensitivity would not explain the greater fear and emotional suffering that patients experience in response to head-face pain than body pain, Wang said.

    Personal accounts of greater fear and suffering are backed up by functional Magnetic Resonance Imaging (fMRI), which shows greater activity in the amygdala — a region of the brain involved in emotional experiences — in response to head pain than in response to body pain.

    “There has been this observation in human studies that pain in the head and face seems to activate the emotional system more extensively,” Wang said. “But the underlying mechanisms remained unclear.”

    To examine the neural circuitry underlying the two types of pain, Wang and her team tracked brain activity in mice after irritating either a paw or the face. They found that irritating the face led to higher activity in the brain’s parabrachial nucleus (PBL), a region that is directly wired into the brain’s instinctive and emotional centers.

    Then they used methods based on a novel technology recently pioneered by Wang’s group, called CANE, to pinpoint the sources of neurons that caused this elevated PBL activity.

    “It was a eureka moment because the body neurons only have this indirect pathway to the PBL, whereas the head and face neurons, in addition to this indirect pathway, also have a direct input,” Wang said. “This could explain why you have stronger activation in the amygdala and the brain’s emotional centers from head and face pain.”

    Further experiments showed that activating this pathway prompted face pain, while silencing the pathway reduced it.

    “We have the first biological explanation for why this type of pain can be so much more emotionally taxing than others,” said Wolfgang Liedtke, a professor of neurology at Duke University Medical Center and a co-author on Wang’s paper, who also treats patients with head and face pain. “This will open the door toward not only a more profound understanding of chronic head and face pain, but also toward translating this insight into treatments that will benefit people.”

    Chronic head and face pain such as cluster headaches and trigeminal neuralgia can become so severe that patients seek surgical solutions, including severing the known neural pathways that carry pain signals from the head and face to the hindbrain. But a substantial number of patients continue to suffer, even after these invasive measures.

    “Some of the most debilitating forms of pain occur in the head regions, such as migraine,” said Qiufu Ma, a professor of neurobiology at Harvard Medical School, who was not involved in the study. “The discovery of this direct pain pathway might provide an explanation why facial pain is more severe and more unpleasant.”

    Liedtke said targeting the neural pathway identified here could be a new approach toward developing innovative treatments for this devastating head and face pain.


  3. Study suggests visual intelligence is not the same as IQ

    November 17, 2017 by Ashley

    From the Vanderbilt University press release:

    Just because someone is smart and well-motivated doesn’t mean he or she can learn the visual skills needed to excel at tasks like matching fingerprints, interpreting medical X-rays, keeping track of aircraft on radar displays or forensic face matching.

    That is the implication of a new study which shows, for the first time, that there is a broad range of differences in people’s visual ability and that these variations are not associated with individuals’ general intelligence, or IQ. The research is reported in a paper titled “Domain-specific and domain-general individual differences in visual object recognition,” published in the September issue of the journal Cognition; the implications are discussed in a review article in press at Current Directions in Psychological Science.

    “People may think they can tell how good they are at identifying objects visually,” said Isabel Gauthier, David K. Wilson Professor of Psychology at Vanderbilt University, who headed the study. “But it turns out that they are not very good at evaluating their own skills relative to others.”

    In the past, research in visual object recognition has focused largely on what people have in common, but Gauthier became interested in the question of how much visual ability varies among individuals. To answer this question, she and her colleagues had to develop a new test, which they call the Novel Object Memory Test (NOMT), to measure people’s ability to identify unfamiliar objects.

    Gauthier first wanted to gauge public opinions about visual skills. She did so by surveying 100 laypeople using the Amazon Mechanical Turk crowdsourcing service. She found that respondents generally consider visual tasks as fairly different from other tasks related to general intelligence. She also discovered that they feel there is less variation in people’s visual skills than there is in non-visual skills such as verbal and math ability.

    The main problem that Gauthier and colleagues had to address in assessing individuals’ innate visual recognition ability was familiarity. The more time a person spends learning about specific types of objects, such as faces, cars or birds, the better they get at identifying them. As a result, performance on visual recognition tests that use images of common objects are a complex mixture of people’s visual ability and their experience with these objects. Importantly, they have proven to be a poor predictor of how well someone can learn to identify objects in a new domain.

    Gauthier addressed this problem by using novel computer-generated creatures called greebles, sheinbugs and ziggerins to study visual recognition. The basic test consists of studying six target creatures, followed by a number of test trials displaying creatures in sets of three. Each set contains a creature from the target group along with two unfamiliar creatures, and the participant is asked to pick out the creature that is familiar.
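
    The trial structure described above lends itself to a simple simulation. The sketch below is purely illustrative (the creature names, pool sizes, and trial counts are hypothetical, not the actual NOMT materials); it shows how a three-alternative familiarity trial can be assembled and scored, with a random guesser landing at the one-in-three chance level.

```python
import random

# Hypothetical stand-in for an NOMT-style session: six studied "target"
# creatures plus a pool of never-studied foils. On each trial the participant
# sees one target and two foils and must pick the familiar creature.
TARGETS = [f"target_{i}" for i in range(6)]
FOILS = [f"foil_{i}" for i in range(60)]

def run_trial(rng: random.Random) -> bool:
    """Simulate one three-alternative trial; return True if the choice is correct."""
    lineup = [rng.choice(TARGETS)] + rng.sample(FOILS, 2)
    rng.shuffle(lineup)
    # A real participant would respond from memory; this placeholder guesses
    # at random, so long-run accuracy sits at the one-in-three chance level.
    choice = rng.choice(lineup)
    return choice in TARGETS

rng = random.Random(0)
n_trials = 300
accuracy = sum(run_trial(rng) for _ in range(n_trials)) / n_trials
print(f"Chance-level accuracy over {n_trials} trials: {accuracy:.2f}")
```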

    Analyzing the results from more than 2,000 subjects, Gauthier and colleagues discovered that the ability to recognize one kind of creature was well predicted by how well subjects could recognize the other kinds, even though these objects were visually quite different. This confirmed that the new test can predict the ability to learn new categories.

    The psychologists also measured performance on several IQ-related tests and determined that the visual ability measured by the NOMT is distinct from and independent of general intelligence.

    “This is quite exciting because performance on cognitive skills is almost always associated with general intelligence,” Gauthier said. “It suggests that we really can learn something new about people using these tests, over and beyond all the abilities we already know how to measure.” Although the study confirms the popular intuition that visual skill is different from general intelligence, it found that individual variations in visual ability are much larger than most people think. For instance, on one metric, called the coefficient of variation, the spread of people was wider on the NOMT than on a nonverbal IQ test.
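
    The coefficient of variation mentioned here is simply the standard deviation divided by the mean, which makes spreads comparable across tests scored on different scales. A minimal sketch with made-up score vectors (not the study's data) shows how the comparison would be computed:

```python
import numpy as np

def coefficient_of_variation(scores: np.ndarray) -> float:
    """Standard deviation expressed as a fraction of the mean."""
    return scores.std(ddof=1) / scores.mean()

# Made-up proportion-correct scores, for illustration only (not study data).
nomt_scores = np.array([0.55, 0.62, 0.71, 0.80, 0.88, 0.93, 0.97, 0.66])
nonverbal_iq_scores = np.array([0.70, 0.74, 0.78, 0.80, 0.83, 0.85, 0.88, 0.76])

print(f"NOMT CV:         {coefficient_of_variation(nomt_scores):.3f}")
print(f"Nonverbal IQ CV: {coefficient_of_variation(nonverbal_iq_scores):.3f}")
```

    A larger coefficient of variation for the NOMT would mean a wider spread of individual ability relative to the average, which is the pattern the study reports.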

    “A lot of jobs and hobbies depend on visual skills,” Gauthier said. “Because they are independent of general intelligence, the next step is to explore how we can use these tests in real-world applications where performance could not be well predicted before.”


  4. Engineer finds how brain encodes sounds

    November 16, 2017 by Ashley

    From the Washington University in St. Louis press release:

    When you are out in the woods and hear a cracking sound, your brain needs to determine quickly whether the sound is coming from, say, a bear or a chipmunk. In new research published in PLoS Biology, a biomedical engineer at Washington University in St. Louis offers a new interpretation of an old observation, debunking an established theory in the process.

    Dennis Barbour, MD, PhD, associate professor of biomedical engineering in the School of Engineering & Applied Science who studies neurophysiology, found in an animal model that auditory cortex neurons may be encoding sounds differently than previously thought. Sensory neurons, such as those in auditory cortex, on average respond relatively indiscriminately at the beginning of a new stimulus, but rapidly become much more selective. The few neurons responding for the duration of a stimulus were generally thought to encode the identity of a stimulus, while the many neurons responding at the beginning were thought to encode only its presence. This theory makes a prediction that had never been tested: that the indiscriminate initial responses would encode stimulus identity less accurately than the selective responses that persist over the sound’s duration.

    “At the beginning of a sound transition, things are diffusely encoded across the neuron population, but sound identity turns out to be more accurately encoded,” Barbour said. “As a result, you can more rapidly identify sounds and act on that information. If you get about the same amount of information for each action potential spike of neural activity, as we found, then the more spikes you can put toward a problem, the faster you can decide what to do. Neural populations spike most and encode most accurately at the beginning of stimuli.”
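
    Barbour's point that roughly equal information per spike lets more spikes buy a faster decision can be illustrated with a toy calculation. The numbers below (information per spike, evidence threshold, population rates) are arbitrary assumptions, not values from the paper; the sketch only shows how identification time would scale inversely with population spiking rate under that premise.

```python
# Toy illustration (not the study's analysis): if every spike carries roughly
# the same amount of information, the time needed to accumulate enough
# evidence to identify a sound scales inversely with the population spike rate.
def time_to_identify(spike_rate_hz: float,
                     info_per_spike_bits: float = 0.05,
                     threshold_bits: float = 3.0) -> float:
    """Milliseconds until accumulated information crosses the threshold."""
    spikes_needed = threshold_bits / info_per_spike_bits
    return 1000.0 * spikes_needed / spike_rate_hz

# Hypothetical population rates: a sustained response vs. an onset burst.
for rate_hz in (200.0, 600.0, 1200.0):
    print(f"{rate_hz:6.0f} spikes/s -> ~{time_to_identify(rate_hz):5.0f} ms to identify")
```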

    Barbour’s study involved recording individual neurons. To make similar kinds of measurements of brain activity in humans, researchers must use noninvasive techniques that average many neurons together. Event-related potential (ERP) techniques record brain signals through electrodes on the scalp and reflect neural activity synchronized to the onset of a stimulus. Functional MRI (fMRI), on the other hand, reflects activity averaged over several seconds. If the brain were using fundamentally different encoding schemes for onsets versus sustained stimulus presence, these two methods might be expected to diverge in their findings. Both reveal the neural encoding of stimulus identity, however.

    “There has been a lot of debate for a very long time, but especially in the past couple of decades, about whether information representation in the brain is distributed or local,” Barbour said.

    “If function is localized, with small numbers of neurons bunched together doing similar things, that’s consistent with sparse coding, high selectivity, and low population spiking rates. But if you have distributed activity, or lots of neurons contributing all over the place, that’s consistent with dense coding, low selectivity and high population spiking rates. Depending on how the experiment is conducted, neuroscientists see both. Our evidence suggests that it might just be both, depending on which data you look at and how you analyze it.”

    Barbour said the research is the most fundamental work to build a theory for how information might be encoded for sound processing, yet it implies a novel sensory encoding principle potentially applicable to other sensory systems, such as how smells are processed and encoded.

    Earlier this year, Barbour worked with Barani Raman, associate professor of biomedical engineering, to investigate how the presence and absence of an odor or a sound is processed. While the response times between the olfactory and auditory systems are different, the neurons are responding in the same ways. The results of that research also gave strong evidence that there may exist a stored set of signal processing motifs that is potentially shared by different sensory systems and even different species.


  5. ‘Mind-reading’ brain-decoding tech

    October 30, 2017 by Ashley

    From the Purdue University press release:

    Researchers have demonstrated how to decode what the human brain is seeing by using artificial intelligence to interpret fMRI scans from people watching videos, representing a sort of mind-reading technology.

    The advance could aid efforts to improve artificial intelligence and lead to new insights into brain function. Critical to the research is a type of algorithm called a convolutional neural network, which has been instrumental in enabling computers and smartphones to recognize faces and objects.

    “That type of network has made an enormous impact in the field of computer vision in recent years,” said Zhongming Liu, an assistant professor in Purdue University’s Weldon School of Biomedical Engineering and School of Electrical and Computer Engineering. “Our technique uses the neural network to understand what you are seeing.”

    Convolutional neural networks, a form of “deep-learning” algorithm, have been used to study how the brain processes static images and other visual stimuli. However, the new findings represent the first time such an approach has been used to see how the brain processes movies of natural scenes, a step toward decoding the brain while people are trying to make sense of complex and dynamic visual surroundings, said doctoral student Haiguang Wen.

    He is lead author of a new research paper appearing online Oct. 20 in the journal Cerebral Cortex.

    The researchers acquired 11.5 hours of fMRI data from each of three female subjects watching 972 video clips, including those showing people or animals in action and nature scenes. First, the data were used to train the convolutional neural network model to predict the activity in the brain’s visual cortex while the subjects were watching the videos. Then they used the model to decode fMRI data from the subjects to reconstruct the videos, even ones the model had never watched before.
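
    The encode-then-decode logic described here can be sketched in a few lines. The example below is a generic stand-in, not the Purdue pipeline: it uses synthetic arrays in place of CNN-derived video features and measured fMRI responses, fits a ridge-regression encoding model, and then decodes held-out brain activity by identifying which candidate clip's predicted response pattern it matches best.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Placeholder data standing in for the real pipeline:
#   features: CNN-layer activations for each 2-second fMRI frame of the movie
#   bold:     measured cortical responses (time points x voxels)
n_time, n_features, n_voxels = 600, 128, 200
features = rng.standard_normal((n_time, n_features))
true_weights = 0.1 * rng.standard_normal((n_features, n_voxels))
bold = features @ true_weights + 0.5 * rng.standard_normal((n_time, n_voxels))

# 1) Encoding: predict each voxel's response from the video-derived features.
encoder = Ridge(alpha=10.0).fit(features[:500], bold[:500])

# 2) Decoding by identification: for new fMRI frames, find which candidate
#    clip's predicted response pattern best matches the measured activity.
measured = bold[500:]
candidate_features = features[500:]
predicted = encoder.predict(candidate_features)
squared_dists = ((measured[:, None, :] - predicted[None, :, :]) ** 2).sum(axis=-1)
decoded = squared_dists.argmin(axis=1)
accuracy = (decoded == np.arange(len(candidate_features))).mean()
print(f"Identification accuracy on held-out frames: {accuracy:.2f}")
```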

    The model was able to accurately decode the fMRI data into specific image categories. Actual video images were then presented side-by-side with the computer’s interpretation of what the person’s brain saw based on fMRI data.

    “For example, a water animal, the moon, a turtle, a person, a bird in flight,” Wen said. “I think what is a unique aspect of this work is that we are doing the decoding nearly in real time, as the subjects are watching the video. We scan the brain every two seconds, and the model rebuilds the visual experience as it occurs.”

    The researchers were able to figure out how certain locations in the brain were associated with specific information a person was seeing. “Neuroscience is trying to map which parts of the brain are responsible for specific functionality,” Wen said. “This is a landmark goal of neuroscience. I think what we report in this paper moves us closer to achieving that goal. A scene with a car moving in front of a building is dissected into pieces of information by the brain: one location in the brain may represent the car; another location may represent the building.

    “Using our technique, you may visualize the specific information represented by any brain location, and screen through all the locations in the brain’s visual cortex. By doing that, you can see how the brain divides a visual scene into pieces, and re-assembles the pieces into a full understanding of the visual scene.”

    The researchers also were able to use models trained with data from one human subject to predict and decode the brain activity of a different human subject, a process called cross-subject encoding and decoding. This finding is important because it demonstrates the potential for broad applications of such models to study brain function, even for people with visual deficits.

    “We think we are entering a new era of machine intelligence and neuroscience where research is focusing on the intersection of these two important fields,” Liu said. “Our mission in general is to advance artificial intelligence using brain-inspired concepts. In turn, we want to use artificial intelligence to help us understand the brain. So, we think this is a good strategy to help advance both fields in a way that otherwise would not be accomplished if we approached them separately.”


  6. Appetizing imagery puts visual perception on fast forward

    October 11, 2017 by Ashley

    From the Association for Psychological Science press release:

    People rated images containing positive content as fading more smoothly compared with neutral and negative images, even when they faded at the same rate, according to findings published in Psychological Science, a journal of the Association for Psychological Science.

    “Our research shows that emotionally charged stimuli, specifically positive and negative images, may influence the speed, or the temporal resolution, of visual perception,” says psychological scientist Kevin H. Roberts of the University of British Columbia.

    The idea that things in our environment, or even our own emotional states, can affect how we experience time is a common one. We say that time “drags” when we’re bored and it “flies” when we’re having fun. But how might this happen? Roberts and colleagues hypothesized that the emotional content of stimuli or experiences could impact the speed of our internal pacemaker.

    Specifically, they hypothesized that our motivation to approach positive stimuli or experiences would make us less sensitive to temporal details. Change in these stimuli or experiences would, therefore, seem relatively smooth, similar to what happens when you press ‘fast forward’ on a video. Our desire to avoid negative stimuli or experiences, on the other hand, would enhance our sensitivity to temporal details and would make changes seem more discrete and choppy, similar to a slow-motion video.

    To test this hypothesis, Roberts and colleagues used an approach common in psychophysics experiments — estimating relative magnitudes — to gauge how people’s moment-to-moment experiences vary when they view different types of stimuli.

    In one experiment, 23 participants looked at a total of 225 image pairs. In each pair, they first saw a standard stimulus that faded to black over 2 seconds and then saw a target stimulus that also faded to black over 2 seconds. The frame rate of the target stimulus varied, displaying at 16, 24, or 48 frames per second.
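
    For concreteness, a 2-second fade at 16, 24, or 48 frames per second amounts to roughly 32, 48, or 96 discrete luminance steps. The snippet below is an illustrative reconstruction of such fade sequences, not the authors' stimulus code:

```python
import numpy as np

def fade_to_black(duration_s: float = 2.0, fps: int = 16) -> np.ndarray:
    """Per-frame brightness multipliers for a linear fade from 1.0 down to 0.0."""
    n_frames = int(round(duration_s * fps))
    return np.linspace(1.0, 0.0, n_frames)

for fps in (16, 24, 48):
    levels = fade_to_black(fps=fps)
    print(f"{fps:2d} fps -> {levels.size} frames, "
          f"brightness step {levels[0] - levels[1]:.3f} per frame")
```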

    Participants were generally sensitive to the differences in frame rate, as the researchers expected. Participants rated the smoothness of the target image relative to the standard image using a 21-point scale: The higher the frame rate of the target image, the smoother they rated it relative to the standard image.

    The emotional content of the images also made a difference in perceptions of smoothness. Regardless of the frame rate, participants rated negative images — which depicted things we generally want to avoid, including imagery related to confrontation and death — as the least smooth. They rated positive stimuli — depicting appetizing desserts — as the smoothest, overall.

    Most importantly, the researchers found that people perceived images that faded at the same rate differently depending on their content. Positive target images that faded at 16 fps seemed smoother than neutral target images that faded at the same rate. Positive images that faded at 24 fps seemed smoother than both negative and neutral images with the same frame rate. And positive images that faded at 48 fps seemed smoother than negative images at the same rate.

    Further analyses suggest that this effect occurred primarily because positive images elicited higher approach motivation.

    Because the words “smooth” and “choppy” could themselves come with positive or negative connotations, the researchers replaced them with “continuous” and “discrete” in a second experiment. Once again, they found that the emotional content of the images swayed how participants perceived the frame rate of the fade.

    Brain-activity data gathered in a third experiment indicated that the blurring of perceptual experience associated with positive images was accompanied by changes in high-level visual processing.

    “Even when we made adjustments to the instructions and the task structure, the overall effect remained — people subjectively reported seeing less fine-grained temporal changes in positive images, and they reported seeing more fine-grained temporal changes in negative images,” says Roberts.

    Together, these findings suggest that the emotional content of the images affected how participants experienced what they were seeing.

    “What remains to be seen is whether emotional stimuli impact objective measures of temporal processing,” says Roberts. “In other words, do individuals actually perceive less temporal information when they view positive images, or do they just believe they perceive less?”


  7. Predicting when a sound will occur relies on the brain’s motor system

    October 10, 2017 by Ashley

    From the McGill University press release:

    Whether it is dancing or just tapping one foot to the beat, we all experience how auditory signals like music can induce movement. Now new research suggests that motor signals in the brain actually sharpen sound perception, and this effect is increased when we move in rhythm with the sound.

    It is already known that the motor system, the part of the brain that controls our movements, communicates with the sensory regions of the brain. The motor system controls eye and other body-part movements to orient our gaze and limbs as we explore our spatial environment. However, because the ears are immobile, it was less clear what role the motor system plays in distinguishing sounds.

    Benjamin Morillon, a researcher at the Montreal Neurological Institute of McGill University, designed a study based on the hypothesis that signals coming from the sensorimotor cortex could prepare the auditory cortex to process sound, and by doing so improve its ability to decipher complex sound flows like speech and music.

    Working in the lab of MNI researcher Sylvain Baillet, he recruited 21 participants who listened to complex tone sequences and had to indicate whether a target melody was on average higher or lower-pitched compared to a reference. The researchers also played an intertwined distracting melody to measure the participants’ ability to focus on the target melody.

    The exercise was done in two stages, one in which the participants were completely still, and another in which they tapped on a touchpad in rhythm with the target melody. The participants performed this task while their brain oscillations, a form of neural signaling brain regions use to communicate with each other, were recorded with magnetoencephalography (MEG).

    MEG millisecond imaging revealed that bursts of fast neural oscillations coming from the left sensorimotor cortex were directed at the auditory regions of the brain. These oscillations occurred in anticipation of the occurrence of the next tone of interest. This finding revealed that the motor system can predict in advance when a sound will occur and send this information to auditory regions so they can prepare to interpret the sound.

    One striking aspect of this discovery is that timed brain motor signaling anticipated the incoming tones of the target melody, even when participants remained completely still. Hand tapping to the beat of interest further improved performance, confirming the important role of motor activity in the accuracy of auditory perception.

    “A realistic example of this is the cocktail party concept: when you try to listen to someone but many people are speaking around you at the same time,” says Morillon. “In real life, you have many ways to help you focus on the individual of interest: pay attention to the timbre and pitch of the voice, focus spatially toward the person, look at the mouth, use linguistic cues, use the beginning of the sentence to predict the end of it, but also pay attention to the rhythm of the speech. This latter case is what we isolated in this study to highlight how it happens in the brain.”

    A better understanding of the link between movement and auditory processing could one day mean better therapies for people with hearing or speech comprehension problems.

    “It has implications for clinical research and rehabilitation strategies, notably on dyslexic children and hearing-impaired patients,” says Morillon. “Teaching them to better rely on their motor system by at first overtly moving in synchrony with a speaker’s pace could help them to better understand speech.”


  8. Study suggests smell loss predicts cognitive decline in healthy older people

    October 7, 2017 by Ashley

    From the University of Chicago Medical Center press release:

    A long-term study of nearly 3,000 adults, aged 57 to 85, found that those who could not identify at least four out of five common odors were more than twice as likely as those with a normal sense of smell to develop dementia within five years.

    Although 78 percent of those tested were normal — correctly identifying at least four out of five scents — about 14 percent could name just three out of five, five percent could identify only two scents, two percent could name just one, and one percent of the study subjects were not able to identify a single smell.

    Five years after the initial test, almost all of the study subjects who were unable to name a single scent had been diagnosed with dementia. Nearly 80 percent of those who provided only one or two correct answers also had dementia, with a dose-dependent relationship between degree of smell loss and incidence of dementia.

    “These results show that the sense of smell is closely connected with brain function and health,” said the study’s lead author, Jayant M. Pinto, MD, a professor of surgery at the University of Chicago and ENT specialist who studies the genetics and treatment of olfactory and sinus disease. “We think smell ability specifically, but also sensory function more broadly, may be an important early sign, marking people at greater risk for dementia.”

    “We need to understand the underlying mechanisms,” Pinto added, “so we can understand neurodegenerative disease and hopefully develop new treatments and preventative interventions.”

    “Loss of the sense of smell is a strong signal that something has gone wrong and significant damage has been done,” Pinto said. “This simple smell test could provide a quick and inexpensive way to identify those who are already at high risk.”

    The study, “Olfactory Dysfunction Predicts Subsequent Dementia in Older US Adults,” published in September 2017 in the Journal of the American Geriatrics Society, follows a related 2014 paper, in which olfactory dysfunction was associated with increased risk of death within five years. In that study, loss of the sense of smell was a better predictor of death than a diagnosis of heart failure, cancer or lung disease.

    For both studies, the researchers used a well-validated tool known as “Sniffin’ Sticks.” These look like felt-tip pens, but instead of ink, they are infused with distinct scents. Study subjects smell each item and are asked to identify that odor, one at a time, from a set of four choices. The five odors, in order of increasing difficulty, were peppermint, fish, orange, rose and leather.

    Test results showed that:

    • 78.1 percent of those examined had a normal sense of smell; 48.7 percent correctly identified five out of five odors and 29.4 percent identified four out of five.
    • 18.7 percent, considered “hyposmic,” got two or three out of five correct.
    • The remaining 3.2 percent, labelled “anosmic,” could identify just one of the five scents (2.2%), or none (1%).

    The olfactory nerve is the only cranial nerve directly exposed to the environment. The cells that detect smells connect directly with the olfactory bulb at the base of the brain, potentially exposing the central nervous system to environmental hazards such as pollution or pathogens. Olfactory deficits are often an early sign of Parkinson’s or Alzheimer’s disease. They get worse with disease progression.

    Losing the ability to smell can have a substantial impact on lifestyle and wellbeing, said Pinto, a specialist in sinus and nasal diseases and a member of the Section of Otolaryngology-Head and Neck Surgery at UChicago Medicine. “Smells influence nutrition and mental health,” Pinto said. “People who can’t smell face everyday problems such as knowing whether food is spoiled, detecting smoke during a fire, or assessing the need for a shower after a workout. Being unable to smell is closely associated with depression as people don’t get as much pleasure in life.”

    “This evolutionarily ancient special sense may signal a key mechanism that also underlies human cognition,” noted study co-author Martha K. McClintock, PhD, the David Lee Shillinglaw Distinguished Service Professor of Psychology at the University of Chicago, who has studied olfactory and pheromonal communication throughout her career.

    McClintock noted that the olfactory system also has stem cells which self-regenerate, so “a decrease in the ability to smell may signal a decrease in the brain’s ability to rebuild key components that are declining with age, leading to the pathological changes of many different dementias.”

    In an accompanying editorial, Stephen Thielke, MD, a member of the Geriatric Research, Education and Clinical Center at Puget Sound Veterans Affairs Medical Center and the psychiatry and behavioral sciences faculty at the University of Washington, wrote: “Olfactory dysfunction may be easier to quantify across time than global cognition, which could allow for more-systematic or earlier assessment of neurodegenerative changes, but none of this supports that smell testing would be a useful tool for predicting the onset of dementia.”

    “Our test simply marks someone for closer attention,” Pinto explained. “Much more work would need to be done to make it a clinical test. But it could help find people who are at risk. Then we could enroll them in early-stage prevention trials.”

    “Of all human senses,” Pinto added, “smell is the most undervalued and underappreciated — until it’s gone.”

    Both studies were part of the National Social Life, Health and Aging Project (NSHAP), the first in-home study of social relationships and health in a large, nationally representative sample of men and women ages 57 to 85.

    The study was funded by the National Institutes of Health — including the National Institute on Aging and the National Institute of Allergy and Infectious Disease — the Institute of Translational Medicine at the University of Chicago, and the McHugh Otolaryngology Research Fund.

    Additional authors were Dara Adams, David W. Kern, Kristen E. Wroblewski and William Dale, all from the University of Chicago. Linda Waite is the principal investigator of NSHAP, a transdisciplinary effort with experts in sociology, geriatrics, psychology, epidemiology, statistics, survey methodology, medicine, and surgery collaborating to advance knowledge about aging.


  9. Study suggests link between BMI and how we assess food

    October 2, 2017 by Ashley

    From the Scuola Internazionale Superiore di Studi Avanzati press release:

    A new study demonstrated that people of normal weight tend to associate natural foods such as apples with their sensory characteristics. On the other hand, processed foods such as pizzas are generally associated with their function or the context in which they are eaten.

    “It can be considered an instance of ‘embodiment’ in which our brain interacts with our body.” This is the comment made by Raffaella Rumiati, neuroscientist at the International School for Advanced Studies — SISSA in Trieste, on the results of research carried out by her group which reveals that the way we process different foods changes in accordance with our body mass index. With two behavioural and electroencephalographic experiments, the study demonstrated that people of normal weight tend to associate natural foods such as apples with their sensory characteristics such as sweetness or softness.

    On the other hand, processed foods such as pizzas are generally associated with their function or the context in which they are eaten such as parties or picnics.

    “The results are in line with the theory according to which sensory characteristics and the functions of items are processed differently by the brain,” comments Giulio Pergola, the work’s primary author. “They represent an important step forward in our understanding of the mechanisms at the basis of the assessments we make of food.” But that’s not all.

    Recently published in the journal Biological Psychology, the research also highlighted that underweight people pay greater attention to natural foods, and overweight people to processed foods. Even when presented with the same stimuli, the two groups show different electroencephalographic signals. These results once again show the relevance of cognitive neuroscience to highly topical clinical areas such as eating disorders.



  10. Different brain areas interact to recognize partially covered shapes

    September 28, 2017 by Ashley

    From the University of Washington Health Sciences/UW Medicine press release:

    How does a driver’s brain realize that a stop sign is behind a bush when only a red edge is showing? Or how can a monkey suspect that the yellow sliver in the leaves is a round piece of fruit?

    The human (and non-human) primate brain is remarkable in recognizing objects when the view is nearly blocked. This skill let our ancient ancestors find food and avoid danger. It continues to be critical to making sense of our surroundings.

    UW Medicine scientists are conducting research to discover ways that the brain operates when figuring out shapes, from those that are completely visible to those that are mostly hidden.

    Although computers can beat the world’s best chess players, scientists have not yet designed artificial intelligence that performs as well as the average person in distinguishing shapes that are semi-obscured.

    Studies of signals generated by the brain are helping to fill in the picture of what goes on when looking at, then trying to recognize, shapes. Such research is also showing why attempts have failed to mechanically replicate the ability of humans and primates to identify partially hidden objects.

    The most recent results of this work are published Sept. 19 in the scientific journal eLife.

    The senior investigator is Anitha Pasupathy, associate professor of biological structure at the University of Washington School of Medicine in Seattle and a member of the Washington National Primate Research Center.

    At the primate research center, a computer game can be played to tell whether two shapes are alike or different. A correct answer wins a treat. As dots start to appear over the shapes, the task becomes more difficult.

    The researchers learned that, during the simpler part of the game, the brain generates signals in certain areas of the visual cortex — the part for sight. The neurons, or brain nerve cells, in that section respond more strongly to uncovered shapes.

    However, when the shapes begin to disappear behind the dots, certain neurons in the part of the brain that governs functions like memory and planning — the ventrolateral prefrontal cortex — respond more intensely.

    The researchers also observed that many of the neurons in the visual cortex had two quick response peaks. The second one occurred after the response onset in the thinking section of the brain. This seemed to enhance the response of the neurons in the visual cortex to the partially hidden shapes.

    The results, according to Pasupathy, suggest how signals from the two different areas of the brain — thinking and vision — could interact to assist in recognizing shapes that are not fully visible.

    The researchers believe that other regions of the brain, in addition to those they studied, are likely to participate in object recognition.

    “It’s not just the information flowing from the eyes into the sensory location of the brain that’s important to know what a shape is when it’s partially covered,” she said. “Feedback from other regions of the brain also helps in making this determination.”

    Relying only on the image of an object that appears on the eye’s retina makes it hard to make out what it is, because that image could have many interpretations.

    Recognition stems not only from the physical appearance of the object, but also the scene, the context, the degree of covering, and the viewer’s experience, the researchers explained.

    The study helps advance knowledge about how the brain typically works in solving this frequently encountered perceptual puzzle.

    “The neural mechanisms that mediate perceptual capacities, such as this one, have been largely unknown, which is why we were interested in studying them,” Pasupathy noted.

    Their recent findings also make the scientists wonder if impairments in this and other types of communication between the cognitive and sensory parts of the brain might have a role in certain difficulties that people with autism or Alzheimer’s encounter.

    Pasupathy said, for example, some people with autism have a profound inability to function in cluttered or disorderly environments. They have problems processing sensory information and can become confused and distressed. Many patients with Alzheimer’s disease experience what is called visual agnosia. They have no trouble seeing objects, but they can’t tell what they are.

    “So understanding how the sensory and cognitive areas in the brain communicate is of utmost importance to ultimately understand what might go wrong inside the nervous system that can cause these deficits,” Pasupathy said.