1. Study suggests visual intelligence is not the same as IQ

    November 17, 2017 by Ashley

    From the Vanderbilt University press release:

    Just because someone is smart and well-motivated doesn’t mean he or she can learn the visual skills needed to excel at tasks like matching fingerprints, interpreting medical X-rays, keeping track of aircraft on radar displays or forensic face matching.

    That is the implication of a new study which shows for the first time that there is a broad range of differences in people’s visual ability and that these variations are not associated with individuals’ general intelligence, or IQ. The research is reported in a paper titled “Domain-specific and domain-general individual differences in visual object recognition” published in the September issue of the journal Cognition and the implications are discussed in a review article in press at Current Directions in Psychological Science.

    “People may think they can tell how good they are at identifying objects visually,” said Isabel Gauthier, David K. Wilson Professor of Psychology at Vanderbilt University, who headed the study. “But it turns out that they are not very good at evaluating their own skills relative to others.”

    In the past, research in visual object recognition has focused largely on what people have in common, but Gauthier became interested in the question of how much visual ability varies among individuals. To answer this question, she and her colleagues had to develop a new test, which they call the Novel Object Memory Test (NOMT), to measure people’s ability to identify unfamiliar objects.

    Gauthier first wanted to gauge public opinions about visual skills. She did so by surveying 100 laypeople using the Amazon Mechanical Turk crowdsourcing service. She found that respondents generally consider visual tasks as fairly different from other tasks related to general intelligence. She also discovered that they feel there is less variation in people’s visual skills than there is in non-visual skills such as verbal and math ability.

    The main problem that Gauthier and colleagues had to address in assessing individuals’ innate visual recognition ability was familiarity. The more time a person spends learning about specific types of objects, such as faces, cars or birds, the better they get at identifying them. As a result, performance on visual recognition tests that use images of common objects is a complex mixture of people’s visual ability and their experience with those objects. Importantly, such tests have proven to be a poor predictor of how well someone can learn to identify objects in a new domain.

    Gauthier addressed this problem by using novel computer-generated creatures called greebles, sheinbugs and ziggerins to study visual recognition. The basic test consists of studying six target creatures, followed by a number of test trials displaying creatures in sets of three. Each set contains a creature from the target group along with two unfamiliar creatures, and the participant is asked to pick out the creature that is familiar.
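
    To make the trial structure concrete, here is a minimal sketch of how a NOMT-style session might be scored. The trial count and the simulated accuracy are illustrative assumptions, not the published protocol.

        import random

        def score_nomt_session(n_trials=24, p_correct=0.75, seed=0):
            """Score a simulated NOMT-style session.

            Each trial shows three creatures: one studied target and two
            unfamiliar foils, so guessing yields 1/3 correct on average.
            p_correct is a hypothetical participant accuracy, not study data.
            """
            rng = random.Random(seed)
            hits = sum(rng.random() < p_correct for _ in range(n_trials))
            return hits / n_trials

        print(f"proportion correct: {score_nomt_session():.2f} (chance = {1/3:.2f})")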

    Analyzing the results from more than 2,000 subjects, Gauthier and colleagues discovered that how well subjects could recognize one kind of creature was well predicted by how well they could recognize the others, even though the creatures are visually quite different. This confirmed that the new test can predict the ability to learn new categories.

    The psychologists also compared performance on the NOMT with several IQ-related tests and determined that the visual ability the NOMT measures is distinct from and independent of general intelligence.

    “This is quite exciting because performance on cognitive skills is almost always associated with general intelligence,” Gauthier said. “It suggests that we really can learn something new about people using these tests, over and beyond all the abilities we already know how to measure.” Although the study confirms the popular intuition that visual skill is different from general intelligence, it found that individual variations in visual ability are much larger than most people think. For instance, on one metric, called the coefficient of variation, the spread of people was wider on the NOMT than on a nonverbal IQ test.
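
    For reference, the coefficient of variation is simply the standard deviation divided by the mean, a unit-free measure of relative spread that lets tests scored on different scales be compared. A quick sketch with made-up numbers (not study data):

        import statistics

        def coefficient_of_variation(scores):
            """CV = standard deviation / mean, a unit-free measure of spread."""
            return statistics.stdev(scores) / statistics.mean(scores)

        # Hypothetical score distributions, for illustration only:
        nomt_scores = [12, 18, 25, 31, 38, 44, 50]   # wide relative spread
        iq_scores = [95, 100, 102, 105, 108, 110]    # narrow relative spread

        print(f"NOMT CV: {coefficient_of_variation(nomt_scores):.2f}")
        print(f"IQ CV:   {coefficient_of_variation(iq_scores):.2f}")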

    “A lot of jobs and hobbies depend on visual skills,” Gauthier said. “Because they are independent of general intelligence, the next step is to explore how we can use these tests in real-world applications where performance could not be well predicted before.”


  2. Engineer finds how brain encodes sounds

    November 16, 2017 by Ashley

    From the Washington University in St. Louis press release:

    When you are out in the woods and hear a cracking sound, your brain needs to process quickly whether the sound is coming from, say, a bear or a chipmunk. In new research published in PLoS Biology, a biomedical engineer at Washington University in St. Louis has a new interpretation for an old observation, debunking an established theory in the process.

    Dennis Barbour, MD, PhD, associate professor of biomedical engineering in the School of Engineering & Applied Science who studies neurophysiology, found in an animal model that auditory cortex neurons may encode sounds differently than previously thought. Sensory neurons, such as those in auditory cortex, on average respond relatively indiscriminately at the beginning of a new stimulus but rapidly become much more selective. The few neurons that respond for the duration of a stimulus were generally thought to encode its identity, while the many neurons that respond at its onset were thought to encode only its presence. This theory makes a prediction that had never been tested — that the indiscriminate initial responses would encode stimulus identity less accurately than the selective responses that persist for the sound’s duration.

    “At the beginning of a sound transition, things are diffusely encoded across the neuron population, but sound identity turns out to be more accurately encoded,” Barbour said. “As a result, you can more rapidly identify sounds and act on that information. If you get about the same amount of information for each action potential spike of neural activity, as we found, then the more spikes you can put toward a problem, the faster you can decide what to do. Neural populations spike most and encode most accurately at the beginning of stimuli.”
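
    Barbour’s point about spikes and decision speed can be put in back-of-the-envelope terms: if each spike carries roughly the same amount of information, the time needed to accumulate enough information for a decision scales inversely with the population firing rate. A toy calculation, with every number purely illustrative:

        def time_to_decision(bits_needed=10.0, bits_per_spike=0.1, spikes_per_s=50.0):
            """Time to accumulate enough information for a decision, assuming
            each spike contributes a roughly constant number of bits."""
            return bits_needed / (bits_per_spike * spikes_per_s)

        # Hypothetical population rates: many neurons fire at stimulus onset,
        # fewer during the sustained response.
        print(f"onset response:     {time_to_decision(spikes_per_s=200.0):.2f} s")
        print(f"sustained response: {time_to_decision(spikes_per_s=50.0):.2f} s")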

    Barbour’s study involved recording individual neurons. To make similar kinds of measurements of brain activity in humans, researchers must use noninvasive techniques that average many neurons together. Event-related potential (ERP) techniques record brain signals through electrodes on the scalp and reflect neural activity synchronized to the onset of a stimulus. Functional MRI (fMRI), on the other hand, reflects activity averaged over several seconds. If the brain were using fundamentally different encoding schemes for onsets versus sustained stimulus presence, these two methods might be expected to diverge in their findings. Both reveal the neural encoding of stimulus identity, however.

    “There has been a lot of debate for a very long time, but especially in the past couple of decades, about whether information representation in the brain is distributed or local,” Barbour said.

    “If function is localized, with small numbers of neurons bunched together doing similar things, that’s consistent with sparse coding, high selectivity, and low population spiking rates. But if you have distributed activity, or lots of neurons contributing all over the place, that’s consistent with dense coding, low selectivity and high population spiking rates. Depending on how the experiment is conducted, neuroscientists see both. Our evidence suggests that it might just be both, depending on which data you look at and how you analyze it.”

    Barbour said the research is fundamental groundwork for building a theory of how information is encoded for sound processing, and that it implies a novel sensory encoding principle potentially applicable to other sensory systems, such as how smells are processed and encoded.

    Earlier this year, Barbour worked with Barani Raman, associate professor of biomedical engineering, to investigate how the presence and absence of an odor or a sound is processed. While the response times of the olfactory and auditory systems differ, the neurons respond in the same ways. That research also gave strong evidence that there may be a stored set of signal-processing motifs that is potentially shared by different sensory systems and even different species.


  3. ‘Mind-reading’ brain-decoding tech

    October 30, 2017 by Ashley

    From the Purdue University press release:

    Researchers have demonstrated how to decode what the human brain is seeing by using artificial intelligence to interpret fMRI scans from people watching videos, representing a sort of mind-reading technology.

    The advance could aid efforts to improve artificial intelligence and lead to new insights into brain function. Critical to the research is a type of algorithm called a convolutional neural network, which has been instrumental in enabling computers and smartphones to recognize faces and objects.

    “That type of network has made an enormous impact in the field of computer vision in recent years,” said Zhongming Liu, an assistant professor in Purdue University’s Weldon School of Biomedical Engineering and School of Electrical and Computer Engineering. “Our technique uses the neural network to understand what you are seeing.”

    Convolutional neural networks, a form of “deep-learning” algorithm, have been used to study how the brain processes static images and other visual stimuli. However, the new findings represent the first time such an approach has been used to see how the brain processes movies of natural scenes, a step toward decoding the brain while people are trying to make sense of complex and dynamic visual surroundings, said doctoral student Haiguang Wen.

    He is lead author of a new research paper appearing online Oct. 20 in the journal Cerebral Cortex.

    The researchers acquired 11.5 hours of fMRI data from each of three female subjects watching 972 video clips, including clips showing people or animals in action and nature scenes. First, the data were used to train the convolutional neural network model to predict activity in the brain’s visual cortex while the subjects were watching the videos. Then they used the model to decode fMRI data from the subjects and reconstruct the videos, even clips that had not been used to train the model.
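
    In the spirit of that two-step procedure, here is a minimal sketch of a linear “encoding model” that maps convolutional-network features to voxel responses, followed by a naive decoding step that inverts the map. The shapes, the synthetic data and the ridge penalty are assumptions for illustration; the published model is considerably more sophisticated.

        import numpy as np

        rng = np.random.default_rng(0)
        n_timepoints, n_features, n_voxels = 500, 1024, 200

        # Stand-ins for CNN features of each video frame and the voxel responses.
        X = rng.standard_normal((n_timepoints, n_features))
        true_w = rng.standard_normal((n_features, n_voxels)) * 0.05
        Y = X @ true_w + rng.standard_normal((n_timepoints, n_voxels))

        # Encoding: ridge regression, W = (X'X + lambda*I)^-1 X'Y.
        lam = 10.0
        W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

        # Decoding: given new voxel activity, recover the feature vector by
        # least squares on the learned weights, then match it to known clips.
        Y_new = X[:10] @ true_w
        X_hat = np.linalg.lstsq(W.T, Y_new.T, rcond=None)[0].T
        print("reconstructed feature matrix:", X_hat.shape)  # (10, 1024)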

    The model was able to accurately decode the fMRI data into specific image categories. Actual video images were then presented side-by-side with the computer’s interpretation of what the person’s brain saw based on fMRI data.

    “For example, a water animal, the moon, a turtle, a person, a bird in flight,” Wen said. “I think what is a unique aspect of this work is that we are doing the decoding nearly in real time, as the subjects are watching the video. We scan the brain every two seconds, and the model rebuilds the visual experience as it occurs.”

    The researchers were able to figure out how certain locations in the brain were associated with specific information a person was seeing. “Neuroscience is trying to map which parts of the brain are responsible for specific functionality,” Wen said. “This is a landmark goal of neuroscience. I think what we report in this paper moves us closer to achieving that goal. A scene with a car moving in front of a building is dissected into pieces of information by the brain: one location in the brain may represent the car; another location may represent the building.

    “Using our technique, you may visualize the specific information represented by any brain location, and screen through all the locations in the brain’s visual cortex. By doing that, you can see how the brain divides a visual scene into pieces, and re-assembles the pieces into a full understanding of the visual scene.”

    The researchers also were able to use models trained with data from one human subject to predict and decode the brain activity of a different human subject, a process called cross-subject encoding and decoding. This finding is important because it demonstrates the potential for broad applications of such models to study brain function, even for people with visual deficits.

    “We think we are entering a new era of machine intelligence and neuroscience where research is focusing on the intersection of these two important fields,” Liu said. “Our mission in general is to advance artificial intelligence using brain-inspired concepts. In turn, we want to use artificial intelligence to help us understand the brain. So, we think this is a good strategy to help advance both fields in a way that otherwise would not be accomplished if we approached them separately.”


  4. Appetizing imagery puts visual perception on fast forward

    October 11, 2017 by Ashley

    From the Association for Psychological Science press release:

    People rated images containing positive content as fading more smoothly compared with neutral and negative images, even when they faded at the same rate, according to findings published in Psychological Science, a journal of the Association for Psychological Science.

    “Our research shows that emotionally charged stimuli, specifically positive and negative images, may influence the speed, or the temporal resolution, of visual perception,” says psychological scientist Kevin H. Roberts of the University of British Columbia.

    The idea that things in our environment, or even our own emotional states, can affect how we experience time is a common one. We say that time “drags” when we’re bored and it “flies” when we’re having fun. But how might this happen? Roberts and colleagues hypothesized that the emotional content of stimuli or experiences could impact the speed of our internal pacemaker.

    Specifically, they hypothesized that our motivation to approach positive stimuli or experiences would make us less sensitive to temporal details. Change in these stimuli or experiences would, therefore, seem relatively smooth, similar to what happens when you press ‘fast forward’ on a video. Our desire to avoid negative stimuli or experiences, on the other hand, would enhance our sensitivity to temporal details and would make changes seem more discrete and choppy, similar to a slow-motion video.

    To test this hypothesis, Roberts and colleagues used an approach common in psychophysics experiments — estimating relative magnitudes — to gauge how people’s moment-to-moment experiences vary when they view different types of stimuli.

    In one experiment, 23 participants looked at a total of 225 image pairs. In each pair, they first saw a standard stimulus that faded to black over 2 seconds and then saw a target stimulus that also faded to black over 2 seconds. The frame rate of the target stimulus varied, displaying at 16, 24, or 48 frames per second.
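
    The frame-rate manipulation is easy to make concrete: a 2-second linear fade rendered at 16, 24 or 48 frames per second takes the same total time but steps through 32, 48 or 96 luminance levels, so the higher rates look smoother. A small sketch (the linear fade is an assumption; the study’s stimuli may have differed):

        def fade_levels(duration_s=2.0, fps=16):
            """Luminance levels for a linear fade-to-black at a given frame rate."""
            n_frames = int(duration_s * fps)
            return [1.0 - i / (n_frames - 1) for i in range(n_frames)]

        for fps in (16, 24, 48):
            levels = fade_levels(fps=fps)
            step = levels[0] - levels[1]
            print(f"{fps} fps: {len(levels)} frames, step per frame = {step:.4f}")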

    Participants were generally sensitive to the differences in frame rate, as the researchers expected. Participants rated the smoothness of the target image relative to the standard image using a 21-point scale: The higher the frame rate of the target image, the smoother they rated it relative to the standard image.

    The emotional content of the images also made a difference in perceptions of smoothness. Regardless of the frame rate, participants rated negative images — which depicted things we generally want to avoid, including imagery related to confrontation and death — as the least smooth. They rated positive stimuli — depicting appetizing desserts — as the smoothest, overall.

    Most importantly, the researchers found that people perceived images that faded at the same rate differently depending on their content. Positive target images that faded at 16 fps seemed smoother than neutral target images that faded at the same rate. Positive images that faded at 24 fps seemed smoother than both negative and neutral images with the same frame rate. And positive images that faded at 48 fps seemed smoother than negative images at the same rate.

    Further analyses suggest that this effect occurred primarily because positive images elicited higher approach motivation.

    Because the words “smooth” and “choppy” could themselves come with positive or negative connotations, the researchers replaced them with “continuous” and “discrete” in a second experiment. Once again, they found that the emotional content of the images swayed how participants perceived the frame rate of the fade.

    Brain-activity data gathered in a third experiment indicated that the blurring of perceptual experience associated with positive images was accompanied by changes in high-level visual processing.

    “Even when we made adjustments to the instructions and the task structure, the overall effect remained — people subjectively reported seeing less fine-grained temporal changes in positive images, and they reported seeing more fine-grained temporal changes in negative images,” says Roberts.

    Together, these findings suggest that the emotional content of the images affected how participants experienced what they were seeing.

    “What remains to be seen is whether emotional stimuli impact objective measures of temporal processing,” says Roberts. “In other words, do individuals actually perceive less temporal information when they view positive images, or do they just believe they perceive less?”


  5. Predicting when a sound will occur relies on the brain’s motor system

    October 10, 2017 by Ashley

    From the McGill University press release:

    Whether it is dancing or just tapping one foot to the beat, we all experience how auditory signals like music can induce movement. Now new research suggests that motor signals in the brain actually sharpen sound perception, and this effect is increased when we move in rhythm with the sound.

    It is already known that the motor system, the part of the brain that controls our movements, communicates with the sensory regions of the brain. The motor system moves the eyes and other body parts to orient our gaze and limbs as we explore our spatial environment. However, because the ears are immobile, it was less clear what role the motor system plays in distinguishing sounds.

    Benjamin Morillon, a researcher at the Montreal Neurological Institute of McGill University, designed a study based on the hypothesis that signals coming from the sensorimotor cortex could prepare the auditory cortex to process sound, and by doing so improve its ability to decipher complex sound flows like speech and music.

    Working in the lab of MNI researcher Sylvain Baillet, he recruited 21 participants who listened to complex tone sequences and had to indicate whether a target melody was, on average, higher- or lower-pitched than a reference. The researchers also played an intertwined distracting melody to measure the participants’ ability to focus on the target melody.

    The exercise was done in two stages, one in which the participants were completely still, and another in which they tapped on a touchpad in rhythm with the target melody. The participants performed this task while their brain oscillations, a form of neural signaling brain regions use to communicate with each other, were recorded with magnetoencephalography (MEG).

    MEG millisecond imaging revealed that bursts of fast neural oscillations coming from the left sensorimotor cortex were directed at the auditory regions of the brain. These bursts occurred in anticipation of the next tone of interest, revealing that the motor system can predict when a sound will occur and send this information to auditory regions so they can prepare to interpret the sound.

    One striking aspect of this discovery is that this timed motor signaling anticipated the incoming tones of the target melody even when participants remained completely still. Tapping in rhythm with the target melody further improved performance, confirming the important role of motor activity in the accuracy of auditory perception.

    “A realistic example of this is the cocktail party concept: when you try to listen to someone but many people are speaking around at the same time,” says Morillon. “In real life, you have many ways to help you focus on the individual of interest: pay attention to the timbre and pitch of the voice, focus spatially toward the person, look at the mouth, use linguistic cues, use what was the beginning of the sentence to predict the end of it, but also pay attention to the rhythm of the speech. This latter case is what we isolated in this study to highlight how it happens in the brain.”

    A better understanding of the link between movement and auditory processing could one day mean better therapies for people with hearing or speech comprehension problems.

    “It has implications for clinical research and rehabilitation strategies, notably on dyslexic children and hearing-impaired patients,” says Morillon. “Teaching them to better rely on their motor system by at first overtly moving in synchrony with a speaker’s pace could help them to better understand speech.”


  6. Study suggests smell loss predicts cognitive decline in healthy older people

    October 7, 2017 by Ashley

    From the University of Chicago Medical Center press release:

    A long-term study of nearly 3,000 adults, aged 57 to 85, found that those who could not identify at least four out of five common odors were more than twice as likely as those with a normal sense of smell to develop dementia within five years.

    Although 78 percent of those tested were normal — correctly identifying at least four out of five scents — about 14 percent could name just three out of five, five percent could identify only two scents, two percent could name just one, and one percent of the study subjects were not able to identify a single smell.

    Five years after the initial test, almost all of the study subjects who were unable to name a single scent had been diagnosed with dementia. Nearly 80 percent of those who provided only one or two correct answers also had dementia, with a dose-dependent relationship between degree of smell loss and incidence of dementia.

    “These results show that the sense of smell is closely connected with brain function and health,” said the study’s lead author, Jayant M. Pinto, MD, a professor of surgery at the University of Chicago and ENT specialist who studies the genetics and treatment of olfactory and sinus disease. “We think smell ability specifically, but also sensory function more broadly, may be an important early sign, marking people at greater risk for dementia.”

    “We need to understand the underlying mechanisms,” Pinto added, “so we can understand neurodegenerative disease and hopefully develop new treatments and preventative interventions.”

    “Loss of the sense of smell is a strong signal that something has gone wrong and significant damage has been done,” Pinto said. “This simple smell test could provide a quick and inexpensive way to identify those who are already at high risk.”

    The study, “Olfactory Dysfunction Predicts Subsequent Dementia in Older US Adults,” published in September 2017 in the Journal of the American Geriatrics Society, follows a related 2014 paper in which olfactory dysfunction was associated with increased risk of death within five years. In that study, loss of the sense of smell was a better predictor of death than a diagnosis of heart failure, cancer or lung disease.

    For both studies, the researchers used a well-validated tool known as “Sniffin’ Sticks.” These devices look like felt-tip pens, but instead of ink they are infused with distinct scents. Study subjects smell each one and are asked to identify the odor, one at a time, from a set of four choices. The five odors, in order of increasing difficulty, were peppermint, fish, orange, rose and leather.

    Test results showed that:

    • 78.1 percent of those examined had a normal sense of smell; 48.7 percent correctly identified five out of five odors and 29.4 percent identified four out of five.
    • 18.7 percent, considered “hyposmic,” got two or three out of five correct.
    • The remaining 3.2 percent, labeled “anosmic,” could identify just one of the five scents (2.2 percent) or none (1 percent).
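
    Putting those cutoffs in one place, here is a short sketch that maps the number of correctly identified odors to the categories reported above:

        def classify_olfaction(n_correct):
            """Map correct identifications (0-5) to the study's categories:
            4-5 normal, 2-3 hyposmic, 0-1 anosmic."""
            if not 0 <= n_correct <= 5:
                raise ValueError("score must be between 0 and 5")
            if n_correct >= 4:
                return "normal"
            if n_correct >= 2:
                return "hyposmic"
            return "anosmic"

        for score in range(6):
            print(score, classify_olfaction(score))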

    The olfactory nerve is the only cranial nerve directly exposed to the environment. The cells that detect smells connect directly with the olfactory bulb at the base of the brain, potentially exposing the central nervous system to environmental hazards such as pollution or pathogens. Olfactory deficits are often an early sign of Parkinson’s or Alzheimer’s disease. They get worse with disease progression.

    Losing the ability to smell can have a substantial impact on lifestyle and wellbeing, said Pinto, a specialist in sinus and nasal diseases and a member of the Section of Otolaryngology-Head and Neck Surgery at UChicago Medicine. “Smells influence nutrition and mental health,” Pinto said. “People who can’t smell face everyday problems such as knowing whether food is spoiled, detecting smoke during a fire, or assessing the need for a shower after a workout. Being unable to smell is closely associated with depression, as people don’t get as much pleasure in life.”

    “This evolutionarily ancient special sense may signal a key mechanism that also underlies human cognition,” noted study co-author Martha K. McClintock, PhD, the David Lee Shillinglaw Distinguished Service Professor of Psychology at the University of Chicago, who has studied olfactory and pheromonal communication throughout her career.

    McClintock noted that the olfactory system also has stem cells which self-regenerate, so “a decrease in the ability to smell may signal a decrease in the brain’s ability to rebuild key components that are declining with age, leading to the pathological changes of many different dementias.”

    In an accompanying editorial, Stephen Thielke, MD, a member of the Geriatric Research, Education and Clinical Center at Puget Sound Veterans Affairs Medical Center and the psychiatry and behavioral sciences faculty at the University of Washington, wrote: “Olfactory dysfunction may be easier to quantify across time than global cognition, which could allow for more-systematic or earlier assessment of neurodegenerative changes, but none of this supports that smell testing would be a useful tool for predicting the onset of dementia.”

    “Our test simply marks someone for closer attention,” Pinto explained. “Much more work would need to be done to make it a clinical test. But it could help find people who are at risk. Then we could enroll them in early-stage prevention trials.”

    “Of all human senses,” Pinto added, “smell is the most undervalued and underappreciated — until it’s gone.”

    Both studies were part of the National Social Life, Health and Aging Project (NSHAP), the first in-home study of social relationships and health in a large, nationally representative sample of men and women ages 57 to 85.

    The study was funded by the National Institutes of Health — including the National Institute on Aging and the National Institute of Allergy and Infectious Diseases — the Institute of Translational Medicine at the University of Chicago, and the McHugh Otolaryngology Research Fund.

    Additional authors were Dara Adams, David W. Kern, Kristen E. Wroblewski and William Dale, all from the University of Chicago. Linda Waite is the principal investigator of NSHAP, a transdisciplinary effort with experts in sociology, geriatrics, psychology, epidemiology, statistics, survey methodology, medicine, and surgery collaborating to advance knowledge about aging.


  7. Study suggests link between BMI and how we assess food

    October 2, 2017 by Ashley

    From the Scuola Internazionale Superiore di Studi Avanzati press release:

    “It can be considered an instance of ‘embodiment’ in which our brain interacts with our body.” This is the comment made by Raffaella Rumiati, a neuroscientist at the International School for Advanced Studies (SISSA) in Trieste, on the results of research carried out by her group, which reveals that the way we process different foods changes in accordance with our body mass index. In two behavioural and electroencephalographic experiments, the study demonstrated that people of normal weight tend to associate natural foods such as apples with their sensory characteristics, such as sweetness or softness.

    On the other hand, processed foods such as pizzas are generally associated with their function or the context in which they are eaten such as parties or picnics.

    “The results are in line with the theory according to which sensory characteristics and the functions of items are processed differently by the brain,” comments Giulio Pergola, the work’s primary author. “They represent an important step forward in our understanding of the mechanisms at the basis of the assessments we make of food.” But that’s not all.

    Recently published in the journal Biological Psychology, the research also highlighted the ways in which underweight people pay greater attention to natural foods and overweight people to processed foods. Even when subjected to the same stimuli, these two groups show different electroencephalography signals. These results underscore once again the relevance of cognitive neuroscience to the understanding of highly topical clinical fields such as eating disorders.


  8. Different brain areas interact to recognize partially covered shapes

    September 28, 2017 by Ashley

    From the University of Washington Health Sciences/UW Medicine press release:

    How does a driver’s brain realize that a stop sign is behind a bush when only a red edge is showing? Or how can a monkey suspect that the yellow sliver in the leaves is a round piece of fruit?

    The human (and non-human) primate brain is remarkable in recognizing objects when the view is nearly blocked. This skill let our ancient ancestors find food and avoid danger. It continues to be critical to making sense of our surroundings.

    UW Medicine scientists are conducting research to discover ways that the brain operates when figuring out shapes, from those that are completely visible to those that are mostly hidden.

    Although computers can beat the world’s best chess players, scientists have not yet designed artificial intelligence that performs as well as the average person in distinguishing shapes that are semi-obscured.

    Studies of signals generated by the brain are helping to fill in the picture of what goes on when looking at, then trying to recognize, shapes. Such research is also showing why attempts have failed to mechanically replicate the ability of humans and primates to identify partially hidden objects.

    The most recent results of this work are published Sept. 19 in the scientific journal eLife.

    The senior investigator is Anitha Pasupathy, associate professor of biological structure at the University of Washington School of Medicine in Seattle and a member of the Washington National Primate Research Center.

    At the primate research center, monkeys play a computer game in which they judge whether two shapes are alike or different. A correct answer wins a treat. As dots start to appear over the shapes, the task becomes more difficult.

    The researchers learned that, during the simpler part of the game, the brain generates signals in certain areas of the visual cortex — the part for sight. The neurons, or brain nerve cells, in that section respond more strongly to uncovered shapes.

    However, when the shapes begin to disappear behind the dots, certain neurons in the part of the brain that governs functions like memory and planning — the ventrolateral prefrontal cortex — respond more intensely.

    The researchers also observed that many of the neurons in the visual cortex had two quick response peaks. The second one occurred after the response onset in the thinking section of the brain. This seemed to enhance the response of the neurons in the visual cortex to the partially hidden shapes.

    The results, according to Pasupathy, suggest how signals from the two different areas of the brain — thinking and vision — could interact to assist in recognizing shapes that are not fully visible.

    The researchers believe that other regions of the brain, in addition to those they studied, are likely to participate in object recognition.

    “It’s not just the information flowing from the eyes into the sensory areas of the brain that’s important to know what a shape is when it’s partially covered,” she said. “Feedback from other regions of the brain also helps in making this determination.”

    Relying only on the image of an object that appears on the eye’s retina makes it hard to make out what it is, because that image could have many interpretations.

    Recognition stems not only from the physical appearance of the object, but also the scene, the context, the degree of covering, and the viewer’s experience, the researchers explained.

    The study helps advance knowledge about how the brain typically works in solving this frequently encountered perceptual puzzle.

    “The neural mechanisms that mediate perceptual capacities, such as this one, have been largely unknown, which is why we were interested in studying them,” Pasupathy noted.

    Their recent findings also make the scientists wonder if impairments in this and other types of communication between the cognitive and sensory parts of the brain might have a role in certain difficulties that people with autism or Alzheimer’s encounter.

    Pasupathy said, for example, some people with autism have a profound inability to function in cluttered or disorderly environments. They have problems processing sensory information and can become confused and distressed. Many patients with Alzheimer’s disease experience what is called visual agnosia. They have no trouble seeing objects, but they can’t tell what they are.

    “So understanding how the sensory and cognitive areas in the brain communicate is of utmost importance to ultimately understand what might go wrong inside the nervous system that can cause these deficits,” Pasupathy said.


  9. Scents and social preference: Neuroscientists ID the roots of attraction

    September 15, 2017 by Ashley

    From the University of California – San Diego press release:

    A baby lamb is separated from its family. Somehow, in vast herds of sheep that look virtually identical, the lost youngling locates its kin. Salmon swim out to the vast expanses of the sea and migrate back home to their precise spawning grounds with bewildering accuracy.

    Scientists have long known about such animal kinship attachments, some known as “imprinting,” but the mechanisms underlying them have been hidden in a black box at the cellular and molecular levels. Now biologists at the University of California San Diego have unlocked key elements of these mysteries, with implications for understanding social attraction and aversion in a range of animals and humans.

    Davide Dulcis of UC San Diego’s Psychiatry Department at the School of Medicine, Giordano Lippi, Darwin Berg and Nick Spitzer of the Division of Biological Sciences and their colleagues published their results in the August 31, 2017 online issue of the journal Neuron.

    In a series of neurobiological studies stretching back eight years, the researchers examined larval frogs (tadpoles), which are known to swim with family members in clusters. Focusing the studies on familial olfactory cues, or kinship odors, the researchers identified the mechanisms by which two- to four-day-old tadpoles choose to swim with family members over non-family members. Their tests also revealed that tadpoles exposed early on to formative odors from outside their family cluster were inclined to swim with the group that generated the smell, expanding their social preference beyond their own true kin.

    The researchers discovered that this change is rooted in a process known as “neurotransmitter switching,” an area of brain research pioneered by Spitzer and further investigated by Dulcis in the context of psychostimulants and the diseased brain. The dopamine neurotransmitter was found in high levels during normal family kinship bonding, but switched to the GABA neurotransmitter in the case of artificial odor kinship, or “non-kin” attraction.

    “In the reversed conditions there is a clear sign of neurotransmitter switching, so now we can see that these neurotransmitters are really controlling a specific behavior,” said Dulcis, an associate professor. “You can imagine how important this is for social preference and behavior. We have innate responses in relationships, falling in love and deciding whether we like someone. We use a variety of cues and these odorants can be part of the social preference equation.”

    The scientists took the study to a deeper level, seeking to find how this mechanism unfolds at the genetic level.

    Sequencing helped isolate two key microRNAs, molecules involved in coordinating gene expression. Sifting through hundreds of possibilities, they identified microRNA-375 and microRNA-200b as the key regulators mediating the neurotransmitter switching for attraction and aversion, affecting the expression of genes known as Pax6 and Bcl11b that ultimately control the tadpole’s swimming behavior.

    “MicroRNAs were ideal candidates for the job,” said Lippi, a project scientist in Berg’s laboratory in the Division’s Neurobiology Section. “They are post-transcriptional repressors and can target hundreds of different mRNAs to consolidate specific genetic programs and trigger developmental switches.”

    The study began in 2009 and deepened in size and scope over the years. Reviewers of the paper were impressed with the project’s breadth, including one who commended the authors “for this heroic study which is both fascinating and comprehensive.”

    “Social interaction, whether it’s with people in the workplace or with family and friends, has many determinants,” said Spitzer, a distinguished professor in the Division of Biological Sciences, the Atkinson Family Chair and co-director of the Kavli Institute for Brain and Mind at UC San Diego. “As human beings we are complicated and we have multiple mechanisms to achieve social bonding, but it seems likely that this mechanism for switching social preference in response to olfactory stimuli contributes to some extent.”


  10. High-fat ice cream may not necessarily mean tastier ice cream

    August 6, 2017 by Ashley

    From the Penn State press release:

    Even though ice cream connoisseurs may insist that ice cream with more fat tastes better, a team of Penn State food scientists found that people generally cannot tell the difference between fat levels in ice creams.

    In a series of taste tests, participants were unable to distinguish a 2 percent difference in fat levels in two vanilla ice cream samples as long as the samples were in the 6 to 12 percent fat-level range. While the subjects were able to detect a 4 percent difference between ice cream with 6 and 10 percent fat levels, they could not detect a 4 percent fat difference in samples between 8 and 12 percent fat.
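
    One standard way to decide whether tasters can genuinely distinguish two samples is an exact binomial test against chance. The sketch below assumes a triangle-test design with a one-in-three guessing rate, a common discrimination protocol; the paper’s exact procedure and numbers may differ.

        from math import comb

        def binomial_p_value(n_correct, n_trials, p_chance=1/3):
            """One-sided exact binomial test: probability of at least n_correct
            successes under pure guessing. A small p suggests the tasters can
            genuinely tell the two samples apart."""
            return sum(
                comb(n_trials, k) * p_chance**k * (1 - p_chance)**(n_trials - k)
                for k in range(n_correct, n_trials + 1)
            )

        # Hypothetical counts, for illustration only (not the study's data):
        print(f"p = {binomial_p_value(45, 100):.4f}")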

    “I think the most important finding in our study was that there were no differences in consumer acceptability when changing fat content within a certain range,” said Laura Rolon, a former graduate student in food science and lead author of the study. “There is a preconception of ‘more fat is better,’ but we did not see it within our study.”

    The researchers, who released their findings in a recent issue of the Journal of Dairy Science, also found that fat levels did not significantly sway consumers’ preferences in taste. The consumers’ overall liking of the ice cream did not change when fat content dropped from 14 percent to 6 percent, for example.

    “Was there a difference in liking? That was our primary question. And could they tell the difference? That was our secondary question,” said Robert Roberts, professor and head of the food science department.

    John Hayes, associate professor of food science and director of the sensory evaluation center, said that perception and preference are often two separate questions in food science.

    “Another example of this is how some people might like both regular lemonade and pink lemonade equally,” said Hayes. “They can tell the difference when they taste the different lemonades, but still like them both. Differences in perception and differences in liking are not the same thing.”

    The study may challenge some ice cream marketing that suggests ice cream with high fat levels is a higher-quality, better-tasting product, according to the researchers.

    “People think premium ice cream means only high fat ice cream, but it doesn’t,” said Roberts.

    Because there are only slight differences in taste perception and preferences at certain fat levels, ice cream manufacturers may have more latitude in adjusting their formulas to help control costs and create products for customers with certain dietary restrictions without sacrificing taste, according to the researchers.

    “Fat is always the most expensive bulk ingredient of ice cream and so when you’re talking about premium ice cream, it tends to have a higher fat content and cost more, while the less expensive economy brands tend to have lower fat content,” said John Coupland, professor of food science.

    The researchers recruited a total of 292 regular ice-cream consumers to take part in the blind taste tests to determine their overall acceptability of various fat levels in fresh ice cream and to see if they could tell the difference between samples. They changed the fat content by adjusting the levels of cream and by adding maltodextrin, a mostly tasteless, starch-based material that is used to add bulk to products, such as frozen desserts.

    Maltodextrin is not necessarily a healthy fat replacement alternative, according to the researchers.

    “We don’t want to give the impression that we were trying to create a healthier type of ice cream,” said Coupland. “But, if you were in charge of an ice cream brand this information may help you decide if you are getting any advantage of having high fat in your product, or whether it’s worth the economic cost, or worth the brand risk to change the fat level of your ice cream.”

    During storage, ice crystals can increase in size, which affects the quality of the ice cream. Because of this effect, the researchers also studied stored samples and found no significant difference in preference after storage.

    Hayes said that Penn State and the College of Agricultural Sciences’ focus on interdisciplinary research was critical for this work.

    “I think this shows how interdisciplinary and translational food science is,” Hayes said. “You take a physical chemist, a behavioral scientist and someone who knows ice cream processing and put us all together and you can investigate questions like these.”