1. Study suggests brain waves reflect different types of learning

    October 20, 2017 by Ashley

    From the Massachusetts Institute of Technology press release:

    Figuring out how to pedal a bike and memorizing the rules of chess require two different types of learning, and now for the first time, researchers have been able to distinguish each type of learning by the brain-wave patterns it produces.

    These distinct neural signatures could guide scientists as they study the underlying neurobiology of how we both learn motor skills and work through complex cognitive tasks, says Earl K. Miller, the Picower Professor of Neuroscience at the Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences, and senior author of a paper describing the findings in the Oct. 11 edition of Neuron.

    When neurons fire, they produce electrical signals that combine to form brain waves that oscillate at different frequencies. “Our ultimate goal is to help people with learning and memory deficits,” notes Miller. “We might find a way to stimulate the human brain or optimize training techniques to mitigate those deficits.”

    The neural signatures could help identify changes in learning strategies that occur in diseases such as Alzheimer’s, with an eye to diagnosing these diseases earlier or enhancing certain types of learning to help patients cope with the disorder, says Roman F. Loonis, a graduate student in the Miller Lab and first author of the paper. Picower Institute research scientist Scott L. Brincat and former MIT postdoc Evan G. Antzoulatos, now at the University of California at Davis, are co-authors.

    Explicit versus implicit learning

    Scientists used to think all learning was the same, Miller explains, until they learned about patients such as the famous Henry Molaison, or “H.M.,” who developed severe amnesia in 1953 after having part of his brain removed in an operation to control his epileptic seizures. Molaison couldn’t remember eating breakfast a few minutes after the meal, but he was able to learn and retain new motor skills, such as tracing objects like a five-pointed star in a mirror.

    “H.M. and other amnesiacs got better at these skills over time, even though they had no memory of doing these things before,” Miller says.

    The divide revealed that the brain engages in two types of learning and memory — explicit and implicit.

    Explicit learning “is learning that you have conscious awareness of, when you think about what you’re learning and you can articulate what you’ve learned, like memorizing a long passage in a book or learning the steps of a complex game like chess,” Miller explains.

    Implicit learning is the opposite. “You might call it motor skill learning or muscle memory, the kind of learning that you don’t have conscious access to, like learning to ride a bike or to juggle,” he adds. “By doing it you get better and better at it, but you can’t really articulate what you’re learning.”

    Many tasks, like learning to play a new piece of music, require both kinds of learning, he notes.

    Brain waves from earlier studies

    When the MIT researchers studied the behavior of animals learning different tasks, they found signs that different tasks might require either explicit or implicit learning. In tasks that required comparing and matching two things, for instance, the animals appeared to use both correct and incorrect answers to improve their next matches, indicating an explicit form of learning. But in a task where the animals learned to shift their gaze in one direction or another in response to different visual patterns, they improved their performance only after correct answers, suggesting implicit learning.

    What’s more, the researchers found, these different types of behavior are accompanied by different patterns of brain waves.

    During explicit learning tasks, there was an increase in alpha2-beta brain waves (oscillating at 10-30 hertz) following a correct choice, and an increase in delta-theta waves (3-7 hertz) after an incorrect choice. The alpha2-beta waves grew stronger early in explicit tasks, then decreased as learning progressed. The researchers also saw signs of a neural spike in activity that occurs in response to behavioral errors, called event-related negativity, only in the tasks that were thought to require explicit learning.

    The increase in alpha2-beta brain waves during explicit learning “could reflect the building of a model of the task,” Miller explains. “And then after the animal learns the task, the alpha-beta rhythms then drop off, because the model is already built.”

    By contrast, delta-theta rhythms only increased with correct answers during an implicit learning task, and they decreased during learning. Miller says this pattern could reflect neural “rewiring” that encodes the motor skill during learning.

    “This showed us that there are different mechanisms at play during explicit versus implicit learning,” he notes.
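    The band comparison described above can be sketched numerically. The following is a minimal, hypothetical illustration (not the authors' analysis pipeline): it compares average spectral power in the delta-theta (3-7 Hz) and alpha2-beta (10-30 Hz) bands of a synthetic one-second signal, using a plain FFT. The sampling rate and signal components are invented for the example.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Average spectral power of `signal` within the [f_lo, f_hi] Hz band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[mask].mean()

# Synthetic 1-second "recording": a 5 Hz (delta-theta) component plus a
# weaker 20 Hz (alpha2-beta) component. Values are illustrative only.
fs = 250  # assumed samples per second
t = np.arange(0, 1.0, 1.0 / fs)
lfp = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

delta_theta = band_power(lfp, fs, 3, 7)    # 3-7 Hz band from the article
alpha2_beta = band_power(lfp, fs, 10, 30)  # 10-30 Hz band from the article
print(delta_theta > alpha2_beta)  # → True: the 5 Hz component dominates here
```

    Comparing relative power in these two bands across trials, aligned to correct versus incorrect choices, is the kind of contrast the article describes; the real study's methods are in the Neuron paper.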

    Future Boost to Learning

    Loonis says the brain wave signatures might be especially useful in shaping how we teach or train a person as they learn a specific task. “If we can detect the kind of learning that’s going on, then we may be able to enhance or provide better feedback for that individual,” he says. “For instance, if they are using implicit learning more, that means they’re more likely relying on positive feedback, and we could modify their learning to take advantage of that.”

    The neural signatures could also help detect disorders such as Alzheimer’s disease at an earlier stage, Loonis says. “In Alzheimer’s, a kind of explicit fact learning disappears with dementia, and there can be a reversion to a different kind of implicit learning,” he explains. “Because the one learning system is down, you have to rely on another one.”

    Earlier studies have shown that certain parts of the brain such as the hippocampus are more closely related to explicit learning, while areas such as the basal ganglia are more involved in implicit learning. But Miller says that the brain wave study indicates “a lot of overlap in these two systems. They share a lot of the same neural networks.”


  2. Study suggests phones are keeping students from concentrating during lectures

    by Ashley

    From the Stellenbosch University press release:

    Digital technologies, especially smartphones, have become such an integral part of our lives that it is difficult to picture life without them. People now spend, on average, more than three hours a day on their phones.

    “While ever-smarter digital devices have made many aspects of our lives easier and more efficient, a growing body of evidence suggests that, by continuously distracting us, they are harming our ability to concentrate,” say researchers Dr Daniel le Roux and Mr Douglas Parry from the Cognition and Technology Research Group in the Department of Information Science at Stellenbosch University.

    Le Roux heads the research group, while Parry is a doctoral candidate. Their work focuses on the impact of digital media, particularly phones, on students’ ability to concentrate in the classroom.

    According to them, today’s students are digital natives – individuals born after 1980 – who have grown up surrounded by digital media and quickly adapted to this environment to such an extent that “they are constantly media-multitasking, that is, concurrently engaging with, and rapidly switching between, multiple media to stay connected, always updated and always stimulated.”

    The researchers say it shouldn’t be surprising that university lecturers are encouraged to develop blended learning initiatives and bring tech – videos, podcasts, Facebook pages, etc. – into the classroom more and more to offer students the enhanced experiences enabled by digital media.

    They warn, however, that an important effect of these initiatives has been to establish media use during university lectures as the norm.

    “Studies by ourselves and researchers across the world show that students constantly use their phones when they are in class.

    “But here’s the kicker: if you think they are following the lecture slides or engaging in debates about the topic you are mistaken. In fact, this is hardly ever the case. When students use their phones during lectures they do it to communicate with friends, engage in social networks, watch YouTube videos or just browse around the web to follow their interests.”

    The researchers say there are two primary reasons why this form of behaviour is problematic from a cognitive control and learning perspective.

    “The first is that when we engage in multitasking our performance on the primary task suffers. Making sense of lecture content is very difficult when you switch attention to your phone every five minutes. A strong body of evidence supports this, showing that media use during lectures is associated with lower academic performance.”

    “The second reason is that it harms students’ ability to concentrate on any particular thing for an extended period of time. They become accustomed to switching to alternative streams of stimuli at increasingly short intervals. The moment the lecture fails to engage or becomes difficult to follow, the phones come out.”

    The researchers say awareness of this trend has prompted some lecturers, even at leading tech-oriented universities like MIT in the United States, to declare their lectures device-free in an attempt to cultivate engagement, attentiveness and, ultimately, critical thinking skills among their students.

    “No one can deny that mobile computing devices make our lives easier and more fun in a myriad of ways. But, in the face of all the connectedness and entertainment they offer, we should be mindful of the costs.”

    The researchers encourage educational policy makers and lecturers, in particular, to consider the implications of their decisions with a much deeper awareness of the dynamics between technology use and the cognitive functions which enable us to learn.


  3. Study links anxiety and depression to migraines

    by Ashley

    From the Wiley press release:

    In a study of 588 patients who attended an outpatient headache clinic, participants with symptoms of anxiety and depression experienced more frequent migraines. In the study, published in the journal Headache, poor sleep quality was also found to be an independent predictor of more severe depression and anxiety symptoms.

    The study’s investigators noted that factors such as emotional distress and frequency of headache may influence each other through a common pathophysiological mechanism. For example, emotional responses have the potential to alter pain perception and modulation through certain signaling pathways.

    “These findings potentially suggest that adequate medical treatment to decrease headache frequency may reduce the risk of depression and anxiety in migraine patients,” said Dr. Fu-Chi Yang, corresponding author of the study and an investigator in the Department of Neurology, Tri-Service General Hospital, National Defense Medical Center, Taiwan.


  4. Study suggests metacognition training can help boost exam scores

    by Ashley

    From the University of Utah press release:

    It’s a lesson in scholastic humility: You waltz into an exam, confident that you’ve got a good enough grip on the class material to swing an 80 percent or so, maybe a 90 if some of the questions go your way.

    Then you get your results: 60 percent. Your grade and your stomach both sink. What went wrong?

    Students, and people in general, can tend to overestimate their own abilities. But University of Utah research shows that students who overcome this tendency score better on final exams. The boost is strongest for students in the lower 25 percent of the class. By thinking about their thinking, a practice called metacognition, these students raised their final exam scores by 10 percent on average – a full letter grade.

    The study, published today in the Journal of Chemical Education, is authored by University of Utah doctoral student Brock Casselman and professor Charles Atwood.

    “The goal was to create a system that would help the student to better understand their ability,” says Casselman, “so that by the time they get to the test, they will be ready.”

    Errors in estimation

    General chemistry at the University of Utah is a rigorous course. In 2010 only two-thirds of the students who took the course passed it – and of those who didn’t, only a quarter ever retook and passed the class.

    “We’re trying to stop that,” Atwood says. “We always want our students to do better, particularly on more difficult, higher-level cognitive tasks, and we want them to be successful and competitive with any other school in the country.”

    Part of the problem may lie in how students view their own abilities. When asked to predict their scores on a midterm pretest near the beginning of the school year, students overestimated their scores by an average of 11 percent across the whole class. The students in the lower 25 percent of class scores, also called the “bottom quartile,” overestimated by around 22 percent.

    This phenomenon isn’t unknown – in 1999 psychologists David Dunning and Justin Kruger published a paper stating that people who perform poorly at a task tend to overestimate their performance ability, while those who excel at the task may slightly underestimate their competence. This beginning-of-year survey showed that general chemistry students are not exempt.
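    The overestimation pattern is simple to quantify: subtract actual from predicted scores, then compare the mean error across the class with the mean error in the bottom quartile. A small sketch with made-up numbers (the scores below are illustrative, not the study's data):

```python
# Hypothetical (predicted, actual) midterm-pretest scores, in percent.
students = [
    (85, 60), (70, 55), (90, 88), (75, 80),
    (65, 40), (95, 90), (80, 62), (60, 50),
]

# Overestimation = predicted minus actual; positive means overconfidence.
errors = [pred - actual for pred, actual in students]
mean_error = sum(errors) / len(errors)

# Bottom quartile by actual score, as in the study's "bottom 25 percent".
by_actual = sorted(students, key=lambda s: s[1])
bottom = by_actual[: len(students) // 4]
bottom_error = sum(p - a for p, a in bottom) / len(bottom)

print(round(mean_error, 1), round(bottom_error, 1))  # → 11.9 17.5
```

    With these invented numbers the whole class overpredicts on average, and the weakest students overpredict by the most, which is the shape of the Dunning-Kruger result the study reports.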

    “They convince themselves that they know what they’re doing when in fact they really don’t,” Atwood says.

    The antidote to such a tendency is engagement in metacognition, or thinking about and recognizing one’s own strengths and limitations. Atwood says that scientists employ metacognition skills to evaluate the course of their research.

    “Once they have got some chunk figured out and realize ‘I don’t understand this as well as I thought I did,’ they will adjust their learning pattern,” he says. After reviewing previous research on metacognition in education, Atwood and Casselman set out to design a system to help chemistry students accurately estimate their performance and make adjustments as necessary.

    Accurate estimation

    In collaboration with Madra Learning, an online homework and learning assessment platform, Casselman and Atwood put together practice materials that would present a realistic test, and asked students to predict their scores on the practice test before taking it. They also implemented a feedback system that would identify the topics the students were struggling with so they could make a personal study plan.

    After a few years of tweaking the feedback system, they added weekly quizzes to the experimental metacognition training to give students more frequent feedback. By the first midterm exam of the 2016 class, Casselman and Atwood could see that the experimental course section’s scores were significantly higher than those of a control section that did not receive metacognition training. “I was ecstatic!” Casselman says.

    By the final exam, students’ predictions of their scores were about right, or even slightly underestimated. Overall, the researchers report, students who learned metacognition skills scored around 4 percent higher on the final exam than their peers in the control section. But the strongest improvement was in the bottom quartile of students, who scored a full 10 percent better, on average, than the bottom quartile of the control section.

    “This will take D and F students and turn them into C students,” Atwood says. “We also see it taking higher-end C students and making them into B students. Higher-end B students become A students.”

    Atwood adds that the students took a nationally standardized test as their final exam. That means that the researchers can compare the U students’ performance to other students nationwide. The bottom quartile of students at the U who received metacognition training scored in the 54th percentile. “So, our bottom students are now performing better than the national average,” Atwood says.

    “They’re not going to be overpredicting their ability,” Casselman says. “They’re going to go in knowing exactly how well they’re going to do and they will have prepared in the areas they knew they were weakest.”

    A cumulative effect

    This study covered students in the first semester of general chemistry. Casselman has now expanded the study into the second semester, meaning some students have had no semesters of metacognition training, some have had one and some have had two. Preliminary analysis suggests that the training may have a cumulative effect across semesters.

    “The students who are successful will ask themselves — what is this question asking me to do?” Atwood says. “How does that relate to what we’re doing in class? Why are they giving me this question? If there’s an equation, why does this equation work? That’s the metacognitive part. If they will kick that in, they will see their grades go straight through the roof.”

    Both Atwood and Casselman say this principle is not limited to chemistry and could be applied throughout campus. It’s a principle universally applicable to learning, and has been hinted at for centuries, including in a Confucian proverb:

    “Real knowledge is to know the extent of one’s ignorance.”


  5. Study suggests eye-catching labels may stigmatize many healthy foods

    by Ashley

    From the University of Delaware press release:

    When customers walk down aisles of grocery stores, they are inundated with labels such as organic, fair-trade and cage free, just to name a few. Labels such as these may be eye-catching but are often free of any scientific basis and stigmatize many healthy foods, a new University of Delaware-led study found.

    The paper, published recently in the journal Applied Economic Perspectives and Policy, examined the good, the bad and the ugly of food labeling to see how labels identifying the process by which food was produced influenced consumer behavior, both positively and negatively.

    By reviewing over 90 academic studies on consumer response to process labels, the researchers found that while these labels satisfy consumer demand for quality assurances and can create value for both consumers and producers, misinterpretation is common and can stigmatize food produced by conventional processes even when there is no scientific evidence those foods cause harm.

    For the poor, in particular, there is danger in misunderstanding which food items are safe, said Kent Messer, the study’s lead author and the Unidel Howard Cosgrove Career Development Chair for the Environment.

    “That has me worried about the poor and those who are food insecure,” said Messer, who is also director of the Center for Experimental and Applied Economics in the College of Agriculture and Natural Resources. “Because now you’re trying to make everything a high-end food choice and frankly, we just want to have healthy food choices. We don’t need to have extra labels that scare away people.”

    Process labels, by definition, focus on the production of a food, but largely ignore important outcomes of the process such as taste or healthiness. According to Messer and his study co-authors, policy changes could help consumers better understand their choices. They argue governments should not impose bans on process labels but rather encourage labels that help document how the processes affect important quality traits, such as calorie count.

    “Relying on process labels alone, on the other hand, is a laissez faire approach that inevitably surrenders the educational component of labeling to mass media, the colorful array of opinion providers, and even food retailers, who may not always be honest brokers of information,” the researchers wrote.

    The Good

    With regards to the positive impact process labels have on consumers, Messer said that consumers are able to more freely align their purchasing decisions with their values and preferences.

    If, for example, a consumer wants to buy fair trade coffee, they are able to do so with greater ease.

    “The good part is that process labels can help bridge the trust between the producer and the consumer because it gives the consumer more insight into the market,” said Messer. “New products can be introduced this way, niche markets can be created, and consumers, in many cases, are willing to pay more for these products. It’s good for industry, consumers are getting what they want, and new players get to find ways of getting a higher price.”

    The Bad

    The bad part is that consumers are already in the midst of a marketplace filled with information that can be overwhelming because of the sheer amount of product choices and information available.

    In addition, when most consumers go to buy food, they are often crunched for time.

    “Human choice tends to be worse when you put time constraints on it,” said Messer. “Maybe you’ve got a child in the aisle with you and now you’re adding this new label and there’s lots of misinterpretation of what it means. The natural label is a classic one which means very little, yet consumers assume it means more than it does. They think it means ‘No GMO’ but it doesn’t. They think it means it is ‘organic’ but it isn’t. This label is not helping them align their values to their food, and they’re paying a price premium but not getting what they wanted to buy.”

    Messer said that another problem is “halo effects,” the overly optimistic misinterpretation of what a label means.

    “If you show consumers a chocolate bar that is labeled as ‘fair trade’, some will tell you that it has lower calories,” Messer said. “But the label is not about calories. Consumers do this frequently with the ‘organic’ label as they think it is healthy for the consumer. Organic practices may be healthier for the farm workers or the environment, but for the actual consumer, there’s very little evidence behind that. You’re getting lots of mixed, wrong messages out there.”

    The Ugly

    As with halo effects, the ugly side of food process labels comes into play when labels sound like they have a positive impact but really have a negative one.

    A label such as “low food miles” might sound nice but could actually be causing more harm than good.

    “Sometimes, where food is grown doesn’t mean that it’s actually the best for climate change,” said Messer.

    Hothouse tomatoes grown in Canada, for example, might have low food miles for Canadian consumers, but because of all the energy expended in an energy-intensive Canadian hothouse, it is probably far better environmentally to grow the tomatoes in Florida and then ship them to Canada.

    “If you just count miles and not true energy use, you can get people paying more money for something that’s actually doing the opposite of what they wanted, which is to get a lower carbon footprint,” said Messer.

    He added that the ugly side of food labeling is that a lot of fear is being introduced into the marketplace that isn’t based on science.

    “When you start labeling everything as ‘free of this’ such as ‘gluten free water,’ you can end up listing stuff that could never have been present in the food in the first place,” Messer said. “These ‘free of’ labels can cause unnecessary fear and cast the conventionally produced food in a harsh, negative light.”

    Since the vast majority of the food market is still conventionally produced, and conventional food is the lower-cost option, there is a danger in taking that safe food and calling it unsafe because of a few new entrants into the market.

    Messer also said there is evidence that food companies are growing wary of investing in science and technology because they don’t know how consumers will respond, or how marketers will attack a product that, because it is new and different, can be labeled as bad or dangerous.

    “We’ve got a lot of mouths to feed in our country and around the world,” Messer said. “We are currently able to feed so many because of advances in agricultural science and technology. If we’re afraid of that now, we have a long-term impact on the poor that could be quite negative in our country and around the world. That’s when I start thinking these process labels could really be ugly.”


  6. Study suggests that learning about slot-machine tricks can help new players avoid gambling addiction

    by Ashley

    From the University of Waterloo press release:

    Novice gamblers who watched a short video about how slot machines disguise losses as wins have a better chance of avoiding gambling problems, according to new research.

    Slot machines present losses disguised as wins (LDWs) with celebratory music and flashing lights, even though players actually won less money than they bet. People can mistakenly believe that they are winning and continue paying to play.
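    The LDW mechanic is easy to make concrete: the machine pays something, so the celebratory feedback fires, but the payout is smaller than the wager, so the spin is a net loss. A minimal sketch (the wager and payout values are hypothetical, not from the study):

```python
def classify_spin(wager, payout):
    """Label a slot-machine outcome; an LDW pays something, but less than the wager."""
    if payout == 0:
        return "loss"
    if payout < wager:
        return "loss disguised as win"  # celebrated like a win, but a net loss
    return "win"  # treating a payout equal to the wager as a win, for simplicity

spins = [(1.00, 0.00), (1.00, 0.40), (1.00, 2.50)]
print([classify_spin(w, p) for w, p in spins])
# → ['loss', 'loss disguised as win', 'win']
```

    Counting only the true wins, rather than every celebrated outcome, is exactly the correction the educational video aims to produce in players' estimates.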

    Researchers at the University of Waterloo found that showing inexperienced gamblers a brief educational video before they play makes them more aware of LDWs and curbs false perceptions about the number of times they won.

    “One of the keys to gambling harm prevention is to curtail misperceptions before they become ingrained in the minds of gamblers,” said Michael Dixon, professor and research director in the Gambling Research Lab at Waterloo. “By exposing these outcomes for what they are, our study shows a way in which we can lead slots gamblers to have a more realistic view of their gambling experiences and possibly prevent problems down the road.”

    Earlier research from the University’s Gambling Research Lab found that LDWs can also lead players to gamble for longer even when they are losing money — a symptom of gambling addiction.

    As part of this study, one group of participants watched an educational video on slot machines and how they present LDWs, while a second group watched a different, unrelated video. All participants then played two games, one with few LDWs and one with many LDWs. They then had to estimate the number of times they won more than they wagered on each game.

    “We found that the video was effective in correcting multiple misperceptions. Players not only remembered their actual number of wins more correctly, but they were also more capable of labelling losses disguised as wins during slot machine play,” said Candice Graydon, lead author and a PhD candidate in Waterloo’s Department of Psychology at the time of this study. “We’d like to assess whether shining the light on LDWs will make gamblers stop playing sooner.”

    On the game with many LDWs, both groups got actual wins on approximately 10 per cent of spins. The group that did not watch the video drastically overestimated their wins, believing they had won on 23 per cent of spins. The group that watched the educational video, however, gave accurate estimates, recalling wins on only 12 per cent of spins. The study suggests that novice players who view the educational video will become more aware of LDWs, which could make them more attentive to other slot features, such as the running total counter. The researchers would like to see the animation available to players both online and on casino floors.


  7. Study suggests learning during development is regulated by an unexpected brain region

    October 19, 2017 by Ashley

    From the Netherlands Institute for Neuroscience – KNAW press release:

    Half a century of research on how the brain learns to integrate visual inputs from the two eyes has provided important insights into critical period regulation, leading to the conclusion that it occurs within the cortex. Scientists have now made the surprising discovery that a brain region that passes on input from the eyes to the cortex also plays a crucial role in opening the critical period of binocular vision.

    During childhood, the brain goes through critical periods in which its learning ability for specific skills and functions is strongly increased. It is assumed that the beginning and ending of these critical periods are regulated in the cortex, the outermost layer of the brain. However, scientists from the Netherlands Institute for Neuroscience discovered that a structure deep in the brain also plays a crucial role in the regulation of these critical periods. These findings, published today in the leading journal Nature Neuroscience, have important implications for understanding developmental problems ranging from a lazy eye to intellectual disability.

    Critical periods

    We can only flawlessly learn skills and functions such as speaking a language or seeing in 3D through binocular vision during critical periods of development. When these developmental forms of learning fail, lifelong problems arise.

    Scientists have been investigating the mechanisms by which critical periods are switched on and off, in the hope of extending or reopening them for the treatment of developmental problems. Research on binocular integration had long pointed to regulation within the cortex, but neuroscientist Christiaan Levelt and his team have now made the surprising discovery that a brain region that passes on input from the eyes to the cortex also plays a crucial role in opening the critical period of binocular vision.

    Using electrophysiological recordings in genetically modified mice, they showed that this brain region, known as the thalamus, contains inhibitory neurons that regulate how efficiently the brain learns to integrate binocular inputs. Levelt: “To improve developmental problems resulting in learning problems during critical periods, reinstating flexibility in the visual cortex may not be sufficient. Scientists and clinicians should not limit themselves to studying cortical deficits alone. They should also focus on the thalamus and the way it preprocesses information before it enters the cortex.”

    Albinism

    The study may also provide some hope for people with albinism, who often have limited binocular vision due to misrouting of inputs from the eyes to the thalamus. Levelt’s team found that in contrast to what is generally assumed, plasticity of binocular vision also occurs in the thalamus itself, suggesting that this might be improved in children with albinism through training.


  8. Marketing study examines what types of searches click for car buyers

    by Ashley

    From the University of Texas at Dallas press release:

    When making important purchase decisions, consumers often consult multiple sources of information.

    A new study from The University of Texas at Dallas examines how consumers allocated their time between offline and internet searches as they shopped for a new automobile, and how those search choices affected their satisfaction with the price paid.

    Dr. Ashutosh Prasad and Dr. Brian Ratchford, marketing professors in the Naveen Jindal School of Management, recently published the study online in the Journal of Interactive Marketing. It will appear in the journal’s November issue.

    “Our data says that it’s very common for a person to spend time searching online and offline prior to making a big purchase,” said Ratchford, who holds the Charles and Nancy Davidson Chair in Marketing.

    “The same information is available both places for the most part — whether it’s a manufacturer’s website or a brochure at the dealer. It’s just a matter of which one a person is more comfortable accessing.”

    Over the long term, consumer searches have been moving online, Ratchford said. It’s more convenient, and consumers can do more on the internet than before, such as take a virtual test drive or configure a vehicle according to their preferences.

    By analyzing survey data on automobile purchases between 2002 and 2012, the researchers compared time spent on internet sources with time spent on offline sources, such as car dealerships.

    Generally, those who search more online tend to spend more time with offline sources, the study found. In contrast, previous studies treated the internet as a substitute for offline sources.

    The analysis also revealed insights into buyer demographics and the impact of national brands, Prasad said.

    Consumers older than 50 spend less time searching, both online and offline, before making a vehicle purchase, according to the study. Many people don’t search at all. They merely buy the same type of automobile they already had.

    “Men were more likely to search online comparison websites than women,” Prasad said. “Married consumers spent more time at dealerships and were more likely to be satisfied with the price paid. The time spent at dealerships was significantly more for buyers of Korean brand cars versus U.S. brands. Knowing even minor differences in behavior can help fine-tune marketing campaigns.”

    Generally, longer search times were associated with higher price satisfaction — except for time spent at the dealer, the researchers found.

    Ratchford said that finding is possibly related to the price negotiation process.

    “We don’t know exactly why, but chances are they’re spending time trying to get a better deal, and they are getting frustrated,” he said.

    The study also found that time spent on manufacturer websites was less effective at generating price satisfaction, possibly because offline manufacturer and dealer sources, such as advertisements and brochures, perform similar functions. Dealer websites remain important because they list inventory and provide online price quotes, researchers said.

    The study’s results may have practical implications for manufacturers and dealers.

    For example, the use of independent websites was associated with reduced time at the dealer. If dealers could identify those who obtain information online, they could save considerable demonstration time, lowering costs as a result.

    Manufacturers also may want to rethink the content of their websites. According to the study, consumers who searched longer on manufacturer websites reduced their time on independent websites but increased their time on dealer websites. This suggests that more informative manufacturer websites can deter consumers from visiting comparison websites to get information.


  9. Study suggests fear of spiders and snakes is deeply embedded in us

    by Ashley

    From the Max Planck Institute for Human Cognitive and Brain Sciences press release:

    Presumably, most people in industrialized countries, especially in central Europe, have never come across a venomous spider or snake in the wild. In most of these countries there are almost no spiders or snakes that pose a threat to humans. Nevertheless, few people would not shiver at the thought of a spider crawling up their arm, however harmless it may be.

    This fear can even develop into an anxiety that limits a person’s daily life. Affected people are constantly on edge and cannot enter a room until it has been declared “spider free,” or will not venture out into nature for fear of encountering a snake. In developed countries, one to five per cent of the population are affected by a genuine phobia of these creatures.

    Until now, it was not clear where this widespread aversion or anxiety stems from. While some scientists assume that we learn this fear from our surroundings as children, others suppose that it is innate. The drawback of most previous studies on this topic was that they were conducted with adults or older children — making it hard to distinguish which behaviour was learnt and which was inborn. Moreover, such studies only tested whether children spot spiders and snakes faster than harmless animals or objects, not whether they show a direct physiological fear reaction.

    Scientists at the Max Planck Institute for Human Cognitive and Brain Sciences (MPI CBS) in Leipzig and Uppsala University, Sweden, recently made a crucial observation: even infants show a stress reaction when they see a spider or a snake. This occurs as early as six months of age, when infants are still largely immobile and have had little opportunity to learn that these animals can be dangerous.

    “When we showed pictures of a snake or a spider to the babies instead of a flower or a fish of the same size and colour, they reacted with significantly bigger pupils,” says Stefanie Hoehl, lead investigator of the underlying study and neuroscientist at MPI CBS and the University of Vienna. “In constant light conditions this change in size of the pupils is an important signal for the activation of the noradrenergic system in the brain, which is responsible for stress reactions. Accordingly, even the youngest babies seem to be stressed by these groups of animals.”

    “We conclude that the fear of snakes and spiders is of evolutionary origin. As in other primates, mechanisms in our brains enable us to identify objects as ‘spider’ or ‘snake’ and to react to them very quickly. This apparently inherited stress reaction in turn predisposes us to learn that these animals are dangerous or disgusting. Combined with other factors, it can develop into a real fear or even a phobia. A strong panicky aversion exhibited by the parents or a genetic predisposition for a hyperactive amygdala, which is important for estimating hazards, can mean that increased attention towards these creatures becomes an anxiety disorder.”

    Interestingly, it is known from other studies that babies do not associate pictures of rhinos, bears or other theoretically dangerous animals with fear. “We assume that the reason for this particular reaction upon seeing spiders and snakes is the coexistence of these potentially dangerous animals with humans and their ancestors for 40 to 60 million years, and therefore much longer than with today’s dangerous mammals. The reaction induced by animal groups feared from birth could have been embedded in the brain for an evolutionarily long time.”

    For modern risks such as knives, syringes or sockets, presumably the same is true. From an evolutionary perspective they have only existed for a short time, and there has been no time to establish reaction mechanisms in the brain from birth. “Parents know just how difficult it is to teach their children about everyday risks such as not poking their fingers into a socket,” Hoehl adds with a smile.


  10. Study suggests “dark triad” personality traits are liabilities in hedge fund managers

    by Ashley

    From the Society for Personality and Social Psychology press release:

    When it comes to financial investments, hedge fund managers higher in “dark triad” personality traits — psychopathy, narcissism, and Machiavellianism — perform more poorly than their peers, according to new personality psychology research. The difference is a little less than 1% annually compared to their peers, but with large investments over several years that slight underperformance can add up. The results appear in the journal Personality and Social Psychology Bulletin, published by the Society for Personality and Social Psychology.

    While the average person doesn’t invest in hedge funds, “We should re-think our assumptions that might favor ruthlessness or callousness in an investment manager,” says Leanne ten Brinke, lead author and a social psychologist at the University of Denver. “Not only do these personality traits not improve performance, our data suggest that they may hinder it.”

    Researchers from the University of Denver and the University of California, Berkeley, measured personality traits of 101 hedge fund managers, then compared the personality types with their investments and financial returns from 2005 to 2015. They compared not only the annualized returns, but also risk measures.

    The researchers found managers with psychopathic traits made less profitable investments than peers, by just under 1% per year, but this can add up over the course of years on large investments. Managers with narcissistic traits took more investment risks to earn the same amount of money as less narcissistic peers.
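    To see why a gap of “just under 1% per year” matters, here is a minimal back-of-the-envelope sketch of how the shortfall compounds. The dollar amounts, the 7% baseline return, and the ten-year horizon are illustrative assumptions, not figures from the study:

    ```python
    # Illustration (not from the study): how a ~1% annual return gap
    # compounds on a large investment over ten years.

    def final_value(principal: float, annual_return: float, years: int) -> float:
        """Value of `principal` after compounding `annual_return` for `years` years."""
        return principal * (1 + annual_return) ** years

    principal = 100_000_000                       # hypothetical $100M allocation
    baseline = final_value(principal, 0.07, 10)   # assumed 7% annual market return
    lagging = final_value(principal, 0.06, 10)    # same return minus ~1 point
    shortfall = baseline - lagging

    print(f"Baseline:  ${baseline:,.0f}")
    print(f"Lagging:   ${lagging:,.0f}")
    print(f"Shortfall: ${shortfall:,.0f}")
    ```

    Under these assumptions the shortfall comes to tens of millions of dollars — the gap is small each year but grows geometrically, because each year's missed gains also stop earning returns.
    
    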

    Some may be surprised that most hedge fund managers rank relatively low on the Dark Triad traits. Even so, the results showed correlations between personality traits, investment success, and risk management.

    These findings build on their earlier work, studying behavioral evidence of Dark Triad traits in U.S. Senators, and finding that “those who displayed behaviors associated with psychopathy were actually less likely to gain co-sponsors on their bills,” says ten Brinke. That study also showed those who displayed behaviors associated with courage, humanity, and justice, “were the most effective political leaders.”

    The results add to a growing body of literature suggesting that Dark Triad personality traits are not desirable in leaders in a variety of contexts, summarizes ten Brinke.

    “When choosing our leaders in organizations and in politics,” write the authors, “we should keep in mind that psychopathic traits — like ruthlessness and callousness — don’t produce the successful outcomes that we might expect them to.”