1. Study suggests smoking, binge drinking and unsafe tanning may be linked in men

    November 11, 2017 by Ashley

    From the University of Connecticut press release:

    Even though men use tanning beds at lower rates than women, men who tan tend to do it in riskier ways, according to a study by researchers at the University of Connecticut. The findings should help public health officials rethink how, and to whom, they’re targeting anti-tanning messages.

    Because the stereotypical tanning salon client is a young woman, almost all the research and health messaging on tanning has focused on that demographic. But the new research, in press in the Journal of the American Academy of Dermatology, found that one in three people who use tanning beds in the U.S. is male.

    Men who tan report using tanning beds with about the same frequency as women, but smoke and binge drink at higher rates than their female counterparts, and they also tend to treat tanning more like an addiction than women do, say the authors. A full 49 percent of men who used tanning beds fit a pattern of addictive behavior around tanning.

    “That was really surprising,” says lead author Sherry Pagoto, a clinical psychologist and director of the UConn Center for mHealth and Social Media. “If they tan with the same frequency as women, why would tanning in men be more addictive?”

    Pagoto and her colleagues conducted a national survey of 636 people who answered “yes” when asked whether they had ever used a tanning bed. They queried the participants about frequency of use, preferred locations to tan, how they felt about tanning, and why they did it.

    The differences between men and women were marked. Women preferred to tan in salons, and said they valued low cost, cleanliness, and convenience. Men who tanned preferred less regulated settings, such as gyms or private homes. They said they liked to tan to accentuate the appearance of their muscles, or as a reward after working out. They also reported smoking tobacco, binge drinking alcohol, and drinking soda significantly more often than women who tan.

    Men also answered “yes” when asked if they ever felt anxious if they weren’t able to tan, tanned to relieve stress, or spent money on tanning even when they couldn’t afford it. They agreed with statements such as “I’d like to quit but I keep going back to it.”

    There’s a population of men who tan and engage in other risky behaviors, and they are very unlike the young women whom health educators assume are most at risk from the health impacts of tanning beds, says Pagoto.

    Pagoto and her team are pursuing another study to delve more deeply into who tans, asking questions about sexual orientation, given that recent research has revealed that homosexual men are just as likely to use tanning beds as young women. The research should help health officials trying to warn the public of the very real connection between tanning beds and skin cancer, she says.

    Sun lamps and tanning beds are legal for adult use in all 50 states, even though the World Health Organization’s International Agency for Research on Cancer classifies them as Group 1 carcinogens, alongside tobacco, radon, and arsenic, and the use of tanning beds has been linked to melanoma, the deadliest form of skin cancer.

    Most current marketing messaging is targeted to teen- and college-aged women, according to Pagoto. Men who tan are unlikely to relate to that type of message. Pagoto is now applying social media marketing principles to develop prevention messages that resonate with specific audience segments.

    “We’re also hoping to spread the message on college campuses, since the tanning industry heavily markets to college students,” she says.


  2. Study suggests willingness to take risks is a personality trait

    November 5, 2017 by Ashley

    From the Universität Basel press release:

    People differ in their willingness to take risks, and an individual’s propensity for risk taking can also vary across domains. However, new evidence shows that there is also a general factor of individual risk preference, which remains stable over time, akin to the general factor of intelligence behind IQ. Researchers from Switzerland and Germany report these findings, based on data from over 1,500 participants, in the journals Science Advances and Nature Human Behaviour.

    Should I invest my money or leave it in my savings account? Have surgery or not? We make decisions like these knowing that they have consequences and involve risks. But what is the nature of the risk preference driving risk-related decisions? Does our risk preference depend on the context, or is it largely consistent across situations? Both are true, according to findings from a large-scale study by researchers at the Max Planck Institute for Human Development in Berlin and the University of Basel, with funding from the Swiss National Science Foundation.

    To assess the risk preferences of 1,507 adults aged between 20 and 36 years, they used three distinct approaches: self-reports on hypothetical risk scenarios, incentivized experimental behavioral tests, and reports of actual risky activities in everyday life. In total, participants completed 39 tests over the course of a day. To examine how stable risk preference is over time, the researchers had 109 participants repeat the tests after six months. Previous studies of risk preference had mostly used just one or a few selected measurement instruments.

    Stable factor over time

    “Our findings indicate that risk-taking propensity has a psychometric structure similar to that of psychological personality characteristics. Like the general factor of intelligence, there is also a general factor of risk preference,” says Dr. Renato Frey from the University of Basel and the Max Planck Institute for Human Development. “In other words, your willingness to take risks may vary across different areas of your life, but it will always be affected by the underlying general factor of risk preference.” Backing up this idea, the study’s findings show that individuals’ general factor of risk preference remains stable over time.

    Another finding of this study is that the hypothetical scenarios and the reports of actual risk-taking behavior both painted a similar picture of an individual’s risk preference. A rather different picture, however, emerged from the experimental behavioral tests. A detailed analysis of these inconsistencies revealed that participants used different decision-making strategies for different behavioral tests, depending on the type of task: whether it presented risk in the context of a game, for example, or in a more abstract form. “These results show that behavioral tests, which tend to be the preferred approach of economists, often give an inconsistent picture of people’s risk preferences that is difficult to explain with unified theories of risk behavior,” says Prof. Dr. Jörg Rieskamp from the University of Basel.

    A better understanding of risk behavior

    These results are important both methodologically and theoretically: “Our work is a wake-up call for researchers, who need to think twice about the various measurement traditions. In particular, there needs to be a better understanding of what exactly the behavioral tasks measure. It seems clear that they don’t assess risk preference across situations,” says Prof. Dr. Ralph Hertwig from the Max Planck Institute for Human Development. “But our finding of a general factor of risk preference — based on self-reports and frequency measures of actual risky activities — suggests that risk preference is a personality characteristic in its own right. This insight will make it possible to examine the biological underpinnings of risk preference in future studies.”


  3. Study looks at how disliked classes affect incidence of college student cheating

    October 13, 2017 by Ashley

    From the Ohio State University press release:

    One of the tactics that discourages student cheating may not work as well in courses that college students particularly dislike, a new study has found.

    Previous research suggests instructors who emphasize mastering the content in their classes encounter less student cheating than those who push students to get good grades.

    But this new study found that an emphasis on mastery is not as strongly related to lower rates of cheating in the classes students list as their most disliked. Students in disliked classes were equally likely to cheat, regardless of whether the instructors emphasized mastery or good grades.

    The factor that best predicted whether a student would cheat in a disliked class was a personality trait: a high need for sensation, said Eric Anderman, co-author of the study and professor of educational psychology at The Ohio State University.

    People with a high need for sensation are risk-takers, Anderman said.

    “If you enjoy taking risks, and you don’t like the class, you may think ‘why not cheat.’ You don’t feel you have as much to lose,” he said.

    Anderman conducted the study with Sungjun Won, a graduate student in educational psychology at Ohio State. It appears online in the journal Ethics & Behavior and will be published in a future print edition.

    The study is the first to look at how academic misconduct might differ in classes that students particularly dislike.

    “You could understand why students might be less motivated in classes they don’t like and that could affect whether they were willing to cheat,” Anderman said.

    The researchers surveyed 409 students from two large research universities in different parts of the country.

    The students were asked to answer questions about the class in college that they liked the least.

    Participants were asked if they took part in any of 22 cheating behaviors in that class, including plagiarism and copying test answers from another student. The survey also asked students their beliefs about the ethics of cheating, their perceptions of how much the instructor emphasized mastery and test scores, and a variety of demographic questions, as well as a measure of sensation-seeking.

    A majority of the students (57 percent) reported a math or science course as their most disliked. Large classes were not popular: Nearly half (45 percent) said their least favorite class had more than 50 students enrolled, while two-thirds (65 percent) said the course they disliked was required for their major.

    The most interesting finding was that an emphasis on mastery or on test scores did not predict cheating in disliked classes, Anderman said.

    In 20 years of research on cheating, Anderman said he and his colleagues have consistently found that students cheated less — and believed cheating was less acceptable — in classes where the goals were intrinsic: learning and mastering the content. They were more likely to cheat in classes where they felt the emphasis was on extrinsic goals, such as successful test-taking and getting good grades.

    This study was different, Anderman said.

    In classes that emphasized mastery, some students still believed cheating was wrong, even in their most-disliked class. But when classes are disliked, the new findings suggest a focus on mastery no longer directly protects against cheating behaviors. Nevertheless, there is still a positive relation between actual cheating and the belief that cheating is morally acceptable in those classes.

    “When you have students who are risk-takers in classes that they dislike, the benefits of a class that emphasizes learning over grades seems to disappear,” he said.

    But Anderman noted that this study reinforced results from earlier studies that refute many of the common beliefs about student cheating.

    “All of the things that people think are linked to cheating don’t really matter,” he said.

    “We examined gender, age, the size of classes, whether it was a required class, whether it was graded on a curve — and none of those were related to cheating once you took into account the need for sensation in this study,” he said. “And in other studies, the classroom goals were also important.”

    The good news is that the factors that cause cheating are controllable in some measure, Anderman said. Classes can be designed to emphasize mastery and interventions could be developed to help risk-taking students.

    “We can find ways to help minimize cheating,” he said.


  4. Study suggests women just as willing to take risks as men

    October 11, 2017 by Ashley

    From the University of Exeter press release:

    Women can be just as risk-taking as men — or even more so — when the conventional macho measures of daring — such as betting vast sums on a football game — are replaced by less stereotypical criteria, according to new research led by the University of Exeter.

    Traditional barometers of high-risk behaviour — such as betting a day’s wages at a high-stakes poker game or riding a motorcycle without a helmet — are often stereotypically masculine. A team of psychologists — Dr Thekla Morgenroth and Professor Michelle Ryan from the University of Exeter, and Professor Cordelia Fine and Anna Genat from the University of Melbourne — devised a new measure of risk-taking that includes activities women might more typically pursue, such as going horseback riding or making risky purchases online.

    They found that research and surveys about risk behaviour have reinforced cultural assumptions about who takes risks. Assessments of heroism, daring or audacity often focus on traditionally masculine behaviour such as gambling or skydiving. When this bias is addressed, women and men rate themselves as equally likely to take risks.

    The findings of the study, published in the journal Social Psychological and Personality Science, raise fresh questions about whether risk-taking is an overwhelmingly masculine personality trait, and whether women are as risk averse as previously suggested.

    Dr Thekla Morgenroth, post-doctoral Research Fellow at the University of Exeter, and lead author of the paper said: “When people imagine a risk-taker, they might picture someone risking their fortune at a high-stakes poker game, an ambitious CEO, or someone crossing the Grand Canyon on a tightrope — but chances are that the person they picture will be a man. In other words, in our culture, risk is strongly associated with masculinity — and our research shows that this also biases how scientists measure risk.”

    Dr Morgenroth went on to say:

    “Traditional measures of risk-taking tend to overlook the fact that women take many risks all the time — they go horseback riding, they challenge sexism, they are more likely to donate their kidneys to family members. In our research, we show that when you ask men and women how likely they are to take more feminine risks, the gender difference in risk-taking suddenly disappears or even reverses with women reporting slightly higher levels of risk-taking. We’ve been overlooking female risk-taking because our measures have been biased.”

    Co-author Professor Cordelia Fine, whose book, Testosterone Rex: Unmaking the Myths of Our Gendered Minds, was this week awarded the Royal Society Insight Investment Science Book Prize 2017, added:

    “Even when we matched the level of physical or financial risk, we found that the choice of risky activity — masculine, or more neutral or feminine — makes a difference to the conclusion about which gender is more risk-taking.”

    The academics surveyed a total of 238 people across two studies, using traditional measures of risk alongside new questions that included activities rated as feminine by a separate group of 99 men and women.

    When stereotypically masculine criteria such as gambling at a casino or going white-water rafting were used, men rated themselves as more likely to engage in risk-taking. However, when new behaviour was included, such as taking a cheerleading class or cooking an impressive but difficult meal for a dinner party, women rated themselves as equally or more likely to take risks.

    Co-author Prof Michelle Ryan noted: “Understanding the nature of gender differences in risk taking is particularly important as the assumption that women are risk averse is often used to justify ongoing gender inequality — such as the gender pay gap and women’s under-representation in politics and leadership.”


  5. Study suggests gender differences in financial risk tolerance linked to different response to income uncertainty

    September 30, 2017 by Ashley

    From the University of Missouri-Columbia press release:

    Prior research has long shown that women are, on average, less risk tolerant in their financial decisions than men. This is a concern as investors with low levels of risk tolerance might have greater difficulty reaching their financial goals and building adequate retirement wealth because they are unlikely to invest in stocks. Now, a researcher from the University of Missouri has found that men and women do not think about investment risks differently. Instead, income uncertainty affects men and women differently, which leads to differences in risk tolerance.

    “Risk tolerance is one of the most important factors that contributes to wealth accumulation and retirement,” said Rui Yao, an associate professor of personal financial planning in the MU College of Human Environmental Sciences. “It is important to understand what causes women to be less risk tolerant so that financial planners can better serve their needs, as women, on average, live longer than men and often need more retirement savings.”

    For her study, Yao examined data from the Survey of Consumer Finances, which is conducted every three years and supported by the Federal Reserve and the U.S. Department of the Treasury. Analyzing data from nearly 2,250 unmarried American individuals, Yao found that women are more likely to have incomes that are uncertain from year to year. Life events such as childbirth, child care, and caregiving often contribute to women’s income uncertainty.

    Yao also found that, on average, women had lower net worth than men. This may have resulted, in part, from women keeping funds in accounts with low returns to buffer the risk of negative income shocks.

    One-quarter of women and one-fifth of men in the sample reported using a financial planner for saving and investment decisions, but the advice given to women may not be in their best interest. Yao suggests that financial planners need to understand the differences in income uncertainty and net worth between men and women and the way in which these differences relate to risk tolerance.

    “Simply telling women to be more risk tolerant is ineffective,” Yao said. “In fact, it might encourage women to take more financial risks than they can tolerate, which could lead to more problems in the future. The difference in investment advice received by men and women requires further investigation, particularly given the new fiduciary standards for financial advisors.”

    Yao’s advice to women is to plan for income uncertainty by creating a financial strategy that fits their needs. For example, when anticipating child-rearing or care-giving periods in the near future, women can and should be more conservative in their investing. When those periods are coming to an end, women should work with their financial planners to make riskier investments.


  6. Chronic lack of sleep increases risk-seeking

    September 12, 2017 by Ashley

    From the University of Zürich press release:

    Young adults have a natural sleep requirement of about 9 hours a day on average, older adults about 7.5 hours. Many people in Western societies, however, get considerably less. In surveys across several industrialized countries, about one-third of respondents reported getting too little sleep. If a young adult sleeps less than 8 hours a night, attention deficits increase, which can have considerable negative consequences. Sleep clinics are seeing a growing number of otherwise healthy people suffering from the effects of insufficient sleep.

    Not enough sleep leads to riskier decision-making

    Researchers at the University of Zurich and the University Hospital Zurich have now identified a further critical consequence of chronic sleep deprivation: increased risk-seeking. The sleep and neuroeconomics researchers studied the risk behavior of 14 healthy male students aged 18 to 28. When the students slept only 5 hours a night for a week, they displayed clearly riskier behavior than after a normal sleep duration of about 8 hours. Twice a day, they had to choose between a specified amount of money paid out with a given probability and a lower amount of money paid out for sure. The riskier the decision, the higher the possible prize, but also the risk of getting nothing.
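    The choices in this task trade a higher possible payoff against the chance of ending up with nothing, which can be sketched as a simple expected-value comparison. Here is a minimal illustration; the amounts and probability below are invented for the example, not taken from the study:

```python
def expected_value(probability: float, payout: float) -> float:
    """Expected monetary value of a gamble that pays `payout` with `probability`, else nothing."""
    return probability * payout

# Hypothetical trial (numbers invented for illustration): a 40% chance of
# 50 francs versus a guaranteed 15 francs.
risky_ev = expected_value(0.40, 50.0)
safe_amount = 15.0

# A risk-neutral chooser takes whichever option has the higher expected value.
# In the study, sleep-deprived participants increasingly favored risky options,
# accepting a real chance of getting nothing.
choice = "risky" if risky_ev > safe_amount else "safe"
```

    In this invented trial the gamble has the higher expected value, so even a cautious chooser might take it; the study's point is that sleep-deprived participants drifted toward risky options regardless, without noticing the shift.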

    Riskier behavior remains unnoticed

    While a single sleepless night had no effect on risk-seeking, 11 of the 14 subjects behaved significantly more riskily, and increasingly so, as the week of reduced sleep went on. An additional finding is particularly alarming: the students assessed their risk-taking behavior as no different than under regular sleep conditions. “We therefore do not notice ourselves that we are acting riskier when suffering from a lack of sleep,” emphasizes Christian Baumann, professor of neurology and head of the Clinical Research Priority Program (CRPP) “Sleep and Health” at UZH. According to the authors of the study, we should all therefore strive for sufficient sleep, especially political and economic leaders who make far-reaching decisions daily. “The good news is,” Baumann says, “that, in the high-powered world of managers, getting enough sleep is increasingly being seen as desirable.”

    Lack of recovery in important regions of the brain

    For the first time, the researchers have proven that a low depth of sleep in the right prefrontal cortex is directly connected with higher risk-seeking behavior. This part of the cerebral cortex has already been associated with risk-taking behavior in earlier studies. “We assume that behavioral changes occur for anatomical-functional reasons to some extent as a result of the right prefrontal cortex not being able to recover properly due to a chronic lack of sleep,” Baumann concludes.


  7. High moral reasoning associated with increased activity in the human brain’s reward system

    September 10, 2017 by Ashley

    From the University of Pennsylvania School of Medicine press release:

    Individuals who have a high level of moral reasoning show increased activity in the brain’s frontostriatal reward system, both during periods of rest and while performing a sequential risk-taking and decision-making task, according to a new study from researchers at the Perelman School of Medicine and the Wharton School of the University of Pennsylvania, Shanghai International Studies University in Shanghai, China, and Charité Universitätsmedizin in Berlin, Germany. The findings, published this month in Scientific Reports, may help researchers understand how brain function differs in individuals at different stages of moral reasoning, and why some individuals who reach a high level of moral reasoning are more likely to engage in certain “prosocial” behaviors — such as performing community service or giving to charity — based on more advanced principles and ethical rules.

    The study draws on Lawrence Kohlberg’s theory of stages of moral development, which proposes that individuals go through different stages of moral reasoning as their cognitive abilities mature. According to the researchers, Kohlberg’s theory implies that individuals at a lower level of moral reasoning judge moral issues primarily based on personal interests or adherence to laws and rules, whereas individuals at higher levels judge moral issues based on deeper principles and shared ideals.

    The researchers’ previous work found an association between high levels of moral reasoning and gray matter volume, establishing a critical link between moral reasoning and brain structure. This more recent study sought to discover whether a link exists between moral reasoning and brain function.

    In this study, the researchers aimed to investigate whether the development of morality is associated with measurable aspects of brain function. To answer this question, they tested moral reasoning in a large sample of more than 700 Wharton MBA students and examined brain reward system activity in a subset of 64 students, both at rest and while performing a task. According to Hengyi Rao, PhD, a research assistant professor of Cognitive Neuroimaging in Neurology and Psychiatry in the Perelman School of Medicine and senior author of the study, the team observed considerable individual differences in moral development levels and brain function in this relatively homogeneous, well-educated group of MBA subjects.

    “It is well established in the literature that the brain reward system is involved in moral judgment, decision making, and prosocial behavior. However, it remains unknown whether brain reward system function can be affected by stages of moral development,” Rao said. “To our knowledge, this study is the first to demonstrate the modulation effect of moral reasoning level on human brain reward system activity. Findings from our study provide new insights into the potential neural basis and underlying psychological processing mechanism of individual differences in moral development.”

    The finding of increased brain reward system activity in individuals at a high level of moral reasoning suggests the importance of positive motivations towards others in moral reasoning development, rather than selfish motives. These findings also support Kohlberg’s theory that higher levels of moral reasoning tend to be promotion and other-focused (do it because it is right) rather than prevention or self-focused (do not do it because it is wrong).

    “Our study documents brain function differences associated with higher and lower levels of moral reasoning. It is still unclear whether the observed brain function differences are the cause or the result of differential levels of moral reasoning,” explained Diana Robertson, PhD, a James T. Riady professor of Legal Studies and Business Ethics at the Wharton School and a co-author of the study. “However, we believe that both factors of nurture, such as education, parental socialization and life experience, and factors of nature, like biological or evolutionary basis, the innate capacities of the mind, and the genetic basis may contribute to individual differences in moral development.”

    The researchers say future studies could expand on this work by assessing to what extent individual differences in moral reasoning development depend on in-born differences or learned experience, and whether education can further promote moral reasoning stage in individuals even past the age at which structural and functional brain maturation is complete.


  8. Telling people not to ‘down’ drinks could make them drink more

    September 1, 2017 by Ashley

    From the University of Exeter press release:

    Campaigns designed to stop young people “bolting” drinks can be ineffective and can even make them more likely to do it, new research suggests.

    Scientists from the University of Exeter and the University of Queensland examined reactions to a poster warning of the consequences of bolting (downing an alcoholic drink in one) and found it had no effect on people’s future intentions.

    And when a statement was added saying other people disapproved of bolting, study participants reported stronger intentions to bolt in the future.

    However, changing this to a message saying most people “do not bolt drinks on a night out” was effective.

    “Many young people overestimate the extent to which their peers both approve of and engage in risky drinking behaviours,” said study author Dr Joanne Smith, of the University of Exeter.

    “One way to tackle risky drinking is to try to correct these misperceptions through health campaigns, such as posters.

    “In our research, we wanted to explore what kinds of messages are more effective in changing people’s intentions to bolt.

    “Our results highlight the potentially harmful effects of exposure to what’s called an ‘injunctive norm’ — a message about the approval or disapproval of others.

    “Meanwhile, a ‘descriptive norm’ — telling people what others do rather than what they think — had a positive impact.”

    The study is published in the journal Addiction Research and Theory.

    Professor Charles Abraham, of the University of Exeter Medical School, said: “This demonstrates how careful we need to be in selecting the right message in campaigns, and evaluating them before wider dissemination, as poorly designed campaigns, however well-intentioned, can backfire.”

    The research consisted of three studies, in which volunteers (221 in total) saw the poster or did not, and then either received or did not receive messages about what their peers thought or how they behaved.

    In one study, some participants received an accurate message saying 70% of their peers “disapprove of bolting,” and in another some received an accurate message saying 65% of their peers “do not bolt drinks on a night out.”

    They all then completed identical questionnaires to measure their perceptions of group norms related to bolting, and their own intentions to do it in the future.

    The researchers point out that beliefs about how other people behave are often the “best predictor” in terms of general drinking behaviour and binge drinking, but note that using these beliefs to change behaviour needs to be done carefully to ensure campaigns have the desired effect.


  9. Study suggests risk-taking in teens is not due to a brain development deficit

    August 31, 2017 by Ashley

    From the Annenberg Public Policy Center of the University of Pennsylvania press release:

    A popular theory in recent neuroscience proposes that slow development of the prefrontal cortex — and its weak connectivity with brain reward regions — explains teenagers’ seemingly impulsive and risky behavior. But an extensive literature review to be published in the journal Developmental Cognitive Neuroscience challenges that interpretation.

    The researchers examined the evidence behind that argument and found that much of it misinterpreted adolescent exploratory behavior as impulsive and lacking in control. Instead, the review suggests that much of what looks like adolescent impulsivity is behavior that is often guided by the desire to learn about the world.

    “Not long ago, the explanation for teenage behavior was raging hormones,” said lead author Daniel Romer, Ph.D., research director of the Annenberg Public Policy Center of the University of Pennsylvania. “Now, it’s that the prefrontal cortex isn’t fully developed. Neuroscientists were quick to interpret what appeared to be a characteristic of the developing brain as evidence of stereotypes about adolescent risk taking. But these behaviors are not symptoms of a brain deficit.”

    In their article, now posted online, the authors note that the brain development theory fails to take into account the implications of different kinds of risk taking. Teens have a heightened attraction to novel and exciting experiences, known as sensation seeking, which peaks during adolescence. But teens who exhibit that tendency alone are not necessarily more likely to suffer from health issues like substance use or gambling addiction. In fact, the authors noted that the rise in adolescent levels of the neurotransmitter dopamine, which may underlie the increased drive for sensation seeking, also supports the brain’s ability to exert greater control and to learn from experience.

    “What’s happening is that adolescents lack experience,” Romer said. “So they’re trying things out for the first time — like learning how to drive. They’re also trying drugs, deciding what to wear and who to hang out with. For some youth, this leads to problems. But when you’re trying things for the first time, you sometimes make mistakes. Researchers have interpreted this as a lack of control when for most youth, it’s just exploration.”

    Brain development and risk taking

    In their article, Romer and his co-authors say that the stereotype of the risky adolescent is based more on the rise of such behavior in adolescence than on its prevalence. “For the vast majority of adolescents,” the researchers write, “this period of development passes without substance dependence, sexually transmitted infection, pregnancy, homicide, depression, suicide, or death due to car crashes.”

    It’s a smaller subset of teens — those who exhibit impulsive behavior and have weak cognitive control — who are most at risk of unhealthy outcomes. Teens with impulse control problems can often be identified at ages four or five, and they are disproportionately likely to experience the hazards of adolescence and beyond, including higher rates of injuries and illnesses from car crashes, violence, and sexually transmitted infections, the authors say.

    “Further research is clearly needed to understand the brain development of youth who are at risk for adverse outcomes, as abnormalities of brain development are certainly linked to diverse neuropsychiatric conditions,” said co-author Theodore Satterthwaite, M.D., a faculty member in the Department of Psychiatry at the Perelman School of Medicine at the University of Pennsylvania. “This research will help us to understand not only what makes adolescence a period of growth but also of risk.”

    An alternative model

    The authors propose an alternative model that emphasizes the role that risk taking and the experience gained by it play in adolescent development. This model explains much of the apparent increase in risk taking by adolescents as “an adaptive need to gain the experience required to assume adult roles and behaviors.” That experience eventually changes the way people think about risk, making it more “gist-like” or thematic and making them more risk averse.

    “Recent meta-analyses suggest that the way individuals think about risks and rewards changes as they mature, and current accounts of brain development must take these newer ideas into account to explain adolescent risk taking,” said co-author Valerie Reyna, Ph.D., director of the Human Neuroscience Institute at Cornell University.

    Romer added, “The reason teens are doing all of this exploring and novelty seeking is to build experience so that they can do a better job in making the difficult and risky decisions in later life — decisions like ‘Should I take this job?’ or ‘Should I marry this person?’ There’s no doubt that this period of development is a challenge for parents, but that doesn’t mean that the adolescent brain is somehow deficient or lacking in control.”


  10. Well-designed visual aids improve risk understanding

    August 17, 2017 by Ashley

    From the University of Oklahoma press release:

    In a study newly published in the scientific journal Human Factors, University of Oklahoma professor Edward T. Cokely shows that informed decision making depends on the ability to accurately evaluate and understand information about risk. His state-of-the-science review of the literature concludes that visual aids are beneficial for diverse people with different levels of numeracy and graph literacy, and identifies five categories of practical, evidence-based guidelines for the evaluation and design of visual aids.

    “It is striking to see how effective visual aids can be for diverse people facing complex, life-changing decisions, including physicians, patients and their families,” Cokely said.

    Cokely, Presidential Research Professor and associate professor of psychology in the OU Department of Psychology and National Institute of Risk and Resilience, collaborated with Rocio Garcia-Retamero of the University of Granada in Spain on the “how to” review of building visual aids that promote understanding and good decision making. The review covered research from January 1995 to April 2016; 36 publications provided data on 27,885 diverse participants from 60 different countries. It concluded that well-designed visual aids tend to be highly effective tools for improving informed decision making among diverse decision makers.

    Cokely and Garcia-Retamero reviewed literature concentrated in health and medical decision making that included findings on the link between skills and quality outcomes. Next, the researchers presented findings on the psychological, social and technological factors which shape the influence of numeracy on risk literacy, decision making and health outcomes. Lastly, they presented a review of research investigating the influence of skills on the benefits of visual aids.

    Visual aids are graphical representations of numerical expressions of probability, such as icon arrays and bar and line charts, and have been used to communicate risk information. However, not all visual aids are equally effective. Visual aids provide an efficient means of risk communication when they are transparent, that is, when they promote unbiased risk understanding and evaluation. This means the visual aid is well defined and accurately and clearly represents the essential risk information.

    The researchers then focused on individual differences in two relevant skills: numeracy and graph literacy. Numeracy, the ability to use mathematical skills to solve everyday problems, including statistical numeracy, has been found to be one of the strongest single predictors of general decision-making skill and risk literacy. Graph literacy, the ability to evaluate and extract data and meaning from graphical representations of numerical information, is another essential component of risk literacy.

    A review of static visual aids to improve risk literacy and promote healthy behavior focused on studies involving people with different levels of numeracy that included a control condition comparing visual aids with the same numerical information presented in written text. Eighty-eight percent of the studies showed static visual aids tend to be beneficial. Static visual aids were helpful for people with low numeracy as long as they had moderate-to-high graph literacy.

    One theme that emerged across some studies is that, on average, “less is more.” Very simple icon arrays, paired with clear explanations of what the information means, can improve understanding. People with different levels of numeracy and graph literacy like, trust and prefer simple icon arrays, which offer an efficient means of reaching individuals across skill levels. Brief training in the use of icon arrays maximizes their potential benefits and helps reach vulnerable groups of people with limited graph literacy.
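    To make the idea concrete, here is a minimal illustrative sketch (not code from the study) of the kind of simple icon array the researchers describe: each icon represents one person, and affected individuals are shown distinctly from unaffected ones. The function name and icon choices are hypothetical.

    ```python
    # Illustrative sketch: a text-based icon array for a risk statement
    # such as "8 out of 100 patients experience side effects".
    def icon_array(affected, total=100, per_row=10,
                   affected_icon="X", unaffected_icon="o"):
        """Return a text icon array: one icon per person, affected shown first."""
        icons = [affected_icon] * affected + [unaffected_icon] * (total - affected)
        rows = [" ".join(icons[i:i + per_row]) for i in range(0, total, per_row)]
        return "\n".join(rows)

    # Example: 8 affected out of 100, laid out in a 10 x 10 grid.
    print("8 of 100 affected:")
    print(icon_array(8))
    ```

    Displaying the full denominator alongside the affected cases is what makes the representation transparent: the viewer sees the part and the whole at once, rather than having to compute the proportion from a number in text.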

    Visual aids improve accuracy of risk understanding in part because they increase the likelihood that people deliberate more about the relevant risks and trade-offs. Because visual aids cause relatively robust changes in risk understanding by shaping and fine-tuning knowledge representation in long-term memory, visual aids also tend to give rise to more enduring changes in attitudes and behavioral intentions, which can directly affect decision making and healthy behavior. Based on all relevant, available scientific data from the last 20 years, findings indicate that icon arrays tend to be the best “all purpose” type of visual aids.