1. Study suggests mirror neuron activity predicts people’s decision-making in moral dilemmas

    January 13, 2018 by Ashley

    From the University of California – Los Angeles press release:

    It is wartime. You and your fellow refugees are hiding from enemy soldiers, when a baby begins to cry. You cover her mouth to block the sound. If you remove your hand, her crying will draw the attention of the soldiers, who will kill everyone. If you smother the child, you’ll save yourself and the others.

    If you were in that situation, which was dramatized in the final episode of the ’70s and ’80s TV series “M*A*S*H,” what would you do?

    The results of a new UCLA study suggest that scientists could make a good guess based on how the brain responds when people watch someone else experience pain. The study found that those responses predict whether people will be inclined to avoid causing harm to others when facing moral dilemmas.

    “The findings give us a glimpse into what is the nature of morality,” said Dr. Marco Iacoboni, director of the Neuromodulation Lab at UCLA’s Ahmanson-Lovelace Brain Mapping Center and the study’s senior author. “This is a foundational question to understand ourselves, and to understand how the brain shapes our own nature.”

    In the study, which was published in Frontiers in Integrative Neuroscience, Iacoboni and colleagues analyzed mirror neurons, brain cells that respond equally when someone performs an action or simply watches someone else perform the same action. Mirror neurons play a vital role in how people learn through mimicry and feel empathy for others.

    When you wince while seeing someone experience pain — a phenomenon called “neural resonance” — mirror neurons are responsible.

    Iacoboni wondered if neural resonance might play a role in how people navigate complicated problems that require both conscious deliberation and consideration of another’s feelings.

    To find out, researchers showed 19 volunteers two videos: one of a hypodermic needle piercing a hand, and another of a hand being gently touched by a cotton swab. During both, the scientists used a functional MRI machine to measure activity in the volunteers’ brains.

    Researchers later asked the participants how they would behave in a variety of moral dilemmas, including the scenario involving the crying baby during wartime, the prospect of torturing another person to prevent a bomb from killing several other people and whether to harm research animals in order to cure AIDS.

    Participants also responded to scenarios in which causing harm would make the world worse — inflicting harm on another person in order to avoid two weeks of hard labor, for example — to gauge their willingness to cause harm for moral reasons and for less-noble motives.

    Iacoboni and his colleagues hypothesized that people who had greater neural resonance than the other participants while watching the hand-piercing video would also be less likely to choose to silence the baby in the hypothetical dilemma, and that proved to be true. Indeed, people with stronger activity in the inferior frontal cortex, a part of the brain essential for empathy and imitation, were less willing to cause direct harm, such as silencing the baby.

    But the researchers found no correlation between people’s brain activity and their willingness to hypothetically harm one person in the interest of the greater good — such as silencing the baby to save more lives. Those decisions are thought to stem from more cognitive, deliberative processes.

    The study confirms that genuine concern for others’ pain plays a causal role in moral dilemma judgments, Iacoboni said. In other words, a person’s refusal to silence the baby is due to concern for the baby, not just the person’s own discomfort in taking that action.

    Iacoboni’s next project will explore whether a person’s decision-making in moral dilemmas can be influenced by decreasing or enhancing activity in the areas of the brain that were targeted in the current study.

    “It would be fascinating to see if we can use brain stimulation to change complex moral decisions through impacting the amount of concern people experience for others’ pain,” Iacoboni said. “It could provide a new method for increasing concern for others’ well-being.”

    The research could point to a way to help people with mental disorders such as schizophrenia that make interpersonal communication difficult, Iacoboni said.

    The study’s first author is Leo Moore, a UCLA postdoctoral scholar in psychiatry and biobehavioral sciences. Paul Conway of Florida State University and the University of Cologne, Germany, is the paper’s other co-author.

    The study was supported by the National Institute of Mental Health, the Brain Mapping Medical Research Organization, the Brain Mapping Support Foundation, the Pierson-Lovelace Foundation, the Ahmanson Foundation, the William M. and Linda R. Dietel Philanthropic Fund at the Northern Piedmont Community Foundation, the Tamkin Foundation, the Jennifer Jones-Simon Foundation, the Capital Group Companies Charitable Foundation, the Robson family, and the Northstar Fund.


  2. Study suggests consumers willfully “forget” when products have ethical issues

    January 5, 2018 by Ashley

    From the Ohio State University press release:

    Many consumers have found a way to cope with the knowledge that products they like have been made unethically: They simply forget they ever knew it.

    In a series of studies, researchers found that consumers conveniently “forgot” that brands of desks were made with wood from rainforests or that jeans may have been made with child labor.

    In fact, consumers not only forget the uncomfortable truth, but sometimes misremember the facts and believe that the offending product was made ethically.

    “It’s not necessarily a conscious decision by consumers to forget what they don’t want to know,” said Rebecca Reczek, co-author of the study and associate professor of marketing at The Ohio State University’s Fisher College of Business.

    “It is a learned coping mechanism to tune out uncomfortable information because it makes their lives easier.”

    The study appears online in the Journal of Consumer Research and will be published in a future print edition.

    In one study, 236 college students were asked to read and memorize descriptions of six made-up brands of desks. The descriptions discussed quality, price and an ethical dimension — the source of the wood. Participants read that the wood either came from sustainable tree farms or endangered rainforests.

    When asked immediately after memorization, participants accurately recalled whether the wood came from rainforests or tree farms in 94 percent of the cases.

    But those memories faded quickly. After the participants completed 15 to 20 minutes of tasks meant to distract them, they were then given a sheet of paper with all six desk brand names and were asked to write down as much as they could remember about each desk.

    Participants were right 60 percent of the time about desks made from tree farms, but right only 45 percent of the time about desks made of rainforest wood.

    “It is not that the participants didn’t pay attention to where the wood came from. We know that they successfully memorized that information,” said study co-author Daniel Zane, a doctoral student in marketing at Ohio State.

    “But they forget it in this systematic pattern. They remembered the quality and price attributes of the desks. It is only the ethical attributes that cause people to be willfully ignorant.”

    A second study involved a national sample of 402 people who participated online. Here, participants were asked to put together an outfit that included a pair of jeans. About half of the participants saw a brand of jeans that was described as being made with child labor while the other half saw a brand of jeans made ethically, with no child labor.

    Results were similar to the first study: People who saw the jeans made with child labor were much less likely to remember this information than people who saw a brand of jeans made with adult labor.

    Why is forgetting ethical information so popular with consumers? Well, another study suggested that it makes people feel a little better about themselves.

    In this study, the researchers had participants consider a hypothetical person named Chris who bought a pair of jeans made with child labor. In some cases, participants were told Chris had earlier learned that the jeans were made with child labor, but forgot when making the purchase. In other cases, participants were told Chris remembered the information, but ignored it when buying the jeans.

    “What we found is that people judged the person who forgot the ethical information as more moral than the person who ignored the information,” Reczek said.

    “So, for most people, forgetting is seen as the more acceptable coping strategy.”

    If you really want to be an ethical consumer, there are steps you can take, Zane said.

    “You need to realize that this memory bias exists and eliminate memory from your buying process,” he said.

    “Don’t put something in your online shopping cart that you know was made unethically and say you’ll think about it. By the time you come back, there is a good chance you will have forgotten what troubled you in the first place.”

    There’s a lesson for ethical companies too, Reczek said.

    “Don’t make your customers rely on memory. Make sure you have reminders at the point of purchase that you’re an ethical brand,” she said.


  3. Study finds high-pressure expectations may lead to unethical behavior

    November 30, 2017 by Ashley

    From the University of Georgia press release:

    It can happen in the branch office or the boardroom. Volkswagen did it to pass emissions tests. Wells Fargo did it to squeeze more profits from its customers. Some school districts have done it to boost their standardized test scores. Workplace cheating is a real and troublesome phenomenon, and new research from the University of Georgia explains how it starts — and how employers can help prevent it.

    “It’s the desire for self-protection that primarily causes employees to cheat,” said Marie Mitchell, an associate professor of management in UGA’s Terry College of Business. “Employees want to look valuable and productive, especially if they think their job is at risk.”

    In a recently published paper in the Journal of Applied Psychology, Mitchell and her co-authors examined performance pressure in the workplace and the behaviors that result from it. They found when employees feel their job depends on meeting high benchmarks, some fudge results in order to stay employed.

    For example, when Wells Fargo employees were told to meet new goals that included opening sky-high numbers of new accounts, thousands began to open fraudulent accounts in order to meet their quotas. Wells Fargo was fined $185 million in 2016 and publicly scorned as a result. Similar scenarios can play out across all industries, Mitchell said.

    “We’ve seen it in finance, we’ve seen it with educators and test scores, we’ve seen it in sports, it’s everywhere,” she said. “Performance pressure elicits cheating when employees feel threatened. Even though there is the potential of getting a good payoff if they heighten their performance, there’s also significant awareness that if they don’t, their job is going to be at risk.”

    This is especially true when employees feel they cannot meet expectations any other way. That perception leads to anger, which in turn leads to unethical behavior, Mitchell said. This crucible of pressure and anger causes employees to focus on doing what is beneficial to them — even if it harms others.

    “Angry and self-serving employees turn to cheating to meet performance demands. It’s understandable,” Mitchell said. “There’s a cycle in which nothing is ever good enough today. Even if you set records last month, you may get told to break them again this month. People get angry about that, and their self-protective reflex is elicited almost subconsciously.”

    An expert on “dark side” behaviors and a former human resources manager, Mitchell has been interested in cheating phenomena since her graduate school days.

    “There were individuals in law school who would race to get to law journals before anyone else and tear out certain pages so that other students couldn’t be as prepared in class,” she said. “So I know cheating happens. I’ve seen it. But the research on this has taken place in behavioral labs, and that doesn’t always translate well to the workplace. I wanted to find out a bit more about what actually happens at work.”

    To do so, her research team devised three studies. The first created a measure of workplace cheating behavior through a nationwide survey that asked participants about cheating behavior at work — what it is and if they’d seen it. The second and third studies were time-separated field surveys in which employees were asked about their performance pressure at one point in time, then were asked about their feelings and perceptions of the pressure and their cheating behaviors about a month later.

    The findings led to a breakthrough. The key, Mitchell said, is for managers to understand the potential threat of performance pressure to employees. If they coach employees on how to view pressure as non-threatening and focus on how to enhance performance ethically, cheating may be prevented.

    “It could be that if you pair performance pressure with ethical standards and give employees the right kind of assurance within the workplace, it can actually motivate great performance,” she said. “There have been many scholars who have argued that you need to stretch your employees because it motivates them, makes them step outside of their normal boxes and be more creative. Our research says that it could, but it also might cause them to act unethically.”

    The paper, “Cheating Under Pressure: A Self-Protection Model of Workplace Cheating Behavior,” was co-authored by Michael D. Baer of Arizona State University, Maureen L. Ambrose and Robert Folger of the University of Central Florida and Noel F. Palmer of the University of Nebraska-Kearney.


  4. Study suggests willingness to support corporate social responsibility initiatives contingent on perception of boss’ ethics

    November 14, 2017 by Ashley

    From the University of Vermont press release:

    A new study shows that people who perceive their employer as committed to environmental and community-based causes will, in turn, engage in green behavior and local volunteerism, with one caveat: their boss must display similarly ethical behavior.

    The forthcoming study in the Journal of Business Ethics by Kenneth De Roeck, assistant professor at the University of Vermont, and Omer Farooq of UAE University, shows that people who work for socially and environmentally responsible companies tend to identify more strongly with their employer, and as a result, increase their engagement in green and socially responsible behaviors like community volunteerism.

    “When you identify with a group, you tend to adopt its values and goals as your own,” says De Roeck. “For example, if you are a fan who identifies with the New England Patriots, their objective to win the Super Bowl becomes your objective too. If they win it, you will say ‘we,’ rather than ‘they,’ won the Super Bowl, because being a fan of the New England Patriots became part of your own identity.”

    That loyalty goes out the window, however, if employees don’t perceive their immediate supervisor as ethical, defined as conduct that shows concern for how their decisions affect others’ well-being. Results show that the propensity for the company’s environmental initiatives to foster employees’ green behaviors disappears if they think their boss has poor ethics. Employees’ engagement in volunteer efforts in support of their company’s community-based initiatives also declines if they believe their boss is not ethical, though not as dramatically.

    “When morally loaded cues stemming from the organization and its leaders are inconsistent, employees become skeptical about the organization’s ethical stance, integrity, and overall character,” says De Roeck. “Consequently, employees refrain from identifying with their employers, and as a result, significantly diminish their engagement in creating social and environmental good.”

    Companies as engines for positive social change

    Findings of the study, based on surveys of 359 employees at 35 companies in the manufacturing industry (consumer goods, automobile, and textile), could provide insight for companies failing to reap the substantial societal benefits of CSR.

    “This isn’t another story about how I can get my employees to work better to increase the bottom line, it’s more about how I can get employees to create social good,” says De Roeck, whose research focuses on the psychological mechanisms explaining employees’ reactions to, and engagement in, CSR. “Moreover, our measure of employees’ volunteer efforts consists of actions that extend well beyond the work environment, showing that organizations can be a strong engine for positive social change by fostering, through the mechanism of identification, a new and more sustainable way of life to their employees.”

    De Roeck says organizations wanting to boost their social performance by encouraging employee engagement in socially responsible behaviors need to ensure that employees perceive their ethical stance and societal engagement as authentic. To do so, and to avoid any perception of greenwashing – the promotion of green-based initiatives despite not practicing them fully – organizations should strive to ensure consistency between CSR engagement and leaders’ ethical stance by training supervisors in social and ethical responsibility. Organizations should also take care to hire and promote into leadership positions individuals who fit with the company’s CSR strategy and ethical culture.

    “Organizations should not treat CSR as an add-on activity to their traditional business models, but rather as something that should be carefully planned and integrated into the company strategy, culture, and DNA,” says De Roeck. “Only then will employees positively perceive CSR as a strong identity cue that will trigger their identification with the organization and, as a result, foster their engagement in such activities through socially responsible behaviors.”


  5. Study suggests middle managers sometimes turn to unethical behavior when facing unrealistic expectations

    October 15, 2017 by Ashley

    From the Penn State press release:

    While unethical behavior in organizations is often portrayed as flowing down from top management, or creeping up from low-level positions, a team of researchers suggests that middle management can also play a key role in promoting widespread unethical behavior among subordinates.

    In a study of a large telecommunications company, researchers found that middle managers used a range of tactics to inflate their subordinates’ performance and deceive top management, according to Linda Treviño, distinguished professor of organizational behavior and ethics, Smeal College of Business, Penn State. The managers may have been motivated to engage in this behavior because leadership instituted performance targets that were unrealizable, she added.

    When creating a new unit, a company’s top management usually also sketches out the unit’s performance routines — for example, they set goals, develop incentives and designate certain responsibilities, according to the researchers. Middle managers are then tasked with carrying out these new directives. But, in the company studied by the researchers, this turned out to be impossible.

    “What we found in this particular case — but I think it happens a lot — is that there were obstacles in the way of achieving these goals set by top management,” said Treviño. “For a variety of reasons, the goals were unrealistic and unachievable. The workers didn’t have enough training. They didn’t feel competent. They didn’t know the products well enough. There weren’t enough customers and there wasn’t even enough time to get all the work done.”

    Facing these obstacles, middle management enacted a series of moves designed to deceive top management into believing that teams were actually meeting their goals, according to Treviño, who worked with Niki A. den Nieuwenboer, assistant professor of organizational behavior and business ethics, University of Kansas; and João Vieira da Cunha, associate professor, IESEG School of Management.

    “It became clear to middle managers that there was no way their people could meet these goals,” said Treviño. “They got really creative because their bonuses are tied to what their people do, or because they didn’t want to lose their jobs. Middle managers exploited vulnerabilities they identified in the organization to come up with ways to make it look like their workers were achieving goals when they weren’t.”

    According to the researchers, these strategies included coopting sales from another unit, portraying orders as actual sales and ensuring that the flow of sales data reported in the company’s IT system looked normal. Middle managers created some of these behaviors on their own, but they also learned tactics from other managers, according to the researchers, who report their findings in Organization Science, online now.

    Middle managers also used a range of tactics to coerce their subordinates to keep up the ruse, including rewards for unethical behavior and public shaming for those who were reluctant to engage in the unethical tactics.

    “Interestingly, what we didn’t see is managers speaking up, we didn’t see them pushing back against the unrealistic goals,” said Treviño. “We know a lot about what we refer to as ‘voice’ in an organization and people are fearful and they tend to keep quiet for the most part.”

    The researchers suggested that the findings could offer insights into other scandals, such as the misconduct at Wells Fargo and at U.S. Veterans Administration hospitals. They added that top management in organizations should do more in-depth work to institute realistic goals and incentives.

    “Everybody has goals and goals are motivating, but there are nuances,” said Treviño. “What goal-setting theory says is that if you’re not committed to the goal because you think it’s unachievable, you’ll just throw your hands up and give up. Most front-line employees wanted to do that. But the managers intervened, coercing them to engage in the unethical behaviors.”

    This type of deception can harm an organization in several ways: bonuses are awarded on the basis of deceptive performance, hurting the bottom line, and upper management makes strategic decisions and allocates resources based on the unit’s feigned success.

    “How can you lead a company if the performance information you get is fake? You end up making bad decisions,” den Nieuwenboer said.

    One of the researchers gathered data for over a year as part of an ethnographic study, a type of study that requires researchers to immerse themselves in the culture and the lives of their subjects. In this case, the ethnographer studied the implementation of a new unit in the telecom company. As part of the data collection, the researcher spent 273 days shadowing workers and 20 days observing middle managers, listened in on approximately 15 to 22 informal breaks between workers per week — at lunch or the watercooler — and conducted 105 formal interviews. Interactions on the phone, through email and in face-to-face meetings were observed and documented.

    “One of the advantages that this kind of data affords you is the opportunity to observe what’s going on across hierarchical levels,” said Treviño. “The middle management role is largely an invisible role. As a researcher, you just don’t get to see that role very often.”


  6. Study looks at how disliked classes affect incidence of college student cheating

    October 13, 2017 by Ashley

    From the Ohio State University press release:

    One of the tactics that discourages student cheating may not work as well in courses that college students particularly dislike, a new study has found.

    Previous research suggests instructors who emphasize mastering the content in their classes encounter less student cheating than those who push students to get good grades.

    But this new study found that emphasizing mastery isn’t related as strongly to lower rates of cheating in classes that students list as their most disliked. Students in disliked classes were equally likely to cheat, regardless of whether the instructors emphasized mastery or good grades.

    The factor that best predicted whether a student would cheat in a disliked class was a personality trait: a high need for sensation, said Eric Anderman, co-author of the study and professor of educational psychology at The Ohio State University.

    People with a high need for sensation are risk-takers, Anderman said.

    “If you enjoy taking risks, and you don’t like the class, you may think, ‘Why not cheat?’ You don’t feel you have as much to lose,” he said.

    Anderman conducted the study with Sungjun Won, a graduate student in educational psychology at Ohio State. It appears online in the journal Ethics & Behavior and will be published in a future print edition.

    The study is the first to look at how academic misconduct might differ in classes that students particularly dislike.

    “You could understand why students might be less motivated in classes they don’t like and that could affect whether they were willing to cheat,” Anderman said.

    The researchers surveyed 409 students from two large research universities in different parts of the country.

    The students were asked to answer questions about the class in college that they liked the least.

    Participants were asked if they took part in any of 22 cheating behaviors in that class, including plagiarism and copying test answers from another student. The survey also asked students their beliefs about the ethics of cheating, their perceptions of how much the instructor emphasized mastery and test scores, and a variety of demographic questions, as well as a measure of sensation-seeking.

    A majority of the students (57 percent) reported a math or science course as their most disliked. Large classes were not popular: nearly half (45 percent) said their least favorite class had more than 50 students enrolled, and about two-thirds (65 percent) said the course they disliked was required for their major.

    The most interesting finding was that an emphasis on mastery or on test scores did not predict cheating in disliked classes, Anderman said.

    In 20 years of research on cheating, Anderman said he and his colleagues have consistently found that students cheated less — and believed cheating was less acceptable — in classes where the goals were intrinsic: learning and mastering the content. They were more likely to cheat in classes where they felt the emphasis was on extrinsic goals, such as successful test-taking and getting good grades.

    This study was different, Anderman said.

    Students in classes that emphasized mastery still tended to believe cheating was wrong, even when the class was their most disliked. But the new findings suggest that when a class is disliked, a focus on mastery no longer directly protects against cheating behavior. There was still a positive relationship in those classes, however, between believing cheating is morally acceptable and actually cheating.

    “When you have students who are risk-takers in classes that they dislike, the benefits of a class that emphasizes learning over grades seems to disappear,” he said.

    But Anderman noted that this study reinforced results from earlier studies that refute many of the common beliefs about student cheating.

    “All of the things that people think are linked to cheating don’t really matter,” he said.

    “We examined gender, age, the size of classes, whether it was a required class, whether it was graded on a curve — and none of those were related to cheating once you took into account the need for sensation in this study,” he said. “And in other studies, the classroom goals were also important.”

    The good news is that the factors that cause cheating are controllable in some measure, Anderman said. Classes can be designed to emphasize mastery and interventions could be developed to help risk-taking students.

    “We can find ways to help minimize cheating,” he said.


  7. Study suggests even open-label placebos work, if they are explained

    October 4, 2017 by Ashley

    From the Universität Basel press release:

    For some medical complaints, open-label placebos work just as well as deceptive ones. As psychologists from the University of Basel and Harvard Medical School report in the journal Pain, the accompanying rationale plays an important role when administering a placebo.

    The successful treatment of certain physical and psychological complaints can be explained to a significant extent by the placebo effect. The crucial question in this matter is how this effect can be harnessed without deceiving the patients. Recent empirical studies have shown that placebos administered openly have clinically significant effects on physical complaints such as chronic back pain, irritable bowel syndrome, episodic migraine and rhinitis.

    Cream for pain relief

    For the first time, researchers from the University of Basel, along with colleagues from Harvard Medical School, have compared the effects of administering open-label and deceptive placebos. The team conducted an experimental study with 160 healthy volunteers who were exposed to increasing heat on their forearm via a heating plate. The participants were asked to manually stop the temperature rise as soon as they could no longer stand the heat. After that, they were given a cream to relieve the pain.

    Some of the participants were deceived during the experiment: they were told that they were given a pain relief cream with the active ingredient lidocaine, although it was actually a placebo. Other participants received a cream that was clearly labeled as a placebo; they were also given fifteen minutes of explanations about the placebo effect, its occurrence and its effect mechanisms. A third group received an open-label placebo without any further explanation.

    The subjects of the first two groups reported a significant decrease in pain intensity and unpleasantness after the experiment. “The previous assumption that placebos only work when they are administered by deception needs to be reconsidered,” says Dr. Cosima Locher, a member of the University of Basel’s Faculty of Psychology and first author of the study.

    Stronger pain when no rationale is given

    When detailed explanations of the placebo effect were absent — as in the third group — the subjects reported significantly more intense and unpleasant pain. This points to the crucial role of the accompanying rationale and communication when administering a placebo; the researchers speak of a narrative. Deception, the ethically problematic aspect of placebos, thus does not appear to work all that differently from a transparent and convincing narrative. “Openly administering a placebo offers new possibilities for using the placebo effect in an ethically justifiable way,” says co-author Professor Jens Gaab, Head of the Division of Clinical Psychology and Psychotherapy at the University of Basel.


  8. High moral reasoning associated with increased activity in the human brain’s reward system

    September 10, 2017 by Ashley

    From the University of Pennsylvania School of Medicine press release:

    Individuals who have a high level of moral reasoning show increased activity in the brain’s frontostriatal reward system, both during periods of rest and while performing a sequential risk-taking and decision-making task, according to a new study from researchers at the Perelman School of Medicine and the Wharton School of the University of Pennsylvania, Shanghai International Studies University in Shanghai, China, and Charité Universitätsmedizin in Berlin, Germany. The findings from the study, published this month in Scientific Reports, may help researchers to understand how brain function differs in individuals at different stages of moral reasoning and why some individuals who reach a high level of moral reasoning are more likely to engage in certain “prosocial” behaviors — such as performing community service or giving to charity — based on more advanced principles and ethical rules.

    The study refers to Lawrence Kohlberg’s stages of moral development theory which proposes that individuals go through different stages of moral reasoning as their cognitive abilities mature. According to the researchers, Kohlberg’s theory implies that individuals at a lower level of moral reasoning are more prone to judge moral issues primarily based on personal interests or adherence to laws and rules, whereas individuals with higher levels of moral reasoning judge moral issues based on deeper principles and shared ideals.

    The researchers’ previous work found an association between high levels of moral reasoning and gray matter volume, establishing a critical link between moral reasoning and brain structure. This more recent study sought to discover whether a link exists between moral reasoning and brain function.

    In this study, the researchers aimed to investigate whether the development of morality is associated with measurable aspects of brain function. To answer this question, they tested moral reasoning in a large sample of more than 700 Wharton MBA students and looked at brain reward system activity in a subset of 64 students, both at rest and while performing a task. According to Hengyi Rao, PhD, a research assistant professor of Cognitive Neuroimaging in Neurology and Psychiatry in the Perelman School of Medicine and senior author of the study, the team observed considerable individual differences in moral development levels and brain function in this relatively homogeneous and well-educated group of MBA students.

    “It is well established in the literature that the brain reward system is involved in moral judgment, decision making, and prosocial behavior. However, it remains unknown whether brain reward system function can be affected by stages of moral development,” Rao said. “To our knowledge, this study is the first to demonstrate the modulation effect of moral reasoning level on human brain reward system activity. Findings from our study provide new insights into the potential neural basis and underlying psychological processing mechanism of individual differences in moral development.”

    The finding of increased brain reward system activity in individuals at a high level of moral reasoning suggests the importance of positive motivations towards others in moral reasoning development, rather than selfish motives. These findings also support Kohlberg’s theory that higher levels of moral reasoning tend to be promotion and other-focused (do it because it is right) rather than prevention or self-focused (do not do it because it is wrong).

    “Our study documents brain function differences associated with higher and lower levels of moral reasoning. It is still unclear whether the observed brain function differences are the cause or the result of differential levels of moral reasoning,” explained Diana Robertson, PhD, the James T. Riady Professor of Legal Studies and Business Ethics at the Wharton School and a co-author of the study. “However, we believe that both factors of nurture, such as education, parental socialization and life experience, and factors of nature, like biological or evolutionary basis, the innate capacities of the mind, and the genetic basis may contribute to individual differences in moral development.”

    The researchers say future studies could expand on this work by assessing to what extent individual differences in moral reasoning development depend on inborn differences or learned experience, and whether education can further advance individuals’ stage of moral reasoning even past the age at which structural and functional brain maturation is complete.


  9. Expecting the worst: People’s perceived morality is biased towards negativity

    July 31, 2017 by Ashley

    From the University of Surrey press release:

    People who are believed to be immoral find it difficult to reverse others’ perceptions of them, potentially resulting in difficulties in the workplace and barriers to fair and equal treatment in the legal system, a new study in PLOS ONE reports.

    Researchers from the University of Surrey and the University of Milano-Bicocca (Italy) collected data from more than 400 participants on behavioural expectations of people described as ‘moral’ and ‘immoral’. Participants were asked to estimate the probability that an individual possessing a given characteristic (e.g. honesty) would act in a manner inconsistent with it (i.e. dishonestly).

    The study found that participants perceived those with a ‘good’ moral disposition as more likely to act out of character (i.e. immorally) than an immoral person was to engage in inconsistent (i.e. moral) behaviours. For example, ‘covering for somebody’ was considered by participants to be a behaviour that could be displayed by a sincere person, whereas an insincere person was less likely to be associated with behaviours such as ‘telling the truth’.

    Such a finding shows that those who are perceived to have immoral traits will have difficulty changing how they are viewed by others, as they are deemed less likely to change than a person classed as moral. This is particularly damaging for those who have a questionable character or are facing legal proceedings, and it highlights the obstacles they face in reversing perceptions of them.

    Lead author Dr Patrice Rusconi from the University of Surrey, Social Emotions and Equality in Relations (SEER) research group, said, “Popular television shows like Breaking Bad show that those viewed as morally ‘good’ are seen as more likely to act out of character and behave immorally on occasions.

    “However what we have found is that those perceived to be immoral are pigeon-holed, and are viewed as more likely to act in certain ways i.e. unjust and unfairly, and therefore unable to act morally on occasions.

    “How an individual is perceived is incredibly important, as if you are viewed negatively it can impact on your treatment in the workplace and in the legal system as you are judged on your past misdemeanours.”

    The study also examined whether these findings translated across different cultures. Researchers collected data from over 200 Italian and American participants, who were asked a series of questions including “How likely do you consider it that a righteous person would behave in an unrighteous fashion?”

    They discovered that despite cultural and lifestyle differences, Italian and American participants shared the perception that moral people are more likely to act immorally than immoral people are to act morally, highlighting that this is a problem encountered globally.


  10. Study examines nature of self-righteousness

    July 30, 2017 by Ashley

    From the University of Chicago Booth School of Business press release:

    Research finds that people tend to believe that they are more likely than others to donate blood, give to charity, treat another person fairly, and give up their own seat on a crowded bus for a pregnant woman. A widely cited U.S. News and World Report survey asked 1,000 Americans to indicate the likelihood that they and a long list of celebrities would go to heaven. The vast majority of respondents believed they were more likely to go to heaven than any of these celebrities, including the selfless nun Mother Teresa.

    However, in a new study from the University of Chicago Booth School of Business titled “Less Evil Than You: Bounded Self-Righteousness in Character Inferences, Emotional Reactions, and Behavioral Extremes,” forthcoming in Personality and Social Psychology Bulletin, Nicholas Epley and Nadav Klein ask whether the extensive research on self-righteousness overlooks an important ambiguity: When people say they are more moral than others, do they mean they are more like a saint than others or less like a sinner? In other words, do people chronically believe they are “holier” than others or “less evil” than them?

    Klein and Epley conducted four experiments that explored how people judge themselves compared to other people in a variety of contexts. All of the experiments revealed that self-righteousness is asymmetric. Participants believed that they are less evil than others, but no more moral than them. Specifically, participants were less likely to make negative character inferences from their own unethical behavior than from others’ unethical behavior, believed they would feel worse after an unethical action than others would, and believed they were less capable of extreme unethical behavior compared to others. In contrast, these self-other differences were much weaker in evaluations of ethical actions.

    Klein and Epley call this finding “asymmetric self-righteousness,” reflecting the asymmetry that people do not believe they are more moral than others but rather less evil than them.

    One of the causes of asymmetric self-righteousness is that “people evaluate themselves by adopting an ‘inside perspective’ focused heavily on evaluations of mental states such as intentions and motives, but evaluate others based on an ‘outside perspective’ that focuses on observed behavior for which intentions and motives are then inferred,” the researchers said. Accordingly, the researchers find that people who are more likely to ascribe cynical motives to their own behavior exhibit a smaller asymmetry in self-righteousness.

    The researchers noted that it remains to be seen whether bounded self-righteousness looks the same around the world. Basic moral norms of kindness and respect for others seem to be fairly universal sentiments, but future research will be needed to determine how culture-specific contexts could modify people’s tendency to feel moral superiority.

    “In countries where corruption is more common, the asymmetry in self-righteousness might be more pronounced because people will be more likely to observe unethical behavior committed by other people,” they said.

    Klein and Epley believe that this research has notable implications for the promotion of ethics policies and procedures within organizations. Specifically, people may be especially likely to resist policies aimed at preventing their own unethical behavior, simply because they don’t believe they would ever do anything unethical. This suggests that framing policies as promoting ethical behavior rather than discouraging unethical behavior might be more effective in increasing policy support. “Understanding asymmetric self-righteousness could help foster support for policies that can create more ethical people, and more ethical organizations,” they said.