1. Study suggests willingness to support corporate social responsibility initiatives contingent on perception of boss’ ethics

    November 14, 2017 by Ashley

    From the University of Vermont press release:

    A new study shows that people who perceive their employer as committed to environmental and community-based causes will, in turn, engage in green behavior and local volunteerism, with one caveat: their boss must display similarly ethical behavior.

    The forthcoming study in the Journal of Business Ethics by Kenneth De Roeck, assistant professor at the University of Vermont, and Omer Farooq of UAE University, shows that people who work for socially and environmentally responsible companies tend to identify more strongly with their employer, and as a result, increase their engagement in green and socially responsible behaviors like community volunteerism.

    “When you identify with a group, you tend to adopt its values and goals as your own,” says De Roeck. “For example, if you are a fan who identifies with the New England Patriots, their objective to win the Super Bowl becomes your objective too. If they win it, you will say ‘we,’ rather than ‘they,’ won the Super Bowl, because being a fan of the New England Patriots became part of your own identity.”

    That loyalty goes out the window, however, if employees don’t perceive their immediate supervisor as ethical, defined as conduct that shows concern for how their decisions affect others’ well-being. Results show that the propensity for the company’s environmental initiatives to foster employees’ green behaviors disappears if they think their boss has poor ethics. Employees’ engagement in volunteer efforts in support of their company’s community-based initiatives also declines if they believe their boss is not ethical, though not as dramatically.

    “When morally loaded cues stemming from the organization and its leaders are inconsistent, employees become skeptical about the organization’s ethical stance, integrity, and overall character,” says De Roeck. “Consequently, employees refrain from identifying with their employers, and as a result, significantly diminish their engagement in creating social and environmental good.”

    Companies as engines for positive social change

    Findings of the study, based on surveys of 359 employees at 35 companies in the manufacturing industry (consumer goods, automobile, and textile), could provide insight for companies failing to reap the substantial societal benefits of CSR.

    “This isn’t another story about how I can get my employees to work better to increase the bottom line, it’s more about how I can get employees to create social good,” says De Roeck, whose research focuses on the psychological mechanisms explaining employees’ reactions to, and engagement in, CSR. “Moreover, our measure of employees’ volunteer efforts consists of actions that extend well beyond the work environment, showing that organizations can be a strong engine for positive social change by fostering, through the mechanism of identification, a new and more sustainable way of life to their employees.”

    De Roeck says organizations wanting to boost their social performance by encouraging employee engagement in socially responsible behaviors need to ensure that employees perceive their ethical stance and societal engagement as authentic. To do so, and to avoid any perception of greenwashing – promoting green initiatives without fully practicing them – organizations should strive for consistency between their CSR engagement and their leaders’ ethical stance by training supervisors in social and ethical responsibility. They should also take care, when hiring and promoting individuals into leadership positions, to select those who fit the company’s CSR strategy and ethical culture.

    “Organizations should not treat CSR as an add-on activity to their traditional business models, but rather as something that should be carefully planned and integrated into the company strategy, culture, and DNA,” says De Roeck. “Only then will employees positively perceive CSR as a strong identity cue that will trigger their identification with the organization and, as a result, foster their engagement in such activities through socially responsible behaviors.”


  2. Study suggests middle managers sometimes turn to unethical behavior to face unrealistic expectations

    October 15, 2017 by Ashley

    From the Penn State press release:

    While unethical behavior in organizations is often portrayed as flowing down from top management, or creeping up from low-level positions, a team of researchers suggests that middle management can also play a key role in promoting widespread unethical behavior among subordinates.

    In a study of a large telecommunications company, researchers found that middle managers used a range of tactics to inflate their subordinates’ performance and deceive top management, according to Linda Treviño, distinguished professor of organizational behavior and ethics, Smeal College of Business, Penn State. The managers may have been motivated to engage in this behavior because leadership instituted performance targets that were unrealizable, she added.

    When creating a new unit, a company’s top management usually also sketches out the unit’s performance routines — for example, they set goals, develop incentives and designate certain responsibilities, according to the researchers. Middle managers are then tasked with carrying out these new directives. But, in the company studied by the researchers, this turned out to be impossible.

    “What we found in this particular case — but I think it happens a lot — is that there were obstacles in the way of achieving these goals set by top management,” said Treviño. “For a variety of reasons, the goals were unrealistic and unachievable. The workers didn’t have enough training. They didn’t feel competent. They didn’t know the products well enough. There weren’t enough customers and there wasn’t even enough time to get all the work done.”

    Facing these obstacles, middle management enacted a series of moves designed to deceive top management into believing that teams were actually meeting their goals, according to Treviño, who worked with Niki A. den Nieuwenboer, assistant professor of organizational behavior and business ethics, University of Kansas; and João Vieira da Cunha, associate professor, IESEG School of Management.

    “It became clear to middle managers that there was no way their people could meet these goals,” said Treviño. “They got really creative because their bonuses are tied to what their people do, or because they didn’t want to lose their jobs. Middle managers exploited vulnerabilities they identified in the organization to come up with ways to make it look like their workers were achieving goals when they weren’t.”

    According to the researchers, these strategies included coopting sales from another unit, portraying orders as actual sales and ensuring that the flow of sales data reported in the company’s IT system looked normal. Middle managers created some of these behaviors on their own, but they also learned tactics from other managers, according to the researchers, who report their findings in Organization Science, online now.

    Middle managers also used a range of tactics to coerce their subordinates to keep up the ruse, including rewards for unethical behavior and public shaming for those who were reluctant to engage in the unethical tactics.

    “Interestingly, what we didn’t see is managers speaking up, we didn’t see them pushing back against the unrealistic goals,” said Treviño. “We know a lot about what we refer to as ‘voice’ in an organization and people are fearful and they tend to keep quiet for the most part.”

    The researchers suggested that the findings could offer insights into other scandals, such as the misconduct at Wells Fargo and at U.S. Veterans Administration hospitals. They added that top management in organizations should do more in-depth work to institute realistic goals and incentives.

    “Everybody has goals and goals are motivating, but there are nuances,” said Treviño. “What goal-setting theory says is that if you’re not committed to the goal because you think it’s unachievable, you’ll just throw your hands up and give up. Most front-line employees wanted to do that. But the managers intervened, coercing them to engage in the unethical behaviors.”

    This type of deception can harm an organization in several ways: it hurts the bottom line when bonuses are awarded for deceptive performance, and it leads upper management to make strategic decisions and allocate resources based on the unit’s feigned success.

    “How can you lead a company if the performance information you get is fake? You end up making bad decisions,” den Nieuwenboer said.

    One of the researchers gathered data for over a year as part of an ethnographic study, a type of study in which researchers immerse themselves in the culture and lives of their subjects. In this case, the ethnographer studied the implementation of a new unit in the telecom company. As part of the data collection, the researcher spent 273 days shadowing workers and 20 days observing middle managers, listened in on approximately 15 to 22 informal breaks — lunch or watercooler conversations — between workers per week, and conducted 105 formal interviews. Interactions on the phone, through email and in face-to-face meetings were observed and documented.

    “One of the advantages that this kind of data affords you is the opportunity to observe what’s going on across hierarchical levels,” said Treviño. “The middle management role is largely an invisible role. As a researcher, you just don’t get to see that role very often.”


  3. Study looks at how disliked classes affect incidence of college student cheating

    October 13, 2017 by Ashley

    From the Ohio State University press release:

    One of the tactics that discourages student cheating may not work as well in courses that college students particularly dislike, a new study has found.

    Previous research suggests instructors who emphasize mastering the content in their classes encounter less student cheating than those who push students to get good grades.

    But this new study found that emphasizing mastery isn’t as strongly related to lower rates of cheating in the courses students list as their most disliked. Students in disliked classes were equally likely to cheat, regardless of whether the instructors emphasized mastery or good grades.

    The factor that best predicted whether a student would cheat in a disliked class was a personality trait: a high need for sensation, said Eric Anderman, co-author of the study and professor of educational psychology at The Ohio State University.

    People with a high need for sensation are risk-takers, Anderman said.

    “If you enjoy taking risks, and you don’t like the class, you may think ‘why not cheat.’ You don’t feel you have as much to lose,” he said.

    Anderman conducted the study with Sungjun Won, a graduate student in educational psychology at Ohio State. It appears online in the journal Ethics & Behavior and will be published in a future print edition.

    The study is the first to look at how academic misconduct might differ in classes that students particularly dislike.

    “You could understand why students might be less motivated in classes they don’t like and that could affect whether they were willing to cheat,” Anderman said.

    The researchers surveyed 409 students from two large research universities in different parts of the country.

    The students were asked to answer questions about the class in college that they liked the least.

    Participants were asked if they took part in any of 22 cheating behaviors in that class, including plagiarism and copying test answers from another student. The survey also asked students their beliefs about the ethics of cheating, their perceptions of how much the instructor emphasized mastery and test scores, and a variety of demographic questions, as well as a measure of sensation-seeking.

    A majority of the students (57 percent) reported a math or science course as their most disliked. Large classes were not popular: Nearly half (45 percent) said their least favorite class had more than 50 students enrolled, while nearly two-thirds (65 percent) said the course they disliked was required for their major.

    The most interesting finding was that an emphasis on mastery or on test scores did not predict cheating in disliked classes, Anderman said.

    In 20 years of research on cheating, Anderman said he and his colleagues have consistently found that students cheated less — and believed cheating was less acceptable — in classes where the goals were intrinsic: learning and mastering the content. They were more likely to cheat in classes where they felt the emphasis was on extrinsic goals, such as successful test-taking and getting good grades.

    This study was different, Anderman said.

    In classes that emphasized mastery, some students still believed cheating was wrong, even in their most-disliked class. But when classes are disliked, the new findings suggest a focus on mastery no longer directly protects against cheating behaviors. Nevertheless, there is still a positive relation between actual cheating and the belief that cheating is morally acceptable in those classes.

    “When you have students who are risk-takers in classes that they dislike, the benefits of a class that emphasizes learning over grades seems to disappear,” he said.

    But Anderman noted that this study reinforced results from earlier studies that refute many of the common beliefs about student cheating.

    “All of the things that people think are linked to cheating don’t really matter,” he said.

    “We examined gender, age, the size of classes, whether it was a required class, whether it was graded on a curve — and none of those were related to cheating once you took into account the need for sensation in this study,” he said. “And in other studies, the classroom goals were also important.”

    The good news is that the factors that cause cheating are controllable in some measure, Anderman said. Classes can be designed to emphasize mastery and interventions could be developed to help risk-taking students.

    “We can find ways to help minimize cheating,” he said.


  4. Study suggests even open-label placebos work, if they are explained

    October 4, 2017 by Ashley

    From the Universität Basel press release:

    For some medical complaints, open-label placebos work just as well as deceptive ones. As psychologists from the University of Basel and Harvard Medical School report in the journal Pain, the accompanying rationale plays an important role when administering a placebo.

    The successful treatment of certain physical and psychological complaints can be explained to a significant extent by the placebo effect. The crucial question in this matter is how this effect can be harnessed without deceiving the patients. Recent empirical studies have shown that placebos administered openly have clinically significant effects on physical complaints such as chronic back pain, irritable bowel syndrome, episodic migraine and rhinitis.

    Cream for pain relief

    For the first time, researchers from the University of Basel, along with colleagues from Harvard Medical School, have compared the effects of administering open-label and deceptive placebos. The team conducted an experimental study with 160 healthy volunteers who were exposed to increasing heat on their forearm via a heating plate. The participants were asked to manually stop the temperature rise as soon as they could no longer stand the heat. After that, they were given a cream to relieve the pain.

    Some of the participants were deceived during the experiment: they were told that they were given a pain relief cream with the active ingredient lidocaine, although it was actually a placebo. Other participants received a cream that was clearly labeled as a placebo; they were also given fifteen minutes of explanations about the placebo effect, its occurrence and its effect mechanisms. A third group received an open-label placebo without any further explanation.

    The subjects of the first two groups reported a significant decrease in pain intensity and unpleasantness after the experiment. “The previous assumption that placebos only work when they are administered by deception needs to be reconsidered,” says Dr. Cosima Locher, a member of the University of Basel’s Faculty of Psychology and first author of the study.

    Stronger pain when no rationale is given

    When detailed explanations of the placebo effect were absent — as in the third group — the subjects reported significantly more intense and unpleasant pain. This suggests the crucial role of the accompanying rationale and communication when administering a placebo; the researchers speak of a narrative. The ethically problematic aspect of placebos, the deception, thus does not appear all that different from a transparent and convincing narrative. “Openly administering a placebo offers new possibilities for using the placebo effect in an ethically justifiable way,” says co-author Professor Jens Gaab, Head of the Division of Clinical Psychology and Psychotherapy at the University of Basel.


  5. High moral reasoning associated with increased activity in the human brain’s reward system

    September 10, 2017 by Ashley

    From the University of Pennsylvania School of Medicine press release:

    Individuals who have a high level of moral reasoning show increased activity in the brain’s frontostriatal reward system, both during periods of rest and while performing a sequential risk-taking and decision-making task, according to a new study from researchers at the Perelman School of Medicine and the Wharton School of the University of Pennsylvania, Shanghai International Studies University in Shanghai, China, and Charité – Universitätsmedizin Berlin in Germany. The findings from the study, published this month in Scientific Reports, may help researchers to understand how brain function differs in individuals at different stages of moral reasoning and why some individuals who reach a high level of moral reasoning are more likely to engage in certain “prosocial” behaviors — such as performing community service or giving to charity — based on more advanced principles and ethical rules.

    The study refers to Lawrence Kohlberg’s stages of moral development theory which proposes that individuals go through different stages of moral reasoning as their cognitive abilities mature. According to the researchers, Kohlberg’s theory implies that individuals at a lower level of moral reasoning are more prone to judge moral issues primarily based on personal interests or adherence to laws and rules, whereas individuals with higher levels of moral reasoning judge moral issues based on deeper principles and shared ideals.

    The researchers’ previous work found an association between high levels of moral reasoning and gray matter volume, establishing a critical link between moral reasoning and brain structure. This more recent study sought to discover whether a link exists between moral reasoning and brain function.

    In this study, the researchers aimed to investigate whether the development of morality is associated with measurable aspects of brain function. To answer this question, they tested moral reasoning in a large sample of more than 700 Wharton MBA students, and looked at the brain reward system activity in a subset of 64 students, both with and without doing a task. According to Hengyi Rao, PhD, a research assistant professor of Cognitive Neuroimaging in Neurology and Psychiatry in the Perelman School of Medicine and senior author of the study, the team observed considerable individual differences in moral development levels and brain function in this relatively homogeneous and well-educated MBA group of subjects.

    “It is well established in the literature that the brain reward system is involved in moral judgment, decision making, and prosocial behavior. However, it remains unknown whether brain reward system function can be affected by stages of moral development,” Rao said. “To our knowledge, this study is the first to demonstrate the modulation effect of moral reasoning level on human brain reward system activity. Findings from our study provide new insights into the potential neural basis and underlying psychological processing mechanism of individual differences in moral development.”

    The finding of increased brain reward system activity in individuals at a high level of moral reasoning suggests the importance of positive motivations towards others in moral reasoning development, rather than selfish motives. These findings also support Kohlberg’s theory that higher levels of moral reasoning tend to be promotion and other-focused (do it because it is right) rather than prevention or self-focused (do not do it because it is wrong).

    “Our study documents brain function differences associated with higher and lower levels of moral reasoning. It is still unclear whether the observed brain function differences are the cause or the result of differential levels of moral reasoning,” explained Diana Robertson, PhD, a James T. Riady professor of Legal Studies and Business Ethics at the Wharton School and a co-author of the study. “However, we believe that both factors of nurture, such as education, parental socialization and life experience, and factors of nature, like biological or evolutionary basis, the innate capacities of the mind, and the genetic basis may contribute to individual differences in moral development.”

    The researchers say future studies could expand on this work by assessing to what extent individual differences in moral reasoning development depend on in-born differences or learned experience, and whether education can further promote moral reasoning stage in individuals even past the age at which structural and functional brain maturation is complete.


  6. Expecting the worst: People’s perceived morality is biased towards negativity

    July 31, 2017 by Ashley

    From the University of Surrey press release:

    People who are believed to be immoral find it difficult to reverse others’ perception of them, potentially resulting in difficulties in the workplace and barriers to fair and equal treatment in the legal system, a new study in PLOS One reports.

    Researchers from the University of Surrey and University of Milano-Bicocca (Italy) collected data from more than 400 participants on behavioural expectations of people described as ‘moral’ and ‘immoral’. Participants were asked to estimate the probability that an individual possessing a characteristic (i.e. honesty) would act in an inconsistent manner (i.e. dishonestly).

    The study found that participants judged those with a ‘good’ moral disposition to be more likely to act out of character (i.e. immorally) than an immoral person was to act inconsistently (i.e. morally). For example, ‘covering for somebody’ was considered by participants to be a behaviour that a sincere person could display, whereas an insincere person was considered less likely to display behaviours such as ‘telling the truth’.

    Such a finding shows that those who are perceived to have immoral traits will have difficulty in changing how they are viewed by others as they are deemed to be less likely to change than a person classed as moral. This is particularly damaging for those who have a questionable character or are facing legal proceedings and highlights the obstacles they face in reversing perceptions of them.

    Lead author Dr Patrice Rusconi from the University of Surrey, Social Emotions and Equality in Relations (SEER) research group, said, “Popular television shows like Breaking Bad show that those viewed as morally ‘good’ are seen as more likely to act out of character and behave immorally on occasions.

    “However, what we have found is that those perceived to be immoral are pigeon-holed, and are viewed as more likely to act in certain ways, i.e. unjustly and unfairly, and therefore unable to act morally on occasions.

    “How an individual is perceived is incredibly important, as if you are viewed negatively it can impact on your treatment in the workplace and in the legal system as you are judged on your past misdemeanours.”

    The study also examined whether these findings translated across different cultures. Researchers collected data from over 200 Italian and American participants, who were asked a series of questions including “How likely do you consider it that a righteous person would behave in an unrighteous fashion?”

    They discovered that despite cultural and lifestyle differences, Italian and American participants shared the perception that moral people are more likely to act immorally than immoral people are to act morally, highlighting that this is a problem encountered globally.


  7. Study examines nature of self-righteousness

    July 30, 2017 by Ashley

    From the University of Chicago Booth School of Business press release:

    Research finds that people tend to believe that they are more likely than others to donate blood, give to charity, treat another person fairly, and give up their own seat on a crowded bus for a pregnant woman. A widely cited U.S. News and World Report survey asked 1,000 Americans to indicate the likelihood that they and a long list of celebrities would go to heaven. The vast majority of respondents believed they were more likely to go to heaven than any of these celebrities, including the selfless nun Mother Teresa.

    However, in a new study from the University of Chicago Booth School of Business titled “Less Evil Than You: Bounded Self-Righteousness in Character Inferences, Emotional Reactions, and Behavioral Extremes,” forthcoming in Personality and Social Psychology Bulletin, Nicholas Epley and Nadav Klein ask whether the extensive research on self-righteousness overlooks an important ambiguity: When people say they are more moral than others, do they mean they are more like a saint than others or less like a sinner? In other words, do people chronically believe they are “holier” than others or “less evil” than them?

    Klein and Epley conducted four experiments that explored how people judge themselves compared to other people in a variety of contexts. All of the experiments revealed that self-righteousness is asymmetric. Participants believed that they are less evil than others, but no more moral than them. Specifically, participants were less likely to make negative character inferences from their own unethical behavior than from others’ unethical behavior, believed they would feel worse after an unethical action than others would, and believed they were less capable of extreme unethical behavior compared to others. In contrast, these self-other differences were much weaker in evaluations of ethical actions.

    Klein and Epley call this finding “asymmetric self-righteousness,” reflecting the asymmetry that people do not believe they are more moral than others, but rather less evil than them.

    One of the causes of asymmetric self-righteousness is that “people evaluate themselves by adopting an ‘inside perspective’ focused heavily on evaluations of mental states such as intentions and motives, but evaluate others based on an ‘outside perspective’ that focuses on observed behavior for which intentions and motives are then inferred,” the researchers said. Accordingly, the researchers find that people who are more likely to ascribe cynical motives to their own behavior exhibit a smaller asymmetry in self-righteousness.

    The researchers noted that it remains to be seen whether bounded self-righteousness looks the same around the world. Basic moral norms of kindness and respect for others seem to be fairly universal sentiments, but future research will be needed to determine how culture-specific contexts could modify people’s tendency to feel moral superiority.

    “In countries where corruption is more common, the asymmetry in self-righteousness might be more pronounced because people will be more likely to observe unethical behavior committed by other people,” they said.

    Klein and Epley believe that this research has notable implications for the promotion of ethics policies and procedures within organizations. Specifically, people may be especially likely to resist policies aimed at preventing their own unethical behavior, simply because they don’t believe they would ever do anything unethical. This suggests that framing policies as promoting ethical behavior rather than discouraging unethical behavior might be more effective in increasing policy support. “Understanding asymmetric self-righteousness could help foster support for policies that can create more ethical people, and more ethical organizations,” they said.


  8. Study suggests tennis cheats may be predicted by their moral standards

    July 6, 2017 by Ashley

    From the Frontiers press release:

    When top athletes cheat it makes headline news. Retaliating badly to a foul, faking an injury, or deliberately harming an opponent can all result in a loss of credibility and respect. In some cases, it can lead to a loss of sponsorship and even long-term disqualification.

    So why do some athletes engage in immoral sporting conduct, when there is so much to lose?

    Previous research has discussed how the motivation for playing sport in the first place, as well as overall sporting morals and values, can lead to cheating behavior. A new study, published in the open-access journal Frontiers in Psychology, examines these personal characteristics and links them to direct observations of cheating during tennis matches.

    “We find that tennis players who use sport to boost their egos, and view success as their ability to outperform others by winning at all costs, tend to condone cheating and other dubious methods of gaining an advantage during match play. However, those players who strive to be the best they can be, and interpret success through their own personal improvement, tend to respect sporting rules and social conventions,” says Fabio Lucidi, Professor of Psychometrics at Sapienza University of Rome, Italy and lead author of the study. “Our most important discovery is that these attitudes can have a direct influence on whether a player cheats during a tennis match.”

    Lucidi and his colleagues surveyed hundreds of players participating in the 2012 Lemon Bowl in Rome — one of the most important international tennis tournaments for young competitive players. They asked questions designed to measure each player’s sporting values, ultimate sporting goals (self-improvement or increased status), and attitudes towards cheating and dubious match play. Rather than ask the players directly whether they cheated or acted immorally during a match, the researchers used independent observers to record this behavior.

    “We used trained observers to assess cheating and dubious behavior during competitive matches. By doing this, we could rule out any bias from answers given by the players themselves. Cheating behavior is generally viewed as socially and culturally undesirable and their answers would be likely to reflect this,” says Lucidi.

    The answers given by the players revealed a connection between certain moral values and the acceptance of cheating behavior. These characteristics were then directly linked to the observation of cheating behavior during match play. While previous research has discussed the possible link between cheating and moral attitudes, this is the first study to have linked them directly.

    “Parents often encourage sporting activities for their children, assuming that it will help them develop a correct sense of morality. While this assumption might hold true in some cases, our study suggests that playing sport may actually elicit behavior that is ethically or morally inappropriate,” says Lucidi.

    He continues, “Indirectly, our study highlights ways that parents or coaches can promote good sporting behavior in children. For instance, coaches should promote values of personal success and achievement, rather than those of ego and status, in order to decrease the risk of antisocial or ethically inappropriate behavior in sport.”

    It is hoped that this research will act as a basis for future studies clarifying the psychology of cheating in sport. “More direct observations of cheating across different sporting disciplines and professional levels are needed. It would be interesting to see how sporting values, moral standards and views on cheating develop over an athlete’s sporting career,” concludes Lucidi.


  9. Study finds ‘moral enhancement’ technologies neither feasible nor wise

    May 25, 2017 by Ashley

    From the North Carolina State University press release:

    A recent study by researchers at North Carolina State University and the Montreal Clinical Research Institute (IRCM) finds that “moral enhancement technologies” — which are discussed as ways of improving human behavior — are neither feasible nor wise, based on an assessment of existing research into these technologies.

    The idea behind moral enhancement technologies is to use biomedical techniques to make people more moral — for example, using drugs or surgical techniques to treat criminals who have exhibited moral defects.

    “There are existing ways that people have explored to manipulate morality, but the question we address in this paper is whether manipulating morality actually improves it,” says Veljko Dubljevic, lead author of the paper and an assistant professor of philosophy at NC State who studies the ethics of neuroscience and technology.

    Dubljevic and co-author Eric Racine of the IRCM reviewed the existing research on moral enhancement technologies that have been used in humans to assess the effects of these technologies and how they may apply in real-world circumstances.

    Specifically, the researchers looked at four types of pharmaceutical interventions and three neurostimulation techniques:

    • Oxytocin is a neuropeptide that plays a critical role in social cognition, bonding and affiliative behaviors, sometimes called “the moral molecule”;
    • Selective serotonin reuptake inhibitors (SSRIs) are often prescribed for depression, but have also been found to make people less aggressive;
    • Amphetamines, which some have argued can be used to enhance motivation to take action;
    • Beta blockers are often prescribed to treat high blood pressure, but have also been found to decrease implicit racist responses;
    • Transcranial magnetic stimulation (TMS) is a type of neurostimulation that has been used to treat depression, but has also been reported as changing the way people respond to moral dilemmas;
    • Transcranial direct current stimulation (TDCS) is an experimental form of neurostimulation that has also been reported as making people more utilitarian; and
    • Deep brain stimulation is a neurosurgical intervention that some have hypothesized as having the potential to enhance motivation.

    “What we found is that, yes, many of these techniques do have some effects,” Dubljevic says. “But these techniques are all blunt instruments, rather than finely tuned technologies that could be helpful. So, moral enhancement is really a bad idea.

    “In short, moral enhancement is not feasible — and even if it were, history shows us that using science in an attempt to manipulate morality is not wise,” Dubljevic says.

    The researchers found different problems for each of the pharmaceutical approaches.

    “Oxytocin does promote trust, but only in the in-group,” Dubljevic notes. “And it can decrease cooperation with out-group members of society, such as racial minorities, and selectively promote ethnocentrism, favoritism, and parochialism.”

    The researchers also found that amphetamines boost motivation for all types of behavior, not just moral behavior. Moreover, there are significant risks of addiction associated with amphetamines. Beta blockers were found not only to decrease racism, but to blunt all emotional responses, which puts their usefulness in doubt. SSRIs reduce aggression, but have serious side effects, including an increased risk of suicide.

    In addition to physical side effects, the researchers also found a common problem with using either TMS or TDCS technologies.

    “Even if we could find a way to make these technologies work consistently, there are significant questions about whether being more utilitarian in one’s decision-making actually makes one more moral,” Dubljevic says.

    Lastly, the researchers found no evidence that deep brain stimulation had any effect whatsoever on moral behavior.

    “Our goal here is to share a cautionary note with those who are discussing different techniques for moral enhancement,” Dubljevic says. “I am in favor of research that is done responsibly, but against dangerous social experiments.”


  10. Judging moral character: A matter of principle, not good deeds

    May 9, 2017 by Ashley

    From the UC Berkeley press release:

    People may instinctively know right from wrong, but determining if someone has good moral character is not a black and white endeavor.

    According to new research by Berkeley-Haas Assoc. Prof. Clayton Critcher, people evaluate others’ moral character — being honest, principled, and virtuous — not simply by their deeds, but also by the context that determines how such decisions are made. Furthermore, the research found that what differentiates the characteristics of moral character (from positive yet nonmoral attributes) is that such qualities are non-negotiable in social relationships.

    “Judgments about moral character are ultimately judgments about whether we trust and would be willing to invest in a person,” says Critcher.

    Critcher, who studies social psychology in the Haas Marketing Group, writes about his findings in a recent book chapter, “What Do We Evaluate When We Evaluate Moral Character?” co-authored with Erik Helzer of the Johns Hopkins Carey Business School. The chapter will soon be published in the Atlas of Moral Psychology, from Guilford Press.

    But how do people detect whether good moral character is present? The findings suggest that people can do what is considered the wrong thing but actually be judged more moral for that decision. How?

    Imagine a social media company with access to its clients’ personal information and interactions. The government wants access to the user database for terrorist surveillance purposes, but it is up to the CEO to decide whether to violate the company’s privacy code. Is he considered a more moral person by complying with the request, or by refusing it? Critcher’s work shows that even people who think the CEO should hand over the data to the government consider the CEO to have better moral character if he does the opposite and adheres to the privacy policy.

    “For the CEO who sticks to a moral rule — even when we think a deviation could be justified — we are more confident he will behave in sensible, principled ways in the future,” says Critcher.

    In one experiment, Critcher asked 186 undergraduates to evaluate 40 positive personality traits by rating them on two dimensions: 1) how much each trait reflected moral character, and 2) whether the participants would or would not be willing to have a social relationship with someone who lacked that quality.

    “The two dimensions were correlated at .87, which means the two are almost the same thing. It is about the highest correlation I have ever seen in psychological research,” Critcher says. “What makes moral traits special is that their absence is a deal breaker, even when compared to qualities that the participants deemed just as positive.”

    But did people see these traits as essential because they were seen to be moral? The research team answered that question by leading people to construe the exact same trait as either moral or nonmoral. Research participants were shown 13 traits that the researchers deemed ambiguously moral (e.g., reasonable). Some participants were first exposed to traits that were clearly non-moral (e.g., imaginative); afterward, they found the ambiguous traits morally relevant. In contrast, other participants who first saw traits that were clearly moral (e.g., honorable) deemed the ambiguous traits as non-moral.

    Inducing people to see these 13 ambiguous qualities as more moral also caused them to deem these qualities as more essential for their social relationships. In short, participants considered good moral character to be synonymous with justifying a social investment.

    But here’s the conundrum: If people don’t want to invest in others who lack moral character, how do they ever learn whether new potential relationship partners have that requisite character? Perhaps people escape this dilemma by assuming the best about an individual’s moral character until they learn otherwise.

    “When we first meet someone, we can directly observe their attractiveness, and a short conversation can reveal a lot about their basic social graces, but typically their moral character is not on direct display. In fact, learning if someone is trustworthy often requires us to trust them first,” says Critcher.

    To that end, a third experiment revealed how optimism about an individual’s moral character helps people avoid this conundrum.

    “When people first meet someone, they tend to give them the benefit of the doubt when it comes to morality. People don’t start with the same optimism about their sense of humor, musical ability, or intellectual ability,” says Critcher. “It’s an adaptive optimism — one that encourages us to operate on enough faith that we can at least learn whether they are worthy of a social investment — until they prove us wrong.”

    See the paper: http://haas.org/2oVz0qS