1. Study suggests people whose minds wander are less likely to stick to long-term goals

    June 24, 2017 by Ashley

    From the University of Waterloo press release:

    People whose minds tend to wander are less likely to stick to their long-term goals, according to new research led by the University of Waterloo.

    The research found that those who could sustain focus in day-to-day life were more likely to report maintaining perseverance and passion in their long-term objectives.

    “Those who often can’t keep their minds on their tasks — such as thinking about weekend plans instead of listening to the lecturer in class — tend to have more fleeting aspirations,” said Brandon Ralph, the study’s lead author and a PhD candidate in psychology at Waterloo. “We’ve shown that maintaining concentration over hours and days predicts passion over longer periods.”

    The researchers’ findings resulted from three separate studies. In the first two studies, surveys measured the mind wandering, inattention and grittiness of 280 participants. In the third study, 105 post-secondary students were asked to report on their mind-wandering habits during class and then fill out questionnaires to measure their grittiness.

    Grit is a personality trait involving sustained interest and effort toward long-term goals and is purported to predict success in careers and education independent of other traits, including intelligence.

    Next steps in the research involve determining if people who would like to mitigate the impacts of mind wandering can do so with mindfulness training exercises, such as meditation.

    “It’s clear that mind wandering is related to the ability to focus in the moment as well as on long-term goals,” said Ralph. “As we move forward in this work, we’d like to see if practices such as meditation can assist people in achieving their goals.”

    The study, done in cooperation with researchers at Sheridan College, appears in the Canadian Journal of Experimental Psychology.


  2. Study tests value of perspective-taking training in understanding other people’s mental states

    May 29, 2017 by Ashley

    From the Springer press release:

    Through targeted training, people can be guided to develop a better inner awareness of their own mental states and a better understanding of the mental states of others. This is because the better people understand themselves, the more easily they can put themselves in other people’s shoes. Such training could therefore ultimately help us deal with current global challenges, says Anne Böckler of the Max Planck Institute for Human Cognitive and Brain Sciences and Julius Maximilians University Würzburg in Germany. She is the lead author of a study in Springer’s Journal of Cognitive Enhancement which looked at the influence a three-month contemplative training course had on a group of adults.

    During the three months, various methods were used to teach two groups of 80 and 81 participants, aged between 20 and 55 years, how to develop their perspective-taking skills. The training was inspired by the Internal Family Systems model, which views the self as being composed of different complex inner parts or subpersonalities, each with its own defining set of behaviours, thoughts and emotions. Participants were taught to identify and classify their own inner parts, as well as those of others. They explored how being identified with different inner parts, such as their caring, managing or pleasure parts, affects their everyday experiences.

    The results revealed that after the training, the participants could easily identify prototypical inner parts such as “the inner manager” or “the inner child” in their own personalities. The degree to which participants improved their understanding of themselves — as reflected in the number of different inner parts they could identify — went hand in hand with how much they improved in flexibility and in their ability to accurately infer and understand the mental states of others. The more negative inner parts they could identify, the better their awareness of other people’s frame of mind became thanks to the training.

    “There is a close link between getting better in understanding oneself and improvement in social intelligence,” says Böckler.

    The realisation that people who learn to better identify negative aspects of themselves are better able to understand others has interesting implications for the ever-changing world we live in. “This insight could prove important in an increasingly complex and interconnected world where taking the view of others, especially those from different cultures or with different religious backgrounds, becomes ever more difficult — and ever more necessary,” adds Böckler.

    The study suggests that work on inner parts and training to flexibly take perspective on self-related inner mental states holds promise in therapeutic as well as non-clinical settings aiming to foster psychological health and social intelligence. It also has value for fundamental research in the fields of personality and social psychology and social neurosciences.


  3. Study suggests moviegoers’ taste in movies is idiosyncratic and at odds with critics’ preferences

    May 19, 2017 by Ashley

    From the New York University press release:

    Our taste in movies is notably idiosyncratic, and not linked to the demographic traits that studios target, finds a new study on film preferences. The work also shows that moviegoers’ ratings are not necessarily in line with those of critics.

    “What we find enjoyable in movies is strikingly subjective — so much so that the industry’s targeting of film goers by broad demographic categories seems off the mark,” says Pascal Wallisch, a clinical assistant professor in New York University’s Department of Psychology and the senior author of the study, which appears in the journal Projections.

    “Critics may be adept at evaluating films, but that doesn’t mean their assessments will accurately predict how much the public will like what they see,” adds co-author Jake Whritner, a graduate of the Cinema Studies Program at NYU’s Tisch School of the Arts and currently part of the Cognitive and Data Science Lab at Rutgers University-Newark.

    Over the past century, filmmakers have sought to control viewers’ attention through different editing and shooting techniques, with the assumption that the audience will respond in the same way.

    However, while neural and attentional processing has been extensively examined, the level of agreement in the appraisal of movies among viewers has not been studied. Similarly, while past research has analyzed the relationship between reviews and box-office success as well as agreement among critics for films, none have explored agreement between critics and the general public.

    To address these questions, the researchers considered more than 200 major motion pictures, taking into account popularity, financial success, and critics’ reviews. They then surveyed over 3,000 participants, asking them to give a rating of how much they liked each of the films in the sample that they had previously seen. The researchers also asked participants to provide demographic information (e.g., age, gender) and whether they consider movie critics’ recommendations in choosing which movies to see.

    Finally, Wallisch and Whritner gathered reviews from 42 publicly accessible critics or rating sites (e.g., IMDb) for each of the films in the sample.

    The results generally showed low levels of correlation in movie preferences among study participants. However, there were some patterns. As the number of jointly seen films increased, so did the correlation of the ratings for those films — at least up to a point. Correlation peaked when participants had between 100 and 120 films in common, dipped as this number continued to increase, and then rose again at around 180 commonly seen films.

    Looking at demographics, the survey showed greater agreement in film ratings among male participants than among female participants, and this difference between genders was statistically significant. However, agreement among both men and women was relatively low. There was also little correlation between movie ratings and age; however, because the overall sample skewed younger, the significance of this result is limited. In general, neither gender nor age had much of an effect on inter-subjective agreement. Overall, the low inter-subjective agreement could account for the vehement disagreements between people about whether a given movie was good: on average, one could expect a 1.25-star difference in ratings on a scale from 0 to 4 stars.
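
    To make the kind of computation described above concrete, here is a minimal, hypothetical sketch in Python: for each pair of raters it correlates ratings over the films both have seen and computes the average star difference on the 0-to-4 scale. The data are synthetic and the code is not the study’s own analysis.

```python
# Illustrative only: synthetic ratings, not the Projections study's data or method.
import numpy as np

rng = np.random.default_rng(0)
n_raters, n_films = 50, 200
ratings = rng.integers(0, 5, size=(n_raters, n_films)).astype(float)  # 0-4 stars
ratings[rng.random((n_raters, n_films)) < 0.6] = np.nan               # films not seen

corrs, star_diffs = [], []
for i in range(n_raters):
    for j in range(i + 1, n_raters):
        joint = ~np.isnan(ratings[i]) & ~np.isnan(ratings[j])   # jointly seen films
        if joint.sum() < 3:
            continue
        a, b = ratings[i, joint], ratings[j, joint]
        if a.std() > 0 and b.std() > 0:
            corrs.append(np.corrcoef(a, b)[0, 1])                # pairwise agreement
        star_diffs.append(np.abs(a - b).mean())                  # average disagreement

print(f"mean pairwise correlation: {np.mean(corrs):.2f}")
print(f"mean absolute rating difference: {np.mean(star_diffs):.2f} stars")
```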

    Turning to correlations with movie critics, the connection between the ratings of critics and any given participant was no better than the average correlation between participants. Even a critic as well regarded as the late Roger Ebert did no better at predicting how well someone would like a movie than a randomly picked participant in the sample. In contrast, critics agreed with each other relatively strongly. In fact, the best predictor of a critic’s response to a film was that of other critics, while the best predictor of a non-critic’s response was the aggregated evaluations of other non-critics, such as those on the Internet Movie Database (IMDb) — but not the aggregated ratings of critics, such as those on Rotten Tomatoes. So it is not the aggregation of ratings per se that improves predictability, but the aggregation of non-critic ratings.

    “Something about being a critic seems to make the recommendations of critics unsuitable for predicting the movie taste of regular people,” the authors conclude. “This study is the first to quantify this in an adequately powered fashion, and it helps to explain why people often perceive critics to be out of touch.

    “There are some people in our sample who are ‘superpredictors’ — they perform as well as the best aggregated non-critic ratings when it comes to predicting what average non-critics will like. Short of these exceptional predictors, if someone seeks recommendations about what to see, their best bet is to either consult sites that aggregate individual judgements, or to find other individuals or critics with similar tastes.”


  4. Study looks at how personalities change under the influence of alcohol

    by Ashley

    From the Association for Psychological Science press release:

    People typically report substantive changes to their personality when they become intoxicated, but observations from outsiders suggest less drastic differences between “sober” and “drunk” personalities, according to research published in Clinical Psychological Science, a journal of the Association for Psychological Science.

    “We were surprised to find such a discrepancy between drinkers’ perceptions of their own alcohol-induced personalities and how observers perceived them,” says psychological scientist Rachel Winograd of the University of Missouri–St. Louis, Missouri Institute of Mental Health. “Participants reported experiencing differences in all factors of the Five Factor Model of personality, but extraversion was the only factor robustly perceived to be different across participants in the alcohol and sober conditions.”

    Winograd and colleagues speculate that this discrepancy may come down to inherent differences in point of view:

    “We believe both the participants and raters were both accurate and inaccurate — the raters reliably reported what was visible to them and the participants experienced internal changes that were real to them but imperceptible to observers,” she explains.

    The idea that we transform into different people when we’re under the influence is a popular one. And systematic differences in an individual’s sober behavior and their drunken behaviors can even inform clinical determinations about whether someone has a drinking problem. But the science on “drunk personality” as a concept is less clear. In Winograd’s previous studies, participants reliably reported that their personality changes when they imbibe, but experimental evidence for this kind of global change was lacking.

    Winograd and colleagues decided to bring the question into the lab, where they could carefully calibrate alcohol consumption and closely monitor individual behavior. They recruited 156 participants, who completed an initial survey gauging their typical alcohol consumption and their perceptions of their own “typical sober” personality and “typical drunk” personality.

    Later, the participants came to the lab in friend groups of 3 or 4, where the researchers administered a baseline breathalyzer test and measured the participants’ height and weight. Over the course of about 15 minutes, each participant consumed beverages — some drank Sprite, while others consumed individually-tailored vodka and Sprite cocktails designed to produce a blood alcohol content of about .09.
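
    The release does not say how the doses were tailored to each participant; as a rough illustration only, a Widmark-style estimate of the kind sketched below shows how body weight and sex translate into an alcohol dose. All constants are standard textbook values, and only the 0.09 target comes from the text; this is not the study’s actual formula.

```python
# Hypothetical sketch of Widmark-style dosing; not the study's actual protocol.
def vodka_ml_for_target_bac(weight_kg: float, sex: str,
                            target_bac: float = 0.09, abv: float = 0.40) -> float:
    r = 0.68 if sex == "m" else 0.55       # Widmark body-water distribution ratio
    ethanol_density = 0.789                # g/mL
    grams_ethanol = target_bac / 100 * weight_kg * 1000 * r   # grams to reach peak BAC
    return grams_ethanol / (abv * ethanol_density)            # mL of 40% vodka

print(f"{vodka_ml_for_target_bac(70, 'm'):.0f} mL of vodka for a 70 kg male")  # ~136 mL
```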

    After a 15-minute absorption period, the friends worked through a series of fun group activities — including discussion questions and logic puzzles — intended to elicit a variety of personality traits and behaviors.

    The participants completed personality measures at two points during the lab session. And outside observers used video recordings to complete standardized assessments of each individual’s personality traits.

    As expected, participants’ ratings indicated change in all five of the major personality factors. After drinking, participants reported lower levels of conscientiousness, openness to experience, and agreeableness, and they reported higher levels of extraversion and emotional stability (the inverse of neuroticism).

    The observers, on the other hand, noted fewer differences across the sober and intoxicated participants’ personality traits. In fact, observer ratings indicated reliable differences in only one personality factor: extraversion. Specifically, participants who had consumed alcohol were rated higher on three facets of extraversion: gregariousness, assertiveness, and levels of activity.

    Given that extraversion is the most outwardly visible personality factor, it makes sense that both parties noted differences in this trait, the researchers argue.

    They acknowledge, however, that they cannot rule out other influences — such as participants’ own expectations of their drunk personality — that may have contributed to the discrepancy in ratings.

    “Of course, we also would love to see these findings replicated outside of the lab — in bars, at parties, and in homes where people actually do their drinking,” says Winograd.

    “Most importantly, we need to see how this work is most relevant in the clinical realm and can be effectively included in interventions to help reduce any negative impact of alcohol on peoples’ lives,” she concludes.


  5. Personality factors are best defense against losing your job to a robot

    May 15, 2017 by Ashley

    From the University of Houston press release:

    Worried that robots will take your job? Researchers say people who are more intelligent and who showed an interest in the arts and sciences during high school are less likely to fall victim to automation.

    Later educational attainment mattered, but researchers said the findings highlight the importance of personality traits, intelligence and vocational interests in determining how well people fare in a changing labor market. The work was published this week in the European Journal of Personality.

    “Robots can’t perform as well as humans when it comes to complex social interactions,” said Rodica Damian, assistant professor of social and personality psychology at the University of Houston and lead author of the study. “Humans also outperform machines when it comes to tasks that require creativity and a high degree of complexity that is not routine. As soon as you require flexibility, the human does better.”

    Researchers used a dataset of 346,660 people from the American Institutes for Research, which tracked a representative sample of Americans over 50 years, looking at personality traits and vocational interests in adolescence, along with intelligence and socioeconomic status. It is the first study to look at how a variety of personality and background factors predict whether a person will select jobs that are more (or less) likely to be automated in the future.

    “We found that regardless of social background, people with higher levels of intelligence, higher levels of maturity and extraversion, higher interests in arts and sciences … tended to select (or be selected) into less computerizable jobs 11 and 50 years later,” they wrote.

    In addition to Damian, the researchers included Marion Spengler of the University of Tuebingen and Brent W. Roberts of the University of Illinois at Urbana-Champaign.

    Damian said the findings suggest traditional education may not be fully equipped to address upcoming changes in the labor market, although she acknowledged the educational system has changed since the research subjects were in school in the 1960s.

    “Perhaps we should consider training personality characteristics that will help prepare people for future jobs,” she said.

    The researchers found that every 15-point increase in IQ predicted a 7 percent drop in the probability of one’s job being computerized, the equivalent of saving 10.19 million people from losing their future careers to computerization if it were extrapolated across the entire U.S. population. Similarly, an increase of one standard deviation in maturity or in scientific interests — equal to an increase of 1 point on a 5-point scale, such as moving from being indifferent to scientific activities to liking them fairly well — across the U.S. population would each be equivalent to 2.9 million people avoiding a job loss to computerization.
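
    As a back-of-the-envelope check of that extrapolation, the arithmetic looks like the sketch below. The workforce figure of roughly 146 million is an assumption, not stated in the release, chosen because it is consistent with the reported 10.19 million.

```python
# Rough check of the extrapolation; the 146 million workforce figure is an assumption.
us_workforce = 146_000_000
drop_per_15_iq_points = 0.07                      # 7% lower computerization probability
print(f"{us_workforce * drop_per_15_iq_points / 1e6:.1f} million jobs")   # ~10.2 million

implied_drop_per_sd = 2_900_000 / us_workforce    # maturity or scientific interests
print(f"~{implied_drop_per_sd:.1%} per one-standard-deviation increase")  # ~2.0%
```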

    While IQ is not easily changed, a solution could be to find effective interventions to increase some personality traits — doing well in social interactions, for example, or being industrious — or interest in activities related to the arts and sciences, Damian said.

    Machine learning and big data will allow the number of tasks that machines can perform better than humans to increase so rapidly that merely increasing educational levels won’t be enough to keep up with job automation, she said. “The edge is in unique human skills.”

    Still, those skills can correlate with more education, and the researchers say an across-the-board increase in U.S. education levels could mean millions fewer jobs at risk. Targeting at-risk groups would yield significant benefits, she said.

    And while skeptics question whether the labor market will be able to absorb millions of higher skilled workers, Damian looks at it differently.

    “By preparing more people, at least more people will have a fighting chance,” she said.


  6. Study looks at how we perceive fictional characters

    May 14, 2017 by Ashley

    From the University at Buffalo press release:

    The Sopranos’ Tony Soprano and Walter White from Breaking Bad rank among recent television drama’s most notorious protagonists, each of questionable morality. So, here’s the question: Do you like them?

    If you answered “yes,” as many people do, a recently published paper by a University at Buffalo researcher suggests why people might feel good about characters who do bad things.

    The research has implications ranging from better understanding our relationship to fictional narratives, to possibly improving the effectiveness of character-based public service campaigns, according to lead author Matthew Grizzard, an assistant professor in UB’s Department of Communication and an expert on the cognitive, emotional and psychobiological effects of media entertainment.

    The results, with co-authors Jialing Huang, Kaitlin Fitzgerald, Changhyun Ahn and Haoran Chu, all UB graduate students, appear in the journal Communication Research.

    Grizzard says the question of why an audience may like or dislike a character has been a key one for media researchers since the 1970s.

    Morality matters. Viewers tend to like the good guys and dislike the bad guys. But Grizzard’s study, which builds on previous research, also suggests that we don’t necessarily need to see behavior to make a distinction between the hero and the villain.

    Grizzard’s team manipulated what characters looked like and measured audience perceptions. The researchers hoped to find out whether simple differences in appearance — for example, black clothing compared to lighter colors — would be enough for viewers to categorize a character as a hero or villain.

    Earlier research, meanwhile, had found that heroes and villains differ in morality but not competence.

    “Villains aren’t just immoral. They’re good at being bad,” according to Grizzard.

    The previous research gave Grizzard’s research team a pattern for determining whether they were activating perceptions of heroes and villains or whether they were simply creating biases unrelated to narrative characters.

    “If our data had come out where the heroic-looking character was both more moral and more competent than the villain, then we probably just created a bias. But because the hero was more moral than the villain but equally competent, we’re more confident that visuals can activate perceptions of heroic and villainous characters,” says Grizzard.

    Beyond an understanding of how visuals can influence perceptions of character, the findings indicate that judgments of characters depend on comparisons audiences make between characters — and the order of introduction plays a role.

    Heroes were judged to be more heroic when they appeared after a villain and villains were judged to be more villainous when they appeared after a hero.

    “What’s happening here is that we’re not making isolated judgements about these characters using some objective standard of morality,” says Grizzard. “We’re constantly making comparisons about these characters and the forces they face.”

    With Walter White, for example, the audience sees the evolution of a character whose ethics progressively spiral downward from season to season, and yet audiences remained loyal to him. That loyalty surprised Breaking Bad creator Vince Gilligan.

    Gilligan said in an interview that at some point he thought the audience would turn against the character. But Grizzard points out that the show’s antagonists concurrently get worse along with White.

    “We see a trend where White is not decreasing by himself. There’s likely to be a constant comparison with other characters,” says Grizzard. “So White is probably benefiting from the comparisons the audience is making.”

    Grizzard says it’s a polarization effect. The hero seems better when compared to the villain’s darkness while the villain seems worse when viewed in the hero’s light.

    But being bad isn’t, well, all bad, at least in the world of fictional narrative. It comes with certain advantages, according to Grizzard.

    “We find that characters who are perceived as villains, like the Severus Snape character from the Harry Potter books and films, get a bigger boost from good actions or apparent altruism than heroes do.”

    These findings could be informative for public service campaigns or telenovelas that try to promote certain behaviors.

    “It shows the importance of how these characters are framed,” he says.


  7. Study suggests perceived knowledge of others’ “true selves” is inherently tied to perceptions of their morality

    May 8, 2017 by Ashley

    From the Texas A&M University press release:

    How can you ever really know someone? Researchers at Texas A&M University found our perceived knowledge of others’ “true selves” is inherently tied to perceptions of their morality.

    “We found evidence consistent with a strong psychological link between morality and identity,” says Andrew Christy, a researcher in Texas A&M’s Department of Psychology who specializes in social and personality psychology.

    For “The Reciprocal Relationship Between Perceptions of Moral Goodness and Knowledge of Others’ True Selves,” published in the journal Social Psychological and Personality Science, Christy and his co-authors hypothesized that people would report knowing the most about others who were portrayed as morally good. A secondary hypothesis was that this effect would work both ways, such that others who were described as being easily knowable would also be perceived as more moral than those described as being unknowable.

    The researchers presented participants with texts describing a person’s morality and competence, then asked them to rate how well they felt they knew the person’s “true self.” In another study, participants read a passage in which the author described their roommate as either readily knowable or almost completely unknowable. Afterwards they answered questions about their impressions of the target, including how moral or immoral they perceived the target to be.

    The researchers found a strong psychological link between morality and identity. “We found that participants reported knowing the most about targets when they were described as moral, compared to targets described in different ways (e.g. competent),” explains Christy. “Our results also suggested that this is a bidirectional relationship — participants perceived the knowable target as more moral than the unknowable target.”

    Christy says these findings show that most people operate on the default assumption that others are fundamentally good. “Thus when someone is kind to us or otherwise provides evidence of goodness, this effectively confirms our pre-existing assumptions about them and leads us to feel that we really do know them.”

    So when it comes to friendships and other relationships, he concludes, “people will feel most familiar with others who make morally good first impressions.”


  8. New study suggests positive side to worrying

    May 2, 2017 by Ashley

    From the University of California – Riverside press release:

    Worry — it does a body good. And the mind as well. A new paper by Kate Sweeny, psychology professor at the University of California, Riverside, argues there’s an upside to worrying.

    “Despite its negative reputation, not all worry is destructive or even futile,” Sweeny said. “It has motivational benefits, and it acts as an emotional buffer.”

    In her latest article, “The Surprising Upsides of Worry,” published in Social and Personality Psychology Compass, Sweeny breaks down the role of worry in motivating preventive and protective behavior, and how it leads people to avoid unpleasant events. Sweeny finds worry is associated with recovery from traumatic events, adaptive preparation and planning, recovery from depression, and partaking in activities that promote health and prevent illness. Furthermore, people who report greater worry may perform better in school or at the workplace, seek more information in response to stressful events, and engage in more successful problem solving.

    Worry as a Motivator

    The motivational power of worry has been studied and linked to preventive health behavior, like seatbelt use. In a nationally representative sample of Americans, feelings of worry about skin cancer predicted sunscreen use. And participants who reported higher levels of cancer-related worries also conducted breast self-examinations, underwent regular mammograms, and sought clinical breast examinations.

    “Interestingly enough, there are examples of a more nuanced relationship between worry and preventive behavior as well,” Sweeny said. “Women who reported moderate amounts of worry, compared to women reporting relatively low or high levels of worry, are more likely to get screened for cancer. It seems that both too much and too little worry can interfere with motivation, but the right amount of worry can motivate without paralyzing.”

    In the paper, Sweeny noted three explanations for worry’s motivating effects.

    1. Worry serves as a cue that the situation is serious and requires action. People use their emotions as a source of information when making judgements and decisions.

    2. Worrying about a stressor keeps the stressor at the front of one’s mind and prompts people toward action.

    3. The unpleasant feeling of worry motivates people to find ways to reduce their worry.

    “Even in circumstances when efforts to prevent undesirable outcomes are futile, worry can motivate proactive efforts to assemble a ready-made set of responses in the case of bad news,” Sweeny said. “In this instance, worrying pays off because one is actively thinking of a ‘plan B.'”

    Worry as a Buffer

    Worry can also benefit one’s emotional state by serving as an emotional benchmark. Compared to the state of worry, any other feeling is pleasurable. In other words, the pleasure that comes from a good experience is heightened if it is preceded by a bad experience.

    “If people’s feelings of worry over a future outcome are sufficiently intense and unpleasant, their emotional response to the outcome they ultimately experience will seem more pleasurable in comparison to their previous, worried state,” Sweeny said.

    Research on bracing for the worst provides indirect evidence for the role of worry as an emotional buffer, according to Sweeny. As people brace for the worst, they embrace a pessimistic outlook to mitigate potential disappointment, boosting excitement if the news is good. Therefore, both bracing and worrying have an emotional payoff following the moment of truth.

    “Extreme levels of worry are harmful to one’s health. I do not intend to advocate for excessive worrying. Instead, I hope to provide reassurance to the helpless worrier — planning and preventive action is not a bad thing,” Sweeny said. “Worrying the right amount is far better than not worrying at all.”


  9. How dogs interact with others plays a role in decision-making

    May 1, 2017 by Ashley

    From the Canisius College press release:

    Researchers at Canisius College found that the relationship between two dogs living in the same household may impact how much influence they have on each other’s behavior. Dogs who showed little to no aggression towards their housemates were more likely to automatically follow each other than dogs who viewed their canine housemates as rivals. The study results are reported in the latest issue of Animal Cognition.

    To conduct this study, Christy Hoffman, Ph.D., and Malini Suchak, Ph.D., assistant professors of Animal Behavior, Ecology, and Conservation, traveled to 37 multi-dog households to test the dogs in their own homes. This was an unusual approach, since most studies test dogs in the laboratory and often pair them with other dogs they don’t know. “We really wanted to look at the impact of the relationship between the dogs on their behavior, and doing that in a setting natural to the dogs, with dogs they already know, is really important,” Suchak says.

    To classify how competitive household dogs were with each other, the owners were asked to fill out a survey known as the Canine Behavioral Assessment and Research Questionnaire (C-BARQ). Dogs that were low in rivalry never or rarely displayed aggressive behavior toward the other dog. Dogs that were high in rivalry displayed some degree of aggression around valuable resources, suggesting they have a more competitive nature. After their owners completed the C-BARQ, the dogs participated in a simple task. A research assistant placed two plates containing food in front of both dogs. One dog was allowed to approach the plates and eat the food from one plate before being walked out of the room. At that point, the second dog was allowed to make a choice. If the second dog followed the first dog, he arrived at an empty plate. If he didn’t follow the first dog, he went straight to the plate that still contained food.

    Dogs that were low in rivalry were more likely to follow the first dog and frequently ended up at the empty plate. What surprised the researchers was that low rivalry dogs only blindly followed the demonstrator when allowed to make their choice immediately. “Low and high rivalry dogs only differed in the choices they made when there was no delay,” Hoffman says. “When they had to wait 5 seconds before making their choice, all dogs tended to go directly to the full plate.”

    Suchak adds, “This suggests that the low rivalry dogs may have been automatically following their housemates. When we forced the dogs to wait, it was as if the low rivalry dogs actually took the time to think about the situation, and they went straight for the food.”

    The researchers also tested the dogs in a condition where a human removed the food from one plate before the dog made a choice. Interestingly, low rivalry dogs were more likely to follow the human demonstrator when there was no delay, a finding that paralleled what happened with dog demonstrators. Hoffman suggests this may have to do with the personality of low rivalry dogs, “Since the tendency of the low rivalry dogs to follow was seen when the demonstrator was both another dog and a human, competitiveness may be a characteristic of the individual that extends beyond their relationship with other dogs.”

    This means that if owners have a high rivalry dog “that dog may be more likely to think for himself, and less likely to blindly follow, than a dog that is less competitive,” says Hoffman. “On the whole, our findings show there is variation in the ways dogs make decisions and that how dogs interact with others plays a big role in how they respond under conditions that require quick thinking.”


  10. Narcissism and social networking

    April 23, 2017 by Ashley

    From the University of Würzburg press release:

    Social media sites such as Facebook and Twitter have become an important part of the lives of many people worldwide. Around two billion users were active on Facebook at the end of 2016; 500 million regularly post photos on Instagram and more than 300 million communicate via Twitter.

    Various studies conducted over the past years have investigated to what extent the use of social media is associated with narcissistic tendencies — with contradictory results. Some studies supported a positive relationship between narcissism and the use of Facebook, Twitter and the like, whereas others found only weak or even negative effects.

    Most comprehensive meta-analysis so far

    Fresh findings are now presented by scientists from the Leibniz Institute for Educational Trajectories in Bamberg and the University of Würzburg. They were able to show that there is a weak to moderate link between a certain form of narcissism and social media activity. When taking a differentiated look at specific forms of behaviour or at the participants’ cultural background, the effect is even more pronounced in some cases.

    The study is managed by Professor Markus Appel, who holds the Chair of Media Communication at the University of Würzburg, and Dr. Timo Gnambs, head of the Educational Measurement section at the Leibniz Institute for Educational Trajectories, Bamberg. For their meta-analysis, the scientists summarized the results of 57 studies comprising more than 25,000 participants in total. They have now published their findings in the Journal of Personality.
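
    For readers unfamiliar with the method, pooling correlations across studies typically works along the lines of the minimal sketch below (Fisher z-transform with inverse-variance weights). The effect sizes shown are made up, and this is a generic illustration, not a reconstruction of Gnambs and Appel’s actual analysis.

```python
# Generic fixed-effect pooling of correlations; hypothetical numbers, not the 57-study data.
import numpy as np

r = np.array([0.10, 0.25, 0.05, 0.30, 0.15])   # per-study correlations (made up)
n = np.array([200, 150, 500, 120, 300])        # per-study sample sizes (made up)

z = np.arctanh(r)                  # Fisher z-transform of each correlation
w = n - 3                          # inverse of the sampling variance of z
z_pooled = (w * z).sum() / w.sum()
print(f"pooled correlation: {np.tanh(z_pooled):.2f}")
```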

    Forms of narcissism

    They think of themselves as being exceptionally talented, remarkable and successful. They love to present themselves to other people and seek approval from them: This is how psychologists describe the typical behaviour of people commonly referred to as narcissists. “Accordingly, social networks such as Facebook are believed to be an ideal platform for these people,” says Markus Appel.

    The network gives them easy access to a large audience and allows them to selectively post information for the purpose of self-promotion. Moreover, they can meticulously cultivate their image. Researchers have therefore long suspected social networking sites of being an ideal breeding ground for narcissists.

    Three hypotheses

    The recently published meta-analysis shows that the situation does not seem to be as bad as feared. The scientists examined the truth behind three hypotheses. Firstly, the assumption that grandiose narcissists frequent social networking sites more often than representatives of another form of narcissism, the “vulnerable narcissists.” Vulnerable narcissism is associated with insecurity, fragile self-esteem, and social withdrawal.

    Secondly, they assumed that the link between narcissism and the number of friends and certain self-promoting activities is much more pronounced compared to other activities possible on social networking sites.

    Thirdly, the researchers hypothesized that the link between narcissism and the social networking behaviour is subject to cultural influences. In collectivistic cultures where the focus is on the community rather than the individual or where rigid roles prevail, social media give narcissists the opportunity to escape from prevalent constraints and present themselves in a way that would be impossible in public.

    The results

    The meta-analysis of the 57 studies did in fact confirm the scientists’ assumptions. Grandiose narcissists are encountered more frequently in social networks than vulnerable narcissists. Moreover, a link was found between traits associated with narcissism and both the number of friends a person has and the number of photos they upload. The gender and age of users are not relevant in this respect. Typical narcissists spend more time in social networks than average users, and they exhibit specific behavioural patterns.

    A mixed result was found for the influence of the cultural background on the usage behaviour. “In countries where distinct social hierarchies and unequal power division are generally more accepted such as India or Malaysia, there is a stronger correlation between narcissism and the behaviour in social media than in countries like Austria or the USA,” says Markus Appel.

    However, the analysis of the data from 16 countries on four continents does not show a comparable influence of the “individualism” factor.

    Generation Me

    So is the frequently cited “Generation Me” a product of social media such as Facebook and Instagram because these sites promote narcissistic tendencies? Or do they simply provide the ideal environment for narcissists? The two scientists were not able to answer these questions definitively.

    “We suggest that the link between narcissism and behaviour in social media follows the pattern of a self-reinforcing spiral,” Markus Appel says. An individual disposition shapes social media activities, and these activities in turn reinforce the disposition. To resolve this question definitively, more research will have to be conducted over longer periods.