  1. Study suggests music is a universal language

    February 17, 2018 by Ashley

    From the Cell Press press release:

    Every culture enjoys music and song, and those songs serve many different purposes: accompanying a dance, soothing an infant, or expressing love. Now, after analyzing recordings from all around the world, researchers reporting in Current Biology on January 25 show that vocal songs sharing one of those many functions tend to sound similar to one another, no matter which culture they come from. As a result, people listening to those songs in any one of 60 countries could make accurate inferences about them, even after hearing only a quick 14-second sampling.

    The findings are consistent with the existence of universal links between form and function in vocal music, the researchers say.

    “Despite the staggering diversity of music influenced by countless cultures and readily available to the modern listener, our shared human nature may underlie basic musical structures that transcend cultural differences,” says Samuel Mehr (@samuelmehr) at Harvard University.

    “We show that our shared psychology produces fundamental patterns in song that transcend our profound cultural differences,” adds co-first author of the study Manvir Singh, also at Harvard. “This suggests that our emotional and behavioral responses to aesthetic stimuli are remarkably similar across widely diverging populations.”

    Across the animal kingdom, there are links between form and function in vocalization. For instance, when a lion roars or an eagle screeches, it sounds hostile to naive human listeners. But it wasn’t clear whether the same concept held in human song.

    Many people believe that music is mostly shaped by culture, leading them to question the relation between form and function in music, Singh says. “We wanted to find out if that was the case or not.”

    In their first experiment, Mehr and Singh’s team asked 750 internet users in 60 countries to listen to brief, 14-second excerpts of songs. The songs were selected pseudo-randomly from 86 predominantly small-scale societies, including hunter-gatherers, pastoralists, and subsistence farmers. Those songs also spanned a wide array of geographic areas designed to reflect a broad sampling of human cultures.

    After listening to each excerpt, participants answered six questions indicating their perceptions of the function of each song on a six-point scale. Those questions evaluated the degree to which listeners believed that each song was used (1) for dancing, (2) to soothe a baby, (3) to heal illness, (4) to express love for another person, (5) to mourn the dead, and (6) to tell a story. (In fact, none of the songs were used in mourning or to tell a story. Those answers were included to discourage listeners from an assumption that only four song types were actually present.)

    In total, participants listened to more than 26,000 excerpts and provided more than 150,000 ratings (six per song). Despite participants’ unfamiliarity with the societies represented, the random sampling of excerpts, the excerpts’ very short duration, and the enormous diversity of this music, the ratings reflected accurate and cross-culturally reliable inferences about song functions on the basis of song forms alone.
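
    To make that scoring concrete, here is a minimal sketch of one way such ratings could be aggregated, counting a song as correctly inferred when its true function receives the highest average rating; the data, song names, and scoring rule are illustrative assumptions, not the paper’s actual analysis.

    ```python
    # Hypothetical sketch: score whether listeners' ratings pick out each
    # song's true function. Data, names, and scoring rule are illustrative.
    from collections import defaultdict
    from statistics import mean

    # One row per rating: (song_id, true_function, {function: 1-6 score}).
    ratings = [
        ("song_01", "dance",   {"dance": 6, "lullaby": 1, "healing": 2, "love": 3}),
        ("song_01", "dance",   {"dance": 5, "lullaby": 2, "healing": 2, "love": 2}),
        ("song_02", "lullaby", {"dance": 1, "lullaby": 6, "healing": 3, "love": 4}),
    ]

    # Collect each function's scores per song across listeners.
    per_song = defaultdict(lambda: defaultdict(list))
    truth = {}
    for song, true_fn, scores in ratings:
        truth[song] = true_fn
        for fn, score in scores.items():
            per_song[song][fn].append(score)

    # Count a song as correctly inferred when its true function
    # receives the highest mean rating.
    correct = 0
    for song, fn_scores in per_song.items():
        means = {fn: mean(v) for fn, v in fn_scores.items()}
        if max(means, key=means.get) == truth[song]:
            correct += 1

    print(f"correctly inferred: {correct} of {len(per_song)} songs")
    ```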

    In a second, follow-up experiment designed to explore possible ways in which people made those determinations about song function, the researchers asked 1,000 internet users in the United States and India to rate the excerpts for three “contextual” features: (1) number of singers, (2) gender of singer(s), and (3) number of instruments. They also rated them for seven subjective musical features: (1) melodic complexity, (2) rhythmic complexity, (3) tempo, (4) steady beat, (5) arousal, (6) valence, and (7) pleasantness.

    An analysis of those data showed that there was some relationship between those various features and song function. But it wasn’t enough to explain the way people were able to so reliably detect a song’s function.

    Mehr and Singh say that one of the most intriguing findings relates to the relationship between lullabies and dance songs. “Not only were users best at identifying songs used for those functions, but their musical features seem to oppose each other in many ways,” Mehr says. Dance songs were generally faster, rhythmically and melodically complex, and perceived by participants as “happier” and “more exciting”; lullabies, on the other hand, were slower, rhythmically and melodically simple, and perceived as “sadder” and “less exciting.”

    The researchers say they are now conducting these tests in listeners who live in isolated, small-scale societies and have never heard music other than that of their own cultures. They are also further analyzing the music of many cultures to try to figure out how their particular features relate to function and whether those features themselves might be universal.


  2. Study suggests virtual reality makes journalism immersive, realism makes it credible

    December 17, 2017 by Ashley

    From the Penn State press release:

    Virtual reality technology may help journalists pull an audience into their stories, but they should avoid being too flashy, or their credibility could suffer, according to a team of researchers.

    In a study, participants indicated that stories experienced in virtual reality — VR — significantly outperformed text-based articles in several categories, such as giving them a sense of presence, or the feeling of being there, and increasing their empathy for the story’s characters, said S. Shyam Sundar, distinguished professor of communications and co-director of the Media Effects Research Laboratory. Using a cardboard VR viewer for experiencing 360-degree videos was better than interacting with the same videos on a computer screen, he added.

    “VR stories provide a better sense of being right in the midst of the story than text with pictures and even 360-degree video on a computer screen,” said Sundar. “This is remarkable given that we used two stories from the New York Times Magazine, which were high quality and rich in imagery even in the text version.”

    Although virtual reality outperformed text and video, the researchers cautioned that relying on some of the flashier design elements of virtual reality may affect credibility and cause the audience to have less trust in the story. They discovered that evoking a higher sense of “being there” was associated with lower trustworthiness ratings of the New York Times.

    “What really makes people trust VR more is that it creates a greater sense of realism compared to text and that creates the trustworthiness,” said Sundar. “But, if it doesn’t give that sense of realism, it can affect credibility. If developers try to gamify it or make it more fantasy-like, for example, people may begin to wonder about the credibility of what they’re seeing.”

    That said, the immersive quality of virtual reality and 360-degree video may make the content more shareable, according to the researchers, who report their findings in the current issue of Cyberpsychology, Behavior, and Social Networking.

    “Virtual reality is often called an empathy machine,” said Sundar. “And, consistent with that thought, we did find that participants in both the VR and 360-degree video conditions were more empathetic toward the story characters than their counterparts in the text condition and they also reported higher intention to share the story with others.”

    Journalists on a tighter budget may consider using 360-degree videos. These videos, which allow users to rotate their view, are more immersive than the text-based story. However, these videos were unable to match virtual reality’s ability to make the audience feel like they are in the story, said Sundar.

    “On many things 360-degree video on a computer does as well as viewing it on a VR viewer, so you might not need to go through the trouble of putting together the cardboard viewer and slipping in the phone to experience it,” he added. “But, for being transported to the scene of the action, the VR viewer beats it.”

    The researchers noted that VR and 360-degree video demand more attention, which can hurt readers’ recall of story details.

    “We found some evidence to suggest that memory was affected by all the interaction with immersive journalism, but more research is needed to fully understand this effect,” said Sundar.

    The researchers recruited 129 participants and asked them either to read two stories in a magazine, watch the stories as 360-degree video, or view the stories through a cardboard virtual-reality viewer provided by the newspaper company.

    They asked volunteers to read two stories with different emotional intensity. The more emotional story — “The Displaced” — focused on the lives of three refugees. The other story — “The Click Effect” — examined marine biologists’ efforts to understand the vocalizations of dolphins. In general, the effects of immersive journalism were more pronounced with the less emotional story.

    The virtual reality stories were accessed through the newspaper’s mobile app.


  3. Study suggests people depend more on their right ear when trying to hear in a demanding environment

    December 14, 2017 by Ashley

    From the Acoustical Society of America press release:

    Listening is a complicated task. It requires sensitive hearing and the ability to process information into cohesive meaning. Add everyday background noise and constant interruptions by other people, and the ability to comprehend what is heard becomes that much more difficult.

    Audiology researchers at Auburn University in Alabama have found that in such demanding environments, both children and adults depend more on their right ear for processing and retaining what they hear.

    Danielle Sacchinelli will present this research with her colleagues at the 174th Meeting of the Acoustical Society of America, which will be held in New Orleans, Louisiana, Dec. 4-8.

    “The more we know about listening in demanding environments, and listening effort in general, the better diagnostic tools, auditory management (including hearing aids) and auditory training will become,” Sacchinelli said.

    The research team’s work is based on dichotic listening tests, used to diagnose, among other conditions, auditory processing disorders in which the brain has difficulty processing what is heard.

    In a standard dichotic test, listeners receive different auditory inputs delivered to each ear simultaneously. The items are usually sentences (e.g., “She wore the red dress”), words or digits. Listeners either pay attention to the items delivered in one ear while dismissing the words in the other (i.e., separation), or are required to repeat all words heard (i.e., integration).

    According to the researchers, children understand and remember what is being said much better when they listen with their right ear.

    Sounds entering the right ear are processed by the left side of the brain, which controls speech, language development, and portions of memory. Each ear hears separate pieces of information, which are then combined during processing throughout the auditory system.

    However, young children’s auditory systems cannot sort and separate the simultaneous information from both ears. As a result, they rely heavily on their right ear to capture sounds and language because the pathway is more efficient.

    What is less understood is whether this right-ear dominance is maintained through adulthood. To find out, Sacchinelli’s research team asked 41 participants ages 19-28 to complete both dichotic separation and integration listening tasks.

    With each subsequent test, the researchers increased the number of items by one. They found no significant differences between left- and right-ear performance at or below an individual’s simple memory capacity. However, when the item lists went above an individual’s memory span, participants’ performance improved by an average of 8 percent (for some individuals, by up to 40 percent) when they focused on their right ear.
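
    As an illustration of that comparison, the sketch below computes a right-ear advantage from per-ear accuracy scores split at a listener’s memory span; the numbers are invented, not the study’s data.

    ```python
    # Illustrative right-ear-advantage calculation; the accuracy scores
    # below are invented, not the study's data.
    from statistics import mean

    # Percent of items correctly repeated, keyed by attended ear,
    # split at each participant's simple memory span.
    below_span = {"left": [92, 90, 94], "right": [93, 91, 95]}
    above_span = {"left": [61, 55, 70], "right": [72, 64, 81]}

    def ear_advantage(scores):
        """Right-ear mean minus left-ear mean, in percentage points."""
        return mean(scores["right"]) - mean(scores["left"])

    print(f"below span: {ear_advantage(below_span):+.1f} points")  # near zero
    print(f"above span: {ear_advantage(above_span):+.1f} points")  # right-ear gain
    ```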

    “Conventional research shows that right-ear advantage diminishes around age 13, but our results indicate this is related to the demand of the task. Traditional tests include four to six pieces of information,” said Aurora Weaver, assistant professor at Auburn University and member of the research team. “As we age, we have better control of our attention for processing information as a result of maturation and our experience.”

    In essence, ear differences in processing ability do not show up on tests using only four items, because our auditory system can handle that much information.

    “Cognitive skills, of course, are subject to decline with advance aging, disease, or trauma,” Weaver said. “Therefore, we need to better understand the impact of cognitive demands on listening.”


  4. Study suggests nodding raises likability and approachability

    December 8, 2017 by Ashley

    From the Hokkaido University press release:

    In many countries, nodding is a communicative signal that means approval, and head shaking is a gesture of denial. Hokkaido University Associate Professor Jun-ichiro Kawahara and Yamagata University Associate Professor Takayuki Osugi previously demonstrated that the bowing motion of computer-generated, three-dimensional figures enhanced their perceived attractiveness. In their latest research, the team conducted experiments to measure how simple nodding and head shaking affect perceived trait impressions.

    Short video clips of computer-generated figures nodding, shaking their head or staying motionless were shown to 49 Japanese men and women aged 18 years or older, who then rated the figures’ attractiveness, likability and approachability on a scale of 0 to 100.

    The researchers found that the likability and approachability of the nodding figures were about 30 percent and 40 percent higher, respectively, than those of figures shaking their heads or staying motionless. The results were similar for both male and female observers. The head-shaking motion did not influence the ratings for likability and approachability. “Our study also demonstrated that nodding primarily increased likability attributable to personality traits, rather than to physical appearance,” Kawahara explained.
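
    As a small worked example, a figure like “about 30 percent higher” can be derived from mean ratings on the 0 to 100 scale; the values below are made up purely to show the arithmetic.

    ```python
    # Made-up mean ratings on the 0-100 scale, showing how a
    # "30 percent higher" figure can be computed against a baseline.
    mean_ratings = {"nodding": 65.0, "head_shaking": 49.5, "motionless": 50.0}

    baseline = mean_ratings["motionless"]
    for motion, score in mean_ratings.items():
        pct = 100 * (score - baseline) / baseline
        print(f"{motion:13s} {score:5.1f}  ({pct:+.0f}% vs. motionless)")
    ```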

    The study provides a useful empirical contribution to this field, as it is the first to show that merely observing another person’s subtle head motions produces perceived positive attitudes. Their findings will likely be helpful in providing instructions about manners and hospitality, as well as in the evaluation of web-based avatars and humanoid robots. Kawahara emphasizes, however, “Generalizing these results requires a degree of caution because computer-generated female faces were used to manipulate head motions in our experiments. Further study involving male figures, real faces and observers from different cultural backgrounds, is needed to apply these findings to real-world situations.”


  5. Study suggests microblogging may help reduce negative emotions for people with social anxiety

    December 5, 2017 by Ashley

    From the Society for Consumer Psychology press release:

    Have you ever wanted to tell someone about a tough day at work or scary medical news, but felt nervous about calling a friend to share what’s going on?

    Findings from a new study suggest that people who feel apprehensive about one-on-one interactions are taking advantage of a new form of communication that may help regulate emotions during times of need: online social networks. The study is available online in the Journal of Consumer Psychology.

    “When people feel badly, they have a need to reach out to others because this can help reduce negative emotions and restore a sense of well-being,” says Eva Buechel, a professor in the business school at the University of South Carolina. “But talking to someone face-to-face or on the phone might feel daunting because people may worry that they are bothering them. Sharing a status update on Facebook or tweet on Twitter allows people to reach out to a large audience in a more undirected manner.”

    Sharing short messages with an audience on a social network, called microblogging, allows people to reach out without imposing unwanted communication on someone who might feel obligated to respond. Responses on online social networks are more voluntary. To test whether people are more likely to microblog when they feel socially apprehensive, Buechel asked participants in one group to write about a time when they had no one to talk to at a party, while the control group wrote about office products.

    Then she asked the participants who had an online social network account to log in and spend two minutes on their preferred social network. When the time ended, she asked people if they had microblogged. The results showed that those who had been led to feel socially apprehensive were more likely to microblog.
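
    A comparison like this comes down to contrasting microblogging rates between the two conditions. Below is a minimal sketch using a chi-square test on a 2×2 table; the counts are hypothetical stand-ins, not the study’s data.

    ```python
    # Sketch of a two-group comparison of microblogging rates using a
    # chi-square test; the counts are hypothetical stand-ins.
    from scipy.stats import chi2_contingency

    #          microblogged  did not
    table = [[34, 66],   # socially apprehensive condition
             [21, 79]]   # control (wrote about office products)

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
    ```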

    To explore who is more likely to microblog, Buechel conducted another experiment in which one group of participants watched a clip from the movie “Silence of the Lambs,” while the control group watched clips of pictures from space. Then they answered questions about how likely they were to express themselves in three different forms of communication: microblogging, in person or direct message (a private online message to an individual). Finally, she asked people to answer a series of questions that measured their level of social anxiety in a variety of situations.

    Buechel discovered that people who were higher on the social apprehension scale were more likely to microblog after they had experienced negative emotions (as a result of watching the “Silence of The Lambs” clip). People who were low on the social apprehension scale, however, were more interested in sharing face-to-face or via direct message after watching the scary clip.

    “There is a lot of research showing that sharing online is less ideal than having communication in person, but these social networks could be an important communication channel for certain individuals who would otherwise stay isolated,” she says.

    She acknowledges that there is a danger for those who start to rely on social media as their only form of communication, but when used wisely, microblogging can be a valuable means of buffering negative emotions through social interaction.


  6. Study suggests major life events shared on social media revive dormant connections

    by Ashley

    From the University of Notre Dame press release:

    Online social networking has revolutionized the way people communicate and interact with one another, despite idiosyncrasies we all love to hate — think top-10 lists of the most annoying people and habits on social media.

    However, there are specific advantages to using social media, beyond the simple joys — and occasional annoyances — of reconnecting and gossiping with old friends about babies, birthdays and baptisms.

    New research from the University of Notre Dame’s Mendoza College of Business examines the impact of major life events, such as getting married or graduating from college, on social network evolution, which, the study shows, has important implications for business practices, such as in marketing.

    “Who Cares About Your Big Day? Impact of Life Events on Dynamics of Social Networks,” forthcoming in Decision Sciences by Hong Guo, associate professor of business analytics, and Sarv Devaraj, professor of business, (along with Arati Srinivasan of Providence College), shows that major life events not only get more social media attention overall, but also bring long dormant connections back into social interaction.

    The researchers specifically focus on two key characteristics of individuals’ social networks: indegree of ties and relational embeddedness. Indegree is the number of ties directed to an individual. Those with high indegree centrality are assumed to be the most popular, prestigious and powerful people in a network due to the many connections that they have with others.

    “We find that the indegree of ties increases significantly following a major life event, and that this impact is stronger for more active users in the network,” Guo says. “Interestingly, we find that the broadcast of major life events helps to revive dormant ties as reflected by a decrease in embeddedness following a life event.”

    Relational embeddedness is the extent to which a user communicates with only a subset of partners. Social networking sites allow users to manage a larger network of weak ties and at the same time provide a mechanism for the very rapid dissemination of information pertaining to important life events such as engagements, weddings or births.
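
    For readers who want these measures made concrete, here is a brief sketch using networkx; the toy message network and the embeddedness proxy are illustrative assumptions, not the paper’s exact formulas.

    ```python
    # Toy sketch of the two network measures using networkx. The
    # embeddedness proxy here (share of outgoing messages sent to the
    # single most frequent partner) is one simple operationalization,
    # not the paper's formula.
    import networkx as nx

    G = nx.DiGraph()
    # Directed edges weighted by message count: (sender, recipient, count).
    G.add_weighted_edges_from([
        ("amy", "bob", 12), ("amy", "cat", 2), ("amy", "dan", 1),
        ("bob", "amy", 8),  ("cat", "amy", 3), ("dan", "amy", 1),
    ])

    # Indegree: number of ties directed to an individual.
    print("indegree of amy:", G.in_degree("amy"))

    def embeddedness(graph, user):
        """Fraction of a user's outgoing messages going to their most
        frequent partner; higher means more concentrated communication."""
        weights = [d["weight"] for _, _, d in graph.out_edges(user, data=True)]
        return max(weights) / sum(weights) if weights else 0.0

    print("embeddedness of amy:", round(embeddedness(G, "amy"), 2))
    ```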

    “We show that major events provide an opportunity for users to revive communication with their dormant ties while simultaneously eliciting responses or communication from a user’s passive or weak ties,” Guo says. “Increased communication with weak ties thereby reduces the extent of embeddedness. We also find that one-time life events, such as weddings, have a greater impact than recurring life events like birthdays on the evolution of individuals’ social networks.”

    So why does this matter outside of our social media circles?

    “Knowing this, advertisers may better target their ads to major life events. For example, a travel agent marketing a honeymoon package can target a user who has shared that they just got married,” Guo says. “From the social networking sites’ perspective, various design features may be set up to enable and entice users to better share their life events, like how Facebook helps friends promote birthdays.”


  7. Study suggests Twitter can reveal our shared mood

    by Ashley

    From the University of Bristol press release:

    In the largest study of its kind, researchers from the University of Bristol have analysed mood indicators in text from 800 million anonymous messages posted on Twitter. These tweets were found to reflect strong patterns of positive and negative moods over the 24-hour day.

    Circadian rhythms, widely referred to as the ‘body clock’, allow people’s bodies to predict their needs over the dark and light periods of the day. Most of this circadian activity is regulated by a small region in the hypothalamus of the brain called the suprachiasmatic nucleus, which is particularly sensitive to light changes at dawn and dusk, and sends signals through nerves and hormones to every tissue in the body.

    The research team looked at the use of words relating to positive and negative emotions, sadness, anger, and fatigue on Twitter over the course of four years. The public expressions of affect and fatigue were linked to the time they appeared on the social platform to reveal changes across the 24-hour day. Whilst previous studies have shown a circadian variation for positive and negative emotions, the current study was able to differentiate specific aspects of anger, sadness, and fatigue.
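
    In spirit, an analysis like this bins posts by hour of day and counts occurrences of words from predefined emotion lists, as in the toy sketch below; the word lists, timestamps, and tweets are invented, and the study’s actual method is more sophisticated.

    ```python
    # Toy sketch: bin tweets by hour of day and count occurrences of
    # words from predefined emotion lists. Word lists, timestamps, and
    # tweets are invented; the study's method is more sophisticated.
    from collections import Counter
    from datetime import datetime

    POSITIVE = {"happy", "great", "love"}
    FATIGUE = {"tired", "exhausted", "sleepy"}

    tweets = [
        ("2016-03-01 07:10", "so tired this morning"),
        ("2016-03-01 12:30", "great lunch, love this place"),
        ("2016-03-01 23:50", "exhausted and sleepy"),
    ]

    pos_by_hour, fat_by_hour = Counter(), Counter()
    for stamp, text in tweets:
        hour = datetime.strptime(stamp, "%Y-%m-%d %H:%M").hour
        words = set(text.lower().split())
        pos_by_hour[hour] += len(words & POSITIVE)
        fat_by_hour[hour] += len(words & FATIGUE)

    for hour in sorted(set(pos_by_hour) | set(fat_by_hour)):
        print(f"{hour:02d}:00  positive={pos_by_hour[hour]}  fatigue={fat_by_hour[hour]}")
    ```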

    Lead author and machine learning researcher Dr Fabon Dzogang, working with neuroscientist and current British Neuroscience Association President Professor Stafford Lightman from Bristol Medical School: THS, and Nello Cristianini, Professor of Artificial Intelligence in the Department of Engineering Mathematics, found distinct patterns of positive emotions and sadness between the weekends and the weekdays, and evidence of variation of these patterns across the seasons.

    Dr Fabon Dzogang, research associate in the Department of Computer Science, said: “Our research revealed strong circadian patterns for both positive and negative moods. The profiles of anger and fatigue were found remarkably stable across the seasons or between the weekdays/weekend. The patterns that our research revealed for the positive emotions and sadness showed more variability in response to these changing conditions, and higher levels of interaction with the onset of sunlight exposure. These techniques that we demonstrated on the social media provide valuable tools for the study of our emotions, and for the understanding of their interaction within the circadian rhythm.”

    Stafford Lightman, Professor of Medicine and co-author, added: “Since many mental health disorders are affected by circadian rhythms, we hope that this study will encourage others to use social media to help in our understanding of the brain and mental health disorders.”


  8. Study suggests babies understand when words are related

    December 1, 2017 by Ashley

    From the Duke University press release:

    The meaning behind infants’ screeches, squeals and wails may frustrate and confound sleep-deprived new parents. But at an age when babies cannot yet speak to us in words, they are already avid students of language.

    “Even though there aren’t many overt signals of language knowledge in babies, language is definitely developing furiously under the surface,” said Elika Bergelson, assistant professor of psychology and neuroscience at Duke University.

    Bergelson is the author of a surprising 2012 study showing that six- to nine-month-olds already have a basic understanding of words for food and body parts. In a new report, her team used eye-tracking software to show that babies also recognize that the meanings of some words, like car and stroller, are more alike than others, like car and juice.

    By analyzing home recordings, the team found that babies’ word knowledge correlated with the proportion of time they heard people talking about objects in their immediate surroundings.

    “Even in the very early stages of comprehension, babies seem to know something about how words relate to each other,” Bergelson said. “And already by six months, measurable aspects of their home environment predict how much of this early level of knowledge they have. There are clear follow-ups for potential intervention work with children who might be at-risk for language delays or deficits.”

    The study appears the week of Nov. 20 in the Proceedings of the National Academy of Sciences.

    To gauge word comprehension, Bergelson invited babies and their caregivers into a lab equipped with a computer screen and little else that might distract an infant. The babies were shown pairs of images that were related, like a foot and a hand, or unrelated, like a foot and a carton of milk. For each pair, the caregiver (who couldn’t see the screen) was prompted to name one of the images while an eye-tracking device followed the baby’s gaze.

    Bergelson found that babies spent more time looking at the image that was named when the two images were unrelated than when they were related.
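
    That comparison can be pictured as a share-of-looking-time calculation per trial, averaged within the related and unrelated conditions, as in the sketch below; the trial data are invented, not the study’s.

    ```python
    # Invented trial data: seconds spent looking at the named (target)
    # image versus the other image, by pair type.
    from statistics import mean

    trials = [
        {"pair": "unrelated", "target_s": 3.1, "other_s": 1.2},
        {"pair": "unrelated", "target_s": 2.8, "other_s": 1.5},
        {"pair": "related",   "target_s": 2.0, "other_s": 1.8},
        {"pair": "related",   "target_s": 2.2, "other_s": 2.1},
    ]

    def target_proportion(condition):
        """Mean share of looking time directed at the named image."""
        return mean(t["target_s"] / (t["target_s"] + t["other_s"])
                    for t in trials if t["pair"] == condition)

    # A larger target preference for unrelated pairs mirrors the finding.
    print("unrelated pairs:", round(target_proportion("unrelated"), 2))
    print("related pairs:  ", round(target_proportion("related"), 2))
    ```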

    “They may not know the full-fledged adult meaning of a word, but they seem to recognize that there is something more similar about the meaning of these words than those words,” Bergelson said.

    Bergelson then wanted to investigate how babies’ performance in the lab might be linked to the speech they hear at home. To peek into the daily life of the infants, she sent each caregiver home with a colorful baby vest rigged with a small audio recorder and asked them to use the vest to record day-long audio of the infant. She also used tiny hats fitted with lipstick-sized video recorders to collect hour-long video of each baby interacting with his or her caregivers.

    Combing through the recordings, Bergelson and her team categorized and tabulated different aspects of speech the babies were exposed to, including the objects named, what kinds of phrases they occurred in, who said them, and whether or not objects named were present and attended to.

    “It turned out that the proportion of the time that parents talked about something when it was actually there to be seen and learned from correlated with the babies’ overall comprehension,” Bergelson said.

    For instance, Bergelson said, if a parent says, “here is my favorite pen,” while holding up a pen, the baby might learn something about pens based on what they can see. In contrast, if a parent says, “tomorrow we are going to see the lions at the zoo,” the baby might not have any immediate clues to help them understand what lion means.

    “This study is an exciting first step in identifying how early infants learn words, how their initial lexicon is organized, and how it is shaped or influenced by the language that they hear in the world that surrounds them,” said Sandra Waxman, a professor of psychology at Northwestern University who was not involved in the study.

    But, Waxman cautions, it is too early in the research to draw any conclusions about how caregivers should be speaking to their infants.

    “Before anyone says ‘this is what parents need to be doing,’ we need further studies to tease apart how culture, context and the age of the infant can affect their learning,” Waxman said.

    “My take-home to parents always is, the more you can talk to your kid, the better,” Bergelson said. “Because they are listening and learning from what you say, even if it doesn’t appear to be so.”


  9. Study suggests punctuation in text messages helps replace cues found in face-to-face conversations

    November 25, 2017 by Ashley

    From the Binghamton University press release:

    Emoticons, irregular spellings and exclamation points in text messages aren’t sloppy or a sign that written language is going down the tubes — these “textisms” help convey meaning and intent in the absence of spoken conversation, according to newly published research from Binghamton University, State University of New York.

    “In contrast with face-to-face conversation, texters can’t rely on extra-linguistic cues such as tone of voice and pauses, or non-linguistic cues such as facial expressions and hand gestures,” said Binghamton University Professor of Psychology Celia Klin. “In a spoken conversation, the cues aren’t simply add-ons to our words; they convey critical information. A facial expression or a rise in the pitch of our voices can entirely change the meaning of our words.”

    “It’s been suggested that one way that texters add meaning to their words is by using ‘textisms’ — things like emoticons, irregular spellings (sooooo) and irregular use of punctuation (!!!).”

    A 2016 study led by Klin found that text messages ending with a period are seen as less sincere than those that do not. Klin pursued the subject further, conducting experiments to test whether people reading texts understand textisms, asking how readers’ understanding of a single-word response to an invitation (e.g., yeah, nope, maybe) is influenced by the inclusion or absence of a period.

    “In formal writing, such as what you’d find in a novel or an essay, the period is almost always used grammatically to indicate that a sentence is complete. With texts, we found that the period can also be used rhetorically to add meaning,” said Klin. “Specifically, when one texter asked a question (e.g., I got a new dog. Wanna come over?), and it was answered with a single word (e.g., yeah), readers understood the response somewhat differently depending if it ended with a period (yeah.) or did not end with a period (yeah). This was true if the response was positive (yeah, yup), negative (nope, nah) or more ambiguous (maybe, alright). We concluded that although periods no doubt can serve a grammatical function in texts just as they can with more formal writing — for example, when a period is at the end of a sentence — periods can also serve as textisms, changing the meaning of the text.”
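
    The underlying comparison is simple: the same one-word response, rated with and without a final period. Here is a minimal sketch of such a comparison; the ratings and the test are illustrative, not taken from the study.

    ```python
    # Invented 1-7 sincerity ratings for the same one-word reply with
    # and without a final period; neither data nor test is the study's.
    from statistics import mean
    from scipy.stats import ttest_ind

    with_period = [3.4, 3.1, 3.8, 2.9, 3.5]      # "yeah."
    without_period = [4.6, 4.2, 4.9, 4.4, 4.7]   # "yeah"

    t, p = ttest_ind(with_period, without_period)
    print(f"mean with period: {mean(with_period):.2f}, "
          f"without: {mean(without_period):.2f}, p = {p:.3f}")
    ```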

    Klin said that this research is motivated by an interest in taking advantage of a unique moment in time when scientists can observe language evolving in real time.

    “What we are seeing with electronic communication is that, as with any unmet language need, new language constructions are emerging to fill the gap between what people want to express and what they are able to express with the tools they have available to them,” said Klin. “The findings indicate that our understanding of written language varies across contexts. We read text messages in a slightly different way than we read a novel or an essay. Further, all the elements of our texts — the punctuation we choose, the way that words are spelled, a smiley face — can change the meaning. The hope, of course, is that the meaning that is understood is the one we intended. Certainly, it’s not uncommon for those of us in the lab to take an extra second or two before we send texts. We wonder: How might this be interpreted? ‘Hmmm, period or no period? That sounds a little harsh; maybe I should soften it with a “lol” or a winky-face-tongue-out emoji.'”

    With trillions of text messages sent each year, we can expect the evolution of textisms, and of the language of texting more generally, to continue at a rapid rate, the researchers wrote. Texters are likely to continue to rely on current textisms, as well as to create new ones, to take the place of the extra-linguistic and nonverbal cues available in spoken conversations. The rate of change for “talk-writing” is likely to continue to outpace the changes in other forms of English.

    “The results of the current experiments reinforce the claim that the divergence from formal written English that is found in digital communication is neither arbitrary nor sloppy,” said Klin. “It wasn’t too long ago that people began using email, instant messaging and text messaging on a regular basis. Because these forms of communication provide limited ways to communicate nuanced meaning, especially compared to face-to-face conversations, people have found other tools.”

    Klin believes that this subject could be studied further.

    “An important extension would be to have a situation that more closely mimics actual texting, but in the lab where we can observe how different cues, such as punctuation, abbreviations and emojis, contribute to texters’ understanding,” she said. “We might also examine some of the factors that should influence texters’ understanding, such as the social relationship between the two texters or the topic of the text exchange, as their level of formality should influence the role of things like punctuation.”


  10. Study suggests commonplace jokes may normalize experiences of sexual misconduct

    November 20, 2017 by Ashley

    From the Taylor & Francis press release:

    Commonplace suggestive jokes, such as “that’s what she said,” normalize and dismiss the horror of sexual misconduct experiences, experts suggest in a new essay published in Communication and Critical/Cultural Studies, a National Communication Association publication.

    The recent wave of sexual assault and harassment allegations against prominent actors, politicians, media figures, and others highlights the need to condemn inappropriate and misogynistic behavior, and to provide support and encouragement to victims.

    Communication scholars Matthew R. Meier of West Chester University of Pennsylvania and Christopher A. Medjesky of the University of Findlay argue that off-hand, common remarks such as the “that’s what she said” joke are deeply entrenched in modern society, and contribute to humorizing and legitimizing sexual misconduct.

    The first notable “that’s what she said” joke occurred during a scene in the 1992 film Wayne’s World; however, it became a running joke in the hit television show The Office, leading to “dozens of internet memes, video compilations, and even fansites dedicated to cataloguing occurrences and creating new versions of the joke.” After analyzing multiple examples of the joke used in the show, the authors argue that the “that’s what she said” joke serves as an analog to the rhetoric of rape culture.

    By discrediting and even silencing victims, this type of humor conditions audiences to ignore — and worse, to laugh at — inappropriate sexual behavior.

    Furthermore, the authors suggest that these types of comments contribute to dangerous societal and cultural norms by ultimately reinforcing the oppressive ideologies they represent, despite the intentions or naivete of the people making the jokes.

    The authors argue that the “that’s what she said” joke cycle is part of a larger discourse that not only becomes culturally commonplace, but also reinforces dangerous ideologies that are so entrenched in contemporary life that we end up laughing at something that isn’t funny at all.