1. Musical scales may have developed to accommodate vocal limitations

    March 19, 2017 by Ashley

    From the University at Buffalo press release:

    For singers and their audiences, being “in tune” might not be as important as we think. The fact that singers fail to consistently hit the right notes may have implications for the development of musical scales as well.

    At issue is not whether singers hit the right or wrong note, but how close they are to any note. It’s what researchers call microtuning, according to Peter Pfordresher, a UB psychologist and lead author of a new paper with Steven Brown of McMaster University, published in the Journal of Cognitive Psychology.

    The findings not only suggest a different approach to the aesthetics of singing but could have a role in understanding the evolutionary development of the scales, as well as applications to childhood singing development and speech production for tone languages.

    There is a long-standing belief that musical scales arose from simple harmonic ratios. The Greek mathematician Pythagoras found that plucking a string at certain points produced pleasing steps similar to the progression heard in musical scales — Do-Re-Mi-Fa-So-La-Ti-Do. Scales came about as a way of getting as close as possible to Pythagoras’ pure tuning.
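
    As a rough illustration of the harmonic-ratio idea (a sketch of the arithmetic only, not anything from the paper), the Pythagorean major scale can be derived by stacking pure 3:2 fifths and folding each result back into a single octave:

    ```python
    from fractions import Fraction

    def pythagorean_major_scale():
        """Stack pure 3:2 fifths and fold each result back into one octave.

        One fifth below the tonic gives Fa (4/3); zero through five fifths
        above it give Do, So, Re, La, Mi and Ti.
        """
        fifth = Fraction(3, 2)
        ratios = []
        for n in range(-1, 6):
            r = fifth ** n
            while r >= 2:   # fold down into the octave [1, 2)
                r /= 2
            while r < 1:    # fold up into the octave [1, 2)
                r *= 2
            ratios.append(r)
        return sorted(ratios) + [Fraction(2)]

    print([str(r) for r in pythagorean_major_scale()])
    # ['1', '9/8', '81/64', '4/3', '3/2', '27/16', '243/128', '2']
    # i.e. Do, Re, Mi, Fa, So, La, Ti, Do
    ```

    The unwieldy ratios such as 81/64 and 243/128 already hint at why real-world scales drift away from pure tuning.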

    Or maybe not.

    Pfordresher says there are at least three problems with trying to match Pythagoras’ pure tuning. First, scales are not purely tuned, which has been known for a long time. Second, it’s not clear to what extent all of the world’s musical scales tie into the kinds of principles Pythagoras pioneered; Pfordresher cites Indonesian musical scales as an example that does not align with Pythagorean pure tuning.

    The third problem rests with Pythagoras basing his theory on instruments, first strings and later pipes.

    “This is where Steve and I came up with our evolutionary idea,” says Pfordresher. “Probably the best starting point to think about what we call music is to look at singing, not instruments.”

    The researchers studied three groups of singers of varying abilities: professionals, untrained singers who tend to sing in tune, and untrained singers who tend not to. They weren’t listening for whether the singers were hitting the right notes, but rather how close they were to any note.

    Pfordresher and Brown found that the groups did not differ in terms of microtuning, although they were very different aesthetically.

    “Our proposal is, maybe scales were designed as a way to accommodate how out of tune, how variable singers are,” says Pfordresher. “We suggest that the starting point for scales and tuning for scales was probably not the tuning of musical instruments, but the mistuning of the human voice.”

    To set up a kind of musical grammar requires rules that allow for songs to be understood, remembered and reproduced. To accomplish these goals, that system needs pitches spaced widely enough to accommodate inconsistencies from person to person.

    The space between Do and Re, for instance, is heard by playing two adjacent white keys on a piano keyboard and provides that kind of liberal spacing.

    “When you look around the world, you find there are a couple of properties for scales,” says Pfordresher. “There’s a tendency to have notes that are spaced somewhat broadly, much more broadly than the fine gradations in pitch that our ears can pick up.”
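
    To put rough numbers on “spaced somewhat broadly” (the discrimination figure below is a commonly cited ballpark, not a number from the article): adjacent scale steps sit 100 to 200 cents apart, while attentive listeners can often distinguish pitch differences of only a few cents. A minimal conversion sketch:

    ```python
    import math

    def cents(f1, f2):
        """Interval between frequencies f1 and f2 in cents (100 cents = 1 semitone)."""
        return 1200 * math.log2(f2 / f1)

    # Do -> Re (C4 -> D4), a whole tone: roughly 200 cents of room
    print(round(cents(261.63, 293.66)))  # ~200

    # A listener can often hear a difference of roughly 5-10 cents (approximate
    # figure), so a 200-cent scale step leaves generous margin for vocal imprecision.
    ```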

    This broad spacing helps all kinds of singers, including the nightingale wren, a bird whose virtuosity has been the province of poets since antiquity. Pfordresher says earlier research by Marcelo Araya-Salas found that flexibly tuned instruments like violins and trombones were more in tune than the wren’s song.

    And though not part of the published study, Pfordresher also analyzed an excerpt of a studio version of Frank Sinatra singing “The Best is Yet to Come.”

    “It’s a wonderful recording and a challenging song to sing, but when acoustically analyzed using several measurements, the pitches are not purely tuned,” says Pfordresher. “Although he’s close enough for our ears.”


  2. The making of music

    March 17, 2017 by Ashley

    From the Harvard University press release:

    These days, it’s a territory mostly dominated by the likes of Raffi and the Wiggles, but there’s new evidence that lullabies, play songs, and other music for babies and toddlers may have some deep evolutionary roots.

    A new theory paper, co-authored by Graduate School of Education doctoral student Samuel Mehr and Assistant Professor of Psychology Max Krasnow, proposes that infant-directed song evolved as a way for parents to signal to children that their needs are being met, while still freeing up parents to perform other tasks, like foraging for food, or caring for other offspring. Infant-directed song might later have evolved into the more complex forms of music we hear in our modern world. The theory is described in an open-access paper in the journal Evolution and Human Behavior.

    Music is a tricky topic for evolutionary science: it turns up in many cultures around the world in many different contexts, but no one knows why humans are the only musical species. Noting that it has no known connection to reproductive success, Professor of Psychology Steven Pinker described it as “auditory cheesecake” in his book How the Mind Works.

    “There has been a lot of attention paid to the question of where music came from, but none of the theories have been very successful in predicting the features of music or musical behavior,” Krasnow said. “What we are trying to do with this paper is develop a theory of music that is grounded in evolutionary biology, human life history and the basic features of mammalian ecology.”

    At the core of their theory, Krasnow said, is the notion that parents and infants are engaged in an “arms race” over an invaluable resource — attention.

    “Particularly in an ancestral world, where there are predators and other people that pose a risk, and infants don’t know which foods are poisonous and what activities are hazardous, an infant can be kept safe by an attentive parent,” he said. “But attention is a limited resource.”

    While there is some cooperation in the battle for that resource — parents want to satisfy infants’ appetite for attention because their cries might attract predators, while children need to ensure parents have time for other activities like foraging for food — that mutual interest only goes so far.

    Attention, however, isn’t the only resource to cause such disagreements.

    The theory of parent-offspring conflict was first put forth over forty years ago by the evolutionary biologist Robert Trivers, then an Assistant Professor at Harvard. Trivers predicted that infants and parents aren’t on the same page when it comes to the distribution of resources.

    “His theory covers everything that can be classified as parental investment,” Krasnow said. “It’s anything that a parent could give to an offspring to help them, or that they may want to hold back for themselves and other offspring.”

    Sexual reproduction means that every person gets half of their genes from each parent, but which genes in particular can differ even across full siblings.

    Krasnow explains, “A gene in baby has only a fifty percent chance of being found in siblings by virtue of sharing two parents. That means that from the baby’s genetic perspective, she’ll want a more self-favoring division of resources, for example, than her mom or her sister wants, from their genetic perspectives.”
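
    Krasnow’s fifty-percent figure follows from Mendelian transmission, and a small Monte Carlo sketch can reproduce it (a hypothetical single-allele setup, not anything from the paper):

    ```python
    import random

    def sibling_sharing_rate(trials=100_000):
        """Estimate how often a full sibling carries an allele that the baby carries.

        Hypothetical setup: one parent is heterozygous for the allele and passes
        it to each child independently with probability 1/2.
        """
        shared = carriers = 0
        for _ in range(trials):
            baby = random.random() < 0.5     # does the baby inherit the allele?
            sibling = random.random() < 0.5  # does the sibling?
            if baby:
                carriers += 1
                shared += sibling
        return shared / carriers

    print(round(sibling_sharing_rate(), 2))  # ~0.5
    ```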

    Mehr and Krasnow took the idea of parent-offspring conflict and applied it to attention. They predict that children should ‘want’ a greater share of their parents’ attention than their parents ‘want’ to give them. But how does the child know she has her parent’s attention? The solution, Krasnow said, is that parents were forced to develop some method of signaling to their offspring that their desire for attention was being met.

    “I could simply look at my children, and they might have some assurance that I’m attending to them,” Krasnow said. “But I could be looking at them and thinking of something else, or looking at them and focusing on my cell phone, and not really attending to them at all. They should want a better signal than that.”

    Why should that signal take the form of a song?

    What makes such signals more honest, Mehr and Krasnow think, is the cost associated with them — meaning that by sending a signal to an infant, a parent cannot be sending it to someone else, sending it but lying about it, etc. “Infant directed song has a lot of these costs built in. I can’t be singing to you and be talking to someone else,” Krasnow said. “It’s unlikely I’m running away, because I need to control my voice to sing. You can tell the orientation of my head, even without looking at me, you can tell how far away I am, even without looking.”

    Mehr notes that infant-directed song provides lots of opportunities for parents to signal their attention to infants: “Parents adjust their singing in real time, by altering the melody, rhythm, tempo, timbre, of their singing, adding hand motions, bouncing, touching, and facial expressions, and so on. All of these features can be finely tuned to the baby’s affective state — or not. The match or mismatch between baby behavior and parent singing could be informative for whether or not the parent is paying attention to the infant.”

    Indeed, it would be pretty odd to sing a happy, bubbly song to a wailing, sleep-deprived infant.

    Krasnow agrees. “All these things make something like an infant directed vocalization a good cue of attention,” he continued. “And when you put that into this co-evolutionary arms race, you might end up getting something like infant-directed song. It could begin with something like primitive vocalizations, which gradually become more infant directed, and are elaborated into melodies.”

    “If a mutation develops in parents that allows them to do that quicker and better, then they have more residual budget to spend on something else, and that would spread,” he said. “Infants would then be able to get even choosier, forcing parents to get better, and so on. This is the same kind of process that starts with drab birds and results in extravagant peacocks and choosy peahens.” And as signals go, Krasnow said, those melodies can prove to be enormously powerful.

    “The idea we lay out with this paper is that infant-directed song and things that share its characteristics should be very good at calming a fussy infant — and there is some evidence of that,” he said. “We’re not talking about going from this type of selection to Rock-a-Bye Baby; this theory says nothing about the words to songs or the specific melodies, it’s saying that the acoustic properties of infant directed song should make it better at calming an infant than other music.”

    But, could music really be in our genes?

    “A good comparison to make is to language,” Krasnow said. “We would say there’s a strong genetic component to language — we have a capability for language built into our genes — and we think the same thing is going to be true for music.”

    What about other kinds of music? Mehr is optimistic that this work could be informative for this question down the road.

    “Let’s assume for a moment that the theory is right. How, then, did we get from lullabies to Duke Ellington?” he asked. “The evolution of music must be a complex, multi-step process, with different features developing for different reasons. Our theory raises the possibility that infant-directed song is the starting point for all that, with other musical behaviors either developing directly via natural selection, as byproducts of infant-directed song, or as byproducts of other adaptations.”

    For Pinker, the paper differs from other theories of how music evolved in one important way: it makes evolutionary sense.

    “In the past, people have been so eager to come up with an adaptive explanation for music that they have advanced glib and circular theories, such as that music evolved to bond the group,” he said. “This is the first explanation that at least makes evolutionary sense — it shows how the features of music could cause an advantage in fitness. That by itself doesn’t prove that it’s true, but at least it makes sense!”


  3. Study notes how musical cues trigger different autobiographical memories

    March 16, 2017 by Ashley

    From the Springer press release:

    Happy memories spring to mind much faster than sad, scary or peaceful ones. Moreover, if you listen to happy or peaceful music, you recall positive memories, whereas if you listen to emotionally scary or sad music, you recall largely negative memories from your past. Those are two of the findings from an experiment in which study participants accessed autobiographical memories after listening to unknown pieces of music varying in intensity or emotional content. It was conducted by Signy Sheldon and Julia Donahue of McGill University in Canada, and is reported in the journal Memory & Cognition, published by Springer.

    The experiment tested how musical retrieval cues that differ on two dimensions of emotion — valence (positive and negative) and arousal (high and low) — influence the way that people recall autobiographical memories. A total of 48 participants had 30 seconds to listen to 32 newly composed piano pieces not known to them. The pieces were grouped into four retrieval cues of music: happy (positive, high arousal), peaceful (positive, low arousal), scary (negative, high arousal) and sad (negative, low arousal).
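
    In other words, the four cue types form a full 2 x 2 crossing of valence and arousal. A tiny sketch of that design as data, with the labels taken from the description above:

    ```python
    from itertools import product

    # Valence x arousal design of the musical retrieval cues
    cue_labels = {
        ("positive", "high"): "happy",
        ("positive", "low"): "peaceful",
        ("negative", "high"): "scary",
        ("negative", "low"): "sad",
    }

    for valence, arousal in product(("positive", "negative"), ("high", "low")):
        label = cue_labels[(valence, arousal)]
        print(f"{label:8s} = {valence} valence, {arousal} arousal")
    ```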

    Participants had to recall events in which they were personally involved, that were specific in place and time, and that lasted less than a day. As soon as a memory came to mind, participants pressed a computer key and typed in their accessed memory. The researchers noted how long it took participants to access a memory, how vivid it was, and the emotions associated with it. The type of event coming to mind was also considered, and whether for instance it was quite unique or connected with an energetic or social setting.

    Memories were found to be accessed most quickly based on musical cues that were highly arousing and positive in emotion, and could therefore be classified as happy. A relationship between the type of musical cue and whether it triggered the remembrance of a positive or a negative memory was also noted. The nature of the event recalled was influenced by whether the cue was positive or negative and whether it was high or low in arousal.

    “High cue arousal led to lower memory vividness and uniqueness ratings, but both high arousal and positive cues were associated with memories rated as more social and energetic,” explains Sheldon.

    During the experiment, the piano pieces were played to one half of the participants in no particular order, while for the rest the music was grouped together based on whether these were peaceful, happy, sad or scary pieces. This led to the finding that the way in which cues are presented influences how quickly and specifically memories are accessed. Cue valence also affects the vividness of a memory.

    More specifically, the researchers found that a greater proportion of clear memories were recalled when highly arousing positive cues were played in a blocked fashion. Positive cues also elicited more vivid memories than negative cues. In the randomized condition, however, negative cues were associated with more vivid memories than positive cues.

    “It is possible that when cues were presented in a random fashion, the emotional content of the cue directed retrieval to a similar memory via shared emotional information,” notes Donahue.


  4. Machine learning writes songs that elicit emotions from its listeners

    March 14, 2017 by Ashley

    From the Osaka University press release:

    Music, more than any other art, is a beautiful mix of science and emotion. It follows a set of patterns almost mathematically to extract feelings from its audience. Machines that make music focus on these patterns, but give little consideration to the emotional response of their audience. An international research team led by Osaka University, together with Tokyo Metropolitan University, imec in Belgium and Crimson Technology, has released a new machine-learning device that detects the emotional state of its listeners to produce new songs that elicit new feelings.

    “Most machine songs depend on an automatic composition system,” says Masayuki Numao, professor at Osaka University. “They are preprogrammed with songs but can only make similar songs.”

    Numao and his team of scientists wanted to enhance the interactive experience by feeding the user’s emotional state to the machine. Users listened to music while wearing wireless headphones that contained brain wave sensors. These sensors detected EEG readings, which the robot used to make music.

    “We preprogrammed the robot with songs, but added the brain waves of the listener to make new music.” Numao found that users were more engaged with the music when the system could detect their brain patterns.
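
    The release gives no implementation details, but the loop it describes, estimating the listener’s state from EEG and then biasing what gets composed, might look schematically like the sketch below. Every name and mapping here is hypothetical, not Numao’s actual system:

    ```python
    import random

    def estimate_arousal(eeg_window):
        """Hypothetical stand-in for the EEG-analysis step: reduce a window of
        sensor readings to a crude arousal score in [0, 1]."""
        return min(max(sum(eeg_window) / len(eeg_window), 0.0), 1.0)

    def next_phrase(arousal, scale=(60, 62, 64, 65, 67, 69, 71)):  # C major, MIDI notes
        """Toy composition step: higher arousal -> faster tempo, wider melodic range.
        Purely illustrative mapping; the real system is not described."""
        tempo_bpm = 60 + int(arousal * 80)          # 60-140 bpm
        span = 1 + int(arousal * (len(scale) - 1))  # melodic range grows with arousal
        notes = [random.choice(scale[:span + 1]) for _ in range(8)]
        return tempo_bpm, notes

    # One pass of the listen -> estimate -> generate loop, on fake sensor data
    eeg_window = [random.random() for _ in range(64)]
    print(next_phrase(estimate_arousal(eeg_window)))
    ```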

    Numao envisions a number of societal benefits to a human-machine interface that considers emotions. “We can use it in health care to motivate people to exercise or cheer them up.”

    The device was on display at the 3rd Wearable Expo in Tokyo, Japan, last January.


  5. Music therapy increases effectiveness of pulmonary rehabilitation for COPD patients

    February 10, 2016 by Ashley

    From The Mount Sinai Hospital / Mount Sinai School of Medicine media release:

    Patients with Chronic Obstructive Pulmonary Disease (COPD) and other chronic respiratory disorders who received music therapy in conjunction with standard rehabilitation saw an improvement in symptoms, psychological well-being and quality of life compared to patients receiving rehabilitation alone, according to a new study by researchers at The Louis Armstrong Center for Music and Medicine at Mount Sinai Beth Israel (MSBI).

    Study findings were published this week in Respiratory Medicine and suggest that music therapy may be an effective addition to traditional treatment.

    COPD is the fourth leading cause of death in the United States with symptoms including shortness of breath, wheezing, an ongoing cough, frequent colds or flu, and chest tightness. Patients with COPD are often socially isolated, unable to get to medical services and underserved in rehabilitation programs, making effective treatment difficult.

    The 68 study participants were diagnosed with chronic disabling respiratory diseases, including COPD. Over the course of six weeks, a randomized group of these patients attended weekly music therapy sessions. Each session included live music, visualizations, wind instrument playing and singing, which incorporated breath control techniques. Certified music therapists provided active music-psychotherapy. The music therapy sessions incorporated patients’ preferred music, which encouraged self-expression, increased engagement in therapeutic activities and an opportunity to cope with the challenges of a chronic disease.

    “The care of chronic illness is purposefully shifting away from strict traditional assessments that once focused primarily on diagnosis, morbidity, and mortality rates,” said Joanne Loewy, DA, Director of the Louis Armstrong Center for Music and Medicine at MSBI, where the study was conducted. “Instead, the care of the chronically ill is moving toward methods that aim to preserve and enhance quality of life of our patients and activities of daily living through identification of their culture, motivation, caregiver/home trends and perceptions of daily wellness routines.”

    “Music therapy has emerged as an essential component to an integrated approach in the management of chronic respiratory disease,” said Jonathan Raskin, MD, co-author of the study and Director of the Alice Lawrence Center for Health and Rehabilitation at MSBI. “The results of this study provide a comprehensive foundation for the establishment of music therapy intervention as part of pulmonary rehabilitation care.”



  6. Study suggests listening to music while driving may not affect driver performance

    June 18, 2013 by Ashley

    From the University of Groningen press release via ScienceDaily:

    Most drivers enjoy listening to the radio or their favourite CD while driving. Many of them switch on the radio without thinking. But is this safe?

    Experiments carried out by environment and traffic psychologist Ayça Berfu Ünal suggest that it makes very little difference. In fact the effects that were measured turned out to be positive. Music helps drivers to focus, particularly on long, monotonous roads. Ünal will be awarded a PhD by the University of Groningen on 10 June 2013.

    Experienced motorists between 25 and 35 years of age are perfectly capable of focusing on the road while listening to music or the radio, even when driving in busy urban traffic. Ünal makes short shrift of the commonly held idea that motorists who listen to music drive too fast or ignore the traffic regulations. Ünal: ‘I found nothing to support this view in my research. On the contrary, our test subjects enjoyed listening to the music and did their utmost to be responsible drivers. They sometimes drove better while listening to music.’ Ünal did not try to find out whether there was a difference between listening to music or talk shows on the radio.

    Monotonous roads

    Although this is not the first piece of research that examines the influence of listening to music or the radio on driving performance, Ünal is the first person to use different traffic situations for her experiments with the simulator. ‘For example, we asked participants to drive behind another vehicle for half an hour on a quiet road. As you would expect, it became very tedious. But the people who listened to music were more focused on driving and performed better than those without music. It’s fairly logical: people need a certain degree of ‘arousal’ (a state of being alert caused by external stimulation of the brain) to stop themselves getting bored. In monotonous traffic situations, music is a good distraction that helps you keep your mind on the road.’

    Safety first

    Motorists need to concentrate harder in busy urban traffic than on quiet roads. Ünal: ‘A motorist’s natural reaction is to turn the sound down or even switch the radio off. This was not allowed during the experiments. As a result, we noted that the participants focused more on the traffic and didn’t remember what had been on the radio afterwards. Safety comes first at moments like this and the participants were able to block out the distraction (in this case the music or radio). This also occurs when drivers are asked to perform a special manoeuvre, such as reversing into a parking space. Our findings do not indicate that people listening to music drive less well in busy traffic. The research showed that background music can actually help motorists to concentrate, both in busy and quiet traffic.’

    No difference in types of music

    Ünal initially wanted to find out whether the type of music made any difference. ‘This wasn’t realistic. Participants forced to listen to music they didn’t like just wanted to get the experiment over and done with. In reality, you only listen to music that you enjoy, so we left the choice to them.’ She did not study whether there was a difference between listening to music and listening to talk shows on the radio. ‘People can listen to music in the background, while they tend to concentrate on the news and put more mental effort into it, particularly if they find an item interesting. That’s why making phone calls in the car is so dangerous: talking on the phone while driving makes huge mental demands on motorists. I’d quite like to study the effect of music and making phone calls while on the bike. The Netherlands is full of cyclists so this would be a highly relevant research subject.’

    Follow-up study of older and younger motorists

    Ünal’s main conclusion is that when people take account of the traffic situation and their own driving skills, music makes very little difference to their performance as drivers: ‘It’s important to know your limits. Some people are much more affected by loud music than others. I’d also be interested to see whether older motorists of seventy and above, and young people learning to drive, cope with the distraction of the radio in the same way. I could imagine that music might be too distracting while you’re just learning to drive. And at the other end of the scale, people’s cognitive capacities diminish as they get older so I’m curious to know how they react to the mental demands of driving at the same time as listening to music.’


  7. Study suggests perfect pitch may not be absolute after all

    June 14, 2013 by Ashley

    From the University of Chicago press release by William Harms via EurekAlert!:

    People who think they have perfect pitch may not be as in tune as they think, according to a new University of Chicago study in which people failed to notice a gradual change in pitch while listening to music.

    When tested afterward, people with perfect, or absolute, pitch judged notes that had been detuned by the end of a song to be in tune, while notes that were in tune at the beginning sounded out of tune.

    About one out of 10,000 people has absolute pitch, which means they can accurately identify a note by hearing it. They are frequently able, for instance, to replicate a song on a piano by simply hearing it. Absolute pitch has been “idealized in popular culture as a rare and desirable musical endowment, partly because several well-known composers, such as Mozart, Beethoven, Chopin and Handel, have been assumed to possess absolute pitch,” the researchers write in “Absolute Pitch May Not Be So Absolute,” in the current issue of Psychological Science.

    The study showed that exposure to music influences how people identify notes by their sound, rather than note identification being a rare, absolute ability fixed at an early age. The research also demonstrates the malleability of the brain: abilities thought to be stable late in life can change with even a small amount of experience and learning.

    One of the researchers, Stephen Hedger, a graduate student in psychology at UChicago, has absolute pitch, as determined by objective tests. Joining him in the study were postdoctoral scholar Shannon Heald and Howard Nusbaum, professor in psychology at UChicago.

    Hedger and Heald decided to pursue the study after a session in which Heald tricked Hedger by covertly adjusting pitch on an electronic keyboard.

    “Steve and I have talked about absolute pitch, and I thought it might be more malleable than people have thought,” Heald said. While in the lab, Hedger began to play a tune, and Heald secretly changed the pitch with a wheel at the side of the keyboard.

    Heald changed the tuning to make the music a third of a note flatter than it was at the beginning of the song. Hedger never noticed the change, which was gradual, and was later surprised to discover the music he was playing was actually out of tune at the end.

    “I was astounded that I didn’t notice the change,” Hedger said. Working with Nusbaum, an expert on brain plasticity, they devised experiments to see if other people with absolute pitch would make the same mistake as Hedger.

    The researchers recruited 27 people who were identified as having absolute pitch by standard tests and assigned them to two groups for two experiments. The subjects were tested on identifying notes at the beginning of the experiments, and each was able to correctly identify an in-tune note.

    One group then listened to Johannes Brahms’ Symphony No. 1 in C Minor. In another experiment, a second group listened to music played on a French horn to determine whether individual instruments impact the ability to detect music going out of tune.

    As the people listened to the symphony, the music was detuned during the first movement (about 15 minutes) to become flatter at the rate of two cents a minute. (The tonal distance between two notes, such as an A and G sharp, is measured as 100 cents).

    By the end of the movement, the pitch had been detuned by 33 cents, a change none of the listeners detected, much like Hedger. The symphony was then played out of tune for the next three movements.
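
    For scale, an interval of c cents corresponds to a frequency ratio of 2^(c/1200), so the drift the listeners missed is tiny from moment to moment. A quick arithmetic check using the figures quoted above:

    ```python
    def ratio_from_cents(c):
        """Frequency ratio corresponding to an interval of c cents."""
        return 2 ** (c / 1200)

    # 2 cents per minute is roughly a 0.12% frequency change each minute
    print(f"{ratio_from_cents(2):.5f}")  # ~1.00116

    # A cumulative 33-cent drop: concert A at 440 Hz ends up near
    print(f"{440 / ratio_from_cents(33):.1f} Hz")  # ~431.7 Hz
    ```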

    The listeners were then tested after listening to the detuned music, as they had been at the beginning of the session. They identified out-of-tune notes from the newly detuned music as being in tune, while reporting notes they heard in the pre-test were slightly out of tune.

    Another test, composed of only five notes and known as phase music, similarly found that people with absolute pitch changed what they judged to be in tune after listening to the slightly out-of-tune music. That included notes not actually heard as detuned during the musical exposure.

    In both tests, only the specific instruments used in the compositions were affected by the detuned nature of the listening experience. Neither the Brahms’ Symphony nor the five-note compositions used a piano or French horn, and after listening to detuned music, notes played on these instruments were unaffected.

    “Listening to detuned music significantly shifted the perceived intonation and generalized to notes that had not been heard in the detuned music,” said Nusbaum. The researchers are now experimenting with people who have more limited pitch identification ability and are finding that their pitch identification can be improved.

    “This is further evidence of how adaptable even the adult mind is for learning new skills. We are finding out more and more about how our brains are equipped to learn new things at any age and not limited by abilities previously thought to be available only from the time of birth,” Nusbaum said.



  8. Study examines personality traits that predict affinity for musical training

    June 3, 2013 by Ashley

    From the U of T Mississauga press release via MedicalXpress:

    Research by U of T Mississauga psychology professor Glenn Schellenberg reveals that two key personality traits – openness-to-experience and conscientiousness – predict better than IQ who will take music lessons and continue for longer periods.

    Another intriguing finding is that when personality traits and demographic factors like parents’ education are considered, the link between cognitive ability and music training disappears.

    Schellenberg’s study calls into question the widely held view that music makes kids smarter. “The prevailing bias is that music training causes improvements in intelligence. But you can’t infer causation simply because children with music training have higher IQs than children who haven’t had music training,” says Schellenberg, a cognitive psychology specialist in the Department of Psychology.

    The research paper, “Music training, cognition and personality,” is published in a recent issue of Frontiers in Psychology.

    Ever since a 1993 University of California study claimed that people performed better on tests of spatial abilities after listening to music composed by Mozart, the idea that music makes you smarter has been embedded in the public consciousness. Moms have been playing “Baby Mozart” CDs to give their kids an intellectual edge, and researchers have been studying and reporting positive associations between music and intelligence.

    But Schellenberg’s previous research has shown that the “Mozart effect” is a myth. People do just as well on spatial tests after listening to a narrated story as a Mozart sonata, and in both cases better than after sitting in silence. “Their performance is better because the music and story are more arousing and enjoyable, and the effect has little to do with Mozart in particular or music in general,” he explains.

    In this study, Schellenberg gave the theory that music training makes kids smarter a reality check by asking whether pre-existing differences in personality could explain why musically trained children have substantially higher IQs and perform better in school than other kids. “I wanted to stop this madness of making exaggerated claims about the intellectual benefits of music training,” he says.

    In separate groups of 167 10- to 12-year-olds and 118 university undergraduates, he looked at how individual differences in cognitive ability and personality predict who takes music lessons and for how long. The study measured the Big Five personality dimensions: openness-to-experience, conscientiousness, agreeableness, extraversion and neuroticism.

    Among the children, openness-to-experience and conscientiousness predicted the likelihood of taking music lessons and persisting, while openness-to-experience was the best predictor of involvement in music lessons. Those personality traits also helped to explain why musically trained children tend to earn higher grades in school than peers without music training, and do better academically than would be expected from their IQ scores.

    Among undergraduates, those with higher levels of openness-to-experience studied music longer during childhood and adolescence.

    Schellenberg’s findings highlight that there are significant pre-existing personality differences between kids who take music lessons and those who do not. “The differences in personality are at least as important as cognitive variables among adults, and even more important among children in predicting who is likely to take music lessons and for how long,” he says.

    His research raises questions about virtually all previously reported correlations between music training and cognitive abilities, since those studies failed to account for differences in personality traits. “Much previous research may have overestimated the effects of music training and underestimated the role of pre-existing differences between children who do and do not take music lessons,” he says.

    “Children who take music lessons may have relatively high levels of curiosity, motivation, persistence, concentration, selective attention, self-discipline and organization,” says Schellenberg. While he acknowledges that music training may offer a slight cognitive benefit, he suggests that ambitious parents should not make kids learn a musical instrument solely for any expected intelligence benefits.

    Learning a musical instrument is worthwhile for the musical skills and knowledge that a child will develop, and for the enjoyment of playing music. “Nobody says you need to study biology because it increases your reading ability. Like all the arts, music is one of those things that makes us human and is worth doing in its own right,” says Schellenberg.



  9. Study examines link between music and colours

    May 21, 2013 by Ashley

    From the UC Berkeley press release via EurekAlert!:

    Whether we’re listening to Bach or the blues, our brains are wired to make music-color connections depending on how the melodies make us feel, according to new research from the University of California, Berkeley.

    For instance, Mozart’s jaunty Flute Concerto No. 1 in G major is most often associated with bright yellow and orange, whereas his dour Requiem in D minor is more likely to be linked to dark, bluish gray.

    Moreover, people in both the United States and Mexico linked the same pieces of classical orchestral music with the same colors. This suggests that humans share a common emotional palette – when it comes to music and color – that appears to be intuitive and can cross cultural barriers, UC Berkeley researchers said.

    “The results were remarkably strong and consistent across individuals and cultures and clearly pointed to the powerful role that emotions play in how the human brain maps from hearing music to seeing colors,” said UC Berkeley vision scientist Stephen Palmer, lead author of a paper published this week in the journal Proceedings of the National Academy of Sciences.

    Using a 37-color palette, the UC Berkeley study found that people tend to pair faster-paced music in a major key with lighter, more vivid, yellow colors, whereas slower-paced music in a minor key is more likely to be teamed up with darker, grayer, bluer colors.

    “Surprisingly, we can predict with 95 percent accuracy how happy or sad the colors people pick will be based on how happy or sad the music is that they are listening to,” said Palmer, who will present these and related findings at the International Association of Colour conference at the University of Newcastle in the U.K. on July 8.  At the conference, a color light show will accompany a performance by the Northern Sinfonia orchestra to demonstrate “the patterns aroused by music and color converging on the neural circuits that register emotion,” he said.

    The findings may have implications for creative therapies, advertising and even music player gadgetry. For example, they could be used to create more emotionally engaging electronic music visualizers, computer software that generates animated imagery synchronized to the music being played. Right now, the colors and patterns appear to be randomly generated and do not take emotion into account, researchers said.

    They may also provide insight into synesthesia, a neurological condition in which the stimulation of one perceptual pathway, such as hearing music, leads to automatic, involuntary experiences in a different perceptual pathway, such as seeing colors. An example of sound-to-color synesthesia was portrayed in the 2009 movie The Soloist when cellist Nathaniel Ayers experiences a mesmerizing interplay of swirling colors while listening to the Los Angeles symphony. Artists such as Wassily Kandinsky and Paul Klee may have used music-to-color synesthesia in their creative endeavors.

    Nearly 100 men and women participated in the UC Berkeley music-color study, of which half resided in the San Francisco Bay Area and the other half in Guadalajara, Mexico. In three experiments, they listened to 18 classical music pieces by composers Johann Sebastian Bach, Wolfgang Amadeus Mozart and Johannes Brahms that varied in tempo (slow, medium, fast) and in major versus minor keys.

    In the first experiment, participants were asked to pick five of the 37 colors that best matched the music to which they were listening. The palette consisted of vivid, light, medium, and dark shades of red, orange, yellow, yellow-green, green, blue-green, blue, and purple.

    Participants consistently picked bright, vivid, warm colors to go with upbeat music and dark, dull, cool colors to match the more tearful or somber pieces. Separately, they rated each piece of music on a scale of happy to sad, strong to weak, lively to dreary and angry to calm.

    Two subsequent experiments studying music-to-face and face-to-color associations supported the researchers’ hypothesis that “common emotions are responsible for music-to-color associations,” said Karen Schloss, a postdoctoral researcher at UC Berkeley and co-author of the paper.

    For example, the same pattern occurred when participants chose the facial expressions that “went best” with the music selections, Schloss said. Upbeat music in major keys was consistently paired with happy-looking faces while subdued music in minor keys was paired with sad-looking faces. Similarly, happy faces were paired with yellow and other bright colors and angry faces with dark red hues.

    Next, Palmer and his research team plan to study participants in Turkey where traditional music employs a wider range of scales than just major and minor. “We know that in Mexico and the U.S. the responses are very similar,” he said. “But we don’t yet know about China or Turkey.”

    Other co-authors of the study are Zoe Xu of UC Berkeley and Lilia Prado-Leon of the University of Guadalajara, Mexico.


  10. Study suggests cheery music can help people improve their moods

    May 15, 2013 by Ashley

    From the University of Missouri press release via EurekAlert!:

    The song, “Get Happy,” famously performed by Judy Garland, has encouraged people to improve their mood for decades. Recent research at the University of Missouri discovered that an individual can indeed successfully try to be happier, especially when cheery music aids the process. This research points to ways that people can actively improve their moods and corroborates earlier MU research.

    “Our work provides support for what many people already do – listen to music to improve their moods,” said lead author Yuna Ferguson, who performed the study while she was an MU doctoral student in psychological science. “Although pursuing personal happiness may be thought of as a self-centered venture, research suggests that happiness relates to a higher probability of socially beneficial behavior, better physical health, higher income and greater relationship satisfaction.”

    In two studies by Ferguson, participants successfully improved their moods in the short term and boosted their overall happiness over a two-week period. During the first study, participants improved their mood after being instructed to attempt to do so, but only if they listened to the upbeat music of Copland, as opposed to the more somber Stravinsky. Other participants, who simply listened to the music without attempting to change their mood, didn’t report a change in happiness. In the second study, participants reported higher levels of happiness after two weeks of lab sessions in which they listened to positive music while trying to feel happier, compared to control participants who only listened to music.

    However, Ferguson noted that for people to put her research into practice, they must be wary of too much introspection into their mood or constantly asking, “Am I happy yet?”

    “Rather than focusing on how much happiness they’ve gained and engaging in that kind of mental calculation, people could focus more on enjoying their experience of the journey towards happiness and not get hung up on the destination,” said Ferguson.

    Ferguson’s work corroborated earlier findings by Ferguson’s doctoral advisor and co-author of the current study, Kennon Sheldon, professor of psychological science in MU’s College of Arts and Science.

    “The Hedonic Adaptation Prevention model, developed in my earlier research, says that we can stay in the upper half of our ‘set range’ of potential happiness as long as we keep having positive experiences, and avoid wanting too much more than we have,” said Sheldon. “Yuna’s research suggests that we can intentionally seek to make mental changes leading to new positive experiences of life. The fact that we’re aware we’re doing this has no detrimental effect.”

    Ferguson is now assistant professor of psychology at Pennsylvania State University Shenango. The study, “Trying to Be Happier Really Can Work: Two Experimental Studies,” was published in The Journal of Positive Psychology.