(Original Link - http://www.chicagotribune.com/health/sc-health-1201-music-20101201,0,3647950.story)
On her last night at the hospital after undergoing a series of spine surgeries, Susan Mandel lay in bed listening to Pachelbel's Canon in D.
For days, Mandel's positive attitude had kept any anxiety at bay, so she was surprised when she noticed her face was wet, and then her pillow, which slowly soaked through. She sobbed silently, listening to the familiar violins, until the tears stopped coming. Then she felt peace.
"It wasn't a cry of anguish, it was a cry of relief," Mandel said, recalling the night more than 20 years ago. "It's very tender, evocative music, and I think it gave me permission to release the pent-up emotions."
For millennia, philosophers have marveled at the power of music to speak to our souls, to inspire joy, melancholy, aggression or calm with a visceral insight beyond the grasp of our rational minds. Thanks to advances in neuroscience, researchers are beginning to understand what it is about music that touches us so deeply, and how to harness that power to soothe, uplift, comfort and heal — to use music as medicine for emotional and physical health.
Mandel, a music therapist and research consultant at Lake Health Wellness Institute in Cleveland, this month released "Manage Your Stress and Pain Through Music" (Berklee Press Publications, $29.99) with co-author Suzanne Hanser, chairwoman of the music therapy department at Berklee College of Music in Boston. The book explains how to choose and use music to cope with challenges in your life.
Not what you'd guess
It can seem obvious which songs would bring you up and which might bring you down. And indeed, there are structural components to songs that are meant to communicate joy, such as a fast tempo in major mode, or sadness, such as a slower tempo in minor mode. But there's a difference between the emotion communicated through music and the emotion actually induced in the listener. Our memories, personal preferences and mood at the time can influence how a song makes us feel more heavily than the intent of its musical structure does.
"You could have a really positive emotional experience with a song that structurally communicates sadness," said Meagan Curtis, assistant professor of psychology at State University of New York at Purchase, who does research in music psychology.
What matters most in reaping the health benefits of music, from pain reduction to stress relief, is that you listen to music you enjoy, research shows. In a study on cardiac rehabilitation patients, Mandel found that the patients who liked a therapeutic music CD she put together experienced a reduction in blood pressure and reported feeling calmer, while patients who didn't like the music actually felt worse.
While there are structural components that convey soothing, such as consonant harmonies and a narrow pitch range, the music with the most positive associations for the individual will produce the most positive emotional and physiological response. Such music activates the parasympathetic nervous system, which calms heart rate, lowers blood pressure and relaxes muscles.
"I have found people who love punk rock and find that it helps them to sleep," Hanser said. "It's likely that they have learned it truly speaks to them and expresses a part of who they are."
Music and pain
Music also has been found to help people tolerate pain longer and perceive it as less intense.
Studies using a cold pressor task, which simulates chronic pain by submerging subjects' hands in a bucket of freezing cold water, found that people were able to leave their hands in the water longer when they were listening to music they enjoyed, Curtis said.
That could be because people take comfort in the familiar, or because the music distracts them. Between recalling memories, tapping our fingers, conjuring up images and other tasks, the brain releases so many chemicals to process music that they interfere with our perception of pain.
How the brain processes music
There's some evidence that we feel music viscerally because it goes straight to the amygdala, the part of the limbic system that manages our emotions, and the hippocampus, where long-term memories are stored, Hanser said.
Music that gives people chills or shivers up the spine has been found to activate the same reward areas of the brain stimulated by food, sex and certain types of recreational drugs, Curtis said. While different people get chills from different songs, often those shiver-producing songs have an unexpected tonal structure, like a chord that isn't part of the harmonic progression, she said.
Impact of lyrics
While structure is less important than personal experience in a song's ability to induce emotion, lyrics may be even less important than structure, Curtis said. We don't need to consciously attend to structure to process the emotion it carries, but we do have to pay attention to lyrics, which makes the impact of structure stronger and easier to process.
People are usually very intuitive about what songs are useful to them and often choose music appropriate for the state they're in, Curtis said. That explains one of the great ironies of human behavior: that many people like to listen to sad music when they're sad.
We might like the affirmation, as we create a bond with the singer or composer because they, too, have felt what we feel, Curtis said. Another theory is that wallowing is a kind of emotional catharsis, helping us fully experience the sadness so that we go through the stages of grief more quickly.
And it can be a healthy thing. A central tenet of music therapy, called the iso principle, is to meet people where they are. So if people are very depressed and lonely, you would start them with music that matches their mood before introducing something more uplifting.
"You first affirm and allow the person to reflect, and then move on to more positive things and hopeful outlooks," Hanser said.
Some researchers hope to nail down the precise combination of pitch, tone, tempo, rhythm, timbre, melody and lyrics that makes a piece of music ideal for regulating people's moods or helping to reduce pain. A study under way at Glasgow Caledonian University aims to develop a "comprehensive mathematic model" that identifies how music communicates emotions, which eventually could help doctors prescribe music.
Hanser is skeptical that a sweeping formula exists, and if it does, "I hope we don't find it," she said. "I don't know anyone who is the mean, the normal. If we can recognize our own unique characteristics and what makes us each respond so differently, that I think is really fascinating and what humanity is all about."
aelejalderuiz@tribune.com
Emotional impact
While a person's emotional reaction to a song is based largely on his or her history with the song, the song's structure also can communicate emotions, mostly through mode (major or minor chords) and tempo, said Meagan Curtis, assistant professor of psychology at State University of New York at Purchase.
A fast tempo (around 120 beats per minute and up) tends to heighten physiological arousal, while a slower tempo (around 60 beats per minute) tends to reduce arousal. Major chords tend to evoke positive emotions, such as joy and contentment, and minor chords negative emotions, like fear, anger or sadness.
Curtis offered some examples:
• Major mode, fast tempo. Example: "Shiny Happy People," by R.E.M. Emotion conveyed: happy.
• Major mode, slow tempo. Example: "(Sittin' On) The Dock of the Bay," by Otis Redding. Emotion conveyed: soothing, tenderness.
• Minor mode, fast tempo. Example: "Smells Like Teen Spirit," by Nirvana. Emotion conveyed: angst, anger.
• Minor mode, slow tempo. Example: "Eleanor Rigby," by the Beatles. Emotion conveyed: sadness.
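To make the two-by-two mapping concrete, here is a minimal Python sketch. It simply encodes the four quadrants above; the 120-beats-per-minute cutoff and the emotion labels are illustrative assumptions drawn from these examples, not a validated model.

# Sketch: map a song's mode and tempo onto the four quadrants above.
# The 120-bpm cutoff and the labels mirror the examples in this sidebar;
# they are illustrative assumptions, not a validated model.

def conveyed_emotion(mode: str, tempo_bpm: float) -> str:
    """Emotion a song's structure tends to convey, by mode and tempo."""
    fast = tempo_bpm >= 120  # assumed threshold: fast tempos heighten arousal
    if mode == "major":
        return "happy" if fast else "soothing, tenderness"
    if mode == "minor":
        return "angst, anger" if fast else "sadness"
    raise ValueError("mode must be 'major' or 'minor'")

print(conveyed_emotion("major", 140))  # happy, like "Shiny Happy People"
print(conveyed_emotion("minor", 70))   # sadness, like "Eleanor Rigby"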
Tuesday, June 22, 2010
Music and Speech in the Minor Third
Original Link - http://www.scientificamerican.com/blog/post.cfm?id=music-and-speech-share-a-code-for-c-2010-06-17
Here's a little experiment. You know "Greensleeves"—the famous English folk song? Go ahead and hum it to yourself. Now choose the emotion you think the song best conveys: (a) happiness, (b) sadness, (c) anger or (d) fear.
Almost everyone thinks "Greensleeves" is a sad song—but why? Apart from the melancholy lyrics, it's because the melody prominently features a musical construct called the minor third, which musicians have used to express sadness since at least the 17th century. The minor third's emotional sway is closely related to the popular idea that, at least for Western music, songs written in a major key (like "Happy Birthday") are generally upbeat, while those in a minor key (think of The Beatles' "Eleanor Rigby") tend towards the doleful.
The tangible relationship between music and emotion is no surprise to anyone, but a study in the June issue of Emotion suggests the minor third isn't a facet of musical communication alone—it's how we convey sadness in speech too. When it comes to sorrow, music and human speech might speak the same language.
In the study, Meagan Curtis of Tufts University's Music Cognition Lab recorded undergraduate actors reading two-syllable lines—like "let's go" and "come here"—with different emotional intonations: anger, happiness, pleasantness and sadness. She then used a computer program to analyze the recorded speech and determine how the pitch changed between syllables. Since the minor third is defined as a specific, measurable distance between pitches (a ratio of frequencies), Curtis was able to identify when the actors' speech relied on the minor third. She found that the actors consistently used the minor third to express sadness.
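To put numbers on that "ratio of frequencies": in equal temperament, a minor third spans three semitones, a frequency ratio of 2^(3/12), roughly 1.19. Below is a minimal Python sketch of that check, assuming a fundamental-frequency estimate is already available for each syllable; it illustrates the interval arithmetic, not Curtis's actual analysis software, and the half-semitone tolerance is an assumed value.

import math

def interval_semitones(f1_hz: float, f2_hz: float) -> float:
    """Signed interval from the first pitch to the second, in semitones."""
    return 12 * math.log2(f2_hz / f1_hz)

def is_minor_third(f1_hz: float, f2_hz: float, tol: float = 0.5) -> bool:
    """True if the two pitches lie roughly a minor third (3 semitones)
    apart, in either direction. The tolerance is an assumed value."""
    return abs(abs(interval_semitones(f1_hz, f2_hz)) - 3.0) < tol

# Two syllables: 220 Hz falling to about 185 Hz (220 / 2**(3/12)).
print(interval_semitones(220.0, 185.0))  # about -3.0 semitones
print(is_minor_third(220.0, 185.0))      # True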
"Historically, people haven't thought of pitch patterns as conveying emotion in human speech like they do in music," Curtis said. "Yet for sad speech there is a consistent pitch pattern. The aspects of music that allow us to identify whether that music is sad are also present in speech."
Curtis also synthesized musical intervals from the recorded phrases spoken by actors, stripping away the words, but preserving the change in pitch. So a sad "let's go" would become a sequence of two tones. She then asked participants to rate the degree of perceived anger, happiness, pleasantness and sadness in the intervals. Again, the minor third consistently was judged to convey sadness.
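A rough sketch of that reduction, assuming the two syllable pitches have already been extracted: two pure tones a minor third apart stand in for a sad "let's go," written out to a WAV file. The durations, frequencies and file name are illustrative choices; the article does not describe how the study's stimuli were actually synthesized.

import math
import struct
import wave

RATE = 44100  # samples per second

def tone(freq_hz: float, seconds: float) -> list:
    """A plain sine tone at the given frequency."""
    n = int(RATE * seconds)
    return [math.sin(2 * math.pi * freq_hz * i / RATE) for i in range(n)]

# Two syllables as two tones: 220 Hz, then a minor third down.
samples = tone(220.0, 0.4) + tone(220.0 / 2 ** (3 / 12), 0.4)

with wave.open("sad_lets_go.wav", "w") as f:
    f.setnchannels(1)                 # mono
    f.setsampwidth(2)                 # 16-bit samples
    f.setframerate(RATE)
    f.writeframes(b"".join(
        struct.pack("<h", int(s * 0.5 * 32767)) for s in samples))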
A possible explanation for why music and speech might share the same code for expressing emotion is the idea that both emerged from a common evolutionary predecessor, dubbed "musilanguage" by Steven Brown, a cognitive neuroscientist at Simon Fraser University in Burnaby, British Columbia. But Curtis points out that there is currently no effective means of empirically testing this hypothesis or determining whether music or language evolved first.
What also remains unclear is whether the minor third's influence spans cultures and languages, one of the questions Curtis would like to explore next. Previous studies have shown that people can accurately interpret the emotional content of music from cultures different from their own, based on tempo and rhythm alone.
"I have only looked at speakers of American English, so it's an open question whether it's a phenomenon that exists specifically in American English or across cultures," Curtis explained. "Who knows if they are using the same intervals in, say, Hindi?"