Wednesday, September 29, 2010

Ultimate Ableton Live Guide Available NOW

(Original Link - http://www.musicradar.com/computermusic/ableton-live-the-ultimate-guide-available-now-279329)


Ableton Live: The Ultimate Guide brings you 132 pages of lavishly produced tutorials on Ableton's amazing music production/performance package, taken from the archives of Computer Music magazine and Computer Music Specials.

Divided into four main sections – 'Live Essentials', 'Live Masterclasses', 'Get Creative' and 'Quick Guides' – The Ultimate Guide covers a hugely diverse range of subjects, including using Live's built-in effects and instruments, getting started with Max For Live, meta-recording, live performance, sound design, arrangement, mixing and much, much more.

Also included is a DVD-ROM packed with exclusive royalty-free samples from some of the biggest names in the soundware industry, free plug-ins, tutorial files and audio examples.

Ableton Live: The Ultimate Guide is available in UK newsagents now, and can be ordered online at MyFavouriteMagazines. Overseas dates are roughly: USA + 4 weeks after UK / Australia +8 weeks / Europe +2 weeks / South Africa +6 weeks / Canada +4 weeks. Alternatively, order online at www.myfavouritemagazines.co.uk

Babies Are Born to Dance, New Research Shows


(Original Link - http://www.sciencedaily.com/releases/2010/03/100315161925.htm)

Researchers have discovered that infants respond to the rhythm and tempo of music and find it more engaging than speech.

The findings, based on the study of infants aged between five months and two years old, suggest that babies may be born with a predisposition to move rhythmically in response to music.

The research was conducted by Dr Marcel Zentner, from the University of York's Department of Psychology, and Dr Tuomas Eerola, from the Finnish Centre of Excellence in Interdisciplinary Music Research at the University of Jyvaskyla.

Dr Zentner said: "Our research suggests that it is the beat rather than other features of the music, such as the melody, that produces the response in infants.

"We also found that the better the children were able to synchronize their movements with the music the more they smiled.

"It remains to be understood why humans have developed this particular predisposition. One possibility is that it was a target of natural selection for music or that it has evolved for some other function that just happens to be relevant for music processing."

Infants listened to a variety of audio stimuli including classical music, rhythmic beats and speech. Their spontaneous movements were recorded by video and 3D motion-capture technology and compared across the different stimuli.

Professional ballet dancers were also enlisted to help analyse the extent to which the babies matched their movements to the music.

The findings are published March 15 in the journal Proceedings of the National Academy of Sciences Online Early Edition.

The research was part-funded by a grant from the Swiss National Science Foundation.

For Your Brain to Work, it Helps to Have a Beat

 This is an illustration of how brain rhythms organize distributed groups of neurons into functional cell assemblies. The colors represent different cell assemblies. Neurons in widely separated brain areas often need to work together without interfering with other, spatially overlapping groups. Each assembly is sensitive to different frequencies, producing independent patterns of coordinated neural activity, depicted as color traces to the right of each network. (Credit: Ryan Canolty, UC Berkeley)


(Original Link - http://www.sciencedaily.com/releases/2010/09/100920151806.htm)

When it comes to conducting complex tasks, it turns out that the brain needs rhythm, according to researchers at the University of California, Berkeley.

Specifically, cortical rhythms, or oscillations, can effectively rally groups of neurons in widely dispersed regions of the brain to engage in coordinated activity, much like a conductor will summon up various sections of an orchestra in a symphony.

Even the simple act of catching a ball necessitates an impressive coordination of multiple groups of neurons to perceive the object, judge its speed and trajectory, decide when it's time to catch it and then direct the muscles in the body to grasp it before it whizzes by or drops to the ground.

Until now, neuroscientists had not fully understood how these neuron groups in widely dispersed regions of the brain first get linked together so they can work in concert for such complex tasks.

The UC Berkeley findings are being published in the online early edition of the journal Proceedings of the National Academy of Sciences.

"One of the key problems in neuroscience right now is how you go from billions of diverse and independent neurons, on the one hand, to a unified brain able to act and survive in a complex world, on the other," said principal investigator Jose Carmena, UC Berkeley assistant professor at the Department of Electrical Engineering and Computer Sciences, the Program in Cognitive Science, and the Helen Wills Neuroscience Institute. "Evidence from this study supports the idea that neuronal oscillations are a critical mechanism for organizing the activity of individual neurons into larger functional groups."

The idea behind anatomically dispersed but functionally related groups of neurons is credited to neuroscientist Donald Hebb, who put forward the concept in his 1949 book "The Organization of Behavior."

"Hebb basically said that single neurons weren't the most important unit of brain operation, and that it's really the cell assembly that matters," said study lead author Ryan Canolty, a UC Berkeley postdoctoral fellow in the Carmena lab.

It took decades after Hebb's book for scientists to start unraveling how groups of neurons dynamically assemble. Not only do neuron groups need to work together for the task of perception -- such as following the course of a baseball as it makes its way through the air -- but they then need to join forces with groups of neurons in other parts of the brain, such as in regions responsible for cognition and body control.

At UC Berkeley, neuroscientists examined existing data recorded over the past four years from four macaque monkeys. Half of the subjects were engaged in brain-machine interface tasks, and the other half were participating in working memory tasks. The researchers looked at how the timing of electrical spikes -- or action potentials -- emitted by nerve cells was related to rhythms occurring in multiple areas across the brain.

Among the squiggly lines, patterns emerged that give literal meaning to the phrase "tuned in." The timing of when individual neurons spiked was synchronized with brain rhythms occurring in distinct frequency bands in other regions of the brain. For example, the high-beta band -- 25 to 40 hertz (cycles per second) -- was especially important for brain areas involved in motor control and planning.
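The spike-to-rhythm "tuning" described here is commonly quantified with a phase-locking measure: sample the oscillation's phase at each spike time and check whether those phases cluster. The sketch below is a hypothetical toy illustration, not the study's actual analysis pipeline; the function name and simulated data are invented, and an idealized 30 Hz high-beta rhythm stands in for real recordings.

```python
import math
import random

def phase_locking_value(spike_times, freq):
    """Mean resultant length of a freq-Hz oscillation's phase sampled at
    each spike time: 1.0 means every spike lands at the same phase of the
    rhythm; values near 0 mean no consistent phase relationship."""
    phases = [2 * math.pi * freq * t for t in spike_times]
    re = sum(math.cos(p) for p in phases) / len(phases)
    im = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(re, im)

random.seed(1)
freq = 30.0  # high-beta, the band the study links to motor control and planning

# A "tuned in" neuron: one spike per cycle over 5 seconds, jittered by ~2 ms.
locked = [k / freq + random.gauss(0, 0.002) for k in range(150)]
# An unrelated neuron: the same number of spikes scattered uniformly in time.
scattered = [random.uniform(0, 5) for _ in range(150)]

print(phase_locking_value(locked, freq))     # high, near 1
print(phase_locking_value(scattered, freq))  # low, near 0
```

The same calculation run against different frequency bands is what lets analyses like this separate assemblies: a neuron locked at 30 Hz scores near zero against a rhythm at some other frequency.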

"Many neurons are thought to respond to a receptive field, so that if I look at one motor neuron as I move my hand to the left, I'll see it fire more often, but if I move my hand to the right, the neuron fires less often," said Carmena. "What we've shown here is that, in addition to these traditional 'external' receptive fields, many neurons also respond to 'internal' receptive fields. Those internal fields focus on large-scale patterns of synchronization involving distinct cortical areas within a larger functional network."

The researchers expressed surprise that this spike dependence was not restricted to the neuron's local environment. It turns out that this local-to-global connection is vital for organizing spatially distributed neuronal groups.

"If neurons only cared about what was happening in their local environment, then it would be difficult to get neurons to work together if they happened to be in different cortical areas," said Canolty. "But when multiple neurons spread all over the brain are tuned in to a specific pattern of electrical activity at a specific frequency, then whenever that global activity pattern occurs, those neurons can act as a coordinated assembly."

The researchers pointed out that this mechanism of cell assembly formation via oscillatory phase coupling is selective. Two neurons that are sensitive to different frequencies or to different spatial coupling patterns will exhibit independent activity, no matter how close they are spatially, and will not be part of the same assembly. Conversely, two neurons that prefer a similar pattern of coupling will exhibit similar spiking activity over time, even if they are widely separated or in different brain areas.

"It is like the radio communication between emergency first responders at an earthquake," Canolty said. "You have many people spread out over a large area, and the police need to be able to talk to each other on the radio to coordinate their action without interfering with the firefighters, and the firefighters need to be able to communicate without disrupting the EMTs. So each group tunes into and uses a different radio frequency, providing each group with an independent channel of communication despite the fact that they are spatially spread out and overlapping."

The authors noted that this local-to-global relationship in brain activity may prove useful for improving the performance of brain-machine interfaces, or lead to novel strategies for regulating dysfunctional brain networks through electrical stimulation. Treatment of movement disorders through deep brain stimulation, for example, usually targets a single area. This study suggests that gentler rhythmic stimulation in several areas at once may also prove effective, the authors said.

Other co-authors of the study are Jonathan Wallis, UC Berkeley associate professor of psychology; Dr. Karunesh Ganguly, UC Berkeley post-doctoral fellow in the Carmena lab and staff scientist at the San Francisco Veterans Affairs Medical Center; Steven Kennerley, now a senior lecturer at University College London's Institute of Neurology; Charles Cadieu, UC Berkeley post-doctoral researcher in neuroscience; and Kilian Koepsell, UC Berkeley assistant researcher in neuroscience.

The National Institutes of Health, National Science Foundation, U.S. Department of Veterans Affairs, American Heart Association, Defense Advanced Research Projects Agency and the Multiscale System Center helped support this research.


Monday, September 20, 2010

What part of the brain interprets music?

Music lights up almost every area of the brain, which shouldn’t be a surprise since it makes people tap their feet, encourages the recollection of vivid memories and has the potential to lighten the mood.

Around the outside

1. Prefrontal cortex: This brain region plays a role in the creation, satisfaction and violation of expectations. It may react, for instance, when a beat goes missing. Recent work has shown that during improvisation a part of the prefrontal cortex involved in monitoring performance shuts down, while parts involved in self-initiated thoughts ramp up.

2. Motor cortex: Music is not independent of motion. Foot-tapping and dancing often accompany a good beat, meaning the motor cortex gets involved. And playing an instrument requires carefully timed physical movements. In some cases, this area of the brain is engaged when a person simply hears notes, suggesting a strong link to the auditory cortex.

3. Sensory cortex: Playing an instrument sends tactile messages to the sensory cortex, as keys are hit, for example.

4. Auditory cortex: Hearing any sound, including music, involves this region, which contains a map of pitches for the perception and analysis of tones.

5. Visual cortex: Reading music or watching a performer’s movements activates the visual cortex.

The inside track

 
6. Cerebellum: Movements such as foot-tapping and dancing activate this part of the brain. This could be because of the cerebellum’s role in timing and synchrony; it helps people track the beat. The cerebellum is also involved in the emotional side of music, lighting up with likable or familiar music, and appears to sense the difference between major and minor chords.

7. Hippocampus: Known to play a role in long-term memory, the hippocampus (part of which is shown) may help the brain retrieve memories that give a sound meaning or context. It also helps people link music they have heard before to an experience and to a given context, possibly explaining why it is activated during pleasant or emotionally charged music.

8. Amygdala: The amygdala seems to be involved in musical memories. It reacts differently to major and minor chords, and music that leads to chills tends to affect it. Studies suggest the skillful repetition heard in music is emotionally satisfying.

9. Nucleus accumbens: This brain structure is thought to be the center of the reward system. It reacts to emotional music, perhaps through the release of dopamine.

Whatever music is, it’s a basic part of being human



Scientists are increasingly interested in the nature and origins of music, as this special edition on music illustrates (see Page 17). As director of the Centre for Music and Science at the University of Cambridge in England, Ian Cross studies music perception and culture’s role in musical experience. A former professional classical guitarist who still performs occasionally, Cross is the only Cambridge music faculty member to have declined a chance to join iconic ’70s pop band the Bay City Rollers. He recently discussed music’s scientific standing with Science News writer Bruce Bower.

What is music?

I can’t give a good definition for music. The contemporary Western view of music is that it consists of complex, patterned sounds with a structure that we find pleasurable to listen to. But music is much more than that. All cultures have music, but many cultures don’t have a word for music. In traditional societies, there are musical performers, but music is primarily interactive, so everybody participates and it’s embedded in daily experience.

Music brings people together by having flexible meanings. Two people are unlikely to agree on precisely what a piece of music means because it triggers different sets of associations for each person. That makes music well-suited to uncertain social situations, such as funerary rites, circumcision rites and ceremonies for greetings and departures of visitors and group members.

How did music evolve?

People made sophisticated kinds of music long ago, as shown by 40,000-year-old flutes recently found in Germany (SN: 7/18/09, p. 13). Musical practices must have come out of Africa and predated humanity’s emergence around 200,000 years ago. I suspect music evolved along with speech, probably by the time of Homo heidelbergensis [around 600,000 years ago]. So Neandertals and the first humans would have had music.

Modern cultures separate music from language, but music and speech are probably the same thing. Speech can be very music-like. Think of a Southern Baptist preacher acting out his message in a musical way. And musical interactions typically involve vocal sounds, words and gestures. There is a rhythmic and emotional complexity to both music and speech. But language by itself is not as flexible in its meanings to different people as music is.

Musical behaviors of nonhuman animals may have contributed to musical evolution. Primates and other animals use musical sounds to communicate about whether to approach or avoid certain areas. They use tempos and pitch ranges for danger and safety that are associated with people’s emotional responses to music. In the famous shower scene from the movie Psycho, the soundtrack uses carefully fashioned violin phrases that viewers experience as screams, a clear danger sign for many animals.

What are the biggest public and scientific misconceptions about music?

People think that there are musicians and nonmusicians. Yet nearly everyone can finely distinguish between various musical genres and styles. Musical performers just engage with music in a more direct way.

No, wait. An even bigger misunderstanding is the assumption that music doesn't matter. Music programs in schools are often the first casualties of economic recessions. Without these programs, we're not enabling the expression of a deep, biologically grounded communication system.

I started playing the piano at age 6 and decided I didn't like it. Then I tried the violin and didn't like it. Then I took up the clarinet and hated it. I started with the guitar at age 9 and still play today. Kids need to be given the chance to try different instruments and find one that feels right.

Too many scientists think that Mozart is music but two kids singing a street chant is not music. In our culture, music has become a commodity that's divorced from action. It's thought of as entertainment, not as a fundamental communication system.

What are the prospects for a better scientific understanding of music?

Although there have been some fabulous experimental studies of music perception, music is a bit too wild to be trapped in the lab. I’ve worked with ethnomusicologists who play recorded music to members of non-Western groups and try to measure how they perceive and react to it. But these people don’t think of a recording as music. They’re bored by it. It makes no sense to them because it’s not interactive. Researchers need to devise better ways to study music across cultures and in real-life situations.

Monday, September 13, 2010

Summer Love II on Tango Hanto - The Story


The weekend started for Jamie, Natsuko & I early Friday morning. I think Jamie awoke to his cat licking his face before 6am...and he was up early...and we were on our way to Tango. We kind of knew the amount of work that was ahead of us...and we knew that it was going to take 2 days of full-on work to get everything set up in time for Saturday evening. Anyway, we cruised our way North...and stopped to get tons of food and beer.

When we finally reached Tango Hanto, it was about 12:30...and we immediately got to work. Our idea was to set up Jamie's stage creation in a different location than last time because the typhoon kicked up tons of sticks and garbage on the other site. First, we put together the DJ table and the VJ table. Then, Natsuko immediately got to work on making signs for the party, I started carrying old bamboo poles, about 4 meters in length, across the beach to the other site, while Jamie jumped on his laptop and started to use MAYA to draw up the design.


Jamie is a genius, and somehow this idea came together in about an HOUR while sitting in the car (and ridiculously exhausted, no less). We finally decided how much bamboo we needed, and went up to the old temple to get all the final pieces. After enduring countless mosquito bites where the bamboo was, we brought it all down to the party site, and decided to celebrate our hard work with our FIRST meal of the day and some chu-hais...while watching the "Dragon Sunset" (as Natsuko liked to call it).


Around this time, Matt showed up and started to set up the food stalls. Jamie and I realized that we still had a lot of work to do, and that we needed to set up lights and get the stage frame built by the end of the night or else we simply would not be finished in time for the party. So...that's what we did. We were really on our last bit of energy...but...with the alcohol flowing...we got to work and started to construct the frame. After setting it up (and then taking it back down to get the flags on), it was finally looking like a stage. At the end of the night (around 11pm), it looked something like this.


That was about it for the first day. We enjoyed some relaxation with some friends, and quickly passed out...until the morning.

The sun came up, and we were back at it around 8 or 9 in the morning. We still had tons of work to do...and really wanted to have the party place looking like a party by the time people started showing up. So, we started to get the final bamboo poles set up on the stage, and started to place the white cloth on the back for the projection. We also got all the sound set up, and tested. It was so nice to have tunes to listen to while working, and we listened to tons of OASIS DJ sets throughout the day. That really made it easier to work!



Everything was ready to go, and DJ Kamon started to spin some really old-school reggae tracks while we finished up. The sun was starting to set, and yet another amazing sunset blessed us all.


One thing I forgot to mention was that one of our KEY team members was not there. At the last "Summer Love" party we did...Jamie, Natsuko, myself, & RICK were dubbed "Team Hurry the Fuck Up". Well, Rick wasn't able to make it until later...so we decided that, for him to hold his place in "Team Hurry the Fuck Up"...he would have to redeem himself. I think it was Natsuko and Jamie's idea...but we had decided to BURY Rick in the sand. And, to add insult to injury, we were going to make him dig his own hole and tell him that it was for part of the stage.


What's even funnier is that I clearly remember Rick saying at this point, "Do you guys really need this hole?" We were laughing harder than ever. Anyway, he dug a HUGE hole, big enough for him to sit in...and we buried him....all the while laughing our asses off! He became the Mer-man!


We also got this beautiful sunset shot.


So, the sun was going down...and Dan Kane, Darren, Tender, Dan Hart, & all their friends played an AMAZING acoustic set while the sun went down.


The sun went down on the acoustic set, and it was time for the DJs to start the night. I was the first DJ...and it was the first time the visuals could be seen in full force. The set up looked AMAZING!

The party was pumping...and people were dancing. FroBot (myself), Joey, and Nori all were lucky enough to lay down sets before the "RAIN MONSTER" came. It was crazy!!! It went a little sumthin like this.

Around 9:30pm, someone came running into the party yelling "WE'RE ABOUT TO GET NAILED BY A RAIN STORM!!!!". The tone of voice is what got me....it was DEAD SERIOUS. I looked at Jamie and we both knew exactly what had to be done. Luckily, I had set up a tent for equipment, and immediately started getting out tarps and whatnot. Literally 1 minute after this, the sky lit up...and rain came down like jungle rain. There was no warning of this storm...and if it wasn't for someone using an iPhone and looking at the local radar...we would have been up shit creek. We got the gear out of the rain JUST in time...and...of course...have no pictures to show for it. The last thing on our minds was "taking pictures". The rain didn't last long, but the radar showed a bunch more rain coming throughout the night....so we were hesitant to put up the expensive sound system again. After a little bit of drizzle...and some brainstorming between Jamie and me....we set up the tarps over the stage, put the speakers under them...and set up the iPod for about a half hour. This at least kept the music going, and people were ready for a party again.

However, Jamie and I made a BONEHEAD move. Remember: if you STAPLE a tarp to bamboo poles, it NO LONGER works as a tarp is intended to! HAHAHA. Needless to say, water was kind of dripping onto the stage...and we were worried about the gear. Nevertheless, DOM PANG said he would risk his own gear....so we left out only the mixer...and he set up. He started to pound out an AWESOME techno set...and people were, again, dancing and partying it up!

The party was supposed to end around 2am...but...well...it didn't. It went all night. Dom rocked out for a while, Asogi too, then Alex aka Freebass, then Karla jumped behind the wheel for the sunrise set (which she had eagerly wanted!!!).


I fell asleep for like an hour around this point...and Tim spun for a while, then Karla again...and they made sure that this party wasn't gonna stop!!!

The weather was looking promising, so we set the gear back up...and the DJs who use CDJs were able to get on. Hiran played a nice chill morning set...and kept it moving through the morning.

At some point...Hiran looked me straight in the eyes and said "Tsukareta!" (I'm tired). I couldn't agree with him more. I was running on 1 hour of sleep, and most other people were at about the same point. I needed to find a DJ or SOMEONE to go on to make sure that this party didn't stop....and then, one of the locals showed up with his FLUTE and did a small performance for us. It was FUNNY to say the least.


When he finished up, I was unable to find any DJs still awake who hadn't played already...so I jumped back on for a long 2 hour set. It was at this point people started to take pictures again...(sorry to the DJs there are no pictures of...it seems everyone dropped out of picture mode from like 12pm-9am).


I was dead tired...so Masamune jumped behind the decks for the final set of the party. People were still dancing away until about noon.

When Masamune finished... Jamie, Natsuko, and I were feeling pooped...but knew there was still a ton of work ahead. We decided to tear down everything while we still had help from everyone else. We started to tear down...and it happened in a fraction of the time it took to put up. 


We packed it all up...enjoyed some swimming for a bit...and then got on the road. I passed out within a few seconds of leaving...only to awake to some ice cream (yea, team hurry the fuck up LOVES ice cream).

With everything finished, Summer Love II was a major success. The energy was great, the people were so friendly, and Love & Peace was in the air. This party would not have been possible without everyone's help. Matt, Jamie, Natsuko, Joey, & all the artists, performers, & food stall friends....thank you so much for all your hard work. I hope we can continue to do parties like this in the future....where LOVE & PEACE are the only things that matter!




Why do DJs with No Music Skills Understand Music?



Simply Listening To Music Affects One’s Musicality

Researchers at the University of Amsterdam (UvA) have demonstrated how much the brain can learn simply through active exposure to many different kinds of music. “More and more labs are showing that people have the sensitivity for skills that we thought were only expert skills,” Henkjan Honing (UvA) explains.

“It turns out that mere exposure makes an enormous contribution to how musical competence develops.” The results were recently presented at the Music & Language conference, organized by Tufts University in Boston, and will be published in an upcoming issue of the Journal of Experimental Psychology: Human Perception and Performance.

The common view among music scientists is that musical abilities are shaped mostly by intense musical training, and that they remain rather rough in untrained listeners, the so-called Expertise hypothesis.

However, the UvA study shows that listeners without formal musical training, but with sufficient exposure to a certain musical idiom (the Exposure hypothesis), perform similarly to formally trained listeners in a musical task.

Furthermore, the results show that listeners generally do better in their preferred musical genre. As such, the study provides evidence for the idea that some musical capabilities are acquired through mere exposure to music. Just listen and learn!

In addition, the study is one of the first to take advantage of the possibilities of online listening experiments, comparing musicians and non-musicians of all ages.

Exploring The Sounds Of Silence



Silence in music is not really silent. Research by a University of Arkansas music theorist, Elizabeth Hellmuth Margulis, reveals how context affects listeners’ experience of silence in music.

“The same acoustic silence, embedded in two different excerpts, can be perceived dramatically differently,” Margulis wrote in an article in Music Perception that explores the transformation from acoustic silence to perceived silence.

Silence offers “an opportunity to study the active participatory nature of musical engagement,” Margulis wrote. There has been little experimental study of musical silence up to now.

“Silent periods could provide a unique chance to study the way that past musical events shape expectations about future ones, and the way that underacknowledged, often taken for granted musical elements (such as rests) are actually suffused with the full extent of ‘musical’ listening,” she wrote.

Silence in music communicates in a similar manner to silence in speaking, Margulis said. Sometimes the duration of the pause indicates the importance of the segment. In written language, a pause at the end of a paragraph is longer than the pause at the end of a sentence. Pauses in language are also used for expressive effect, Margulis explained:

“For example, I could say ‘You know what happened?’ Pause. ‘He called her.’ And that pause in the right context is really tense, and you get everyone leaning forward. Music can do something similar.”

When a listener encounters silence in a musical work, Margulis wrote, “Impressions of the music that preceded the silence seep into the gap, as do expectations about what may follow.”

Listeners’ impressions and expectations can have a powerful effect on how they hear a silence, to the extent that identical acoustical silences may come to “sound” quite different. For example, Margulis found that musical context can cause two silences of the same duration “to seem like they occupy different lengths of time or carry different amounts of musical tension.”

Margulis’ research involved two experiments, one using musical excerpts from commercially available recordings. The second experiment used simpler musical excerpts produced specifically for the study with carefully measured and controlled silences.

Participants without musical training were selected for both experiments, so that their responses would reflect reactions to the music they were hearing rather than assessments based on formal musical training. They proved to be “highly sensitive” to the subtleties of silence in its musical context.

“I’m interested in showing how listeners without any special training know more than they think they know,” Margulis said. “You don’t need courses and lectures to understand music; it’s meant to naturally speak to you.”

Margulis is an assistant professor of music in the J. William Fulbright College of Arts and Sciences at the University of Arkansas. Her article “Silences in Music Are Musical Not Silent: An Exploratory Study of Context Effects on the Experience of Musical Pauses” appears in the June 2007 issue of Music Perception.

How Music 'Moves' Us: Listeners' Brains Second-Guess the Composer



Have you ever accidentally pulled your headphone socket out while listening to music? What happens when the music stops? Psychologists believe that our brains continuously predict what is going to happen next in a piece of music. So, when the music stops, your brain may still have expectations about what should happen next.

A new paper published in NeuroImage predicts that these expectations should be different for people with different musical experience and sheds light on the brain mechanisms involved.

Research by Marcus Pearce, Geraint Wiggins, Joydeep Bhattacharya and their colleagues at Goldsmiths, University of London has shown that expectations are likely to be based on learning through experience with music. Music has a grammar, which, like language, consists of rules that specify which notes can follow which other notes in a piece of music. According to Pearce: "the question is whether the rules are hard-wired into the auditory system or learned through experience of listening to music and recording, unconsciously, which notes tend to follow others."

The researchers asked 40 people to listen to hymn melodies (without lyrics) and state how expected or unexpected they found particular notes. They simulated a human mind listening to music with two computational models. The first model uses hard-wired rules to predict the next note in a melody. The second model learns through experience of real music which notes tend to follow others, statistically speaking, and uses this knowledge to predict the next note.
The results showed that the statistical model predicts the listeners' expectations better than the rule-based model. It also turned out that expectations were higher for musicians than for non-musicians and for familiar melodies -- which also suggests that experience has a strong effect on musical predictions.
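As a rough illustration of the statistical approach, a model can simply count which notes tend to follow which in a training set and use those counts as expectations. This is a toy first-order Markov sketch, not the researchers' actual (much richer) model, and the melodies are made-up lists of MIDI note numbers:

```python
from collections import Counter, defaultdict

class NoteModel:
    """Toy first-order Markov model of melodic expectation."""

    def __init__(self):
        # transitions[prev][next] = how often `next` followed `prev`
        self.transitions = defaultdict(Counter)

    def train(self, melodies):
        for melody in melodies:
            for prev, nxt in zip(melody, melody[1:]):
                self.transitions[prev][nxt] += 1

    def probability(self, prev, nxt):
        """Estimated probability that `nxt` follows `prev`."""
        counts = self.transitions[prev]
        total = sum(counts.values())
        return counts[nxt] / total if total else 0.0

model = NoteModel()
model.train([[60, 62, 64, 62, 64, 65],   # C D E D E F
             [60, 62, 60]])               # C D C
# D->E was seen twice, D->C once, so E is the more "expected" note:
print(model.probability(62, 64) > model.probability(62, 60))  # True
```

A note the model assigns low probability to corresponds to an "unexpected" note in the listening experiment.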

In a second experiment, the researchers examined the brain waves of a further 20 people while they listened to the same hymn melodies. Although in this experiment the participants were not explicitly informed about the locations of the expected and unexpected notes, their brain waves in response to these notes differed markedly. Typically, the timing and location of the brain wave patterns in response to unexpected notes suggested that they stimulate responses that synchronise different brain areas associated with processing emotion and movement. On these results, Bhattacharya commented, "… as if music indeed 'moves' us!"

These findings may help scientists to understand why we listen to music. "It is thought that composers deliberately confirm and violate listeners' expectations in order to communicate emotion and aesthetic meaning," said Pearce. Understanding how the brain generates expectations could illuminate our experience of emotion and meaning when we listen to music.

The Metaphor of "High" and "Low" in Pitch

 


Notes by David Huron


Why are the terms "high" and "low" used to describe pitch?
"There is ample evidence that our characterization of musical pitches in terms of "high" and "low" is basically metaphorical. Consider "high" and "low" on the piano: how can D4 be "above" C4 on the piano when they are both on the same horizontal plane? Think of playing the two notes on the 'cello -- to play the "higher" D4, we have to move our left hand down, so that it is closer to the ground. Behind these linguistic expressions is the conceptual metaphor PITCH RELATIONSHIPS ARE RELATIONSHIPS IN VERTICAL SPACE, which maps spatial orientations such as up-down onto the pitch continuum.

"Although Scruton argued that it was virtually inconceivable to construe pitch in any way other than an up-down spatial relationship, evidence to the contrary comes from a variety of sources. Greek music theorists of antiquity spoke not of "high" and "low" but of "sharpness" and "heaviness"; in Bali and Java pitches are not "high" and "low" but "small" and "large"; and among the Suyá of the Amazon basin, pitches are not "high" and "low" but "young" and "old." [Cites as follows: On the matter of the characterization of pitch by Greek music theorists of antiquity see Andrew Barker (ed.), Greek Musical Writings, Volume II: Harmonic and Acoustic Theory (Cambridge: Cambridge University Press, 1989), n. 43, p. 134. For information about the characterization of pitch in Bali and Java I am indebted to Benjamin Brinner, personal communication. Regarding the characterization of musical pitch by the Suyá, see Anthony Seeger, Why Suyá Sing: A Musical Anthropology of an Amazonian People (Cambridge: Cambridge University Press, 1987).]

"Although Scruton's (and, by extension, Cook's) assertion about the metaphoricity of musical understanding occurs as part of a larger rationalistic argument about musical ontology, there is a body of recent empirical work by cognitive scientists that supports this assertion. This research suggests that metaphor is not simply an anomalous use of language or a mark of the way we conceive intentional objects but is in fact central to human understanding as a whole. This research is also distinct from other discussions of the importance of metaphor to musical understanding, whether from a philosophical or music-analytical perspective, in that it offers a way to explain why correlations of the sort noted by Scruton -- between musical pitch and physical space, or between successions of pitches and motion through physical space -- are possible in the first place, and how such correlations are constrained."

Cultured Brain Cells Taught to Keep Time



The ability to tell time is fundamental to how humans interact with each other and the world. Timing plays an important role, for example, in our ability to recognize speech patterns and to create music.

Patterns are an essential part of timing. The human brain easily learns patterns, allowing us to recognize familiar patterns of shapes, like faces, and timed patterns, like the rhythm of a song. But exactly how the brain keeps time and learns patterns remains a mystery.
In this three-year study, UCLA scientists attempted to unravel the mystery by testing whether networks of brain cells kept alive in culture could be "trained" to keep time. The team stimulated the cells with simple patterns -- two stimuli separated by different intervals lasting from a twentieth of a second up to half a second.

After two hours of training, the team observed a measurable change in the cellular networks' response to a single input. In the networks trained with a short interval, the network's activity lasted for a short period of time. Conversely, in the networks trained with a long interval, network activity lasted for a longer amount of time.

The UCLA findings are the first to suggest that networks of brain cells in a petri dish can learn to generate simple timed intervals. The research sheds light on how the brain tells time and will enhance scientists' understanding of how the brain works.
The study was supported by a grant from the National Institute of Mental Health.



How Music Training Primes Nervous System and Boosts Learning



(Original Link - http://www.sciencedaily.com/releases/2010/07/100720152252.htm)

Those ubiquitous wires connecting listeners to you-name-the-sounds from invisible MP3 players -- whether of Bach, Miles Davis or, more likely today, Lady Gaga -- only hint at music's effect on the soul throughout the ages.

Now a data-driven review by Northwestern University researchers that will be published July 20 in Nature Reviews Neuroscience pulls together converging research from the scientific literature linking musical training to learning that spills over to skills including language, speech, memory, attention and even vocal emotion. The science covered comes from labs all over the world, from scientists of varying scientific philosophies, using a wide range of research methods.

The explosion of research in recent years focusing on the effects of music training on the nervous system, including the studies in the review, has strong implications for education, said Nina Kraus, lead author of the Nature perspective, the Hugh Knowles Professor of Communication Sciences and Neurobiology and director of Northwestern's Auditory Neuroscience Laboratory.
Scientists use the term neuroplasticity to describe the brain's ability to adapt and change as a result of training and experience over the course of a person's life. The studies covered in the Northwestern review offer a model of neuroplasticity, Kraus said. The research strongly suggests that the neural connections made during musical training also prime the brain for other aspects of human communication.

An active engagement with musical sounds not only enhances neuroplasticity, she said, but also enables the nervous system to provide the stable scaffolding of meaningful patterns so important to learning.

"The brain is unable to process all of the available sensory information from second to second, and thus must selectively enhance what is relevant," Kraus said. Playing an instrument primes the brain to choose what is relevant in a complex process that may involve reading or remembering a score, timing issues and coordination with other musicians.

"A musician's brain selectively enhances information-bearing elements in sound," Kraus said. "In a beautiful interrelationship between sensory and cognitive processes, the nervous system makes associations between complex sounds and what they mean." The efficient sound-to-meaning connections are important not only for music but for other aspects of communication, she said.
The Nature article reviews literature showing, for example, that musicians are more successful than non-musicians in learning to incorporate sound patterns for a new language into words. Children who are musically trained show stronger neural activation to pitch changes in speech and have a better vocabulary and reading ability than children who did not receive music training.

And musicians trained to hear sounds embedded in a rich network of melodies and harmonies are primed to understand speech in a noisy background. They exhibit both enhanced cognitive and sensory abilities that give them a distinct advantage for processing speech in challenging listening environments compared with non-musicians.

Children with learning disorders are particularly vulnerable to the deleterious effects of background noise, according to the article. "Music training seems to strengthen the same neural processes that often are deficient in individuals with developmental dyslexia or who have difficulty hearing speech in noise."

Currently what is known about the benefits of music training on sensory processing beyond that involved in musical performance is largely derived from studying those who are fortunate enough to afford such training, Kraus said.

The research review, the Northwestern researchers conclude, argues for serious investment of resources in music training in schools, accompanied by rigorous examination of the effects of such instruction on listening, learning, memory, attention and literacy skills.
"The effect of music training suggests that, akin to physical exercise and its impact on body fitness, music is a resource that tones the brain for auditory fitness and thus requires society to re-examine the role of music in shaping individual development," the researchers conclude.

Musical Prescriptions...Music as Medicine



Patients could be prescribed music tailored to their needs as a result of new research.
Scientists at Glasgow Caledonian University are using a mixture of psychology and audio engineering to see how music can prompt certain responses.

They will analyse a composition's lyrics, tone or even the thoughts associated with it.
Those behind the study say it could be used to help those suffering physical pain or conditions like depression.

By considering elements of a song's rhythm patterns, melodic range, lyrics or pitch, the team believe music could one day be used to help regulate a patient's mood.
Audio engineer Dr Don Knox, who is leading the study, said the impact of music on an individual could be significant.

He said: "Music expresses emotion as a result of many factors. These include the tone, structure and other technical characteristics of a piece.

"Lyrics can have a big impact too. 
"But so can purely subjective factors: where or when you first heard it, whether you associate it with happy or sad events and so on."

So far the team has carried out detailed audio analysis of certain music, identified as expressing a range of emotions by a panel of volunteers.
'Emotional content'
 
Their ultimate aim is to develop a mathematical model that explains music's ability to communicate different emotions.

This could, they say, eventually make it possible to develop computer programs that identify music capable of influencing mood.

"By making it possible to search for music and organise collections according to emotional content, such programs could fundamentally change the way we interact with music", said Dr Knox.
"Some online music stores already tag music according to whether a piece is 'happy' or 'sad'.
"Our project is refining this approach and giving it a firm scientific foundation, unlocking all kinds of possibilities and opportunities as a result."
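As a very rough sketch of what such a mood-identifying program might do, the toy classifier below maps a few musical features to a "happy" or "sad" label. The features, weights and threshold here are illustrative assumptions, not the Glasgow team's actual model:

```python
def classify_mood(tempo_bpm, is_major, pitch_range_semitones):
    """Toy happy/sad classifier from simple musical features.

    Weights are illustrative guesses: faster tempo, major mode and a
    wider melodic range all push the score toward "happy".
    """
    score = 0.0
    score += (tempo_bpm - 100) / 60.0            # tempo relative to ~100 bpm
    score += 1.0 if is_major else -1.0           # major vs minor mode
    score += (pitch_range_semitones - 12) / 12.0  # range relative to an octave
    return "happy" if score > 0 else "sad"

print(classify_mood(140, True, 19))   # upbeat major tune with wide range
print(classify_mood(70, False, 7))    # slow minor tune with narrow range
```

A real system would learn such weights from listener ratings (like the panel of volunteers mentioned above) rather than hand-picking them.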

Tuesday, September 7, 2010

How to Fix a Broken Midi Controller! Sliders!




(Original Link - http://www.djtechtools.com/2010/08/10/vci-100-how-to-fix-faders-jog-wheel/#more-7393)

A pretty common question I see on the DJ TechTools forum is about channel faders that are acting up, jumping around and not responding like they should. The normal answer is to clean and lubricate your cross-faders – that will usually fix the problem. However, when my cross-faders started to act up, I couldn't find any good info on what exactly needed to be done. So as I was doing my maintenance, I shot some video and put together a walkthrough to make things easier for people who might be having the same problem. As a bonus, while I was inside the VCI I went ahead and lubricated my jog wheels, as one of them felt like it was grinding a little when I spun it.

BILL OF MATERIALS

 

What you’ll need to complete this bit of maintenance is a Phillips screwdriver, canned air, isopropyl alcohol, fader lubricant and jog wheel grease. The Caig line of lubricants and cleaners seems to be highly regarded throughout the DJ community, so I went with their DeoxIT G5 lubricant. Whatever you decide to use, make sure it's safe for electrical connections and plastic. For lubricating the jog wheels I used a very small amount of carbon-enriched conductive silicone grease.

GETTING TO IT

  1. First you’ll need to remove the fader knobs and the bottom faceplate, then remove the six screws for the faders. The faders will fall through the slots as the knobs aren’t on them, but that’s fine.
  2. Then turn over the VCI and remove the one screw towards the bottom rear of the unit and the four screws on the sides. Slide the baseplate forward a little, lift up and set it to the side.

FADER CLEAN AND LUBE

Your faders will be loose inside, so all you have to do now is unplug them one at a time (see photo for where to grab the plug) and blow a few short bursts of canned air inside each fader slot to get out large pieces of dust and contaminants.


Set your fader in a small container and pour just enough alcohol in to cover the top of the fader. Move the fader back and forth while the whole piece is submerged in the alcohol to clean out any contaminants that may have built up over time.
Remove the fader from the bath and gently tap out the bulk of the alcohol that remains inside. Next you’ll take your canned air and gently blow off any remaining alcohol. Eye protection is a good idea here as alcohol that might blow into your face probably wouldn’t feel too good.
Next, spray just a quick, short burst of lubricant inside the fader and slide it back and forth to distribute the lubricant evenly, then plug it back in and repeat the procedure for the remaining faders. You can also put in a drop or two of a heavier fader lubricant, although I chose not to because I have a dog and a cat and I live in an old, dusty house. My thought was that, with all of these things combined, a heavier lubricant might attract pet hair and dust, which would just cause me more problems later on.

JOG WHEEL LUBRICATION

 

If you’re going to lubricate your jog wheels, remove the three screws holding the white plastic piece on. If you look inside you’ll see a thin plate that contacts the rotary post acting as a ground finger. You want to get your conductive lubricant between the plate and the rotary post. A very small amount of conductive lubricant should do the job fine. In case you missed it, I said conductive. If it’s not conductive you’ll lose your touch sensitivity on the jog wheels!



You don’t need to take the jog wheel apart. This photo is just to illustrate what it looks like inside and give you a better idea of where you’re trying to put your conductive grease.
It’s important to make sure the bushing is in alignment when you put it back on. Follow the steps outlined in the video to make sure it’s put on securely and centered just right.

REASSEMBLY

One final thing you can do before putting everything back together is to check the cardboard piece and make sure it's secure. A once-over of all the plugs is also a good idea while you have the VCI open.
You’ll want to screw the faders back in while the back is off so you have access to them from the other side. Make sure you don’t have any wires between the faders and the faceplate as they’ll get pinched, and that could potentially cause problems. Another thing I want to point out is that it’s possible to screw the fader screws into the slot the fader slides in, so make sure you’ve got all the holes lined up just right.
So that’s it – a good, detailed walkthrough of a relatively simple procedure that will keep your VCI running smoothly and let you bust out more super-awesome mixes.
Song credit in the video goes to DJ TechTools forum member, Lambox. Check out his Soundcloud page here http://soundcloud.com/lambox

Monday, September 6, 2010

Sounds like art fraud: Acoustic waves give clues to paintings' provenance


(Original Link Here - http://www.scientificamerican.com/blog/post.cfm?id=sounds-like-art-fraud-acoustic-wave-2010-09-04)

Theft, imitation and outright deception can make a painting's history even murkier than centuries of accumulated grime. But getting to the bottom of a piece of art's origins can be crucial for restoration—and forensics.

In recent decades, art scholars, restorers and forensic specialists have relied increasingly on scientific techniques to determine the chemical composition of a work's pigments to try to ascertain when, where and by whom it was likely made. One ostensibly ancient Virgin with Child painting was revealed to be a 1920s fake after testing revealed that it contained Prussian Blue, a pigment that was invented in the 1700s—long after the painting would have been made if it were original.

Chemical processing of paint samples can provide useful molecular profiles, but it also means physically damaging a chip. Other methods using x-ray fluorescence, scanning electron microscopy and infrared spectroscopy have helped scholars and technicians peer into ancient paint, but they can be time- and labor-intensive.

A new study shows how sound waves can detect a dozen different inorganic pigments using Fourier-transform photoacoustic infrared (PAIR) spectroscopy (which makes use of signal processing functions developed by French physicist Joseph Fourier). The process is based in part on an 1880 discovery by Alexander Graham Bell, who demonstrated that shining a modulated beam of light onto an object could create a subtle acoustic wave.

The researchers were able to use PAIR's argon-ion laser to detect a range of common inorganic hues, including: four blues (cobalt, ultramarine, Prussian and azurite), three greens (malachite, chromium oxide and viridian), two yellows (cadmium and chrome) and three browns (iron oxide, ochre and Mars). A description of the work will be published in the October issue of Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy.

Tiny samples, which could later be restored to their paintings, were heated with the laser beam. This heat produced a change in pressure, making small acoustic waves, which were picked up by a super sensitive microphone. Each compound had a different sound profile that distinguished it from the rest. And because the samples are not damaged during the process, the researchers noted, they can be tested multiple times—a bonus not every analysis method can boast.
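The "sound profile" idea boils down to frequency analysis of the microphone signal. As a hedged sketch, with synthetic sinusoids standing in for real photoacoustic recordings, a naive discrete Fourier transform can pull out each sample's dominant frequency and so distinguish one profile from another:

```python
import math

def dominant_frequency(signal, sample_rate):
    """Return the strongest frequency (Hz) via a naive DFT magnitude scan."""
    n = len(signal)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * sample_rate / n

rate = 1000  # samples per second
# Two synthetic "acoustic responses" with different dominant frequencies:
pigment_a = [math.sin(2 * math.pi * 50 * t / rate) for t in range(200)]
pigment_b = [math.sin(2 * math.pi * 120 * t / rate) for t in range(200)]

print(dominant_frequency(pigment_a, rate))  # 50.0
print(dominant_frequency(pigment_b, rate))  # 120.0
```

A database of such spectral fingerprints, as the researchers propose, would let a lab match an unknown sample's profile against known pigments.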

"The behavior of paints, pigments, glazes, etc. depends critically on the conditions associated with their production, storage and long-term display," the researchers noted in their paper. "Without a full comprehension of the reactivity of the chemicals involved, the attempted preservation of artworks can sometimes lead to more damage than would occur by just simply leaving the works untreated."

The researchers proposed that these simple readings could be included in a database for quick reference in the future. "Once such a database has been established, the technique may become routine in the arsenal of art forensic laboratories," Ian Butler, a chemistry professor at McGill University and coauthor of the new study, said in a prepared statement.

Image of Virgin with Child, painted by an unknown Italian forger in the 1920s using Prussian Blue, a pigment not invented until the 1700s. Courtesy of Wikimedia Commons.

Wednesday, September 1, 2010

Japan Media Arts Festival in Kyoto – Alvaro CASSINELLI



The Science of How Music is Made


Sorry – no embed code for this one, but it is DEFINITELY worth a watch!


http://www.livescience.com/common/media/video/player.php?videoRef=LS_100813_music-man

Modified Classical Music on iPods Helps Toilet Train Liverpool Children

(Original Link - http://alexdoman.com/2010/07/08/toilet_train_music_liverpool_research/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+TheBrainUnderstandingItself+%28The+Brain+Understanding+Itself%29)


CLASSICAL music helped toilet train children in Liverpool in a world-first scheme.
The Listening Program saw youngsters listen to the works of famous composers for 30 minutes a day on iPods.

Parents said the scheme worked wonders and their little ones’ potty habits have improved dramatically.
Specialist nurse June Rogers led the pilot project, which examined the link between childhood continence and classical music.
The scheme was tried out at Matthew Arnold School in Toxteth, and was funded by Liverpool PCT.
It examined how modified classical music can help children with autism and other learning difficulties to be toilet trained.

Ms Rogers is the head of NHS Liverpool Community Health’s Integrated Paediatric Continence Service, and has already been awarded an MBE for her work in the field.

She said: “There is often a presumption that children with special needs cannot be toilet trained – yet we know from experience that many such children have the ability to become continent if we could only find a way to unlock their potential.

“This project showed that by taking a different approach we have hopefully been able to find the key to help children reach their full potential and remove the stigma of incontinence. However, as this was only a pilot, a larger study is planned to confirm the findings.”
Angela Measley says The Listening Program has “worked wonders” with her five-year-old son Jacob.
The youngster was three when he was diagnosed as having Fragile X Syndrome, and he developed severe learning difficulties.

She said: “It wasn’t easy at first; Jacob doesn’t usually like anything touching his ears, so he didn’t like putting the headphones on. But once he got used to wearing them, he really started to calm down. After eight weeks there were big improvements, which have continued to last. The Listening Program really has worked wonders.”

Ready-to-Play, Tuned Beer Bottles, and Other Design Experiments with Sound


(Original Link - http://createdigitalmusic.com/2010/08/27/ready-to-play-tuned-beer-bottles-and-other-design-experiments-with-sound/)


What if blowing tunes on beer bottles were raised to the level of musical science?
Through even the mundane medium of packaging, design can transform the everyday. DJ and designer Matt Braun of Philadelphia, collaborating with Chris Mufalli, uses labels to tune the level of beer remaining in the bottle for musical results. Pitches are printed on the labels, allowing you to match the liquid inside exactly to the pitch you want and join your fellow imbibers in a performance.

It’s not just a label that’s different. Ridges on the sides of the bottles make them double as Guiro-style percussion. The neck was adjusted for ergonomics. Even the wooden box becomes a tongue drum.
It’s all decidedly non-digital group fun – Create Beer Music? (Actually, technically, they’re printing with digital tech; the quantization of liquid to discrete equal-tempered pitches is a digital process by definition; and you hold it with your fingers. So there.)
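For the curious, the physics of why fill level sets pitch can be sketched with the Helmholtz resonator approximation, f = (c/2π)·√(A/(V·L)), where A and L are the neck's cross-sectional area and length and V is the air volume above the beer. All dimensions below are illustrative guesses, not measurements of the actual Tuned Pale Ale bottle:

```python
import math

def helmholtz_hz(cavity_volume_m3, neck_area_m2, neck_length_m, c=343.0):
    """Approximate resonant frequency of a bottle blown across the top.

    Models the bottle as a Helmholtz resonator: f = (c/2pi) * sqrt(A/(V*L)).
    """
    return (c / (2 * math.pi)) * math.sqrt(
        neck_area_m2 / (cavity_volume_m3 * neck_length_m))

# Guessed bottle geometry (not measured from the real product):
neck_area = math.pi * (0.0095 ** 2)  # ~19 mm inner neck diameter
neck_len = 0.08                      # 8 cm neck
full_air = 0.00033                   # 330 ml of air (empty bottle)
half_air = 0.000165                  # half full: half the air volume

print(helmholtz_hz(full_air, neck_area, neck_len))  # lower pitch when empty
print(helmholtz_hz(half_air, neck_area, neck_len))  # sqrt(2) higher half-full
```

Halving the air volume raises the pitch by a factor of √2, i.e. a tritone, which is why drinking to a printed line on the label can land the bottle on a chosen note.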

So far, this has been used in a microbrew, but the duo are looking for a partner. I’d love to have this at our next Handmade Music, if any of you are in the bottling business.
Tuned Pale Ale [2d3d5d.com - project site]
Found via the wonderful, whimsical design blog etre, maintained by a usability and design consultancy
Thanks to Johan Strandell / 40hz for the tip.
The Tuned Pale Ale is just one of a number of unique designs from Matt Braun, all emphasizing making the ephemeral world of sound more physical.



Matt’s site is a smörgåsbord of design concepts, many involving creative uses of lasercutters and 3D forms. There are “tuned gig buckets” for busking similar to the beer bottles, useful tools for DJs using 45s, and wooden drums made from digital images of the sounds of other drums, producing “generations” of instruments in which the sound of one gives form to the shape of another.
Two of my favorites are pictured here. Custom-made shirts use user-modifiable CAD illustrations to produce wearable art made from analysis of any sound file – below, Michael Jackson’s P.Y.T. becomes a pink tee. Another project in early development explores making skeletal three-dimensional forms from the structure of musical harmonies.

I look forward to seeing how these projects evolve; Matt’s looking for collaborators.
http://2d3d5d.com/