Thursday, November 25, 2010

Ten Music Technologies to Be Thankful For Right Now

Happy Thanksgiving to our American readers. I was thinking about technologies for which I'm particularly thankful, some non-obvious, some perhaps so obvious they might easily be taken for granted. Each, I hope, represents opportunities for others. At the risk of starting a Thanksgiving roast, in no particular order, here are the ones foremost in my mind in the waning days of 2010.

1. MIDI: MIDI gets kicked around a bit – it's not a perfect protocol, commonly-used messages are low resolution, and the parts most people use really haven't changed since the mid-80s. But don't discount why we use it so much: it's ubiquitous, cheap, and lightweight. Want something simple that works over WiFi and Bluetooth? Want to connect something from 1986 you found on eBay to your iPad, and then use it with a DIY synth built on a $3 microcontroller? Want to connect an Xbox keytar without any hacking? MIDI may not be the right tool for every job, but as a lingua franca, it sure is darned useful.
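Part of MIDI's staying power is how little there is to it: a channel note event is just three bytes on the wire. A minimal sketch, for illustration only; real projects would hand these bytes to a MIDI library or serial port rather than build them by hand:

```python
# Minimal sketch: constructing raw MIDI 1.0 channel voice messages.
# A note-on is three bytes: status (0x90 | channel), note number, velocity.
# A note-off uses status 0x80 (a note-on with velocity 0 also works).

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a note-on message for the given channel (0-15)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(channel: int, note: int) -> bytes:
    """Build a note-off message for the given channel (0-15)."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

# Middle C (note 60) at moderate velocity on channel 0:
msg = note_on(0, 60, 100)
```

Three bytes per event is exactly why a $3 microcontroller, a 1986 rack synth, and an iPad can all speak the same language.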

2. Linux: Linux can still sometimes exhibit a punishing learning curve, and proprietary drivers for devices like video cards can cause issues. But in a world of wildly diverse hardware and painfully-quick obsolescence, Linux is a lifesaver. It can resurrect old machines, make netbooks usable, and the Linux kernel is fast becoming the solution for embedded gear from Android-powered devices to DIY projects. For music, that means an OS that can run on anything, and quickly wind up making noise with tools from Pd and Csound to Renoise and DJ app Mixxx. Suddenly, anything that runs on electricity and has a processor looks like fair game.

3. Music notation: Fun toys aside, what's the real killer app in 2010? It might be the score. It's still the fastest way to communicate a musical idea to someone else, or quickly play the Billy Joel tune your cousin wanted to sing along with. (Best karaoke machine in the world: your brain.) And this year, we saw improved ways to enter those scores, from ever-more-mature commercial packages to free tools like LilyPond. An iPad can be a fake book full of lead sheets; a browser can turn some quickly-typed notes into notation. All this using something that wouldn't look entirely unfamiliar to someone who stepped through a wormhole from a few centuries ago.

4. Reaper: We face a challenge in music technology: we've actually got too many great options. So it's a good thing that there's at least one DAW that's easy to recommend and that you know people can afford, with pricing ranging from $40 to $150. Reaper runs on Mac, Windows, and (with WINE) Linux. It's not bloated with features, has no DRM, and is heavily extensible (with both custom plug-ins and scriptable MIDI). And if you're trying to get a friend to try a DAW without (cough) pirating it, you can point them to Reaper's free trial version. Add to that the fact that you can author Rock Band songs for the game platform – including full keyboard and guitar transcriptions in the near future with Rock Band 3 – and Reaper is a DAW worth keeping around.

5. Four-lettered Synth Makers That Remember the Past: Not one but two famous names from synthesizer yesteryear, MOOG and KORG, have been on fire in 2010. Moog celebrated the Minimoog's anniversary with an enormous XL edition. Practical? Not terribly. Something boys and girls could pin up on their walls? Yes. And Moog also had a bigger-than-ever Moogfest, proving its synths and effects aren't just the domain of electronic music geeks, plus an affordable iPhone/iPod touch app that turns those handhelds into portable machines capable of recording anything and adding far-out effects. KORG, for its part, proves a big music tech name can remember its past, too, with the soul of the MS-20 appearing in iPad apps, wonderful stocking stuffer-friendly hardware (the Monotron), new bundles of software emulation (for those who prefer "real computers" to iPads), and, heck, even retro t-shirts. What these two companies have in common: understanding that their legacy matters to people, and finding ways to get that legacy in front of as large an audience as possible. Those are both ideas I hope catch on.

6. Portable Recorders: Then: Marantz, Nagra, Tascam Portastudio. Today: go-anywhere field recorders from Tascam, Zoom, Roland, Korg, and many others. The ability to go out and actually record stuff remains one of the most essential needs in music tech. Today's devices add nifty extras like pitch-independent tempo adjustment and built-in metronomes, making them as much a friend to musicians as they are to sound designers. Odds are, if you're reading this, some portable audio recorder is one of your most valuable possessions. Tascam DR-03 @ CDM

7. Pd: Pure Data, the open-source offspring of Max/MSP creator Miller Puckette and contributors around the world, is a free graphical patching tool that runs everywhere. You can use it on ancient iPods, or – via libpd – on bleeding-edge Android and iOS handhelds, in addition to (of course) desktop computers. It's been incorporated in free and open source projects, and commercial and proprietary projects alike. Thanks to terrific free documentation and sample patches, you can also use it as a window into learning, with the aid of being able to see signal flow visually. (Even Max gurus can pick up tips for that environment with some of the online help.) The beauty of Pd – as with a number of tools – is that sometimes just making what you need is easier than making something someone else made do what you need. pd-everywhere @ noisepages

8. Bandcamp: The Web is littered with services catering to artists – not least being the chaotic mess that is the remains of MySpace. Bandcamp, in contrast, is simple, efficient, and functional, and for many of us has been a place to acquire music direct from artists as well as to publish it – no complicated jukebox/storefront middlemen needed. Some of my favorite listening this year came from Bandcamp.

9. Contact mics: A few dollars in parts and a soldering iron will make you a perfectly-functional device you can use to explore sound. Or, you can splurge on high-end devices. Either way, the surest antidote to endless choice in software synthesis or enormous sample banks is to go out and get a little closer to sonic vibrations. brokenpants DIY contact mic tutorial

10. The Internet: Distraction. Time suck. Scourge to privacy. A funny thing happened on the way to the Internet: you may have found a group of people who inspired you to make more, and share more, helped you solve problems and get back to music. On Twitter, on Facebook, on forums, on, yes, our fledgling Noisepages, everywhere I go, I find people who help me get tech working for me and remind me why I love music. So… thanks. Maybe there’s hope for us after all. (see… The Internet)
That’s my list. What are you thankful for? Let us know in comments.

Both musicians and non-musicians can perceive bitonality

Take a listen to this brief audio clip of "Unforgettable."

Aside from the fact that it's a computer-generated MIDI performance, do you hear anything unusual?
If you're a non-musician like me, you might not have noticed anything. It sounds basically like the familiar song, even though the synthesized sax isn't nearly as pleasing as the familiar Nat King Cole version. But most trained musicians can't listen to a song like this without cringing. Why? Because the music has been made "bitonal" by moving the accompanying piano part up two semitones (a semitone is the smallest interval in Western music, the distance from a "natural" note to its neighboring sharp or flat). Here's the original, unaltered piece:

Can you tell the difference? A 2000 study led by R.S. Wolpert found that non-musicians couldn't distinguish between monotonal and bitonal music played side-by-side, while musicians found artificially-created bitonal music almost unlistenable. When non-musicians did sense something wrong with the clips, they typically said they were being played too fast, or mentioned some other unrelated concept.
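The bitonal manipulation described above, shifting only the accompaniment up two semitones while leaving the melody alone, comes down to simple arithmetic on MIDI note numbers. A toy sketch (the note lists here are invented for illustration, not the study's actual materials):

```python
# Sketch of the bitonal manipulation: transpose the accompaniment up two
# semitones (MIDI note numbers go up by 2) while the melody stays put.
# Note lists are hypothetical, not taken from the study.

def transpose(notes, semitones):
    """Shift every MIDI note number by the given number of semitones."""
    return [n + semitones for n in notes]

melody = [60, 62, 64, 65]        # stays in the original key (C)
accompaniment = [48, 52, 55]     # a C major triad, an octave below

# The altered clip: same melody, accompaniment now a D major triad.
bitonal_accompaniment = transpose(accompaniment, 2)
```

Played together, the untouched melody and the shifted accompaniment unfold in two keys at once, which is what produces the "crunch" trained musicians find so hard to ignore.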

But Mayumi Hamamoto, Mauro Botelho, and Margaret Munger (AKA Greta) wondered if years of musical training were really necessary for non-musicians to hear bitonal music. Bitonality is actually a bit controversial in the world of music, and it can be a little hard to define. In principle, there's a difference between bitonality and just playing or singing off-key, but in practice, the difference may not even exist. Advocates of bitonality like to point to the works of composers like Milhaud, Bartók, Prokofiev, and Strauss. These composers deliberately wrote in two different musical keys. But how is that different from occasionally or regularly writing dissonant chords? After all, all the same notes can be written using any musical key. To be truly bitonal, advocates say, the two separate parts must unfold independently in different keys. This results in a distinctive "crunch" when the music is played. A separate question is whether this is noticeable. Wolpert's work shows that it is, at least for trained musicians.
Hamamoto's team replicated Wolpert's study by playing altered and original clips of familiar songs like the above example to three groups of undergraduates: "Musicians" with more than 5 years of training, "Amateur Musicians" with 1 to 5 years of training, and "Non-Musicians" with less than a year of training. There were 14 students in each group. Musicians were significantly better at noticing that the modified clips were bitonal or "out of tune."

Next, everyone was given a brief training session, where instead of modifying monotonal music to be bitonal, some of Milhaud's music originally intended to be bitonal was modified to be monotonal. Here's an example bitonal piece (Milhaud's "Botafogo"):

After hearing the clip and seeing it identified as bitonal, the students were told

Notice sometimes there is a "crunch" in the sound. This should sound somewhat unpleasant and feel like it shouldn't be that way.
Then they listened to a manipulated version of the same clip:

Again, they were told this clip was monotonal and directed to notice how the sound seems smoother and more pleasant (to my mind, it's not nearly as interesting as the original -- but that wasn't part of the study). Next they were trained with feedback, listening and identifying clips until they could accurately label four in a row. This took just a few minutes.

Finally, the respondents were tested on four new clips, all songs by Milhaud. This graph shows the results:
As you can see, for all the songs except "Ipanema," the students were quite accurate at identifying both bitonal and monotonal songs (error bars are 95 percent confidence intervals). More important, however, was that there was no significant difference in the results for Musicians, Amateur Musicians, and Non-Musicians. All three groups fared equally well.

The authors conclude that identifying bitonal music isn't a matter of years of musical instruction; it can be achieved with just a brief training session. In fact, the Non-Musicians took no longer than Musicians to complete the training session, so years of experience don't even help with learning about bitonality.

It also may suggest that the controversy about whether bitonality actually exists may not be warranted. If nearly everyone can hear the difference, then it's probably a genuine musical phenomenon.

Monday, November 22, 2010

Junee train becomes a sound lab

MUSIC: Rolling Stock. Various sound artists and composer-performers. Wired Lab. In and around Junee, NSW, November 19. 
It was noon on Saturday. Just over 200 people, a motley crew of local families and sound art aficionados from the city, were gathered at the Junee railway station. This was the third event that the irrepressible Sarah Last and her Wired Lab team have organised with the people of Junee: the one-day public art event featured 15 artists on a train, the culmination of a series of creative residencies in regional NSW.

Trains and everything associated with them are a religion at Junee, a wheatbelt town of about 4000 people, 444km southwest of Sydney. Its temple is the Junee Roundhouse, a transport museum with 42 tracks and dozens of old trains and carriages.

At the epicentre of the museum, a 33m turntable cranked into life as Dave Noyze and Garry Bradbury captured its industrial clangour with 15 microphones. Young men from the Australian Parkour Association leapt around the roofs of carriages. Outside, Joel Stern and Andrew McClennan created a gamelan sound tapestry from the rusting detritus of trains.

Then we boarded the train, its eleven carriages containing various sound events and theatre. Experimental films played in the sleeper compartments. One darkened carriage was festooned with LED glowsticks, creating a flicker-homage to Brion Gysin.

It was a happy mix of sound artist chic and local jollity that carried the train to its destination in Cootamundra, three hours away. There was a 40-minute pit stop at Cootamundra Station, where a Kenny Rogers impersonator conducted a cheesy quiz on the platform. Then it was back to the train for the return journey, and the most accomplished sounds of the day. British sound-gatherer Chris Watson, noted as sound recordist for David Attenborough's television documentaries, recreated a train journey through northern Mexico, its running commentary and sound tapestry blending perfectly with the clatter of our own train.

A bus trip (sacrilege!) took us to the celebrated Junee Licorice and Chocolate Factory where a rockabilly band, the Pat Capocci Combo, played into the night. "Much more fun than the RSL," a local woman said. "I'll be back for more of this, anytime."

Start-Up Company Music Mastermind Introduces Unique Music Creation Technology 'SoundBetter'

Calabasas, CA – November 18, 2010 – Music Mastermind, an independent music entertainment and technology company, revealed today details of SoundBetter, a cloud-based technology that lets anyone create studio-quality music. SoundBetter joins a robust, growing patent and trademark portfolio held by Music Mastermind (MMM).

The company's SoundBetter solution simplifies music making by automating typically complex digital audio workstation processes, thereby allowing anyone to instantly become a recording artist. SoundBetter provides a complete creative solution that lets users enhance their voices, transform their voices into instruments, create pro-sounding beats, add studio-quality backing tracks, and even generate and add adaptable licks to collaborate with friends and famous artists. The technology's entertaining, game-like elements utilize simple visual cues to make the creative process fun and accessible to all. SoundBetter produces truly individualized music that can be shared and discovered across a broad array of social networks.

"We're at the forefront of the next evolution of music entertainment, and it's time to break down the barriers that prevent people from expressing themselves musically," said Matt Serletic, CEO of Music Mastermind. "This company is all about fun and easy music creation for everyone. All people love music, and now absolutely anyone can produce great sounding songs to enjoy and share with the world."

Serletic, a multi-Grammy Award-winning producer and former Virgin Records Chairman and CEO, founded Music Mastermind in 2007 with his partner, Bo Bazylevsky, a veteran Wall Street bond trader, senior hedge fund portfolio manager, and former Global Head of Emerging Markets Corporate Trading at J.P. Morgan. Together, they compiled a world-class development team with a full-time staff of more than 30 professionals from multiple disciplines, including entertainment, sound engineering, music theory, technology, finance and gaming. Led by Chief Technology Officer Reza Rassool, whose work has garnered a Technical Oscar and Emmy, MMM's engineering and design teams have over 190 years of cumulative experience. Members of the team have advanced degrees in computer science and music, as well as 33 console game credits, including multiple Guitar Hero and Tony Hawk titles.

The company successfully raised its first round of funding in February 2010 with nearly $5 million from angel investors, and is currently in the process of closing its second investment round.

"Media creation and consumption are at an all-time high, and our technology will do for music what YouTube did for video," said Bo Bazylevsky, President and COO of Music Mastermind. "We want to put the power of real, true creation into everyone's hands, and we're confident that our products will do just that. The tech is wrapped in such a fun interface that you don't even realize that you're working to produce music!"

The company plans to implement this unique music creation technology across numerous mediums; MMM's new creation platform will be revealed in the coming weeks.

Music Mastermind simplifies the traditionally complex world of professional-quality music creation, allowing anybody with a creative idea to be heard. For more information about the company and its patented music creation technologies, please visit, or follow us on Facebook and Twitter.

About Music Mastermind, Inc.
Based in Calabasas, CA, Music Mastermind was founded by Grammy Award-winning producer/songwriter Matt Serletic and top Wall Street bond trader Bo Bazylevsky. Formed in 2007, the venture-backed start-up is dedicated to developing technologies that break down the barriers to music creation. For more information please visit

Music turned into light, and fired at you

When Richie Hawtin wanted to create synesthetic visuals triggered by the music he’s playing live as Plastikman, he turned to his old pals at Toronto software house Derivative.

Derivative’s TouchDesigner helped propel this year’s Plastikman tour to that mythical “next level” by providing an interface with the performance-friendly electronic-music software Ableton Live, one that allowed the component parts of Hawtin’s skeletal techno tracks to produce images that moved and changed shape in direct response to the sounds he was generating onstage. In 3-D, no less.
Heaven only knows how one actually brings something like that to fruition, but TouchDesigner, which will respond to pretty much any input you desire, from sound to light to touch and beyond, is the relatively young outgrowth of designer Greg Hermanovic’s longtime desire to use computers to produce “interactive, real-time art.” He’d been dreaming of it since he put his first pixel up on a computer screen while working on a U.N. research ship in Africa during the ‘70s. He came a little closer to realizing that dream writing special-effects software at successful local CGI-enabler Side Effects, whose Houdini product has since been used in more than 400 feature films. And since launching TouchDesigner eight years ago, he has come as close as he’s yet been to his perfect vision: a worldwide, collaborative art-sharing platform that’s “self-perpetuating and a bit out of control in its own way” and that could be used as a universal education and research tool.

TouchDesigner has made stunning “live” artworks possible everywhere from M.I.T. to the world’s largest yacht, but Hermanovic — whose software’s patch-and-collage aesthetic is inspired in part by his love of old modular synthesizers and their many dangling cables — has also become something of a go-to guy for electronic musicians looking for a visual component to their shows. Swayzak enlisted Derivative, for instance, to jazz up its recent DJ gig at 99 Sudbury, while when the Star spoke to Hermanovic this past Friday he was just returning from a little last-minute tweaking with DJ Shadow’s crew at the Phoenix.

Q: So was Hawtin running some custom stuff for those Plastikman live gigs?

A: He was, but everything that Rich is doing you can do with the free version that’s on our Web site. It’s custom because we added more stuff to it, but anybody could have done it. That’s the nice thing about TouchDesigner: anybody can use it to make anything they see other people making.

Q: In this case, Ableton Live was being used to generate the visuals, right?

A: He’s sending this stream of data into TouchDesigner, which is running live on another computer. So we’re just taking all this looping data and this controller data, and every song we have mapped differently to a visual. So TouchDesigner takes his inputs and for every song we know what the visual is going to be so we display it out on the LED screens. He’s kind of building music tracks as he goes and we’re working with him and a visual designer going ‘Okay, part of that sound goes with this visual element and this knob goes with that thing, and then when the song progresses it will increase the brightness of this and the size of that.’ So it’s Rich and us working side-by-side so you end up with a look and a theme for a song.
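The per-song mapping described here, where each knob or loop parameter drives a particular visual property, can be sketched in miniature. Everything below (parameter names, controller numbers, ranges) is invented for illustration; the real Ableton-to-TouchDesigner pipeline is far richer than this:

```python
# Toy version of a per-song controller-to-visuals mapping: each incoming
# MIDI-style controller value (0-127) is routed to a named visual
# parameter and rescaled into that parameter's range. All names and
# numbers here are hypothetical.

def scale(value, lo, hi):
    """Map a 0-127 controller value onto the range [lo, hi]."""
    return lo + (value / 127.0) * (hi - lo)

# One table per song: controller number -> (visual parameter, range)
SONG_MAP = {
    21: ("brightness", (0.0, 1.0)),
    22: ("particle_size", (1.0, 40.0)),
}

def handle_cc(cc_number, cc_value, visual_state):
    """Update the visual state dict from one incoming controller event."""
    if cc_number in SONG_MAP:
        param, (lo, hi) = SONG_MAP[cc_number]
        visual_state[param] = scale(cc_value, lo, hi)
    return visual_state

# Knob 21 turned fully up -> brightness pinned to maximum.
state = handle_cc(21, 127, {})
```

Swapping in a different table per song is what gives each track its own look and theme while the plumbing stays the same.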

Q: Why design a tool for making interactive art?

A: I’m a big fan of experimental films — I’m a huge fan of Norman McLaren — and I wanted to reproduce some of these experimental-film effects using software, so that’s why I got into computer graphics: so I could do special effects. I wanted to do what a musician does — perform live, tweak things — and do that visually, but I couldn’t do that with special-effects software. In the ’90s, you couldn’t do real-time computer graphics. Well, you could, but it was on computers that cost $200,000.

Q: What’s your ultimate goal for TouchDesigner? Having it react directly to electronic signals from people’s brains?

A: When I see researchers who are doing high-end chemistry research or something using a component made by a 10-year-old in his basement, not knowing where it came from, that’s when I’ll be aware that we’ve kind of closed the loop: when kids are making parts of bigger systems for high-end researchers or professionals. It’s gonna happen.

Music student does gig hours after rescuing teen in Colchester

A MUSIC student who rescued a girl from a burning house returned to college hours later to play his first gig.

Jamie Cunliffe, 21, of Magnolia Drive, Colchester, came to the aid of Naomi Hare, who was trapped as fire ripped through her home in James Wick Court, Balkerne Hill.

Naomi, 17, a student at Colchester Institute, was asleep when the blaze started in her sister’s bedroom.
She was woken by a smoke alarm but was unable to get down the stairs due to the smoke.

Jamie, who is studying music at the same Sheepen Road college, said: “I was walking back to my mate’s house and saw a girl at the window screaming and screaming.

“I went to kick the door in, but it turned out it was open.

“I got to the top floor, where I could hear her screaming. It was really scary and the smoke was so black, and it was so hot.

“She was having a panic attack and wouldn’t move, so I picked her up and carried her out.
“We got outside, but then there were small explosions coming from the house, so I picked her up again and carried her around the corner.”

Neighbours came to the pair’s aid.

Jamie and Naomi were treated by paramedics for smoke inhalation, before being taken to Colchester General Hospital for further treatment.

Jamie added: “I was treated at hospital, but wanted to leave because I was desperate to get to a gig.
“I’m in a band called the Elements, and that night was our first gig.

“I'm the frontman in the band, and we have been practising really hard – I couldn’t not go to it.”

Naomi’s mum, Annette Kelly, phoned Jamie to thank him for saving her daughter’s life.

Fire crews, who are investigating the cause of the fire, which started on Tuesday at about 1.25pm, also praised Jamie for rescuing the girl.

He added: “People have been saying I’m a hero, but I’m not looking for praise. It was just a natural reaction. I couldn’t just stand there and watch.”

Fears for future of school music lessons

School music lessons could be hit as local councils make savings and school budgets are redrawn, it is feared.

One in five music services, which support schools, expect councils will completely axe their grants and half fear cuts of up to 50%, a survey suggests.
The Federation of Music Services warned that some services which help provide subsidised lessons could collapse.

The government said all pupils should be able to learn an instrument or sing.
It has commissioned a review of music provision in schools, being carried out by Classic FM head Darren Henley, but this is not due to report until January.

However, local authorities in England, which face cuts of about a third, get their funding allocations in early December.

It is clear from the federation's survey of 158 music services in England, Wales and Northern Ireland that many are already planning cuts, with some preparing to axe the funding completely.
Local authorities provide just one strand of funding for school music services, with the rest coming from central government grants and parental contributions.

But the expected cuts come as schools face a huge shake-up of their budgets. A number of schemes dedicated to supporting school music face cuts or being channelled into a general schools budget for redistribution.

The Department for Education later said it had not yet taken a decision on the main £82.5m Music Standards Grant and would not do so until the Henley review had reported.
But it would not guarantee that the money would be ring-fenced within schools.

'Steep decline'
Federation of Music Services (FMS) chief executive Virginia Haworth-Galt said: "We recognise the pressure many local authorities are under but would urge them to hold back their plans until we know the results of the Henley Review.

"Music and our children's education are too important to be jettisoned like this, particularly when we know that 91% of the public back music education in schools."

She added that the FMS would be very disappointed if the music grant went directly into schools' budgets without any ring-fencing for music education.

"This situation occurred in the early 1990s with disastrous results; music went into a steep decline as the monies were spent elsewhere in schools. This is a music lesson that should not be repeated," she added.

Conductor of the Bedfordshire Youth Orchestra Michael Rose said music services in his area, Central Bedfordshire, are set to have budgets and teaching staff cut to zero.

He said as music services were non-statutory they were particularly vulnerable in the present climate of cuts.

He said: "If funding is lost in this way music lessons will become the sole preserve of the middle classes."

He added: "Instrumental teaching in the county's schools is provided by a central staff of highly qualified instrumental teachers. It has resulted in literally many thousands of children having the experience of learning an instrument."

Schools minister Nick Gibb said too many children in state schools were denied the opportunity to learn to play a musical instrument.

This was why he had launched a major review of how music is taught and enjoyed in schools to help make sure all pupils get an opportunity to learn to play an instrument and to sing.

Its recommendations would determine how future funding could best be used, he added.
"Evidence tells us that learning an instrument can improve young people's numeracy and literacy skills and their behaviour.

"It is also simply unfair that the pleasure of musical discovery should be the preserve of those whose parents can afford it."

"As part of that review recommendations will be made to determine how future funding can best be used," he said.

He added that decisions on central funding for music would not be made until after the review had reported.

General secretary of the National Union of Teachers Christine Blower said the cuts to music in schools were even more shocking in light of Michael Gove's announcement that he would be holding a review into music education in schools, claiming that it was a "sad fact" that too few state school children learnt an instrument.

She added: "Music in schools makes a contribution way beyond the straightforward exercise of learning an instrument.

"Children and young people can experience coming together in a creative environment which benefits them in other aspects of their school life."

Recording Pioneers - Stories from History

Know your roots!? I came across a great website detailing a lot of the finest stories from the greatest pioneers in recording sound!

A good history lesson is due!!!



Sunday, November 21, 2010

Ancient trumpets played eerie notes

Scientists analyze tunes from 3,000-year-old conch-shell instruments for insight into pre-Inca civilization.

Listen to shell music.

Now you can hear a marine-inspired melody from before the time of the Little Mermaid’s hot crustacean band. Acoustic scientists put their lips to ancient conch shells to figure out how humans used these trumpets 3,000 years ago. The well-preserved, ornately decorated shells found at a pre-Inca religious site in Peru offered researchers a rare opportunity to jam on primeval instruments.

The music, powerfully haunting and droning, could have been used in religious ceremonies, the scientists say. The team reported their analysis November 17 at the Second Pan-American/Iberian Meeting on Acoustics in Cancun, Mexico.

“You can really feel it in your chest,” says Jonathan Abel, an acoustician at Stanford University’s Center for Computer Research in Music and Acoustics. “It has a rough texture like a tonal animal roar.”

Archaeologists had unearthed 20 complete Strombus galeatus marine shell trumpets in 2001 at Chavín de Huántar, an ancient ceremonial center in the Andes. Polished, painted and etched with symbols, the shells had well-formed mouthpieces and distinct V-shaped cuts. The cuts may have been used as a rest for the player’s thumb, says study coauthor Perry Cook, a computer scientist at Princeton University and avid shell musician, or to allow the player to see over the instrument while walking.
To record the tunes and understand the acoustic context in which the instruments, called pututus, were played, the researchers traveled to Chavín.

As an expert shell musician blew into the horn, researchers recorded the sound’s path via four tiny microphones placed inside the player’s mouth, the shell’s mouthpiece, the shell’s main body and at the shell’s large opening, or bell. Similar to a bugle, the instruments sound only one or two tones, but like a French horn, the pitch changes when the player plunges his hand into the bell.
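The bugle-like behavior comes down to the resonator's overtone series: for an idealized simple resonator, the playable pitches sit near whole-number multiples of the fundamental, which is why such instruments offer only a few widely spaced tones. A toy calculation (the frequency here is illustrative, not a measurement from the study):

```python
# Toy illustration of a fundamental and its overtones for an idealized
# resonator: overtones fall near integer multiples of the fundamental.
# The 220 Hz figure is invented for illustration, not measured from the
# Chavin shells.

def overtone_series(fundamental_hz, count):
    """Return the fundamental plus the first `count` overtones, in Hz."""
    return [fundamental_hz * n for n in range(1, count + 2)]

series = overtone_series(220.0, 2)   # [220.0, 440.0, 660.0]
```

Real conch shells are irregular, flaring tubes, so their resonances only approximate this ideal pattern, which is part of what the team's acoustic measurements set out to characterize.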

The team used signal-processing software to characterize the acoustic properties of each trumpet. Following the sound’s path made it possible to reconstruct the ancient shell’s interior, a feat that normally involves sawing the shell apart or zapping it with X-rays.

The researchers also wanted to know how the site’s ceremonial chamber, a stone labyrinth with sharply twisting corridors and ventilation shafts, changed the trumpet’s sound. To find out, the team arranged six microphones around the musician and reconstructed the sound patterns on a computer.
If the trumpets were played inside the stone chamber in which they were found, the drone would have sounded like it was coming from several different directions at once. In the dimly lit religious center, that could have created a sense of confusion, Abel says.

“Were they used to scare people while they were there?” asks Abel. “There are still a lot of things left open.”

Turns out, such questions about how sounds affect people and their behavior, an area called psychoacoustics, can be tested. It's a field of active research, and not just for ancient civilizations: Another group at Stanford is now studying how a room’s acoustics affects human behavior. In one recent experiment, researchers separated test subjects into different acoustic environments to do a simple task — ladling water from one bucket to another in a dimly lit room.
“What your ear can actually hear plays into how you would behave, or the psychological experience in the situation,” says Abel.


A group of conch-shell instruments made by a pre-Inca civilization sounds similar to a kid learning to play the trumpet. Click here to listen.

A musician plays the fundamental frequency and the first overtone of a 3,000-year-old shell trumpet unearthed in Peru. Click here to listen.

What is Talent?

(Original Link -

Thanks to Edward Tenner for alerting us to a new WSJ piece by Terry Teachout that attacks Anders Ericsson's so-called "10,000-hour rule." Teachout summarizes the Ericsson rule in the following way:

"To become successful at anything, you must spend 10 years working at it for 20 hours each week. Do so, however, and success is all but inevitable."

A superb straw man. So simple to understand, so easy to knock down. But think about it for a moment: Would anyone with half a brain actually argue that a simple *amount* of practice time could *guarantee* success? Of course not, and that's not even remotely what Anders Ericsson does. 

The real Anders Ericsson is one of the leaders of a fascinating new academic field called "expertise studies" which carefully deconstructs the longstanding notion of innate talent by looking for hidden components that might actually help to explain success.

This is what science does. It seeks to understand how things actually work rather than settle for mysterious formulations like "gifted," "natural-born," and "genius."  

Teachout also writes that "The problem with the 10,000-hour rule is that many of its most ardent proponents are political ideologues who see the existence of genius as an affront to their vision of human equality, and will do anything to explain it away."

I honestly do not know which proponents Teachout is referring to. The writers that I'm most familiar with on the subject of understanding talent and success -- Malcolm Gladwell, Daniel Coyle, Mihaly Csikszentmihalyi, Geoff Colvin, Carol Dweck -- are all actually trying to understand what goes into talent and success.

He might be referring to the title of my book, The Genius in All of Us, which some non-readers have misinterpreted as a blank-slate argument of pure egalitarianism. But, again, that's a straw man. No one here is arguing that we're all equal or equally capable of the exact same achievements. We all have differences, and are therefore assured of becoming different people.

When it comes to the question of individual potential, though, it's important to avoid what neuroscientist and musicologist Daniel J. Levitin calls "the circular logic of talent." "When we say that someone is talented," he says, "we think we mean that they have some innate predisposition to excel, but in the end, we only apply the term retrospectively, after they have made significant achievements."

So what is "talent"? Is it some magic or genetic stuff that gives some of us a springboard to success? The closer we look at the building blocks of success, the more we understand that talent is not a thing; rather, it is the process itself. 

Part of this new understanding requires a new insight into genetics that helps us get past the myth of genetic-giftedness. Genes influence our traits, but in a dynamic way. They do not directly determine our traits. In fact, it turns out that while it is correct to say that "genes influence us," it's just as correct to say that "we influence our genes."

Everything about our lives is a process, and we are indebted to Anders Ericsson and others for helping us to obtain a richer understanding of that process. 

It's interesting that Teachout pounds so hard on (nameless) obstinate ideologues who refuse to open their minds to evidence. Blind ideology is exactly what I'm seeing in his confident (and factless) assertion that Wolfgang Mozart's success as a composer (as opposed to his sister Nannerl's lack-of-success) is simply due to this: "He had something to say and she didn't. Or, to put it even more bluntly, he was a genius and she wasn't." Twenty minutes of reading about their early lives and the cultural context provides a much richer understanding than that. Why rush to enshrine a myth when we have so many rich facts and observations to help us come closer to a true understanding?

Teachout also writes that any suggestion of genius as a process "fails to account for the impenetrable mystery that enshrouds such birds of paradise as Bobby Fischer, who started playing chess at the age of 6. Nine years later, he became the U.S. chess champion." Again, why leap to "impenetrable mystery" when we can actually understand these things better? There are some terrific books out there now that help us closely examine talent and success. Why is Teachout trying to convince us not to examine the evidence and not to think about these things more deeply?

In his 1878 book Menschliches, Allzumenschliches (Human, All-Too-Human), Friedrich Nietzsche described greatness as being steeped in a process, and of great artists being tireless participants in that process:
"Artists have a vested interest in our believing in the flash of revelation, the so-called inspiration . . . [shining] down from heavens as a ray of grace. In reality, the imagination of the good artist or thinker produces continuously good, mediocre, and bad things, but his judgment, trained and sharpened to a fine point, rejects, selects, connects . . . All great artists and thinkers [are] great workers, indefatigable not only in inventing, but also in rejecting, sifting, transforming, ordering."
As a vivid illustration, Nietzsche cited Beethoven's sketchbooks, which reveal the composer's slow, painstaking process of testing and tinkering with melody fragments like a chemist constantly pouring different concoctions into an assortment of beakers. Beethoven would sometimes run through as many as sixty or seventy different drafts of a phrase before settling on the final one. "I make many changes, and reject and try again, until I am satisfied," the composer once remarked to a friend. "Only then do I begin the working-out in breadth, length, height and depth in my head."

Alas, neither Nietzsche's nuanced articulation nor Beethoven's candid admission caught on with the general public. Instead, the simpler and more alluring idea of "giftedness" and "genius" prevailed and has since been carelessly and breathlessly reinforced by ideologues. But we can do better. We have the tools and the evidence now to go beyond "genius," beyond "gifted," beyond "innate," and beyond "impenetrable." 

Who knows, maybe someday we can even catch up to Nietzsche.

Thursday, November 18, 2010

My 2010 Lessons for DJing - Less is More - by FroBot

Well...2010 will be finished here in about a month or so...and I have stopped DJing for the rest of the year. I have many things going on in my life, like a video/audio company I am starting in Hawaii and a vacation back to America. So I wanted to write a small piece about my biggest lesson of 2010 when it comes to DJing.

Now, before I go too far, I know tons of you avid Ableton DJs are gonna rip me apart for this blog. Remember, this is an opinion, and that doesn't necessarily mean it's the best opinion...but it's MY opinion. My biggest lesson of 2010 is "LESS IS MORE". This can be applied in many ways...let's start with the most simple way...and I will build up from least important to most important.

5. Less is more when it comes to your track's content. I have noticed this year that the tracks (in house and tech house) that have nice simple bass lines, nice steady swinging beats, and profound but simple lyrics are ALWAYS getting the BEST response on the dance floor. Maybe it's because the normal dance-goer is not as musically inclined as the artist performing it; maybe it's because, from a technical standpoint, there is more room for certain frequencies to stand out and punch. But what I THINK it is, is that producers, now in the days of digital releases, are chasing impact, looking to make their track the LOUDEST they possibly can. Making more simple grooves creates more room for compression, and ultimately more room in the final output level...letting you make a LOUDER track. I have noticed that most people don't notice all the great effects and envelope automation that I do...but rather HOW LOUD the track is. The producers that can make these SUPER LOUD tracks always seem to sound better than the LESS LOUD track that was on right before. When the track is more simple, you can raise the overall levels pretty high (above industry standards, without audible distortion)...and that in turn makes the BASS sound more bassy, and the kicks thump your chest more. This ultimately has the greatest effect on the crowd, rather than complex rhythms and melodies that make controlling distortion above 0 dB rather difficult. So, when I get a new track and toss it into Ableton, I always check how loud it is compared to the other tracks. Even though I could control how loud the track is using the track's individual gain...this is not the CONTROLLED distortion producers have dialed in. They have spent countless hours raising the gain and using mastering techniques to clean up the distortion...using their ears to decide HOW distorted it is. When you can keep the track at its normally produced level...yet it's LOUDER...that is when you get a thumping track in the club.
Even when I listen to a track at home on professional studio monitors and, as a producer, can hear the compression and distortion (especially in the crash cymbals), it never seems to be noticeable in the club...and definitely not noticeable to the dancers. So...back to simple, thumping, loud tracks! I know from a purist standpoint it's WRONG...but in the club...all that matters is making people's feet and bodies groove harder.
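If you want to do this "check how loud it is" comparison outside of Ableton, one rough proxy is the crest factor: the gap between a track's peak level and its RMS level. Heavily compressed "loud" masters have a small crest factor. This is my own toy sketch, not anything FroBot or Ableton provides:

```python
import numpy as np

def loudness_stats(samples):
    """Rough loudness check for a track: peak and RMS level in dBFS
    (for float samples normalized to [-1, 1]) and the crest factor.
    A small crest factor usually means heavy compression/limiting."""
    def to_db(x):
        return 20 * np.log10(max(x, 1e-12))
    peak = to_db(float(np.max(np.abs(samples))))
    rms = to_db(float(np.sqrt(np.mean(np.square(samples)))))
    return {"peak_dBFS": peak, "rms_dBFS": rms, "crest_dB": peak - rms}
```

For serious work you would use a perceptual loudness measure such as ITU-R BS.1770 (LUFS) rather than plain RMS, but the crest-factor idea is the same.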

4. Less is more in terms of the amount of remixing you do. Again, I know tons of you DJs will disagree...but this is what I think. There is a BIG DIFFERENCE between scratch DJs and house DJs. In house...people want to hear tracks for a little longer so they can really groove on a track. This is not hip-hop, where you are playing vocally charged tracks that people know because they are remixes of TOP 40 tracks. These are groovy, rhythmic tracks that people move to because of their dynamics and flow. Funky to the core. Remixing is a nice technique to use as a DJ, but use it sparingly. One reason is, most of the time, if you are a thoughtful, "searching for tracks" kind of DJ...chances are...99% of the people at the club have never HEARD the song you are playing in the first place. So remixing it doesn't do much good, because they don't know what it sounded like originally. It kind of defeats the point of live remixing unless they can tell you are remixing it. And another thing...what makes you think you can remix it BETTER than the original artist anyway? That artist spent COUNTLESS hours making their track, thinking about every little detail of how they wanted it to sound. If you are remixing it live, you are only changing it to sound the way YOU wanted it to sound...not how the artist originally intended. By adding an a cappella or something over it, you are making the track into what YOU think is good, and not what THEY thought was good...and to be quite frank...99% of the time, the artist had it RIGHT in the first place...the DJ only ruined it. From a production standpoint (which I will get into in my most important lesson)...and from a TASTE standpoint. Usually, the reason we DJs remix a track is because we have heard the original so many times that remixing it makes it sound FRESH to us. But FRESH does not mean BETTER. As DJs, we listen to TONS of songs. This creates a vicious "ADD style" chain reaction.
The more tracks we listen to, the more we want to hear fresh new tracks. This makes us get sick of certain tracks more quickly the more we listen to music. So, what do we do when we really like a track but have heard it too many times? Remix it. But again, this doesn't make it BETTER...it only makes it DIFFERENT. In the future, I will only use remixing techniques if I absolutely feel it enhances the track, neurally connects to the audience, and is worth the effort. I won't do it for the mere fact of remixing it.

3. Less is more in terms of the amount of effects I use. To be quite frank...it's OVERDONE. All these filters, delays, flangers...blah blah blah...it's old! Some DJs do it ALL THE TIME. It sounds fucking horrible. First off, you're ruining the dynamics of the track by doing it. Since musical notes have fundamental frequencies and harmonics...removing certain ones with bandpass filters ruins other parts of the spectrum that you aren't filtering. Low pass and high pass are SIMPLY overdone. They can be used NICELY...HERE AND THERE...on build ups...or artistically where beats are becoming stagnant. But in the future I will do less. If you are an Ableton DJ...you have an INFINITE number of OTHER ideas you can use to make an interesting mix, using clip envelopes and more thought-out mixes...rather than just turning some bullshit knob for the mere fact that you are BORED. As a house DJ...it's OK to rock out a tune...and ENJOY IT...listen to it the way it was intended.

2. Less is more! I'm going back to the basics. To me, the art of mixing is just that - MIXING! Focusing on the seams between tracks and making them SEAMLESS! That is what I want to do next year, and what I have been focusing on. So many DJs are up there turning tons of knobs, doing all these crazy DJ effects...but when it comes time to switch between one track and the next...it's a horrible...noticeable change! Instead of going nuts on your controller...how about using your time up there to think more carefully about your next track...or better yet...when you practice at home...remember what works and what doesn't. It's OK to pre-plan a little bit, as long as you are able to change depending on the crowd. But DJing is about making nice seamless transitions between ONE track and the next! With Ableton Live, you have no EXCUSE for bad mix points besides your own negligence to prepare, or inability to hear what goes with what. With tools like Mixed In Key (for only 40 dollars), and Ableton's ability to warp and match the timing of tracks...there is really NO EXCUSE. Sometimes DJs seem to feel like if they are standing up there not doing anything...they are doing something wrong. But your HANDS don't have to be doing the work...how about your BRAIN instead! Ultimately, what makes the people enjoy the music and dance...that is what matters. And for the most part...they don't know what you are doing anyway...but they WILL notice when you change between 2 tracks drastically. So, make those mix points seamless, and spend more time thinking about HOW to make them seamless using envelope automation or EQ skills. And don't worry about those people looking at your screen...or that club owner who knows a thing or 2 from past DJs and judges whether you are doing A LOT or too little. THEY DON'T MATTER. What matters is good, thumping beats coming out of the speakers...and not what that 1% of other DJs who happen to be in the club THINK about the complexity of your mixing.
They are most likely just jealous anyway, thinking "why is this DJ doing so much less than I am, and so much less talented than I am, but the people are grooving like there's no tomorrow". Fuck 'em, because YOU are the smarter DJ...and not the one just showing off the capabilities of your computer and software.
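The Mixed In Key approach mentioned above is usually summarized with the Camelot wheel: two keys mix smoothly if they share a Camelot code, sit one step apart around the wheel, or are relative major/minor (same number, different letter). Here is a toy Python check of that rule of thumb - my own sketch, not Mixed In Key's actual algorithm:

```python
def compatible(camelot_a, camelot_b):
    """Camelot-wheel harmonic compatibility. Codes look like '8A'
    (A minor) or '8B' (C major): a number 1-12 plus A (minor) or
    B (major). Smooth mixes: same code, +/-1 step around the wheel
    (with wraparound from 12 back to 1), or the relative major/minor."""
    na, la = int(camelot_a[:-1]), camelot_a[-1].upper()
    nb, lb = int(camelot_b[:-1]), camelot_b[-1].upper()
    if la == lb:
        diff = (na - nb) % 12
        return diff in (0, 1, 11)  # same key, or one step either way
    return na == nb                # relative major/minor swap
```

For example, an 8A track mixes cleanly into 7A, 9A, or 8B, and the wheel wraps so 12A and 1A are neighbors.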

1. The most important reason why LESS is MORE! OK...start the hate speech...("FroBot...you're an idiot...you are wrong...etc etc"). Here it is. Since the release of the APC40 and Novation Launchpad...I have seen a drastic influx of Ableton DJs. I, just like them, when I got my Launchpad and VCM-600, loved the fact that I could download shitloads of loops...play them all together...improv a set...and make a NEW track that no one has ever heard, on the fly. It is cool, and really works neurally with the need to HEAR new sounds constantly. It's almost like a disease we computer DJs have. After realizing the capabilities of your setup...you seem to want to exploit them by running 12 tracks at the same time...individual HH and kick samples...etc etc. It is really cool...and for a LIVE style performance...where you are playing with BANDS and improv...it is truly great. I definitely enjoy it more than DJing, because I have more control over unique sounds, and it really fuels my creativity. But...DJing is not about this. It's about providing a thumping beat that is rhythmically stable and full of nice changes, build ups, and thought-out construction. The key here is - IMPROV ABLETON AUDIO IS NOT MASTERED AUDIO!!!!!!
This is key to remember here. There is a HUGE DIFFERENCE between a song made by a producer that has been compressed, balanced, and made to perfection - and running multiple audio tracks together, improv style. Mastering is a KEY element of making a dance track...and real producers know this. There are so many important elements that go into getting the THUMP out of your kick, the WARMTH out of your bass, and the frequency separation of all your elements. Steps involving compression, harmonic balancing, EQing, overall reverb, harmonic exciters...all very precise configurations depending on the frequencies being used. When you are playing with multiple samples, especially in improv style...you are taking this element out of the track-making process...leaving...what producers consider...a track before the mastering stage...or even worse...the mixing stage. Each sample that you play uses certain frequencies in the spectrum. For things to sound right and powerful, it is important to make way for each sound to stand out clearly...which means TIGHT EQing. Using notch filters to remove certain elements is CRUCIAL in making a thumping dance track. Especially in your kick and bass...most producers use sidechaining when producing to make sure that the kick and bass each have room to stand out and shine...and that their hi-hats have nice placement and stand out. On an even more juvenile level...think about even the KEY of the goddamn sample. Tons of people aren't even checking the keys of their samples...having a kick at, say, D, and bass at C. It just sounds terrible!!!! Without an understanding of your parametric EQ, spectrum analyzer, and the concept of detuning your samples...you can't even start to improv using your Launchpad or other MIDI controller. The overall result is a LESS powerful sounding set...and it obviously sounds different from the DJs before and after you who are using vinyl, CDJs, or even a computer doing less complex mixing.
I didn't even get into how HARD it is to do all this mixing correctly in the first place...and many of the people I see doing this style of DJing are NEW to DJing...and they can run 6-12 tracks at the same time without fucking up? That is a whole other point in itself. NOW I understand why many producers still DJ on CDJs...but are really good at understanding Ableton. Because in the end...all that shit doesn't matter...it's about a good, rocking beat. Even if you are just adding a few samples on top of an already mastered track...by doing so...you are RUINING the final mastered sound of the track. When you add frequencies to a track...it not only puts new frequencies in...but can change correlating harmonics of other sounds. Just look at your spectrum EQ...sit down with a professional...and prepare to cover yourself from the vomit that is certain to be in your lap.
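The sidechaining idea above - ducking the bass whenever the kick hits so the two stop fighting over the same low frequencies - can be sketched crudely in code. This is a toy envelope follower of my own, not how Ableton's compressor (or any real sidechain compressor) is implemented:

```python
import numpy as np

def sidechain_duck(bass, kick, fs, depth=0.7, release_ms=120):
    """Crude sidechain 'duck': follow the kick's envelope and pull the
    bass level down whenever the kick is loud. depth=0.7 means the bass
    drops to 30% of its level at the loudest kick moment."""
    env = np.abs(np.asarray(kick, dtype=float))
    coeff = np.exp(-1.0 / (fs * release_ms / 1000.0))  # one-pole release
    smoothed = np.empty_like(env)
    acc = 0.0
    for i, e in enumerate(env):
        acc = max(e, acc * coeff)  # instant attack, slow release
        smoothed[i] = acc
    peak = smoothed.max() + 1e-12
    gain = 1.0 - depth * np.clip(smoothed / peak, 0.0, 1.0)
    return np.asarray(bass, dtype=float) * gain
```

A real compressor works in the decibel domain with a threshold and ratio, but the shape of the result is the same: the bass dips on every kick and swells back in between.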

Now, some of you proficient with Ableton Live know some workarounds for this. Some I have heard involve using Ozone, analog warmers...etc etc. Yes, these are all good ideas, and can help to clean up your sound...but none of them can even compare to the results of a nicely mastered track. There is a video where deadmau5 showed how he gets the final sound from his improv style sets. Here - It takes TONS of processing, and extreme knowledge of digital music, before you can actually produce a live set in improv form that stands up to real mastered tracks. The fact that he already has a good background in music and digital production also helps with this. He really isn't improvising...he knows what his sounds are doing, and mixes them QUICKLY...but is still thinking about the placement of certain frequencies. This is not the same as loading up a bunch of samples and playing them all at the same time. Save that for JAM SESSIONS, or LIVE EVENT gigs, where other artists at the event are ALSO using un-mastered audio samples...or live instruments. When playing these kinds of events, your final output sound is not intended to be CLUB THUMPING, but rather an artistic musical creation where value is placed on the art and not the power of the sound. I have YET to see a local Ableton DJ who plays in this improv style in the clubs even come close to getting the sound of a mastered track. In the end, you are left with weaker dynamics, lower volume, and ultimately have to push the gain on the club's mixer. That is the only option you are left with...but it still will not compete.

1.2 - Oh...and less is more for one other very important reason! You can get DRUNK! When you are at the club...you wanna have a good time too! Don't forget to enjoy yourself...but sometimes, you don't have much of a choice. By keeping your setup simple...all the tequila shots and free beer won't affect your performance much, even if you have a 4am set. Since it's less complex, you should be able to do it, blurry-eyed and all!

Well, that concludes my "Lessons for DJing in 2010" rant. Please take it with a grain of salt. I am in no way condoning a lazy set...but I AM saying to THINK more than you DO. Use your brain a little more, hit the audio books and learn a little more, and watch your crowd carefully. Give them something to groove on, and realize you don't have to USE everything JUST because you have it. Do what sounds right, and not what sounds complex. The girls will thank you with a few extra hip swings, and the guys with a few more "heads down," "in the groove" dance moves.



The Science of DJing - Music Chills and Pop Cycles

(Original Link -

Ever wondered why you get chills when listening to music? Perhaps you have suspected that cycles of pop music follow economic cycles. Well, writer Yale Fox has an entire blog dedicated to studying the “science of nightlife culture” called Darwin Vs The Machine that has looked at both subjects. In today’s article he goes into the chill theory, and why popular music may pick up in pace as the economy slows down.

Have you ever listened to a song that’s given you shivers? The pleasant feeling of chills running up your spine is actually called a frisson. What is it about music that induces this feeling? I listened to this lecture by Dr. David Huron that discussed his theory behind it.

Biologically, chills are called piloerection. They are characterized by a pleasurable, cold sensation which sometimes produces a shudder. Chills are something we normally experience in response to certain stimuli. At the core, these chills exhibit themselves as a result of surprise: the failure of the organism to predict its environment and what is going to happen next. The neurotransmitters released during this type of response are catecholamines: epinephrine (adrenaline) and dopamine. This brief and pleasurable scare is the same reason we enjoy rollercoasters and watch horror films.
Here are some other examples of when we experience these frissons.
  • Stepping into a warm bathtub
This is a classic example of the organism not being able to predict its environment. The body feels a sudden change in temperature and reacts by eliciting the fight-or-flight response.
  • Nails on a chalkboard, or a loud scream
It comes as a surprise again, and is usually a sign of warning or a call for help from another member of our species. Whether running to help or running for safety, it’s an indication that something unexpected is occurring in the environment.


A large part of the music we enjoy is the balance between predictability and unpredictability. This is probably a good way to think about track selection for your DJ sets: try to put yourself somewhere between the predictable and the unpredictable. Perhaps adding an interesting effect or a unique twist to a familiar track would be enough to induce that wonderful chill we associate with a great musical moment.

Personally, the only music that really gives me chills is lyrically based - more specifically, punchlines and complex verses. This still fits the theory, as these lines are usually witty and unexpected. There’s no way of really predicting the verse before you hear it. The fact that it is a heightened emotional response means it likely becomes imprinted for future reference. Additionally, once I know the words to a song, I find I don’t get chills when I hear it again.

This is virtually impossible to test in a lab, since different people are surprised at different times. I think the best thing to do is put it up for open debate: readers, please post your comments and suggestions - or specific songs, and the points in those songs where you experienced chills.


I took a database of every song that has ever touched the Billboard Top 100 charts from 1955 to 2009. Songs were analyzed and sorted in terms of two important characteristics: (i) tempo and (ii) modality. Tempo is measured in beats per minute and is the general speed of the song. Modality, or mode, refers to whether the song is in a major or minor key. Major keys sound happy and minor keys sound sad - even an untrained ear can easily detect this.
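The shape of such an analysis - per-year average tempo and share of major-key hits - can be sketched in a few lines. The rows below are hypothetical placeholders; the real Billboard dataset and Fox's actual methodology are not published here:

```python
from collections import defaultdict

# Hypothetical chart entries: (year, bpm, mode) -- illustrative only.
songs = [
    (1991, 104, "minor"), (1991, 98, "major"),
    (2008, 126, "major"), (2008, 130, "major"), (2009, 128, "minor"),
]

def yearly_summary(rows):
    """Average tempo and share of major-key hits per chart year --
    the two characteristics (tempo, modality) the analysis sorted on."""
    by_year = defaultdict(list)
    for year, bpm, mode in rows:
        by_year[year].append((bpm, mode))
    summary = {}
    for year, entries in by_year.items():
        bpms = [b for b, _ in entries]
        majors = sum(1 for _, m in entries if m == "major")
        summary[year] = {"mean_bpm": sum(bpms) / len(bpms),
                         "major_share": majors / len(entries)}
    return summary
```

Plotting those two yearly series against an economic indicator is then what would let you eyeball whether pop speeds up as the economy slows down.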

Making beautiful music can strike a sour note

(Original Link -

Professional musicians are accomplished artists at the top of their field. And although their job is glamorous, health practitioners are tuning in to the fact that it can be stressful, too.
Consider the orchestra. It’s not unusual for members to sometimes be gripped by stage fright, or worry about becoming disabled and unable to perform. Their work can be physically demanding, and requires high levels of stamina.

Job frustration, a workplace hazard shared by many in less lofty vocations, is another source of a veritable symphony of stress. One reason? Musicians must deal with the frustrating combination of being highly skilled and accomplished while often having little authority over what and how to play. This can take its toll in various ways, but musicians must find ways to cope so they can keep making beautiful music.

A pain in the neck… or the back or the shoulders… is one way stress can strike. But in a new Norwegian study, orchestral musicians did not have higher levels of those complaints than others. That might be because people whose pain is debilitating would resign from the orchestra.
Members were more likely to complain about gastrointestinal problems, mood changes and fatigue. And those complaints were linked to higher stress, as evidenced by high saliva levels of the hormone cortisol.

It turns out that even coping mechanisms are linked with stress levels. Musicians who dealt with work-related problems by seeking social support or distractions had higher stress levels than those who tackled problems directly and tried to look for solutions.

Tuning in to maintaining good mental and physical health is important for handling daily stresses. And it certainly is key for musicians and music students who want to keep the music playing.

The Science of Music - From Rock to Bach

(Original Link -

What is a musical note? This is one of the deceptively simple questions asked and answered by John Powell in his fascinating book, "How Music Works."

It's an easy question, you might think. A musical note, as created by a musical instrument or a voice, is determined by the frequency of the sound waves produced. Wrong, that would be the note's pitch. Well, one can surely form a note by simultaneously depressing several related piano keys. Nope, that's not a note; that's a chord. A note, the basic building block of all music, is a repeating pattern of sound waves (which distinguishes it from the chaotic sound waves of nonmusical noises). It "consists," Powell says, "of four things: a loudness, a duration, a timbre and a pitch."
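Powell's four ingredients map neatly onto additive synthesis: pitch is the repetition rate of the waveform, duration its length, loudness its amplitude, and timbre the relative weights of the harmonics. A minimal sketch of that idea in Python (my own illustration, not from the book):

```python
import numpy as np

def make_note(pitch_hz, duration_s, loudness=0.5,
              timbre=(1.0, 0.5, 0.25), fs=44100):
    """Build a 'note' from its four properties: pitch sets the repeating
    frequency, duration the length, loudness the amplitude, and timbre
    the relative strengths of the harmonics (here: fundamental plus two
    overtones, which is what makes it more than a bare sine wave)."""
    t = np.arange(int(duration_s * fs)) / fs
    wave = sum(w * np.sin(2 * np.pi * pitch_hz * (k + 1) * t)
               for k, w in enumerate(timbre))
    wave /= np.max(np.abs(wave)) + 1e-12  # normalize the mix...
    return loudness * wave                # ...then set the loudness
```

Change only the `timbre` weights and you get the same note played on a different "instrument" - which is exactly why a flute and a violin playing the same pitch sound different.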

Starting with the four properties of a note, the author, who is both physicist and musician, uses easy-to-follow, conversational language to lead the reader into the science of music. He explains every common musical term, from "key" to "bar" to "scale." He differentiates a concerto from a sonata and shows how composers use chords to create harmonies. He brings his explanations to life with a wide range of examples. For instance, a certain way of playing a chord one note at a time, called an arpeggio, is found in "Hotel California," by the Eagles, while a complex harmonic technique called counterpoint was used by Bach in his concertos.

After explaining the meaning of musical terms, Powell interprets those strange-looking symbols found in a piece of sheet music. It is amazing that after a few hours of Powell's explanations, a musical novice like me can begin to read music. And for those who would like to use their newly acquired musical education to make their own music, Powell offers advice on how to choose an appropriate first instrument. Violins are too hard; pianos are easier.

For those who approach music more passively, Powell provides a chapter on how and where to listen to music. Instead of spending $75,000 on "a special listening room," he advises us to install our equipment in a normal room, then move the speakers around to get the best sound. He also answers a question that is being passionately debated by audiophiles all over the world: "Are vinyl records better than CDs?" The answer, he says, is no. Those favoring vinyl are victims of "technology nostalgia."


The Most From Your Workspace: The 5 Best Trash Audio Music Making Environments

Operating systems aside, the most important “platform” for your music may be the work environment you create for yourself to produce in. Seeing someone else’s physical environment can be an inspiration, and certainly a window into their personality. So, as I looked through the workspaces submitted by readers, I asked the terrific blog TRASH_AUDIO to select a few of their favorites from their series, “Workspace and Environment.” Rather than ask the usual, bland music-journalism questions of artists, they explore those artists’ creation spaces, and discuss process through that context. (Eat your heart out, MTV Cribs.)

TRASH_AUDIO also has a new site address, so go enjoy:

It’s worth checking out the whole site, but here are their five favorite workspaces and environments, in no particular order. Some are the tangles of wires you might expect; others are more unusual, clean digital environments like the images I chose here (if only because I’m more used to seeing the tangles of wires).

1. Finnish-born Sasu Ripatti of Vladislav Delay and Luomo has found an acoustically wonderful, isolated spot on an island, surrounded by trees and far from people. On the road, it’s just one laptop, one Korg nanoKEY, and an audio interface, to which he adds Faderfox MIDI controllers, small KAOSS pads, and effects pedals for live gigs.

2. Alec Empire stays true to his Berlin roots with an all-white minimal studio. It’s distraction-free – and having a big, dedicated studio space means no neighbors. Think loud. “Actually you wouldn’t really find much colour in there,” he tells TRASH-AUDIO. “And what surprises visitors is that we have no paintings or posters or anything visual up on the walls. I really find this distracting. Somehow my mind would get off path. The great thing is that we can record whenever we want.” On the road, it’s a Mac and Digidesign gear, but most importantly, a big mobile hard drive, so sounds can come along with him for constant revision. Add to that an iPhone as a musical notebook for sketching ideas.

3. Alessandro Cortini, an Italian-born artist living in the US, focuses on Buchla modular gear as the center of his workspace, with the monome and MLR as the software accompaniment. Corners of the space, he says, are dedicated to different working styles – modular, drum machine, computer – but everything is within reach, which to me is also the epitome of the brilliant Buchla design itself. If you can’t afford a modular (and certainly most of us can’t afford a Buchla 200), perhaps the ergonomics are the single most important lesson to learn here.

4. Mavis Concave, Robert Inhuman and Vankmen of Realicide adapt to a variety of environments – the corner of someone’s room, different homes. As Mavis says, the people in your surroundings often matter more than the architecture: “I need to have enough physical space for my gear and be surrounded by people who encourage the work that I am doing. I can’t be surrounded by people who write off my music production as a nuisance to have in the household. That is probably the biggest creativity/productivity block there is for me.” And for fans of hardware (you were heard in the poll, don’t worry), that means favorite gear that can go in a car trunk, like the Korg Electribe ES-1 (called out by both Mavis and Robert).

5. Atom TM. I just love this, because seeing look-alike studios is boring, because I feel strongly that aesthetics around you can provide visual stimulation for your sonic creativity centers, and because it defies conventional wisdom. So I have to just run the whole quote – decoration instead of gear. (Next – perhaps decorated gear?) Take that, blank white walls of Berlin!
“Decoration instead of gear” became the motto. All my workspaces had to have big windows and if possible a nice view (even though I tend to close the curtains in summer during daytime). I don’t like “studio” atmosphere. I don’t like cables, gear and the entire tech-look. Environments that make me feel well and relaxed are usually of a different type. I like old furniture, warm colours, ornaments and in general everything that does not look contemporary. The contemporary look usually is contaminated with bad taste and pretentious design. Further, the decoration itself helps to absorb reflections and creates a dryer sound. I can say that the decoration itself, that is, obtaining/installing as well as creating amongst it, gives me more satisfaction than obtaining/installing equipment. I can see why “studios” have to look “tech”, that is because the studio owner needs to impress the entirely clueless cast of customers. There is no reason whatever to follow that look, just because it is somewhat implied in the equipment itself. In general I’m very sensible when it comes to “making music”. I find it hard to focus in other studios that don’t fit my aesthetics and sound. I think that my workspace is a perfect combination of the technical-, creative- and aesthetic aspects of my work and it has become what it is through a long development of those three components.
Editorial note: In a blinding error of reading on my part, I read the words “Analog Live” as a misprint of “Ableton Live,” as referenced in the original draft of this story. I’ve been looking at software too long. To be clear, this was my inability to read, not a typo on the part of TRASH_AUDIO. I still like the idea of a parody of Ableton’s site redone in analog gear. I will from now on keep that fantasy to myself and stop applying it to the rest of the world.
How meaningful any of these approaches is to you will vary. But to me, just hearing people make decisions to reorganize their space is refreshing. I find sometimes even an arbitrary change of scenery can get the creative juices flowing again. Let us know if the same is true for you.

A Game of Checkers Becomes a Step Sequencer, Ableton Live Controller

Checkerboard Step Sequencer V2 from Josh Silverman on Vimeo.

This video demonstrates the use of a checkerboard as a step sequencer. It should make obvious the relationship between the position of the checker pieces and the noises they represent and trigger. It's still a work in progress, but for now I won't subject you to the cacophony that is the sound of an actual game of checkers.

Aside from the kick drum, which just keeps pace on every beat, all other drum samples are triggered off the board.

In this version, I've implemented a Mute Region that surrounds the board. When the application sees activity in the mute region, it disables the updating of the sequencer. This way, my hand doesn't trigger a cacophony when I move the pieces.
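The mapping the video describes – columns as steps, rows as drum samples, and a mute region that freezes updates while a hand is over the board – can be sketched in a few lines. This is a hypothetical reconstruction in Python (the original is built with openFrameworks), and every name below is invented for illustration:

```python
GRID = 8  # an 8x8 checkerboard

def update_pattern(piece_positions, mute_active, current_pattern):
    """Map detected checker pieces to a step-sequencer pattern.

    piece_positions: set of (row, col) squares where a piece was detected
                     (in the real project, by the camera/vision code).
    mute_active: True while activity is seen in the mute region around the
                 board (e.g. a hand moving pieces); updates are frozen then.
    current_pattern: the last pattern, kept as-is while muted.
    """
    if mute_active:
        return current_pattern  # freeze the sequencer while pieces move
    # each column is a step in time, each row selects a drum sample
    pattern = {step: set() for step in range(GRID)}
    for row, col in piece_positions:
        pattern[col].add(row)
    return pattern

def trigger_step(pattern, step):
    """Return the sample rows to fire at this step (the kick drum, per the
    description above, runs on every beat and is handled separately)."""
    return sorted(pattern[step % GRID])
```

A sequencer loop would call `trigger_step` once per clock tick and send the resulting rows to a sampler (or, as here, to Ableton Live) as note triggers.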

More details at​?p=124

Built with openFrameworks and Ableton Live.

Sunday, November 14, 2010

Bat Brains Offer Clues as to How We Focus on Some Sounds and Not Others

(Original Link -

How do you know what to listen to? In the middle of a noisy party, how does a mother suddenly focus on a child's cry, even if it isn't her own?

Bridget Queenan, a doctoral candidate in neuroscience at Georgetown University Medical Center, is turning to mustached bats to help her solve this puzzle.

At the annual meeting of the Society for Neuroscience in San Diego, Queenan reports that she has found neurons in the brains of bats that seem to "shush" other neurons when relevant communication sounds come in -- a process she suggests may be working in humans as well.

In her investigations, she has also found that "some neurons seemed to know to yell louder to report communication sounds over the presence of background noise."

"So we can now start to piece together how the cells in your brain are able to deal with the complex sensory environment we live in," Queenan added.

To understand auditory brain function, bats are especially interesting animals to study because they process sound through echolocation, which is a kind of biological sonar. Bats call out and then listen to their own echoes produced when those calls bounce off nearby objects. Bats use these echoes to navigate and to hunt.

Not only do the brains of bats have to process a constant stream of pulses and echoes, they have to simultaneously process the bats' social communication, Queenan says.

"What we are trying to figure out is how a bat can fly around echolocating -- screeching and listening to its own individual sounds bouncing back -- amidst a whole colony of hundreds of other echolocating bats -- and possibly hear another bat saying 'watch out!' Bats actually do make these cautionary calls quite a bit," she says. "In fact, bats have a whole host of communication sounds: angry sounds, warning sounds, and sounds that say 'please don't hurt me.'"

The auditory processing area in bats' brains is larger than other centers, just as the visual processing center is in humans. "Humans operate predominantly by sight so a huge portion of our brain is devoted to vision processing. Bats, however, operate by sound," Queenan says.

In this study, Queenan and her colleagues presented different combinations of echolocation sounds with various communication sounds to awake bats to see how neurons in the bat brains were dealing with this incredible cacophony. The researchers found that some bat neurons control the activity of other neurons when important sounds are perceived. These GUMC scientists also found other neurons that amp up perception of bat communication in the face of background noise. Working together, these clumps of neurons allow the bats to hear what is needed.

"All organisms are constantly assaulted by incoming stimuli such as sounds, light, vibrations, and so on, and our sensory systems have to triage the most relevant stimuli to help us survive," Queenan says. "As humans we are not only sensitive to a child's cry, but we notice flashing ambulance lights even though we are engrossed in something else. We want to know how that happens."

Queenan says her next task is to record brain neurons in bats that are not only awake, but flying.

The sound (and sight and feel) of music for the deaf

(Original Link -

Frank Russo helps make music for the deaf.

Working with a team of researchers, the Ryerson psychology professor invented a chair that allows deaf people to feel music through vibrations. He also works with both deaf and hearing musicians to compose music that focuses on vibrations and vision rather than sound.

Prof. Russo, a music cognition expert who also sings and plays guitar, will discuss music without sound at TEDx Toronto on Thursday. The conference’s tagline is “ideas worth spreading.”

Your talk will be on experiencing music without sound. Tell me more.
I plan to talk about the other modalities – or the other senses – and whether or not we can experience music through these other senses. This is interesting from a scientific perspective. It also has some interesting practical and artistic implications when we’re considering music experienced by the deaf.
Performers do things when they’re performing that convey emotion and these things can be seen. So, for example, when a performer is performing something that is melancholy, their movements are melancholy. By movements, I mean their facial expressions, the way that their body moves, the way that their hands move. There’s really a lot that can be seen that conveys important structural and emotional information about music. There’s [also] a long history of the deaf experiencing music through vibration.

Legend has it that in his later years, a deafened Beethoven cut the legs off his piano to feel the vibrations through the floorboards. How do deaf people experience music and how does this inform your work?
Deaf culture is extremely visual and it also involves the body, more prominently I would say than oral cultures. So their experience of music, maybe not surprisingly, is informed by what they see and what they feel. There’s this long history of feeling music. For example, there’s a famous percussionist, Evelyn Glennie. She’s deaf and she talks about experiencing music through her body. So she’ll perform without shoes so that she can feel the vibration through her body.

You and a team of researchers at Ryerson developed the emoti-chair. What is it and how does it work?
The emoti-chair is a sensory substitution technology that’s designed to take sound and present it to the body as vibration. You can put your hand on a speaker and you can feel the vibration because all sound emanates from some form of vibration. The challenge, though, with touching a speaker or even touching a musical instrument is what we call perceptual masking. Perceptual masking occurs in vibration when the lower frequency vibrations dominate the higher frequency vibrations. So all we feel is the thump, thump, thump. So what we’ve done in the emoti-chair is separate out the frequencies and present them to different parts of the body. We’ll take the high frequencies and we’ll present them to the upper part of the back. We’ll take the lower frequencies in the music signal and we’ll present them to the lower part of your back.
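The frequency-splitting idea Russo describes – routing lows and highs to different parts of the body so the lows don't mask the highs – is essentially a crossover filter. Here is an illustrative sketch in Python using a simple FFT brick-wall split; this is not the emoti-chair's actual DSP, and the function name, crossover frequency, and approach are all my own assumptions:

```python
import numpy as np

def split_bands(signal, sample_rate, crossover_hz=200.0):
    """Split a mono signal into low and high bands at crossover_hz.

    The low band would drive actuators at the lower back, the high band
    those at the upper back, so low-frequency vibration doesn't
    perceptually mask the highs. By construction, low + high == signal.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    low = spectrum.copy()
    low[freqs > crossover_hz] = 0.0       # keep only bins at/below crossover
    high = spectrum - low                 # the remainder is the high band
    low_band = np.fft.irfft(low, n=len(signal))
    high_band = np.fft.irfft(high, n=len(signal))
    return low_band, high_band
```

A real-time system would use causal crossover filters (and, per the interview, more than two bands mapped along the back) rather than an offline FFT, but the masking-avoidance principle is the same.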

You’ve held a couple dozen concerts for deaf and hard-of-hearing people with the emoti-chair. What are the concerts like?
It’s really evolved. We’ve gone from taking prefabricated music that’s been constructed for hearing ears and have translated it into deaf music. We are now doing something entirely different, where from the conceptualization of the music we’re thinking about this as a vibe track or a piece of music that’s primarily for vibration and vision, not sound. So that opens up all sorts of interesting artistic possibilities for the deaf and hearing community.

It sounds like you’re almost creating a new art form of music without sound.
That’s what we like to think, yeah. And we actually are putting on a series of workshops across the country where we’re exploring this. We did one in Vancouver last June. We’re going to do the next one at the Banff Centre for the Arts next spring. At these workshops, we’re trying to bring together music performers or composers that want to work on this new art form, on developing something that’s music-like but has this reallocation of the sensory priorities so that vibration and vision are in the foreground.

Do people who experience music without sound also experience the emotion that is so much a part of music?
Absolutely. We have been doing some research in the lab along those lines. And yes, there’s a great deal of agreement between the emotion experienced by a deaf individual and a hearing individual.

Ringing in your ears

Tinnitus, that phantom ringing in the ears that affects hundreds of thousands in Canada, is generated not by the ear, but by neurons firing in the brain, according to a North American research team that includes a McMaster University scientist.

“The tinnitus is not generated by processes in the ear, but changes in the brain when hearing loss occurs,” said McMaster professor emeritus Larry Roberts, with the department of psychology, neuroscience, and behaviour. 

Neurons, he said, are meant to talk to each other. When the ear stops talking to them, usually because of hearing loss, they start talking to themselves and this in turn, generates the ringing. “The sound is generated by neuron activity.”

Roberts said the conclusion is the result of collaborative work in the past decade, but said many people are not aware it’s the neurons, or changes in the brain producing tinnitus. Now the question is: how is the noise generated in the brain? “What are the neurons doing, and where are they doing it?” he said. “Our work will assist.” 

Understanding how it happens might lead to finding a treatment. The findings also help scientists understand why tinnitus is such a difficult problem to treat, he added. They also point to the importance of prevention.

About 300,000 to 350,000 people in Canada, or about one to two per cent of the population, suffer from severe tinnitus. About 10 to 12 per cent of all Canadians have some form of tinnitus, he said.

Peter Austen, acting president of the Tinnitus Association of Canada, has suffered from a severe form for five years and says he’s researched everything and tried everything. He says it’s long been known that tinnitus is a phantom noise generated in the auditory cortex in the brain.

The main problem with tinnitus, he believes, is that people are trying to find cures but none of what is out there will help. 

“There’s no cure. Only management,” he said.

“You never want to get it. Don’t ever go to a concert without wearing earplugs,” he warns. “Teenagers don’t realize what they’re doing to themselves.”

Roberts said although tinnitus is most common after age 60, chronic tinnitus can happen at any age, and it is a major cause of disability in soldiers returning from Afghanistan and Iraq.

Studies show hearing loss among young people is increasing and this may also lead to an increase in tinnitus, he said.

“If there’s a price to be paid for listening to loud music, it’ll be later in life,” Roberts said Thursday before leaving for the annual Society for Neuroscience meetings in San Diego where he and the other researchers will present a symposium on their findings.

Roberts said U.S. data shows 12 to 13 per cent of adolescents have hearing impairments. With iPods so common and the use of ear buds almost universal, this is quite alarming because more children will be susceptible to tinnitus as they grow older, he said.