Sunday, October 31, 2010

FroBot now on Beatport!!!!!!!!!

Hey everyone!!! I got great news today. It's Halloween here in Japan...just finished up 3 parties...and then came home to find out that my track "Stuff You Don't Hear in the Club" is out on Beatport. It was released on DJ VIVID's label "Turned On Trax", along with his remix of the track!!! Both tracks can be found here -

I want to send out a HUGE thank you to all my friends, supporters, teachers, and blog writers who helped me learn everything I needed to get this far! A very BIG special thanks to DJ VIVID for helping me and releasing this track on his label!!!!!! Keep sharing music with the world everyone!!!! And share the knowledge!!!

Peace & Love!


Thursday, October 28, 2010

Scientists Show How Tiny Cells Deliver Big Sound In Cochlea

(Original Link -

Deep in the ear, 95 percent of the cells that shuttle sound to the brain are big, boisterous neurons that, to date, have explained most of what scientists know about how hearing works. Whether a rare, whisper-small second set of cells also carries signals from the inner ear to the brain and has a real role in processing sound has been a matter of debate.

Now, reporting on rat experiments in the October 22 issue of Nature, a Johns Hopkins team says it has managed, in what is believed to be a first, to measure and record the elusive electrical activity of the type II neurons in the snail-shell-like structure called the cochlea. It turns out the cells do indeed carry signals from the ear to the brain, and the sounds they likely respond to would need to be loud, such as sirens or alarms that might even be described as painful or traumatic.

The researchers say they've also discovered that these sensory cells get the job done by responding to glutamate released from sensory hair cells of the inner ear. Glutamate is a workhorse neurotransmitter throughout the nervous system and it excites the cochlear neurons to carry acoustic information to the brain.

"No one thought recording them was even possible," says Paul A. Fuchs, Ph.D., the John E. Bordley Professor of Otolaryngology-Head and Neck Surgery and co-director of the Center for Sensory Biology in the Johns Hopkins University School of Medicine, and a co-author of the report. "We knew the type II neurons were there and now at last we know something about what they do and how they do it."

Working with week-old rats, neuroscience graduate student Catherine Weisz removed live, soft tissue from the fragile cochlea and, guided by a powerful microscope, touched electrodes to the tiny type II nerve endings beneath the sensory hair cells. Different types of stimuli were used to activate sensory hair cells, allowing Weisz to record and analyze the resulting signals in type II fibers.
Results showed that, unlike type I neurons, which are electrically activated by the quietest sounds we hear and which saturate as sounds get louder, each type II neuron would need to be hit hard by a very loud sound to produce excitation, Fuchs says.

The cell bodies of both type I and type II neurons sprout long filaments, or axons, some of which head to the brain and others of which connect to sensory hair cells. Unlike the big type I neurons, each of which makes one little sprout that touches one sensory hair cell in one spot, the type II cells have projections that contact dozens of hair cells over a relatively great distance.

"Somewhat counter-intuitively, the type II cell that contacts many hair cells receives surprisingly little synaptic input," Fuchs says. "In fact, all of its many contacts put together yield less input than that provided by the one single hair cell touching a type I neuron."

Fuchs and his team postulate that the two systems may serve different functional roles. "There's a distinct difference between analyzing sound to extract meaning -- Is that a cat meowing, a baby crying or a man singing? -- versus the startle reflex triggered by a thunderclap or other sudden loud sound. Type II afferents may play a role in such reflexive withdrawals from potential trauma."
This study was supported by the National Institute on Deafness and Other Communication Disorders, and a grant from the Blaustein Pain Foundation of Johns Hopkins.

Authors on the paper are Fuchs, Weisz and Elisabeth Glowatzki, all of the Center for Hearing and Balance and the Center for Sensory Biology, Johns Hopkins University School of Medicine.

Built-in Amps: How Subtle Head Motions, Quiet Sounds Are Reported to the Brain

(Original Link -

The phrase "perk up your ears" made more sense last year after scientists discovered how the quietest sounds are amplified in the cochlea before being transmitted to the brain.

When a sound is barely audible, extremely sensitive inner-ear "hair cells" -- which are neurons equipped with tiny, sensory hairs on their surface -- pump up the sound by their very motion and mechanically amplify it. Richard Rabbitt of the University of Utah, a faculty member in the MBL's Biology of the Inner Ear course, reported last spring on the magnification powers of the hair cell's hairs.

Now, Rabbitt and MBL senior scientist Stephen Highstein have evidence that hair cells perform similarly in another context -- in the vestibular system, which sends information about balance and spatial orientation to the brain.

"The bottom line is we have 'accelerometers' in the head that report on the direction of gravity and the motion of the head to the brain," says Highstein. "What we found is they respond with a greater magnitude than expected for very small motions of the head. This brought to mind a similar amplification of very small signals by the human inner-ear cochlea. And, in fact, the vestibular system and the cochlea have a sensory element in common: the hair cells." Rabbitt and Highstein found that, in both the auditory and the vestibular systems, the hair cell response exhibits "compressional nonlinearity": The lower the strength of the stimulus, the more the hair cells "tune themselves up to amplify the stimulus," Highstein says.

The toadfish was used for this study. "What's interesting is the bony fishes evolved some 400 million years ago; subsequently this feature of their hair cells was apparently co-opted by the mammalian cochlea. Evolution conserved this feature, and the mammal later used it to improve hearing sensitivity," Highstein says.

Sony ends sales of cassette-tape Walkman players in Japan

(Original Link -


Sony Corp has ended domestic sales of its Walkman music players for cassette tapes due to flagging demand amid the spread of portable digital devices that play music downloaded online or from CDs via computer, company officials said Friday. Sony had already finished shipments of cassette Walkmans this spring, meaning the iconic product will disappear from the Japanese market after stock runs out.

Sony’s cassette Walkman player became a global hit on its launch in 1979, selling about 220 million units by the end of March this year. But its sales slumped in recent years in parallel with the company’s intense competition with Apple Inc of the United States over digital music players.
To counter Apple’s popular iPod series, Sony launched digital Walkman players that became the mainstay product in its Walkman lineup.

The company officials said Sony will continue the production of cassette Walkmans, undertaken by a Chinese manufacturer on consignment, for overseas markets.

There are no plans at present to halt the production of Walkmans for CDs or MDs, although demand for them is also declining, according to the officials.

Where is the Underground?

In an age when music is available anywhere, anytime, where do you find the underground, and what defines it? 

At festivals around the world guitars are being played with handheld fans, contact microphones are exposing the hidden sounds of the most basic acts of friction, and turntables are being played without any records on them. I have even seen amplified glass being eaten as if it were as delicious as chocolate.

For the critic Simon Reynolds, ‘the web has extinguished the idea of a true underground; it’s too easy for anybody to find out anything now.’ But the underground is not simply about access, nor is it a mere description of the physical context of the music. The underground is essentially a practice, a cultural philosophy of music that exists outside of the mainstream. This philosophy, rather than being extinguished, has actually been invigorated through new innovations in social media, digital technology and audio culture.

What do I mean when I say ‘underground’? Historically, the underground could include 1960s psychedelic music of the US hippie counterculture, the DIY anti-corporatism of 1970s-era punk rock, the early 1990s-era of grunge rock, or 1970s and 2000s-era hip hop. Running through these styles is an emphasis on authenticity and a comparative lack of commercial appeal, but the underground I’m talking about is distinct from these. Though underground music sometimes crosses paths with popular music, its ambitions lie elsewhere. My own view is that contemporary improvisers, noise musicians and drone artists, broadly, make up the underground of today, and though the field is large and the styles broad, these musicians’ general aesthetic ambitions, combined with their comparative lack of public exposure, means that it still makes sense to consider them together as a discernible international scene.

Key to the underground philosophy is that it represents an aesthetic third space, one which eludes conventional boundaries. The ancestry of both this idea and today’s underground musical style can be traced to the eclectic activities of such sixties musicians as the Nihilist Spasm Band, Henry Flynt and Captain Beefheart (and further back again, to Dadaism). The American music journalist Ellen Willis called the Velvet Underground ‘anti-elite elitists’, expressing something of the underground’s peculiar mix of high and low cultural practices.

The underground is a guerrilla philosophy that is mostly defined in relation to the mainstream, and so could be anything at any time. Defining it in concrete, practical terms is therefore a tricky business. Frank Zappa tried: ‘The mainstream comes to you, but you have to go to the underground’. In the sixties, seventies and eighties, the fact of having to go to the underground was more clear cut, but since the advent of digital technology and the web, such a relation has become confused. MP3 blogs and file sharing websites, in addition to social networking platforms such as MySpace, have all facilitated the spread of underground music in a way that was inconceivable in the pre-internet age, when small fanzines and bootlegged tapes dominated. Everything has become available, everywhere, all of the time: culture has become flat.

Audiences no longer have to go to the underground in the same way that was required of them in the seventies, for example. As Martin Raymond, co-founder of trend forecasting company The Future Laboratory, says: ‘Trends aren’t transmitted hierarchically, as they used to be. They’re now transmitted laterally and collaboratively via the internet. You once had a series of gatekeepers in the adoption of a trend … but now it goes straight from the innovator to the mainstream.’

But the idea of the underground lives on, despite the possibility of general access. The word ‘underground’ connotes a sense of concealment, even of contraband, and this is at the heart of what still defines it as a musical philosophy. The music’s general abrasiveness repels the mainstream; the distinct willingness of the general public to either turn away or ignore its existence in the first place is what gives underground its identity, not some farcical public inability to locate it.

Cities with a rich cultural history and with firmly established public arts institutions lead the field in terms of underground scenes. Berlin, for so long cleft in two in every way imaginable, has hosted a thriving underground for decades, and particularly since reunification in 1990. Orientating around totemic minimal techno producers such as the duo behind Basic Channel, Mark Ernestus and Moritz von Oswald, and Robert Henke from Monolake, and also noise and experimental pop musicians such as Felix Kubin and Gudrun Gut, the Berlin underground scene connects back within the country’s own history to the fertile days of the Weimar Republic. But it also connects outward to other underground scenes through digital means, through festivals such as Transmediale and MaerzMusik, through venues such as Berghain, and through record shops such as Hard Wax in Kreuzberg, to name only a few of the conduits to other scenes.

London can boast a similar vitality, despite Mayor Boris Johnson’s reliably baffling recent comments lamenting the lack of a ‘counterculture’ in the city. In contrast to the largely dance-orientated music of Berlin, it is networks of improvisers and noise musicians that dominate the London underground. Building on a politically engaged tradition of underground music-making that originally developed in the sixties, musicians such as John Butcher, Sebastian Lexer, Kaffe Matthews and Eddie Prevost, among many others, deepen the cultural discourse through regular live activity at venues such as the Vortex, Boat-ting and Café OTO. Shops such as Sound 323 formerly provided the physical core for London underground musicians, but that function has largely been usurped by the aforementioned venues, in addition to the important web presence that London labels and promoters such as the leading black metal, black ambient and noise organisation Cold Spring, and disparate webzines and blogs, maintain.

In both London and Berlin, and in other important cities for underground music around the world (Tokyo comes immediately to mind), comparative economic wellbeing has made it easier to nurture underground scenes. The example of the USA, a country with perhaps the leading DIY tape and noise scene in the world, is a case in point. That DIY scene derives a kind of implicit practical support from the USA’s economic security that would be impossible in countries with less stable economies.
The institutional aspect of underground culture – its relation to the mainstream – has remained relatively unchanged over the past few decades. The impact of the web, however, has led to a fundamental shift in recent years in the nature of the underground’s very existence. The underground has largely shifted from physical meeting places such as record shops to virtual networks organised through and on the web. Underground musicians themselves are keenly aware of this, promoting their activity through their own websites, or through independent, web-focused labels, and transmitting much of their music through social media such as Soundcloud.

The web has been pivotal for the underground scene in Ireland, a country in which the institutional frameworks that buttress activity in London and Berlin simply do not exist. The country nonetheless boasts a small but fervent underground scene. An array of leading figures constitute the artistic and promotional firmament of Irish underground music. Gavin Prior, improvising noise musician, head of the Deserted Village label, and member of such bands as Wyntr Ravn and United Bible Studies, and Andrew Fogarty of weird-synth outfit Boys of Summer, of Toymonger, and head of Munitions Family label, both in Dublin; and Vicky Langan, who runs the Black Sun weirdo/outer limits music and film nights, and Paul Hegarty, of the extreme noise-group Safe and head of Dot Dot Music, both in Cork, are just some of those involved with developing a cultural alternative to the mainstream. A particularly healthy scene has developed in the past ten years or so in Cork, but Dublin, with almost ten times the population, still has the edge: artists like the Jimmy Cake and the Redneck Manifesto leading an avant-rock centred field, and Children Under Hoof, Patrick Kelleher and His Cold Dead Hands (who is notably on the Skinny Wolves label, another player in all of this) and others gigging in venues such as Anseo, Whelan’s, The Shed and the contemporary art space The Joinery, and organising the (echt-underground) ‘box socials’ on South Circular Road.

It is difficult for underground scenes to reach a degree of maturity without economic and institutional stability, but the relative health of the Irish underground scene testifies to the ability of underground cultures to flower in adverse economic or cultural circumstances, often thanks largely to the collective enthusiasm of a relatively small group of people. Similar processes can be identified in other burgeoning scenes around the world, such as that in Buenos Aires, where local musical traditions combine fruitfully with experimental dance styles and contexts, or in Beijing, where recent economic accomplishment, amongst other cultural factors, has allowed a diversity of underground musical activity to flourish. This is the case particularly with regard to the scene that has developed around the improviser and promoter Yan Jun and artists such as FM3; the former of whom runs an annual underground music festival called Mini Midi, as well as a famous series of improvised music weeklies, 'Waterland Kwanyin'.

The guerrilla nature of the underground, then, persists in the digital context, and has even been invigorated by its new possibilities for international communication. The institutional and cultural richness of larger metropolitan centres such as Berlin and London has led to the development of a strong backbone of underground musicians, many of whom have been able to, by virtue of the platform given to them in their own country and through the web, connect across local boundaries with musicians and promoters from across the world. Gavin Prior’s wonderful coinage, ‘To hell or to internet’, sums up the situation for underground musicians from smaller musical centres. Economic stability can facilitate the spread of underground musical cultures, but it is not required, with the many and varied promotional and communicative possibilities of the internet proving a decisive recent factor in the nurturing of small, interpenetrating international underground scenes. The very existence of an underground culture – antagonising the mainstream, redreaming its resources for obscure ends, opening up a crucial space for experimentation and for critiquing the mainstream – in fact exemplifies the type of positive, web-mediated collective space that our new digital age has promised for so long.

Hearing the Music, Honing the Mind

Music produces profound and lasting changes in the brain. Schools should add classes, not cut them.
Nearly 20 years ago a small study advanced the notion that listening to Mozart’s Sonata for Two Pianos in D Major could boost mental functioning. It was not long before trademarked “Mozart effect” products appealed to neurotic parents aiming to put toddlers on the fast track to the Ivy League. Georgia’s governor even proposed giving every newborn there a classical CD or cassette.

The evidence for Mozart therapy turned out to be flimsy, perhaps nonexistent, although the original study never claimed anything more than a temporary and limited effect. In recent years, however, neuroscientists have examined the benefits of a concerted effort to study and practice music, as opposed to playing a Mozart CD or a computer-based “brain fitness” game once in a while. Advanced monitoring techniques have enabled scientists to see what happens inside your head when you listen to your mother and actually practice the violin for an hour every afternoon. And they have found that music lessons can produce profound and lasting changes that enhance the general ability to learn. These results should disabuse public officials of the idea that music classes are a mere frill, ripe for discarding in the budget crises that constantly beset public schools.

Studies have shown that assiduous instrument training from an early age can help the brain to process sounds better, making it easier to stay focused when absorbing other subjects, from literature to tensor calculus. The musically adept are better able to concentrate on a biology lesson despite the racket in the classroom or, a few years later, to finish a call with a client when a colleague in the next cubicle starts screaming at an underling. They can attend to several things at once in the mental scratch pad called working memory, an essential skill in this  era of multitasking.

Discerning subtleties in pitch and timing can also help children or adults in learning a new language. The current craze for high school Mandarin classes furnishes an ideal example. The difference between mā (a high, level tone) and mà (a falling tone) represents the difference between "mother" and "scold." Musicians, studies show, are better than nonmusicians at picking out exactly when your mā is mà-ing you to practice. These skills may also help the learning disabled improve speech comprehension.
Sadly, fewer schools are giving students an opportunity to learn an instrument.

In Nature Reviews Neuroscience this summer, Nina Kraus of Northwestern University and Bharath Chandrasekaran of the University of Texas at Austin, who research how music affects the brain, point to a disturbing decline of music education as part of the standard curriculum. A report by the advocacy organization Music for All Foundation found that from 1999 to 2004 the number of students taking music programs in California public schools dropped by 50 percent.

Research on how music changes our brains leads to the conclusion that music education needs to be preserved—and revamped, as needed, when further insights demonstrate, say, how the concentration mustered to play the clarinet or the oboe can help a problem student focus better in math class. The main reason for playing an instrument, of course, will always be the sheer joy of blowing a horn or banging out chords. But we should also be working to incorporate into the curriculum our new knowledge of music's beneficial effect on the developing brain. Sustained involvement with an instrument from an early age is an achievable goal even with tight budgets. Music is not just an "extra."

LimeWire file-sharing service shut down in US

(Original Link -

An injunction issued by the US district court in New York has effectively shut down LimeWire, one of the internet's biggest file-sharing sites.

It ends four years of wrangling between the privately-owned Lime Group and the Recording Industry Association of America (RIAA).

The injunction compels Lime Group to disable its searching, downloading, uploading and file trading features.

The firm plans to launch new services that adhere to copyright laws soon.
Visitors to the LimeWire website are confronted with a legal notice that reads: "This is an official notice that LimeWire is under a court ordered injunction to stop distributing and supporting its file-sharing software."

It adds that "downloading or sharing copyrighted content without authorisation is illegal".
The RIAA told the AP news agency that it was pleased by the judge's decision.

"It will start to unwind the massive piracy machine that LimeWire... used to enrich themselves immensely," said RIAA spokesman Jonathan Lamy.

LimeGroup appeared to acknowledge defeat.

"We are out of the file-sharing business, but you can make it known that other aspects of our business remain ongoing," Lime Group spokeswoman Tiffany Guarnaccia told AP.

The firm is working on developing new software that will adhere to copyright laws.

Thursday, October 21, 2010

You, The DJ. The Future of Music.

(Original Link -

No one knows what the future of the music business will look like, but the near future of listening to music looks a lot like 1960. People will listen, for free, to music that comes out of a stationary box that sits indoors. They’ll listen to music that comes from an object that fits in the hand, and they’ll listen to music in the car. That box was once a radio or a stereo; now it’s a computer. The handheld device that was once a plastic AM radio is now likely to be a smart phone. The car is still a car, though its stereo now plays satellite radio and MP3s. But behind the similarities is a series of subtle shifts in software and portability that may relocate the experience of listening—even if nobody has come close to replacing the concept of the radio d.j., whose job lingers as a template for much software. 

“Of the twenty hours a week that an average American spends listening to music, only three of it is stuff you own. The rest is radio,” Tim Westergren told me. Westergren is the founder of Pandora, one of several firms that have brought the radio model to the Internet. Pandora offers free, streaming music, not so different from the radio stations that many people grew up with, except that the d.j. is you, more or less. The company does not sell music—like normal radio, Internet radio is considered a promotional tool for recordings, even though the fees that it pays to labels are currently higher than those paid by terrestrial stations.

If you go to Pandora, on the Web or on a phone, you begin by picking a song or an artist, which then establishes a “station.” Pandora’s proprietary algorithm, in which a panel of musicians assesses about four hundred variables, like “bravado level in vocals” and “piano style,” for each song, leads you from what you chose to a song that seems to fit with it, musically. You also have the option to teach the algorithm, by giving a song a thumbs up or a thumbs down. The company has captured a very large chunk of the Internet-radio audience—the service now has fifty million users, who listen an average of more than eleven hours a month.
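At its simplest, that matching step is a nearest-neighbour search over per-song attribute vectors, with thumbs-down votes pruning the pool. The sketch below is a toy illustration under that assumption: the song names, the three features, and their values are all invented, and the real service scores roughly four hundred expert-rated attributes per track.

```python
import math

# Invented songs with made-up attribute scores in [0, 1].
# Pandora's real "genome" rates ~400 attributes per track.
SONGS = {
    "Poptones":       {"bass": 0.9, "dissonance": 0.8, "tempo": 0.4},
    "History Lesson": {"bass": 0.7, "dissonance": 0.6, "tempo": 0.5},
    "Waiting Room":   {"bass": 0.8, "dissonance": 0.5, "tempo": 0.7},
    "Blue Danube":    {"bass": 0.1, "dissonance": 0.1, "tempo": 0.5},
}

def distance(a, b):
    """Euclidean distance between two attribute vectors."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def next_song(seed, played, vetoed):
    """Pick the unplayed, un-vetoed song closest to the seed song."""
    candidates = {name: feats for name, feats in SONGS.items()
                  if name not in played and name not in vetoed}
    return min(candidates, key=lambda name: distance(SONGS[seed], candidates[name]))

# Start a "station" from Poptones; a thumbs-down adds a song to `vetoed`.
print(next_song("Poptones", played={"Poptones"}, vetoed=set()))  # → History Lesson
```

A thumbs-up could analogously be modelled by up-weighting the features that liked songs share. Even this stripped-down version shows why a Poptones station keeps returning aggressive guitar bands rather than reggae: the nearest neighbours in attribute space always win.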

The Pandora experience isn’t much like being guided by a d.j. on a radio station—at least, not yet. (That delicious unpredictability is now approximated by the thousands of mixtapes and podcasts that are released by individuals on the Web, free of charge, every day.) I started my station with Public Image Ltd’s “Poptones,” a 1979 song that is loaded with bass, dissonant guitar, and the sinus bray of John Lydon, once known as Johnny Rotten. The band’s sound is deeply indebted to reggae—the original bassist was named Jah Wobble—but I couldn’t make a reggae song appear on my Poptones station. I did get lots of bands I like: the Minutemen, the Birthday Party, and Fugazi, who all make aggressive music that, like Public Image’s, is heavy on articulate rhythm and acidic guitar.

After skipping six songs, I received this message on my iPhone app: “Sorry, our music licenses force us to limit the number of songs you may skip.” Pandora is acting like a radio station, not like a replacement for a potential sale—you can’t keep skipping until it plays what you want.

On a recent car trip I took through Florida, Pandora was perfect: I plugged in my phone, hit a couple of buttons, and was rewarded with ninety minutes of instrumental hip-hop.

The most popular alternative to the broadcast model is "on demand," which usually charges a subscription fee in return for the ability to choose exactly which song you'd like to hear. In Europe, the most prominent such service is Spotify, a Swedish company that has grown rapidly in the past year. In America, where Spotify has yet to début, one of the biggest on-demand players is MOG, a new service that offers a wide array of listening options, the least expensive of which costs five dollars a month. MOG offers the option of streaming 320-kilobit-per-second files, the highest available digital quality, though customers have been reluctant to pay extra for greater audio fidelity.

With MOG, you can play entire albums, create playlists, or let the service perform the same kind of algorithmic radio function that Pandora provides. (While listening to a song, you pull a slider all the way to the right; the software suggests related artists and tracks.) You can also share playlists with other users. I looked up the German rock band Can, and saw, on the right side of my Web browser, a small box called “Popular Playlists Featuring Can.” I clicked on one playlist called “Irritation Mix,” created by a user named Scotfree, whose avatar picture looks like Iron Man. The Can track included was the spacey instrumental “Spray,” from the 1973 album “Future Days.” The rest of the playlist leaned on seventies rock—the Faces, Mott the Hoople, Iggy & the Stooges—but used recent tracks to keep things pleasantly unpredictable: Lady Sovereign’s bubbly dance track “Blah Blah” and a track called “Johnny Depp,” by the sixties revivalists Chocolat, from Montreal.

I didn’t care for a few of the songs, but the experience was much more like grappling with a d.j. than like watching a piece of software operate. I learned about two bands I didn’t know, was reminded of beloved tracks I had forgotten, and didn’t listen to anything I already had in mind. Scotfree’s playlist didn’t last as long as a good d.j.’s shift; the burden is on the user to find other appealing users and more lists, and to build the experience. In some ways, it’s an improvement on the radio model: the number of potentially appealing d.j.s here dwarfs what you might have once found on radio.

The broadcast and on-demand models are governed by different rules, but they share one important feature: neither depends on downloading files or finding storage space on a personal computer. Lurking behind these models are two enormous companies that will likely change the landscape of online audio in a matter of months: Google and Apple. Google will soon offer a streaming music service for its Android phone that, like all of these services, uses the increasingly vital concept of the cloud—your music is all on a server, which you can access from any computer or smart phone, with little trouble and no wires. Apple, whose iTunes store is the biggest music retailer in America, bought the online streaming service Lala last year and then promptly shut it down. This suggests that Apple may soon offer a Web-based streaming system of its own, one that will leave behind the model of buying discrete tracks. In music's new model, fees are charged not necessarily so that you can physically possess a file but so that you can have that song whenever you want it.

An album “collection” is no longer relevant for many listeners. Limited only by the number of songs offered by any service—MOG offers nearly eight million—they can create as many playlists as they like, and access them from almost any device. Whoever comes up with the most powerful and elegant version of the streaming model will have a very big portal. If iTunes becomes a dominant radio force, it could control an overwhelming portion of the music business. Google owns YouTube, which already serves as a sort of ad-hoc radio station for many young people. If Google’s streaming service works well with its Android applications and creates a music-bundling system, it, too, could take over a large share of the market.

While using these services, I kept thinking about an early-eighties drum machine called the Roland TR-808, which has seduced generations of musicians with its heavy kick-drum sound and the oddly human swing of its clock. Whoever programmed this box had more impact on dance music than the hundreds of better-known musicians who used the device. Similarly, the anonymous programmers who write the algorithms that control the series of songs in these streaming services may end up having a huge effect on the way that people think of musical narrative—what follows what, and who sounds best with whom. Sometimes we will be the d.j.s, and sometimes the machines will be, and we may be surprised by which we prefer.

Why Some Brand Names Are Music to Our Ears

(Original Link -

If you're having a bad day, you may want to stay away from listening to commercials for Lululemon or Coca-Cola. Or from any retailer or product whose name bears a similarly repetitive phonetic sound.

University of Alberta marketing professor Jennifer Argo recently published a study in the Journal of Marketing indicating that hearing the names of brands containing these types of repetitive sounds can influence our mood and thus our decision-making ability when it comes to choosing whether or not we frequent that establishment or buy those items.

Argo, along with her colleagues, conducted a number of studies testing brand names, including identical samples of ice cream that were given two different names: one for which the name contained a repetitive sound and one where there was none. The researchers introduced the identical products to test subjects one at a time, citing the name for each sample aloud during the product description. Despite the same ice cream being used, the majority of respondents chose the brand with the repetitive-sounding name.

In other studies, which gave people choices over everything from types of desserts to cell phone options, the researchers found similar results in the respondents' selections. In these cases, they chose based on an affective (emotional) response. Argo says that an audible repetition needs to be present -- findings that are key for marketers, advertisers and store managers.

"Based on the results, I would say that TV and radio advertisements are critical to this strategy," Argo said. "But the employees are also critical. Before customers order, a server can remind them of the name of the restaurant they're at. Salespeople can talk with customers and mention the brand name."
In all six trials Argo's group conducted, each invented brand name underwent only minute variations, such as "zanozan" versus "zanovum." Argo noted that, in every case, even a variation as small as a single letter had a huge impact on a person's choice and how they responded.

Alas, too much sound repetition can also be a bad thing, as can developing a name that does not follow a natural linguistic sound, for example, "ranthfanth." In these cases, she says, respondents displayed negative affect when these conditions were present.

"You can't deviate too much from our language, otherwise it will backfire on you," said Argo.
Argo, whose studies often deal with subjects related to consumer awareness, notes that there is one loophole to the brand/sound strategy: the device is less effective if the person is already positively affected. Argo's advice for someone practising retail therapy would be to "plug your ears; don't let anyone talk to you." Overall, Argo notes that people need to be aware of the influence that a brand name may have on mood and choice and that marketing strategists have gone to great lengths in choosing the moniker for their product.

"The companies have spent millions of dollars choosing their brands and their brand names and they've been picked explicitly to have an influence on consumers," she said. "We show that it can get you at the affective level."

Scientists Closer to Grasping How the Brain's 'Hearing Center' Spurs Responses to Sound

(Original Link -

Just as we visually map a room by spatially identifying the objects in it, we map our aural world based on the frequencies of sounds. The neurons within the brain's "hearing center" -- the auditory cortex -- are organized into modules that each respond to sounds within a specific frequency band. But how responses actually emanate from this complex network of neurons is still a mystery.

A team of scientists led by Anthony Zador, M.D., Ph.D., Professor and Chair of the Neuroscience program at Cold Spring Harbor Laboratory (CSHL) has come a step closer to unraveling this puzzle. The scientists probed how the functional connectivity among neurons within the auditory cortex gives rise to a "map" of acoustic space.

"What we learned from this approach has put us in a position to investigate and understand how sound responsiveness arises from the underlying circuitry of the auditory cortex," says Zador. His team's findings appear online, ahead of print, on October 17th in Nature Neuroscience.

Neuronal organization within the auditory cortex fundamentally differs from the organization within brain regions that process sensory inputs such as sight and sensation. For instance, the relative spatial arrangement of sight receptors in the retina (the eyes' light-sensitive inner surface) is directly represented as a two-dimensional "retinotopic" map in the brain's visual cortex.

In the auditory system, however, the organization of sound receptors in the cochlea -- the snail-like structure in the ear -- is one-dimensional. Cochlear receptors near the outer edge recognize low-frequency sounds, whereas those near the inside of the cochlea are tuned to higher frequencies. This low-to-high distribution, called 'tonotopy,' is preserved along one dimension in the auditory cortex, with neurons tuned to high and low frequencies arranged in a head-to-tail gradient.
"Because sound is intrinsically a one-dimensional signal, unlike signals for other senses such as sight and sensation which are intrinsically two-dimensional, the map of sound in the auditory cortex is also intrinsically one-dimensional," explains Zador. "This means that there is a functional difference in the cortical map between the low-to-high direction and the direction perpendicular to it. However, no one has been able to understand how that difference arises from the underlying neuronal circuitry."

To address this question, Zador and postdoctoral fellow Hysell Oviedo compared neuronal activity in mouse brain slices that were cut to preserve the connectivity along the tonotopic axis vs. activity in slices that were cut perpendicular to it.

To precisely stimulate a single neuron within a slice and record from it, Oviedo and Zador, working in collaboration with former CSHL scientists Karel Svoboda and Ingrid Bureau, used a powerful tool called laser-scanning photostimulation. This method allows the construction of a detailed, high-resolution picture that reveals the position, strength and the number of inputs converging on a single neuron within a slice.

"If you did this experiment in the visual cortex, you would see that the connectivity is the same regardless of which way you cut the slice," explains Oviedo. "But in our experiments in the auditory cortex slices, we found that there was a qualitative difference in the connectivity between slices cut along the tonotopic axis vs. those cut perpendicular to it."

There was an even more striking divergence from the visual cortex -- and presumably the other cortical regions. As demonstrated by a Nobel Prize-winning discovery in 1962, in the visual cortex, the neurons that share the same input source (or respond to the same signal) are organized into columns. As Oviedo puts it, "all neurons within a column in the visual cortex are tuned to the same position in space and are more likely to communicate with other neurons from within the same column."
Analogously, in the auditory cortex, neurons within a column are expected to be tuned to the same frequency. So the scientists were especially surprised to find that for a given neuron in this region, the dominant input signal didn't come from within its column but from outside it.

"It comes from neurons that we think are tuned to higher frequencies," elaborates Zador. "This is the first example of the neuronal organizing principle not following the columnar pattern, but rather an out-of-column pattern." Discovering this unexpected, out-of-column source of information for a neuron in the auditory cortex adds a new twist to their research, which is focused on understanding auditory function in terms of the underlying circuitry and how this is altered in disorders such as autism.

"With this study, we've moved beyond having only a conceptual notion of the functional difference between the two axes by actually finding correlates for this difference at the level of the neuronal microcircuits in this region," he explains.

This work was supported by grants from the US National Institutes of Health, the Patterson Foundation, the Swartz Foundation and Autism Speaks.

World's Smallest on-Chip Low-Pass Filter Developed

A research team from Nanyang Technological University (NTU) in Singapore has successfully designed the world's smallest on-chip low-pass filter which is 1,000 times smaller than existing off-chip filters.

A low-pass filter is a circuit that allows low-frequency signals to pass through while attenuating unwanted high-frequency signals. Compared to existing off-chip filters, which are discrete and bulky components, on-chip filters occupy a small area on integrated circuit chips, which can be found in portable devices such as mobile phones, laptops, vehicle-mounted radars, as well as speed guns used in traffic monitoring.
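The filtering idea itself is easy to sketch in software. Below is a minimal first-order low-pass filter in Python; the function name and parameters are illustrative only and have nothing to do with NTU's design, which is an analog on-chip circuit. Samples well below the cutoff frequency pass through nearly unchanged, while higher frequencies are progressively smoothed away.

```python
import math

def low_pass(samples, cutoff_hz, sample_rate_hz):
    """First-order IIR low-pass filter (exponential smoothing).
    Frequencies well below cutoff_hz pass almost unchanged;
    higher frequencies are progressively attenuated."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)   # analog RC time constant
    dt = 1.0 / sample_rate_hz                # time per sample
    alpha = dt / (rc + dt)                   # smoothing factor, 0 < alpha < 1
    out = [samples[0]]
    for x in samples[1:]:
        # each output moves a fraction alpha toward the new input,
        # so rapid (high-frequency) swings are damped
        out.append(out[-1] + alpha * (x - out[-1]))
    return out
```

With a 1 kHz cutoff at a 44.1 kHz sample rate, a 50 Hz tone emerges at nearly full amplitude, while an 8 kHz tone comes out at roughly an eighth of its input level.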

The successful completion of this research project was announced at the official opening of VIRTUS, the new Integrated Circuit Design Centre of Excellence, which was launched by NTU and the Economic Development Board just 10 months ago.

The man behind this invention is Professor Yeo Kiat Seng, Head of Circuits and Systems at NTU's School of Electrical and Electronic Engineering. The breakthrough in design for this filter is set to revolutionise wireless communication.

"This new low-pass filter can lead to a significant improvement in signal quality as it removes nearly all unwanted interferences and noise in the environment," said Professor Yeo.

"This results in clearer reception and enhanced clarity for mobile phone users and users of wireless applications such as Bluetooth and other mobile devices. For example, if you are speaking to your friend on your mobile phone in a noisy food centre or in a train, you would still be able to hear him clearly."

"The filter also consumes less power and can be easily incorporated into existing integrated circuit chips at almost no cost. This means that in addition to better signal quality, consumers enjoy lower power consumption without any additional cost," he added.

The new filter will pave the way for further research and development of high-performance integrated circuits and wireless communication products. Integrated circuit chips incorporating the filter can result in new applications for transmitting uncompressed digital audio/video data, and high-speed wireless local area networks for instantaneous wireless file transfer.

‘Beethoven and Your Brain’: a synaptic symphony

Think of a new piece of music you heard recently. Chances are you knew right away if you liked it, hated it, or didn’t care.

Now try to describe what caused that instant reaction. If you’re like most people, it isn’t easy translating a visceral impulse into words.

That made conductor Edwin Outwater think. “Everyone always talks or writes about music in terms of structure; no one ever describes it in terms of effect,” he says.

To help him make the point, the dynamic and inventive music director of the Kitchener-Waterloo Symphony has teamed up with McGill University neuroscientist Daniel Levitin — author of This is Your Brain on Music and The World in Six Songs — to present a very different kind of concert at Koerner Hall next Wednesday night.

In “Beethoven and Your Brain,” Outwater, his orchestra and Levitin are going to take the audience through the infamous Symphony No. 5, focusing on what has kept this music so fresh and compelling over the two centuries since its premiere.

Rather than an old-fashioned show-and-tell, this concert is about an involved audience. Outwater says as many patrons as possible will be given electronic “clickers” to measure reactions throughout the symphony.

“We’re going to show the results of each poll on a big screen,” Outwater explains. “It’ll be a way for audience members to feel a sense of community with each other.”

Much of the focus will be on showing how “predictive” our brains are — that we expect the music we hear to do certain things. If those expectations are met, we are likely to enjoy the music; if the music keeps crashing into our expectations, we get upset.

Outwater, palpably energized by this project, talks about how Beethoven plays with expectations in his symphony, beginning with the strange pause at the end of the famous “ba-ba-ba-bam” opening.
“And how does that relate to the world around us?” Outwater asks. “Could it be like suddenly hearing a car alarm go off?”

The conductor and Levitin hit it off after an initial meeting two years ago, and co-wrote a big chunk of the evening’s script by trading emails during their busy schedules.

Toronto gets the premiere performance. They repeat their experiment on Oct. 28 and 29 at the Conrad Centre in Kitchener, and hope that there will be interest farther afield in the future.
“I’m really nervous about it,” Outwater admits, smiling. Conductors are expected to make music, not talk about it.

But the native Californian isn’t going in cold.

“I sang in an a cappella chorus in college,” Outwater recalls. The group linked its musical numbers with jokey introductions. “We learned timing really fast.”

The conductor admits he didn't come to classical music until he was 14. "I had an epiphany," he adds, wishing that more people would let go of their inhibitions and give the genre a try.

“So many people are afraid of asking questions,” he says. “They’re afraid that they might not like it.”

Outwater says that finding out what he does for a living causes many people to wonder what that means. “Well, the music’s all there on paper, right?” is a common reaction.

The maestro has his answer ready: “How many ways are there to say, ‘To be, or not to be?’” he asks, launching into several very different versions. He’ll then tell that person that it’s the conductor’s job to choose which version is going to get heard from the stage.

“As soon as I’ve done that, people understand what interpretation means, right away,” Outwater says with a smile.

He hopes “Beethoven and Your Brain” will offer up more of those “aha” moments — along with some fine musicmaking.

Wednesday, October 20, 2010

Great Audio Tutorials

I came across a decent website for audio tutorials! Check out

Covers some of the basics of sound!



Saturday, October 16, 2010

Wednesday, October 13, 2010

Copyright Law is Killing Audio Preservation

(Original Link -

In 2000 the Library of Congress was tasked with preserving the audio portion of our cultural heritage by the National Recording Preservation Act of 2000 (P.L. 106-474). A study was initiated to determine the best way to preserve audio, identify problems and examine possible solutions. That study was released a few weeks ago, and you can find the 181-page PDF here. It identified several difficulties in preserving audio recordings, including the many different digital formats that have come and gone in the recent past, leaving some audio in formats that are difficult to read. In fact, it is actually harder to access some recent digital recordings than to access recordings that are around a hundred years old.

But the greatest threat to preserving our audio heritage isn't technological, it's legal. According to the study there is no legal way to adequately archive audio. Copyright law is written in such a way that it is next to impossible for libraries to archive - and grant access to - many, if not most, audio files. In fact, the study says that,
Privileges extended by copyright law to libraries and archives to copy sound recordings are restrictive and anachronistic in the face of current technologies, and create only the narrowest of circumstances in which making copies is fully permissible.
It makes me wonder: Is it actually legal for libraries to loan out books on tape or CD? They suffer from many of the same copyright issues as audio recordings. I find it refreshing that a government institution is beginning to realize that, while there is a legitimate purpose for copyright, when it gets too restrictive it becomes more harmful than helpful. One of the greatest results of any creative work is actually the effect it has on those who experience it - and on works they produce.

It's interesting, although not surprising when you think about it, the parallels between intellectual property rights and privacy rights. Both are important for society to function, and both are a balancing act. In the case of copyright, many of the changes in the past 50 or so years have been at the urging of large corporations such as Disney, Sony, and RCA to protect their financial interests. Now we're beginning to see that the tight control they sought is actually detrimental to society as a whole. I wonder how long it will take to show the same is true of personal freedom?

Tuesday, October 12, 2010

Japanese infants hear sounds based on native language by 14 months

Japanese infants, by the time they are 14 months old, are believed to have tuned their perception to how sounds are sequenced in their native language even before learning its words and grammar, the Riken Brain Science Institute said Tuesday in a report on its joint studies with a French laboratory.
The joint study, which involved 24 Japanese and 24 French infants at 8 months of age and as many at 14 months, found that only the 14-month-old Japanese infants were unable to distinguish words with sound sequences foreign to the Japanese ear, Riken said.

The question of how infants learn to perceive and segment speech is central to the understanding of the origins and development of language, according to Riken.

Studies have shown that young infants can already distinguish patterns common to their language from those that are not, but it is not clear how the capacity relates to the highly tuned perception of speech known to occur in adults.

One way to explore the connection is through the phenomenon of ‘‘phonological illusions,’’ in which adults hear sound sequences from a foreign language as if they were ‘‘repaired’’ to fit their native tongue, Riken said.
To determine at what age such illusions first develop, the joint study tested the ability of Japanese and French infants at 8 and 14 months of age to distinguish series of utterance pairs such as ‘‘abna’’ and ‘‘abuna,’’ only the latter of which is pronounceable in Japanese.

Earlier research by the team had shown that adult Japanese perceive such utterances as the same, inserting an illusory vowel ‘‘u’’ between the cluster of consonants, Riken said. The current experiments show that while at 8 months of age, the phenomenon does not yet occur in either group, by 14 months a clear difference emerges—Japanese infants, unlike French infants, no longer perceive the distinction between these utterances unless they are presented to them in isolation.

In the Japanese language, all words are composed of either vowels only or combinations of consonants and vowels. Words that have a succession of consonants exist in the French language, but not in Japanese.

‘‘It has been thought that this repairing ability is acquired by adults after they learn many words. It is amazing that infants already hear sounds in a similar way to how adults do,’’ said Reiko Mazuka, who led Riken’s team in the study.

How Loud is Too Loud for your iPod? - Video

(Original Link (with VIDEO)-

Loud, sustained sound can damage tiny hairs in the cochlea, and yet 80 percent of people listen to personal music devices at dangerous levels above background noise, a study by acousticians shows. Certain models of earphones are safer for the ear, the study also concluded.

Can you hear me now? Not if you've pumped up the volume on your MP3 player. In noisy places, everyone is turning up the tunes, and they could be drowning out their own hearing. A new study tells how loud is too loud.

Audiologists Brian Fligor, Sc.D., and Terri Ives have identified safe volume levels for you to use in noisy places. Dr. Fligor, an audiologist and Director of Diagnostic Audiology at Children's Hospital Boston says, "Your typical listener is not at risk if they are listening in a quiet situation, but if they are in a noisier situation, such as commuting, they very easily are going to be at risk." Their study concludes that 80 percent of people listen at dangerous levels when background noise comes into play.
As sound travels through the ear canal, it ends up in the inner ear, or cochlea. When it's too loud, tiny hair cells, which send sound information to the brain, are damaged or destroyed. "They're not meant to be hit with noise for long periods of time," Dr. Fligor says. Over time, this can lead to permanent damage of the hair cells and your hearing.

The study concludes the average person listens to music at the same noise level as we hear a gas lawnmower. So what can you do? Dr. Fligor says, "Something that people can do is set their music to a comfortable level when they are in a quiet situation." Dr. Fligor recommends leaving it at that safe level, 75 decibels or below, and investing in earphones that block out background noise.

During the study, only 20 percent of participants who used "in-the-ear" earphones, designed to block out background noise, exceeded sound levels considered risky, compared to 80 percent who listened dangerously with other types of earphones. It is proof that your choice of earphone, combined with smart volume-control settings, can help save your hearing. Turning down the music now will ensure you will still be able to hear it in the future.

BACKGROUND: As portable digital music players -- iPods and other MP3 players -- become more and more popular, people are becoming concerned about whether they are dangerous to our hearing. Now hearing researchers have measured specific sound levels in a variety of players using several different types of earphones. They used this information to develop the first detailed guidelines with safe volume levels for listening to the iPod with earphones. They also evaluated the output levels of several other popular players to determine any risks to hearing from using these devices.

ABOUT HEARING LOSS: Loud sounds stress and could damage the delicate hair cells in the inner ear that convert mechanical vibrations in the air (sound) into the electrical signals that the brain interprets as sound. If exposed to loud noises for a long time, the hair cells can become permanently damaged and no longer work, producing hearing loss. Noise-induced hearing loss can be caused by two types of noise: sudden bursts, such as firearms or fireworks; or continuous exposure to loud noise, such as motorized recreational vehicles, loud sporting events, power tools, farming equipment, or amplified music. For a person to lose their hearing because of continuous exposure, it would depend on how loud the sound was and how often and for how long they heard it. It takes repeated exposures over many years to cause a noise-induced hearing loss in both children and adults.

WHAT THEY FOUND: The researchers conducted a study observing the listening habits of 100 graduate students listening to iPods through earphones. They found that all the players had very similar sound output levels. Also, in-ear earphones, which broadcast sound directly into the ears, are no more dangerous than headphones placed over the ears. However, if the user listens to music in noisy surroundings, they are much more likely to raise the volume to risky levels, suggesting that people should seek quieter listening areas when possible, and use earphones that block out background noise.

RECOMMENDED LEVELS: The more often and the louder you play your player, the more likely you'll experience some hearing loss. To come up with recommended listening times and sound levels, the researchers compared the players' volume levels to the minimum sound level at which hearing damage becomes a risk: 85 dBA. Typically, a person can tolerate about two hours of 91 dBA per day before risking hearing loss. The researchers recommend listening to iPods for -- hours a day with earphones if the volume is at 80% of maximum levels. Listening at full volume is not recommended for more than 5 minutes per day using the earphones that come with the player.
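The figures quoted here are consistent with the standard 3-dB "exchange rate" used in occupational noise guidelines: starting from a reference of 85 dBA for 8 hours, the permissible daily exposure halves for every 3-dB increase, which is why roughly two hours at 91 dBA delivers the same dose. A small sketch of that arithmetic (the function name and defaults are mine, not from the study):

```python
def permissible_hours(level_dba, reference_dba=85.0,
                      reference_hours=8.0, exchange_db=3.0):
    """3-dB exchange rule: permissible daily exposure halves for
    every exchange_db increase above reference_dba."""
    return reference_hours / 2.0 ** ((level_dba - reference_dba) / exchange_db)

# 91 dBA is 6 dB (two halvings) above 85 dBA: 8 h -> 4 h -> 2 h
```

So `permissible_hours(91)` gives the 2 hours cited above, and each further 3 dB cuts the safe time in half again.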

Hooked on Headphones? Personal Listening Devices Can Harm Hearing, Study Finds

ScienceDaily (Sep. 1, 2010) — Personal listening devices like iPods have become increasingly popular among young -- and not-so-young -- people in recent years. But music played through headphones too loud or too long might pose a significant risk to hearing, according to a 24-year study of adolescent girls.

The study, which appears online in the Journal of Adolescent Health, involved 8,710 girls of lower socioeconomic status, whose average age was about 16. Their hearing was tested when they entered a residential facility in the U.S. Northeast.

"I had the rare opportunity, as an audiologist, to see how this population changed over the years," said Abbey Berg, Ph.D., lead study author and a professor in the Department of Biology & Health Sciences at Pace University in New York.

In this period, high-frequency hearing loss -- a common casualty of excessive noise exposure -- nearly doubled, from 10.1 percent in 1985 to 19.2 percent, she found.

Between 2001, when testers first asked about it, and 2008, personal music player use rose fourfold, from 18.3 percent to 76.4 percent. High-frequency hearing loss increased from 12.4 percent to 19.2 percent during these years, while the proportion of girls reporting tinnitus -- ringing, buzzing or hissing in the ears -- nearly tripled, from 4.6 percent to 12.5 percent.

Overall, girls using the devices were 80 percent more likely to have impaired hearing than those who did not; of the teens reporting tinnitus, all but one (99.7 percent) were users.

However, "just because there's an association, it doesn't mean cause and effect," Berg said. For the girls who took part in the study, other aspects of their lives -- poverty, poor air quality, substance abuse, risk-taking behavior -- might add to the effects of noise exposure.

"This paper offers compelling evidence that the inappropriate use of headphones is indeed affecting some people's hearing, and the number of 'some people' is not small," said Brian Fligor, director of diagnostic audiology at Children's Hospital Boston.

The level of impairment detected in this study might have been relatively subtle "but the point is that it is completely avoidable," said Fligor, who has no affiliation with the study.

"The ear is going to be damaged throughout your lifetime; what we're seeing here resembles early onset age-related hearing loss -- you might think of it as prematurely aging the ear," he said.
"I don't demonize headphones," said Fligor, who encourages moderation, not prohibition. At a reasonable volume -- conversational or slightly louder -- there's little risk, he said: "It's when you start overworking the ear that you get problems."

Berg said her findings suggest the need for more effective educational efforts to reduce unsafe listening behavior, particularly among disadvantaged youth. "You have to target them at a much younger age, when they are liable to be more receptive," she said.

New Norwegian Earplug Solution to a Deafening Problem

(Original Link -

ScienceDaily  — Some 600 cases of noise-induced hearing impairment are reported by the Norwegian petroleum industry every year. A new, intelligent earplug is now set to alleviate the problem.

Norway's largest company, Statoil ASA, is taking the problems associated with noise exposure seriously. Over the course of four years the international energy company has led efforts to further develop an existing combined hearing protection and communication product for use on offshore platforms.

World's most advanced hearing protection device

A microphone on the outside of the new "offshore" version of the QUIETPRO earplug picks up ambient sounds. The sound is digitally processed, and unwanted loud noises are filtered out before the sound is sent to a speaker inside the earplug. Users can adjust the level of ambient sound, as desired.
A microphone on the inside of the earplug picks up speech signals through the skull. This means that users do not have to have a microphone in front of their mouth, as is the case with the ear protection devices currently used on most offshore platforms. Another advantage is that the microphone inside the ear does not pick up background noise in the way that a microphone in front of the mouth does.
The QUIETPRO hearing protection and communication device was originally developed for military use by the Trondheim-based company Nacre AS, which has its origins in Scandinavia's largest independent research organisation, SINTEF. The company's customers include the United States Army, which uses QUIETPRO devices in armoured vehicles, among other applications.

More energy and increased safety

"The new hearing protection device enables employees to preserve a lot of energy," explains Asle Melvær, noise specialist at Statoil, who initiated and is responsible for the R&D project Offshore Safety for Hearing and Verbal Communication (SoHot). The project receives funding under the Research Council of Norway's Large-scale Programme for Optimal Management of Petroleum Resources (PETROMAKS).

"Users of the new device do not have to strain to hear what is being said over the radio, and the noise reduction system in the earplug means that the level of sound is adapted to the surrounding environment. On board an oil platform, understanding messages transmitted by radio can be a matter of life and death," states Mr Melvær.
The earplug also alerts the user if it is not inserted into the ear correctly, providing additional safety.

New generation soon to be tested

The hearing protection device was tested in 2009 on the helicopter landing pad at the Oseberg Field Centre outside Bergen. Starting in December 2010 the next generation of devices will be tested both there and at the Snorre oilfield a little further north.

"One important feature of the new version is a built-in noise dose meter that emits a warning signal before any damage to hearing has occurred -- which is quite unique," explains an enthusiastic Asle Melvær. "This function will make it possible for us to withdraw personnel from hazardous noise areas before they have been exposed to noise levels that can damage their hearing."
The new earplug is explosion-proof and can be used anywhere on the platform.

Important initiative

"It is wonderful to be able to play a role in the development of new technology that will undoubtedly reduce the number of cases of hearing damage among employees in the petroleum industry," says Mr Melvær. "Nevertheless, it is important to emphasise that the development of better hearing protection must not become an excuse for failing to implement measures to reduce noise levels. This should still be given first priority," he states.

Research Council supports HSE projects

The PETROMAKS programme is responsible for the Research Council's health, safety and environment-related (HSE) activities within the petroleum sector. "Efforts to develop a new version of the QUIETPRO earplug provide a good example of the type of creative projects that exist in this field that make use of technology and system solutions across sectors," explains Tor-Petter Johnsen, Adviser for the PETROMAKS programme.

"Close cooperation between advanced Norwegian technology groups and highly skilled customers in the petroleum industry has not only led to the development of a new product but has also provided better insight into the serious health risks to which employees in the industry are exposed," Mr Johnsen concludes.

Scientist Compares Classical Singing to Traditional Indian Singing to Find Speech Disorder Treatment

ScienceDaily (Oct. 5, 2010) — Hindustani singing, a North Indian traditional style of singing, and classical singing, such as the music of Puccini, Mozart and Wagner, vary greatly in technique and sound. Now, speech-language pathology researchers at the University of Missouri are comparing the two styles in hopes of finding a treatment for laryngeal tremors, a vocal disorder associated with many neurological disorders that can result in severe communication difficulties.
Sound is developed in the larynx, an organ located in the neck. A laryngeal or vocal tremor occurs when the larynx spasms during speech, creating a breathy voice featuring a constantly shifting pitch. People with Parkinson's disease and other similar disorders often display vocal tremors. Currently, speech-language pathologists are only able to help patients manage tremors. By understanding the physiology behind voluntary and involuntary pitch fluctuation, an MU researcher hopes to find a treatment.

"Hindustani and classical singing styles are very different," said Nandhu Radhakrishnan, professor of communication science and disorders in the School of Health Professions. "In Hindustani singing, performers use 'Taan' to modulate pitch voluntarily, while classical singers use vibrato to vary pitch involuntarily. With this knowledge, we may be able to develop a specific therapy to cure laryngeal tremors."

Radhakrishnan is the first researcher to study the physiology of Hindustani singing. He worked with Ronald Scherer of Bowling Green State University in Ohio, and Santanu Bandyopadhyay, a vocal teacher in West Bengal, India. In his study, he discovered several differences between Hindustani and classical singing. Primarily, Hindustani singing features a voluntary, rapid dip in pitch, which Radhakrishnan refers to as a "Taan gesture." In contrast, classical singers use a vocal modulation like vibrato to make a smooth transition between pitches.

Classical singers use what is known as a singer's formant to enhance a specific range of frequency that will be pleasing to the ear by lowering their larynx and widening the vocal tract. However, Hindustani singers do not use a singer's formant. Without this, Hindustani singers perform at a much lower volume than classical singers, and their singing voice sounds very similar to their speaking voice. Radhakrishnan also observed that Hindustani singing requires precise pronunciation of lyrics, whereas notes guide pronunciation in classical music.

To uncover the secrets of Hindustani singing, Radhakrishnan recorded a traditional Indian singing teacher repeatedly performing a single Taan gesture. Although singers usually perform several of these pitch fluctuations in succession, Radhakrishnan recorded just one gesture to isolate the technique for scientific study. Radhakrishnan used equipment that measures variables like lung pressure, the duration that vocal folds are open and closed, and the rate at which air is flowing out of the larynx.

The study was published recently in the Journal of Voice. In the coming months, Radhakrishnan will publish another study on Taan gestures that focuses on performance aspects of the technique.

Measurement Scientists Set a New Standard in 3-D Ears

(Original Link -

ScienceDaily (Oct. 12, 2010) — Scientists at the UK's National Physical Laboratory (NPL) have developed a means of representing a 3D model ear, to help redefine the standard for a pinna simulator (the pinna is the outer part of the ear) -- used to measure sound in the way we perceive it.

The nature of human hearing is heavily dependent on the shape of the head and torso, and their interaction with sound reaching the ears allows for the perception of location within a 3D sound field.
Head and Torso Simulators (HATS) are designed to model this behaviour, enabling measurements and recordings to be made taking account of the Head Related Transfer Function (HRTF) -- the difference between a sound in free air and the sound as it arrives at the eardrum.

HATS are mannequins with built-in calibrated ear simulators (and sometimes mouth simulators) that provide realistic reproduction of the acoustic properties of an average adult human head and torso. They are ideal for performing in-situ electro-acoustic tests on telephone handsets (including mobile and cordless), headsets, audio conference devices, microphones, headphones, hearing aids and hearing protectors.

Critically, the shape of the pinna has a large effect on this behaviour, and as a result it is defined for HATS by its own standard (IEC TR 60959:1990) to provide consistency across measurements. However, this standard defines the shape of the pinna through a series of 2D cross-sectional profiles. This form of specification has on occasion proven to be an inadequate guide for manufacturing processes.

As part of a revision of this standard, the Acoustics Team at NPL teamed up with the National Freeform Centre in a novel move to redefine the standard through an on-line 3D CAD specification. A model ear was measured using a coordinate-measuring machine with laser scanner to produce a 3D scan of the ear, which can then be used to provide manufacturers with a more practical specification for reproduction and a standard that is easily comparable with similar non-contact freeform measurement techniques.

Ian Butterworth from NPL, said: "Having a 2D pinna in an artificial ear has some inherent frequency limitations. For example, when sound spreads through structures like narrow tubes, annular slits or over sharp corners, noticeable thermal and viscous effects take place causing further departure from the lumped parameter model. The new standard for the 3D model has been developed to give proper consideration to these effects. We worked with the National Freeform Centre, experts in measuring items that are unconventional in shape or design, to develop the new standard -- which will now help manufacturers develop better products."

Making Microscopic Music

(Original Link -

ScienceDaily (Sep. 29, 2010) — Strings a fraction of the thickness of a human hair, with microscopic weights to pluck them: Researchers and students from the MESA+ Institute for Nanotechnology of the University of Twente in The Netherlands have succeeded in constructing the first musical instrument with dimensions measured in mere micrometres -- a 'micronium' -- that produces audible tones. A composition has been specially written for the instrument.

Earlier musical instruments with these minimal dimensions only produced tones that are inaudible to humans. But thanks to ingenious construction techniques, students from the University of Twente have succeeded in producing scales that are audible when amplified. To do so, they made use of the possibilities offered by micromechanics: the construction of moving structures with dimensions measured in micrometres (a micrometre is a thousandth of a millimetre). These minuscule devices can be built thanks to the ultra-clean conditions in a 'clean room', and the advanced etching techniques that are possible there.

"You can see comparable technology used in the Wii games computer for detecting movement, or in sensors for airbags," says PhD student Johan Engelen, who devised and led the student project.


The tiny musical instrument is made up of springs that are only a tenth of the thickness of a human hair, and vary in length from a half to a whole millimetre. A mass of a few dozen micrograms is hung from these springs. The mass is set in motion by so-called 'comb drives': miniature combs that fit together precisely and shift in relation to each other, so 'plucking' the springs and creating sounds. The mass vibrates with a maximum deflection of just a few micrometres. This minimal movement can be accurately measured, and produces a tone. Each tone has its own mass-spring system, and six tones fit on a microchip. By combining a number of chips, a wider range of tones can be achieved.

"The tuning process turned out to be the greatest challenge," says Engelen. "We can learn a lot from this project for the construction of other moving structures. Above all, this is a great project for introducing students to micromechanics and clean room techniques."
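Each tone's mass-spring system resonates at a frequency set by its stiffness and mass, f = sqrt(k/m) / 2π, which is why tuning was the hard part. Here is a minimal sketch of that relationship; the stiffness and mass values below are illustrative guesses consistent with "a few dozen micrograms", not figures from the paper:

```python
import math

def resonant_frequency(k, m):
    """Resonant frequency (Hz) of a mass-spring system: k in N/m, m in kg."""
    return math.sqrt(k / m) / (2 * math.pi)

# Illustrative values (assumptions, not from the MEMS paper):
mass = 50e-9        # 50 micrograms, expressed in kg
stiffness = 0.38    # spring constant in N/m, chosen to land near concert A

print(round(resonant_frequency(stiffness, mass)))  # ~439 Hz
```

Because frequency scales with the square root of k/m, etching tolerances of even a few percent in spring width shift the pitch audibly, which suggests why each tone needed its own dedicated mass-spring system.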

The micronium played a leading role at the opening of a two-day scientific conference on micromechanics in the Atak music venue in Enschede on September 27 and 28. A composition has been specially written for the instrument: 'Impromptu No. 1 for Micronium' by Arvid Jense, who is studying MediaMusic at the conservatorium in Enschede.

A scientific paper -- 'A musical instrument in MEMS' -- has also been devoted to the instrument, and this will be presented to the conference by Johan Engelen. The project was carried out by the Transducers Science and Technology group led by Professor Miko Elwenspoek. The group forms a part of the MESA+ Institute for Nanotechnology of the University of Twente.

FroBot's Youtube Playlist

Never really shared this on here. Played some shows over summer...thought you guys might like it!



Monday, October 11, 2010

10 Things You Didn't Know About Sound

(CNN) -- Most of us have become so used to suppressing noise that we don't think much about what we're hearing, or about how we listen. Yet our well-being is now being seriously damaged by modern sound. Here are 10 things about sound and health that you may not know:

1.) You are a chord. This is obvious from physics, though it's admittedly somewhat metaphorical to call the combined rhythms and vibrations within a human being a chord, which we usually understand to be an aesthetically pleasant audible collection of tones. But "the fundamental characteristic of nature is periodic functioning in frequency, or musical pitch," according to C.T. Eagle. Matter is vibrating energy; therefore, we are a collection of vibrations of many kinds, which can be considered a chord.

2.) One definition of health may be that that chord is in complete harmony. The World Health Organization defines health as "a state of complete physical, mental, and social well-being and not merely the absence of disease or infirmity" which opens at least three dimensions to the concept. On a philosophical level, Plato, Socrates, Pythagoras and Confucius all wrote at length about the relationship between harmony, music and health (both social and physical). Here's Socrates: "Rhythm and harmony find their way into the inward places of the soul, on which they mightily fasten, imparting grace, and making the soul of him who is rightly educated graceful, or of him who is ill-educated ungraceful."


3.) We see one octave; we hear ten. An octave is a doubling in frequency. The visual spectrum in frequency terms is 400-790 THz, so it's just under one octave. Humans with great hearing can hear from 20 Hz to 20 KHz, which is ten octaves.
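Since each octave is a doubling, the number of octaves in any frequency range is the base-2 logarithm of its ratio. A quick sketch verifying both figures above:

```python
import math

def octaves(f_low, f_high):
    """Number of octaves spanned by a frequency range (one octave = a doubling)."""
    return math.log2(f_high / f_low)

# Visible light, roughly 400-790 THz: just under one octave
print(round(octaves(400e12, 790e12), 2))  # ~0.98

# Human hearing, 20 Hz to 20 kHz: roughly ten octaves
print(round(octaves(20, 20_000), 2))  # ~9.97
```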

4.) We adopt listening positions. Listening positions are a useful set of perspectives that can help people to be more conscious and effective in communication -- because expert listening can be just as powerful as speaking. For example, men typically adopt a reductive listening position, listening for something, often a point or solution.

Women, by contrast, typically adopt an expansive listening position, enjoying the journey, going with the flow. When unconscious, this mismatch causes a lot of arguments.
Other listening positions include judgmental (or critical), active (or reflective), passive (or meditative) and so on. Some are well known and widely used; for example, active listening is trained into many therapists, counselors and educators.

5.) Noise harms and even kills. There is now a wealth of evidence about the harmful effects of noise, and yet most people still consider noise a local matter, not the major global issue it has become.
According to a 1999 U.S. Census report, Americans named noise as the number one problem in neighborhoods. Of the households surveyed, 11.3 percent stated that street or traffic noise was bothersome, and 4.4 percent said it was so bad that they wanted to move. More Americans are bothered by noise than by crime, odors and other problems listed under "other bothersome conditions."

The European Union says: "Around 20% of the Union's population or close on 80 million people suffer from noise levels that scientists and health experts consider to be unacceptable, where most people become annoyed, where sleep is disturbed and where adverse health effects are to be feared. An additional 170 million citizens are living in so-called 'grey areas' where the noise levels are such to cause serious annoyance during the daytime."

The World Health Organization says: "Traffic noise alone is harming the health of almost every third person in the WHO European Region. One in five Europeans is regularly exposed to sound levels at night that could significantly damage health."

The WHO is also the source for the startling statistic about noise killing 200,000 people a year. Its findings (LARES report) estimate that 3 percent of deaths from ischemic heart disease result from long-term exposure to noise. With 7 million such deaths a year globally, that means 210,000 people are dying of noise every year.
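The arithmetic behind that figure can be checked in a couple of lines (the inputs are the numbers cited above, not independent data):

```python
# Back-of-the-envelope check of the LARES-based estimate:
# 3 percent of annual ischemic heart disease deaths attributed to noise.
ihd_deaths_per_year = 7_000_000   # global figure cited in the article
noise_fraction = 0.03             # LARES estimate

noise_deaths = ihd_deaths_per_year * noise_fraction
print(f"{noise_deaths:,.0f}")  # 210,000
```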

The cost of noise to society is astronomical. The EU again: "Present economic estimates of the annual damage in the EU due to environmental noise range from EUR 13 billion to 38 billion. Elements that contribute are a reduction of housing prices, medical costs, reduced possibilities of land use and cost of lost labour days." (Future Noise Policy European Commission Green Paper 1996).

Then there is the effect of noise on social behavior. The U.S. report "Noise and its effects" (Administrative Conference of the United States, Alice Suter, 1991) says: "Even moderate noise levels can increase anxiety, decrease the incidence of helping behavior, and increase the risk of hostile behavior in experimental subjects. These effects may, to some extent, help explain the "dehumanization" of today's urban environment."

Perhaps Confucius and Socrates have a point.

6.) Schizophonia is unhealthy. "Schizophonia" describes a state where what you hear and what you see are unrelated. The word was coined by the great Canadian composer and soundscape researcher Murray Schafer and was intended to communicate unhealthiness. Schafer explains: "I coined the term schizophonia intending it to be a nervous word. Related to schizophrenia, I wanted it to convey the same sense of aberration and drama."

My assertion that continual schizophonia is unhealthy is a hypothesis that science could and should test, at both a personal and a social level. You have only to consider the bizarre jollity of train carriages now -- full of lively conversation but none of it with anyone else in the carriage -- to entertain the possibility that this is somehow unnatural. Old-style silence at least had the virtue of being an honest lack of connection with those around us. Now we ignore our neighbors, merrily discussing intimate details of our lives as if the people around us simply don't exist. Surely this is not a positive social phenomenon.

7. Compressed music makes you tired. However clever the technology and the psychoacoustic algorithms applied, there are many issues with data compression of music, as discussed in this excellent article by Robert Harley back in 1991. My assertion that listening to highly compressed music makes people tired and irritable is based on personal and anecdotal experience -- again, it's one that I hope will be tested by researchers.

8. Headphone abuse is creating deaf kids. Over 19 percent of American 12- to 19-year-olds exhibited some hearing loss in 2005-2006, an increase of almost 5 percent since 1988-94 (according to a study in the Journal of the American Medical Association by Josef Shargorodsky et al, reported with comments from the researchers here). One university study found that 61 percent of freshmen showed hearing loss (Leeds 2001).

Many audiologists use the rule of thumb that your headphones are too loud if you can't hear someone talking loudly to you. For example, Robert Fifer, an associate professor of audiology and speech pathology at the University of Miami Leonard M. Miller School of Medicine, says: "If you can still hear what people are saying around you, you are at a safe level. If the volume is turned so loudly that you can no longer hear conversation around you, or if someone has to shout at you at a distance of about 2 or 3 feet to get your attention, then you are up in the hazardous noise range."

9. Natural sound and silence are good for you. These assertions seem to be uncontroversial. Perhaps they resonate with everyone's experience or instinct.

10. Sound can heal. Both music therapy and sound therapy can be categorized as "sound healing." Music therapy (the use of music to improve health) is a well-established form of treatment in the context of mainstream medicine for many conditions, including dementia and autism.

Less mainstream, though intellectually no more difficult to accept, is sound therapy: the use of tones or sounds to improve health through entrainment (affecting one oscillator with a stronger one). This is long-established: shamanic and community chant, and the use of various resonators like bells and gongs, date back thousands of years and are still in use in many cultures around the world.
Just because something is pre-Enlightenment and not done in hospitals doesn't mean that it's new-age BS. Doubtless there are charlatans offering snake oil (as in many fields), but I suspect there is also much to learn, and just as herbal medicine gave rise to many of the drugs we use today, I suspect there are rich resources and fascinating insights to be gleaned when science starts to unpack the traditions of sound healing.

I hope these thoughts make a contribution to raising awareness of sound and its effects on health. I welcome your reaction, and I will check this forum and respond.