Not verified, but pretty sure about this

December 12, 2007 at 10:07 am 3 comments

There are two phenomena I want to touch on in this post:

  1. The human ability to filter noise sources based on relevance
  2. The mobile phone’s ability to compress audio in order to eliminate microphone effects

A human can decide which audio source to focus on. That source is perceptually amplified, while the other ones are tuned out. This ability allows you to have a conversation with someone at a reception, even though the crowd's noise is louder than your conversation partner's voice.

A mobile phone compresses audio, which boils down to amplifying quiet sound waves a lot and loud sound waves only a little. This way, all sound waves are brought toward the same amplitude.
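To make that concrete, here is a toy sketch of per-sample dynamic range compression in Python. This is not the actual codec or AGC a phone uses; the threshold and ratio values are made up for illustration:

```python
import math

def compress(samples, threshold=0.1, ratio=4.0):
    """Toy dynamic range compressor: amplitude above the threshold is
    reduced by the ratio, so loud and quiet passages end up at similar
    levels. (A real device would also apply makeup gain afterwards,
    which effectively amplifies the quiet parts.)"""
    out = []
    for x in samples:
        level = abs(x)
        if level > threshold:
            # above the threshold, only 1/ratio of the excess gets through
            level = threshold + (level - threshold) / ratio
        out.append(math.copysign(level, x))
    return out

quiet_voice = [0.05, -0.04, 0.06]   # hypothetical nearby speaker, low amplitude
loud_noise = [0.8, -0.9, 0.7]       # hypothetical crowd noise, high amplitude
print(compress(quiet_voice))  # passes through almost unchanged
print(compress(loud_noise))   # squashed toward the threshold
```

The point is that after this squashing, the quiet voice and the loud crowd come out at comparable levels, which is exactly the problem described below.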

Why does a phone do this? It's to eliminate the level swings you would otherwise get when the speaker moves the phone closer to or further from his mouth.

The problem is that all ambient noise from other sources like cars, kids, vending machines, … is also amplified to the same level as the speaker.
This makes it practically impossible to talk comfortably to someone in a loud environment over a cell phone. Apparently, once the noise is compressed, converted into a low-quality digital stream, and then unwrapped again into an analog signal at your phone's speaker, the human ability to tune out peripheral noise is cancelled. There's no way for the brain to differentiate between primary and secondary sound sources.

I find this very annoying.


Entry filed under: Truly important stuff.


3 Comments

  • 1. stinoz  |  December 12, 2007 at 12:48 pm

as good as correct; the ear (actually the brain's temporal lobe, I think) doesn't really amplify the sound focused on, but rather attenuates the ones not focused on.
but the main reason why we can't tune out the noise coming from a phone's (single) speaker is that in order to cancel out sources, they have to be spatially defined. In other words: if you're in a big hall, the sources are simply everywhere, and the brain has all the spatial info it needs, as the two ears record everything, including all the different amplitudes and phase information.
In the case of the phone, everything comes out of one tiny speaker and is directed right into the ear. That, together with the fact that the other ear doesn't have that information, makes the brain go like "hmm, let's cancel out everything but the phone".

  • 2. qbziz  |  December 13, 2007 at 8:55 am

    Thanks for the elaboration dude.

So we're basically waiting for cell phones to have multiple input mics with ear-like shapes to gather spatial information and then only transmit the speaker's voice.

  • 3. stinos  |  December 14, 2007 at 8:13 am

Indeed! Now let's hope the cell phone manufacturers read this post!

