Not verified, but pretty sure about this
There are two phenomena I want to touch on in this post:
- The human ability to filter noise sources based on relevance
- The mobile phone’s ability to compress audio in order to eliminate microphone effects
A human can choose which audio source to focus on. That source gets amplified; the others get tuned out. This ability allows you to have a conversation with someone at a reception, even though the intensity of the crowd’s noise is higher than that of your conversation partner’s voice.
A mobile phone compresses audio, which boils down to amplifying quiet sound waves a lot and loud sound waves only a little. This way, all sound is brought to roughly the same amplitude.
Why does a phone do this? To eliminate the level swings you would otherwise get when the speaker moves the phone closer to or further from their mouth.
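To make the idea concrete, here is a minimal sketch of that kind of compression: a naive block-based level normalizer, not the actual algorithm any real phone codec uses. The function names, block size, and target level are all made up for illustration.

```python
import math

def block_rms(samples):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def compress(samples, block_size=256, target_rms=0.25, max_gain=20.0):
    """Naive dynamic range compression: quiet blocks get a large gain,
    loud blocks a small one, so every block ends up near target_rms."""
    out = []
    for i in range(0, len(samples), block_size):
        block = samples[i:i + block_size]
        rms = block_rms(block)
        gain = min(max_gain, target_rms / rms) if rms > 0 else 1.0
        out.extend(s * gain for s in block)
    return out
```

Feed it a quiet sine wave followed by a loud one, and both come out at the same level; the amplitude difference between the two sources is gone, which is exactly the problem described below.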
The problem is that all ambient noise emanating from other sources (cars, kids, vending machines, …) is also amplified to the same level as the speaker.
This makes it practically impossible to comfortably talk to someone in a loud environment over a cell phone. Apparently, once the noise is compressed, converted into a low-quality digital stream, and then turned back into an analog signal at your mobile phone’s speaker, the human ability to tune out peripheral noise no longer works. There’s no way for the brain to differentiate between primary and secondary sound sources.
I find this very annoying.
Entry filed under: Truly important stuff.