Human perception of sound is wildly strange, though. Now that you know that the human hearing organ is basically a spectrogram with 13,000 or so bins and 100+ dB of dynamic range, consider how it is possible for a person (even one with hearing in only a single ear) to listen to two different people at a time, or to pick out the voice of someone familiar in a babbling crowd. I'm tempted to say the human brain is a pattern detector, but it is so much more than that, because in detecting a sound (be it a human voice, an animal, a sound event, a musical instrument, or whatever), the time component is also involved; and it can detect a huge number of different sounds at once, in parallel –– like running water, birdsong, leaves rustling, and two people talking at the same time.
It is even possible to generate sound that is perceived as quieter than silence. Such a sound is just shaped noise that models the sensitivity of human hearing, i.e. the psychoacoustic model. To the brain it contains nothing interesting, and it simply shifts the "base" threshold of detection, so that even sounds generated by our own bodies (heartbeat, blood flowing in our veins) get attenuated: everything becomes "quieter".
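To make that concrete, here is a minimal Python sketch of the "shaped noise" idea. It is an assumption on my part that Terhardt's well-known approximation of the absolute threshold of hearing is a good enough stand-in for "the human psychoacoustic model" here; the sketch shapes white noise so that every spectral bin sits a few dB below that threshold curve:

```python
import numpy as np

def ath_db(f_hz):
    """Terhardt's approximation of the absolute threshold of hearing
    (dB SPL) as a function of frequency in Hz; reasonable from ~20 Hz up."""
    f = np.asarray(f_hz, dtype=float) / 1000.0  # frequency in kHz
    return (3.64 * f ** -0.8
            - 6.5 * np.exp(-0.6 * (f - 3.3) ** 2)
            + 1e-3 * f ** 4)

# Shape white noise so every spectral bin sits a few dB below the threshold:
# on its own, such noise carries nothing audible.
fs, n, margin_db = 44100, 1 << 15, 6.0
freqs = np.fft.rfftfreq(n, 1.0 / fs)
audible = (freqs >= 20) & (freqs <= 20000)   # keep the formula in its valid range
mag = np.zeros_like(freqs)
mag[audible] = 10 ** ((ath_db(freqs[audible]) - margin_db) / 20.0)
phase = np.exp(2j * np.pi * np.random.default_rng(0).random(freqs.size))
noise = np.fft.irfft(mag * phase, n)         # sub-threshold shaped noise, length n
```

Note that the dB values here are the nominal dB SPL the formula yields; actually playing this back "just under threshold" would require calibrating the whole output chain to known sound-pressure levels, which this sketch ignores.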
In audio compression, the psychoacoustic model has been used ever since MP3 came along to distribute the quantization noise, making the result sound better. (Essentially, it is about controlling what information is conveyed, and where in the spectrum the errors/noise due to compression and quantization are placed.)
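As a toy illustration of that noise placement (this is the general idea only, not the actual MP3 bit-allocation algorithm, and the per-band noise levels below are made up rather than derived from a masking computation), a perceptual quantizer can use a coarser quantization step wherever the model says more noise is tolerable, so the error ends up shaped to follow the threshold instead of being spread flat across the spectrum:

```python
import numpy as np

def quantize_with_threshold(spectrum, allowed_noise):
    """Toy perceptual quantizer: the step size per bin tracks the allowed
    noise level, so quantization error follows the psychoacoustic threshold
    instead of being spread evenly across the spectrum."""
    step = 2.0 * allowed_noise            # round-to-nearest error <= step/2
    return np.round(spectrum / step) * step

rng = np.random.default_rng(1)
spectrum = rng.normal(size=256)           # stand-in for one frame's spectrum
# Pretend the model tolerates 10x more noise in the upper half of the band
# (in a real codec this comes per frame from the masking computation).
allowed = np.where(np.arange(256) < 128, 0.005, 0.05)
quantized = quantize_with_threshold(spectrum, allowed)
err = spectrum - quantized                # the shaped quantization noise
```

Coarser steps mean fewer distinct values and therefore fewer bits, which is exactly the trade a lossy codec makes: spend bits where noise would be audible, save them where the model says it will be masked.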
Because of the brain, I bet that things like lighting (whether you are in a dimly-lit room with soft yellow-orange light, or under harsh cold blue-white light) and especially scents have a greater effect on the listening experience than things like exactly which amplifier or amplifier settings you use.