I just recently found out that 440 Hz was set as the standard tuning pitch for music, yet 432 Hz is the one many musicians swear by as sounding superior.
Without getting into the conspiracy theories right away (although it's bound to happen sooner or later), why exactly was 440 Hz chosen over 432 Hz?
Wayne Parham:
440 Hz is the A above middle C (A4). A musician or group can always be off by a few hertz, and as long as they're all off by the same amount, it's fine. But the standard A note is 440 Hz.
I wouldn't pay attention to tin-foil-hat wearers who argue over which single tone "sounds better," but I would encourage you to look at which scales sound better. I don't mean which of the common keys sounds best, like C major or A# minor. I'm talking about scale types: chromatic, octatonic, heptatonic, and so on. There are lots of different scales, and they all sound a little different.
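To put numbers on how the pitches of a scale follow from the reference note, here's a rough Python sketch, assuming standard 12-tone equal temperament (the note naming is simplified; C and above really belong to the next octave number):

```python
import math

def note_freq(n, a4=440.0):
    """Frequency of the note n semitones above A4,
    assuming 12-tone equal temperament: f = A4 * 2**(n/12)."""
    return a4 * 2 ** (n / 12)

NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

# One chromatic octave upward from the reference note, in both tunings
for n, name in enumerate(NAMES):
    print(f"{name:2s} {note_freq(n, 440.0):7.2f} Hz   {note_freq(n, 432.0):7.2f} Hz")

# The whole scale just shifts together: the gap between the two standards
# is constant in cents (hundredths of a semitone), about a third of a semitone.
print(f"{1200 * math.log2(440 / 432):.1f} cents")  # 31.8
```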
From what I understand, representatives from different countries got together and settled on the 440 Hz standard more for the instrument makers than for the musicians themselves. The musicians had to adapt a bit to decisions made by people who probably didn't play instruments.
On the other hand, how many people on an assembly line making cars never learned how to drive? In time, making anything turned into a matter of pure mechanics.
Wayne Parham:
I don't think there is any qualitative difference between the sound of a 440 Hz sine-wave tone and a 432 Hz sine-wave tone. I think both will be equally pleasant. Each just sounds like an unremarkable tone.
What I think can make a subjectively qualitative change is when you start adding sine waves on top of a fundamental of any frequency. These added partials can be pleasant or unpleasant, depending on their spacing, i.e. their harmonic relationship with one another. Chords and note progressions can sound better or worse depending on their harmonic arrangement too.
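A rough numpy sketch of what I mean (the 220 Hz fundamental and the partial amplitudes are arbitrary choices, just for illustration):

```python
import numpy as np

SR = 44100  # sample rate, Hz

def tone(f0, partials, dur=1.0):
    """Sum sine-wave partials over a fundamental f0.
    `partials` maps frequency multiple -> amplitude."""
    t = np.arange(int(SR * dur)) / SR
    return sum(amp * np.sin(2 * np.pi * f0 * mult * t)
               for mult, amp in partials.items())

# Integer multiples (a harmonic series): fuses into one pleasant pitched tone
consonant = tone(220.0, {1: 1.0, 2: 0.5, 3: 0.33, 4: 0.25})

# Non-integer spacing: same fundamental and amplitudes, but the result
# sounds rough and bell-like rather than like a single note
inharmonic = tone(220.0, {1: 1.0, 2.1: 0.5, 3.3: 0.33, 4.7: 0.25})
```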
johnnycamp5:
Wayne, is your last comment related to the old "even-order/odd-order harmonics" argument we often hear about in tube vs. SS amplification?
Also, while not quite on topic:
Is loudspeaker sensitivity generally measured at 1 kHz?
We always see the typical 1 W @ 1 m, but almost never the frequency it was measured at, which I would imagine could have an effect on the sensitivity figure...
Wayne Parham:
Loudspeaker sensitivity is often quoted at a specific frequency, and you're right that 1 kHz is a common choice. But some manufacturers treat the single-number spec as more of an averaged figure, and then provide a response chart at 1 W/1 m (or 2.83 V/1 m) to show exactly what SPL the loudspeaker delivers at every frequency.
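As a side note on the 2.83 V figure: into a nominal 8-ohm load, 2.83 V is almost exactly 1 W, which is why the two ratings get used interchangeably. A quick check:

```python
# P = V^2 / R: 2.83 V into 8 ohms is 1 W, so 2.83 V/1 m and 1 W/1 m
# are the same rating for an 8-ohm speaker.
voltage = 2.83
for impedance in (8.0, 4.0):
    power = voltage ** 2 / impedance
    print(f"{voltage} V into {impedance:g} ohms = {power:.2f} W")
# 8 ohms -> 1.00 W; 4 ohms -> 2.00 W, so a 4-ohm speaker rated at
# 2.83 V/1 m is actually drawing 2 W, a ~3 dB head start over 1 W/1 m.
```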
The odd-versus-even harmonics question in distortion is pretty similar to what I was describing, yes. Both are a matter of dissonance. But distortion is always an unwanted, anomalous artifact, and dissonant components are simply more obvious than harmonically related ones. So a second harmonic isn't as ugly-sounding as a fifth harmonic, and you can tolerate a lot more second harmonic than fifth.
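One way to see why the low-order harmonics are more tolerable: each harmonic of a fundamental lands at, or near, a familiar musical interval, and the fit gets looser as the order goes up. A small sketch (interval names assume 12-tone equal temperament):

```python
import math

NAMES = ["octave/unison", "minor 2nd", "major 2nd", "minor 3rd", "major 3rd",
         "perfect 4th", "tritone", "perfect 5th", "minor 6th",
         "major 6th", "minor 7th", "major 7th"]

for h in range(2, 8):
    cents = 1200 * math.log2(h)   # interval of harmonic h above the fundamental
    nearest = round(cents / 100)  # nearest equal-tempered semitone
    error = cents - 100 * nearest
    print(f"harmonic {h}: {NAMES[nearest % 12]} "
          f"({nearest // 12} octave(s) up), {error:+.1f} cents")
# The 2nd harmonic is an exact octave; the 3rd is a near-perfect fifth;
# by the 7th harmonic the nearest interval is off by some 31 cents.
```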
Chords and note progressions aren't distortions - they're notes played on an instrument, presumably by a musician - but they can be dissonant or harmonious. Harmonious chords sound more pleasant than dissonant ones.
I have always understood sine waves to be the building blocks from which everything else is built. So the difference between 432 and 440 becomes insignificant once you start building music on top of that base.
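That's essentially Fourier's insight, and it's easy to demonstrate: build a complex wave out of sines and the spectrum hands the recipe back. A tiny numpy sketch (the 432 Hz fundamental is an arbitrary choice here):

```python
import numpy as np

SR = 8192               # sample rate; one second of samples gives 1 Hz bins
t = np.arange(SR) / SR

# A crude square-ish wave at 432 Hz: fundamental plus odd harmonics
signal = sum(np.sin(2 * np.pi * 432 * k * t) / k for k in (1, 3, 5, 7))

spectrum = np.abs(np.fft.rfft(signal))
peaks = np.argsort(spectrum)[-4:]            # the four strongest bins
print(sorted(int(i) for i in peaks))         # [432, 1296, 2160, 3024]
```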