Audibility of Phase [message #61138 is a reply to message #61137]
Sun, 04 October 2009 11:57 |
Wayne Parham
This has been a sort of hot-button topic of mine, over the years. There are a few manufacturers that have made "time alignment" a marketing catch phrase. It always sort of irks me. I prefer to discuss the position of lobes and nulls, because I think it is more accurate to talk in these terms for several reasons.
I suppose it's really a semantics thing. But what I have noticed is that some manufacturers artificially distinguish themselves from others using a marketing ploy, saying their product is "time aligned". The concept used by most of these guys is the same thing suggested by Altec in the 1960's and 1970's. No better and no worse. So when you see someone champion the virtues of time alignment, look closely at what they're trying to say. Ask yourself if they're really setting the position of the forward lobe or if they're just trying to impress their audience with gee-whiz words.
The problem is that each sound source has a physical position, so there is a specific path length from each source to the listener. This path length changes as the listener moves, and it can differ between two sources in a loudspeaker, whether between two drivers in a multi-way system or even between different points across the surface of a single driver's radiator. There is also the matter that electrical, mechanical and acoustic reactances cause phase shift that changes with respect to frequency. These two things - delay from physical offset and delay from phase shift - are not the same thing. One changes with frequency and one does not. So using one to counteract the other works only over a fairly narrow band of frequencies. It is still a worthwhile approach, but the point of all this is that "time alignment" is a relative thing (relative to position) and it probably makes more sense to talk about the position of the lobes and nulls.
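To put some numbers on that, here is a quick Python sketch (NumPy/SciPy) comparing the phase of a pure time delay with the phase of a second-order low-pass section. The 1kHz corner frequency and 0.1ms offset are arbitrary values I picked for illustration, not figures from any particular loudspeaker:

```python
# A minimal sketch comparing the phase of a pure time delay with the phase
# of a 2nd-order low-pass crossover section. Illustration values only.
import numpy as np
from scipy.signal import butter, freqs

f = np.logspace(1, 4.3, 500)           # 10 Hz to 20 kHz
w = 2 * np.pi * f

# Phase of a pure time delay (e.g. a driver mounted about 34 mm farther back):
# it keeps growing linearly with frequency.
delay = 0.1e-3                          # 0.1 ms path-length offset
phase_delay = -w * delay                # radians

# Phase of a 2nd-order Butterworth low-pass at 1 kHz: it changes with
# frequency but levels off, approaching -180 degrees asymptotically.
b, a = butter(2, 2 * np.pi * 1000, btype='low', analog=True)
_, h = freqs(b, a, worN=w)
phase_filter = np.unwrap(np.angle(h))

# Print a few spot frequencies to compare the two curves.
for fi, pd, pf in zip(f[::100], phase_delay[::100], phase_filter[::100]):
    print(f"{fi:8.1f} Hz   delay: {np.degrees(pd):8.1f} deg   "
          f"filter: {np.degrees(pf):8.1f} deg")
```

The delay phase keeps winding down as frequency rises while the filter phase flattens out, so a fixed physical offset can only track a crossover's phase shift over a limited band.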
Keith's example in the previous post does a great job of illustrating the point that our ears are much more sensitive to the changes in amplitude response that result from combining signals with phase shift than they are to the phase shift itself. Another example of how our ears interpret time delays of this sort is to listen to a sawtooth wave with the peak on the left versus an identical one with the peak on the right.
These two waveforms have exactly the same harmonic content but different phase. The difference between the two is that the reverse sawtooth has its even harmonics phase shifted. Like the headphone test in the previous post, listening to an individual waveform in isolation prevents the listener from obtaining frequency-response cues that expose a difference in phase. No summing is taking place. If a person could detect phase shifts even without a corresponding change in amplitude response from summing, they should be able to tell the difference between a sawtooth wave and its reverse. But they cannot.
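If you'd rather check that than take my word for it, here is a quick Python sketch (NumPy/SciPy). The 100Hz fundamental and 48kHz sample rate are arbitrary choices for the illustration:

```python
# A minimal sketch checking the claim above: a sawtooth and its time-reversed
# copy have identical magnitude spectra, and (using the fundamental as the
# phase reference) their even harmonics sit 180 degrees apart.
import numpy as np
from scipy.signal import sawtooth

fs = 48000                       # sample rate, Hz
f0 = 100                         # fundamental, Hz
t = np.arange(fs) / fs           # exactly one second -> 1 Hz bin spacing

ramp_up = sawtooth(2 * np.pi * f0 * t)   # peak on one side
ramp_down = ramp_up[::-1]                # time-reversed: peak on the other side

up = np.fft.rfft(ramp_up)
down = np.fft.rfft(ramp_down)

# 1) The magnitude spectra match to within numerical noise.
print("max magnitude difference:",
      np.max(np.abs(np.abs(up) - np.abs(down))))

# 2) Referenced to the fundamental (to remove the arbitrary time offset of
#    the reversal), odd harmonics agree and even harmonics differ by 180 deg.
def rel_phase(spec, n):
    """Phase of harmonic n relative to n times the fundamental's phase."""
    return np.angle(spec[n * f0]) - n * np.angle(spec[f0])

for n in range(1, 7):
    d = np.degrees(rel_phase(up, n) - rel_phase(down, n))
    d = (d + 180) % 360 - 180            # wrap to +/-180
    print(f"harmonic {n}: relative phase difference {d:7.1f} deg")
```

Same amplitude spectrum, different phase - which is exactly why the two waves sound the same when heard in isolation.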
Cancellations of certain frequencies, beat frequencies and other things that cause response aberrations are pretty easy to detect, but absolute phase isn't. So you're going to have a very hard time finding people who can distinguish a sawtooth from a reverse sawtooth in a blind test. Try it and see.
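Here is one more quick Python sketch showing the summing case, since that is what actually becomes audible. The 0.5ms delay is just a number I picked for illustration, roughly 17cm of extra path length:

```python
# A minimal sketch of the summing case: add a signal to a delayed copy of
# itself (two different path lengths to the same ear) and you get comb-filter
# notches in the amplitude response. Illustration values only.
import numpy as np

delay = 0.5e-3                                   # 0.5 ms path-length difference
f = np.array([250, 500, 1000, 2000, 3000, 4000, 5000], dtype=float)   # Hz

# Sum of a unit source and the same source delayed: 1 + e^(-j*2*pi*f*delay)
response = 1 + np.exp(-1j * 2 * np.pi * f * delay)
level_db = 20 * np.log10(np.maximum(np.abs(response), 1e-6))   # floor the nulls

for fi, db in zip(f, level_db):
    print(f"{fi:6.0f} Hz   {db:7.1f} dB relative to one source alone")
# Deep nulls land where f * delay is an odd multiple of 1/2 (1, 3 and 5 kHz
# here), and those amplitude dips are what listeners hear - not the phase
# shift by itself.
```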