Uniform Directivity - How important is it? [message #76813]
Mon, 20 May 2013 13:13
Wayne Parham
I must say that I'm in the middle on this subject of directivity and how much it matters. Back when directivity wasn't even considered by most audiophiles, I made more of a point of discussing its importance. But now that constant directivity has become more popular in hifi circles, I tend to find myself reminding the new converts not to throw the baby out with the bathwater.
That's why I often refer to speakers with uniform directivity rather than constant directivity. It is more important to me that the power response be uniform than it is for directivity to be constant. They are similar, but not exactly the same.
To illustrate what I mean, consider two loudspeakers, one designed for uniform directivity and another designed solely for flat on-axis response.
The "traditional" loudspeaker designed for flat on-axis response has a woofer that is omnidirectional at low frequencies and crossed over where convenient. Not that it is a design criterion, but at this point the woofer's beamwidth has narrowed, probably to the vicinity of 90°. The tweeter is omnidirectional at the crossover point, and its beamwidth doesn't narrow to 90° until the top octave. Measured response is flat on-axis.
The speaker designed for uniform directivity is crossed-over where woofer and tweeter directivity match. So instead of beamwidth narrowing through the woofer band, then widening as the tweeter is crossed-in, the tweeter directivity matches the directivity of the woofer and remains relatively constant at that point. Alternatively, some designs have the tweeter beamwidth continue to narrow.
Provided there is no abrupt beamwidth jump, I would consider both approaches to be "uniform directivity" designs.
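To put a rough number on where a direct-radiating woofer's beamwidth narrows to 90°, here is a minimal sketch using the idealized flat circular-piston model. The 0.13 m effective radius (roughly a 12" woofer) is my illustrative assumption, and real cone drivers deviate from the ideal piston, so treat the result as a ballpark, not a design value:

```python
import math

def bessel_j1(x):
    """Bessel function J1(x) via its integral form (trapezoidal rule)."""
    n = 2000
    total = 0.0
    for k in range(n + 1):
        t = math.pi * k / n
        weight = 0.5 if k in (0, n) else 1.0
        total += weight * math.cos(t - x * math.sin(t))
    return total / n

def piston_off_axis_db(freq_hz, radius_m, theta_deg, c=343.0):
    """Off-axis level (dB re on-axis) of an ideal flat circular piston."""
    ka = 2 * math.pi * freq_hz / c * radius_m
    x = ka * math.sin(math.radians(theta_deg))
    if abs(x) < 1e-9:
        return 0.0
    return 20 * math.log10(abs(2 * bessel_j1(x) / x))

# Bisect for the frequency where the -6 dB beamwidth narrows to 90 degrees,
# i.e. where the level at 45 degrees off-axis first reaches -6 dB,
# for an assumed 0.13 m effective piston radius (~12" woofer).
lo, hi = 200.0, 5000.0
for _ in range(60):
    mid = (lo + hi) / 2
    if piston_off_axis_db(mid, 0.13, 45.0) > -6.0:
        lo = mid
    else:
        hi = mid
print(f"90-degree beamwidth reached near {lo:.0f} Hz")
```

Under these assumptions the model lands in the low-kHz region, which is consistent with why such a woofer paired at a typical crossover point has already narrowed to about 90° there.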
Why does this matter?
Because people listening at off-axis angles hear smooth response from the speaker that provides uniform directivity, but they don't from the other design. Also, the reverberant field is more natural when the uniform-directivity speaker is playing, and that's important because that's the sound that surrounds you. It's as simple as that.
My experience with constant directivity designs started by making Klipschorn-style cornerhorns with constant directivity horns. I learned early on that they did something special where imaging was concerned. The walls confined the midrange and midbass down to the Schroeder frequency, so the sound was uniform throughout the room. Directivity is constant through the entire audio band in a design like this.
There is no way to be outside the pattern in a design like this. It is unique in that regard, and has always been my favorite design approach. But rooms with the right layout to support constant directivity cornerhorns are rare.
Another option presented itself, which is the matched-directivity approach. Physically, it is the same thing as the large Altecs and the JBL 4430; those were my inspiration for this second design approach. They don't provide constant directivity, because the radiation pattern is omnidirectional at low frequencies and beamwidth narrows as frequency goes up. But the directivity is at least uniform, and that means off-axis response is still flat. It may have a downward slope, but it is relatively smooth.
Speakers with uniform directivity sound more natural because the reverberant field has spectral balance.
One problem presents itself: horns that provide constant directivity sometimes don't sound all that good. The best example is early CD horns with sharp edges inside. The discontinuities created by the sharp edges cause internal reflections and response anomalies. So I switched back to radial horns early on. These provide nearly constant horizontal beamwidth and gently collapsing vertical beamwidth. That is a very good characteristic for home hifi, in my opinion.
Prosound horns have a different set of priorities where constant directivity is concerned. Prosound horns not only need high efficiency and constant beamwidth, but they must also be arrayable. This means they have to concern themselves with things like astigmatism, pattern flip and waistbanding. As an example of why those things are important, consider an installation where horns are splayed. In this arrangement, the primary lobe of one horn gets interference from the secondary lobe of the adjacent horn. So in this case, the characteristics of the sound radiated outside the pattern are as important as the sound radiated within the pattern.
A prosound horn design will allow some response ripple and other anomalous behavior within the pattern in exchange for "good behavior" outside the pattern. This is a sensible trade, because horns in a prosound environment will be arrayed, so it doesn't make sense to optimize the horn for single use.
But there are a different set of priorities for a horn/waveguide designed for studio monitors or for home hifi use.
For home hifi, we want the response in the pattern to be as smooth and clean as possible, and ideally we want the beamwidth to be as constant as possible too. But where a trade-off must be struck, it doesn't make sense to optimize the response at the extreme edge of the pattern or outside, like we might choose to do in a prosound horn. The home hifi horn will not be arrayed. So the best approach is to make the radiation pattern uniform and to optimize response within the pattern.
This brings me back to what I said in the first paragraph about some people throwing the baby out with the bathwater. I'm all for uniform directivity, and found myself regularly arguing its virtues over the years. But it seems like lately, I see guys posting polars and sonograms, looking for a holy grail in its virtues. Some have gone way too far with that, in my opinion, and are using prosound techniques to build home hifi speakers. Their polars look wonderful, but their response curves and distortion performance is only so-so.
Compare the two curves on the chart below and you'll see what I mean. These are two waveguides that are approximately the same size, designed to be used over the same passband. One is optimized to provide smooth response, the other to provide constant beamwidth. The one with smoother response has about 2dB ripple, but directivity narrows slightly below 2kHz, before pattern control is completely lost around 1kHz. It is also 3dB louder with the same input signal, so drive voltage requirements are reduced, lowering distortion. The other waveguide has about 5dB ripple, but is able to maintain beamwidth down to its lower cutoff around 1kHz.
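The 3 dB sensitivity difference mentioned above translates directly into drive voltage and power by ordinary dB arithmetic. A quick sketch of that conversion (the specific numbers below are just the standard dB-to-ratio math, not measurements of any particular horn):

```python
def voltage_ratio_for_db(delta_db):
    """Voltage ratio corresponding to a level difference in dB
    (20*log10 convention for voltage/pressure quantities)."""
    return 10 ** (delta_db / 20.0)

# A waveguide that is 3 dB more sensitive needs about 29% less drive
# voltage for the same SPL, which is roughly half the input power:
ratio = voltage_ratio_for_db(-3.0)
print(f"relative drive voltage: {ratio:.3f}")        # ~0.708
print(f"relative input power:   {ratio ** 2:.3f}")   # ~0.501
```

Halving the power delivered to the compression driver for the same output level is a meaningful reduction in diaphragm excursion and heat, which is the mechanism behind the lower distortion claimed above.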
I've measured several different horns over the years, from radials to tractrix, and a lot of them are captured in the link below. Look through the list and you can see measurements of various types of horns and waveguides:
Remember the examples above, the traditional loudspeaker with its 90°-to-omni shift at the crossover compared with the speaker having uniform directivity? In this case, when we compare response at 45° off-axis, we see the traditional speaker has a 6dB dip at the crossover point, because the 90° beamwidth is defined by its -6dB points. The speaker with uniform directivity has no dip; its off-axis curve is a straight diagonal line.
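That 6dB figure can be sketched with a toy directivity model. The model below assumes the level is down 6 dB at the stated pattern edge and falls off quadratically with angle; that falloff shape is my simplifying assumption, chosen only to illustrate the bookkeeping:

```python
def off_axis_db(beamwidth_deg, theta_deg):
    """Level at theta (dB re on-axis), assuming -6 dB at the pattern
    edge and a quadratic falloff with angle (idealized toy model)."""
    half_angle = beamwidth_deg / 2.0
    return -6.0 * (theta_deg / half_angle) ** 2

# Traditional design, listener at 45 degrees off-axis, through crossover:
woofer_at_xover = off_axis_db(90.0, 45.0)  # woofer narrowed to 90 deg
tweeter_at_xover = 0.0                     # omni tweeter: no off-axis loss
print(f"woofer side of crossover:  {woofer_at_xover:.1f} dB")
print(f"tweeter side of crossover: {tweeter_at_xover:.1f} dB")
print(f"step at crossover: {tweeter_at_xover - woofer_at_xover:.1f} dB")

# Matched-directivity design: both drivers present ~90 degrees at the
# crossover, so the 45-degree listener sees -6 dB on both sides: no step.
```

The woofer side sits at -6 dB while the omni tweeter side sits at 0 dB, producing the 6 dB discontinuity at 45° that the matched-directivity design avoids.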
But what about a speaker with a smidge of waistbanding? This is something to avoid in the prosound world, because a horn creates a secondary lobe in the waistbanding region, and this secondary lobe interferes with the primary lobe of another horn in an array. Not so in a home hifi setup, because there is no other horn to interfere with. So what other consequence do we see from waistbanding?
Waistbanding is a "pinch" of the pattern at low frequencies. An example is a horn/waveguide that provides constant 90° beamwidth from, say, 2kHz up, but that narrows to 70° below that, before ultimately opening up to approach omnidirectional radiation below 1kHz, where pattern control is completely lost. A sonogram will show this "pinch" in the 1kHz to 2kHz region.
What does it mean in a home hifi environment? Practically nothing, it's inaudible. What is really happening is the sound at 45° is reduced slightly between 1kHz and 2kHz, and by slightly, I mean about 2dB. It is minor, nothing like the 6dB dip off-axis of a traditional loudspeaker designed solely for on-axis response.
Does that mean waistbanding is completely insignificant? Of course not. Would we want to design to reduce it? Certainly, provided there are no other trade-offs. But there are trade-offs, there are always trade-offs. The most common mechanism to counter waistbanding is a secondary flare, and this increases horn size, which increases center-to-center distance, which in turn brings the vertical nulls closer, limiting the size of the forward lobe. Or worse, if the mouth size must remain constant, then adding a secondary flare requires shortening the main body of the horn, modifying the flare profile and possibly truncating it. This introduces extra ripple. That may be worthwhile in a prosound horn, but it probably isn't in a waveguide designed to be used in a studio monitor or home hifi speaker.
Which leads me back to the radial horns. I personally would rather have a radial horn that provided constant directivity in the horizontal, gently collapsing directivity in the vertical and smooth response in the pattern than I would a so-called "waveguide" that had peaky response. I've seen some out there that have 5dB ripple, and that's about twice what I would be willing to live with.
A good hifi horn offers response flat within a 2dB window, and a good studio monitor waveguide is able to do this too. The waveguide usually cannot be used to as low a frequency as a similarly sized (exponential) radial horn, because the waveguide's acoustic loading isn't as good at low frequencies. It has this in common with a tractrix horn, which also loads relatively poorly down low. But it need not increase response ripple like prosound horns do; otherwise it has thrown the baby out with the bathwater.