Time delay revisited [message #23176]
Mon, 11 September 2006 12:15
Marlboro
Messages: 403 Registered: May 2009
Illuminati (1st Degree)
In a point-source speaker, the time delay caused by the tweeter, midrange, and woofer voice coils not lying in the same vertical plane, and the resulting tilted response axis, can be handled with a sloped baffle, a stepped baffle, or electronic time delay between drivers. But what about a speaker where the drivers aren't stacked one on top of the other in a single vertical line, the arrangement where a sloped baffle works? I'm not sure what a delay would do in a line source. Can someone here with greater knowledge shed some light on the time delay issues between drivers in a line source, as opposed to a point source? Marlboro
Re: Time delay revisited [message #23178 is a reply to message #23176]
Mon, 11 September 2006 17:31
Wayne Parham
Messages: 18786 Registered: January 2001
Illuminati (33rd Degree)
There's a lot more going on time-wise than just the position of the drivers. There's the resistive/reactive nature of the driver's voice coil and crossover. There's the resistive/reactive nature of the driver's mechanical movement, i.e. mass and suspension. There's the resistive/reactive nature of the cabinet, horn, baffle, etc. And there are several chaotic components in each of these sets of parameters as well, such as cone flex (breakup modes), non-linear BL, non-linear inductance, non-linear suspension compliance, and so on. So even though loudspeakers are pretty simple as machines go, some complex behaviours arise in the system.

One thing that's very important is summing. It really involves a range of values, not so much one dead-set position. This is why people talk about center-to-center spacing not exceeding a certain distance for best performance up to a certain frequency. The idea is to make summing good, which means keeping things within 1/4λ where possible. As the difference in distance from the listener to two sound sources approaches 1/2λ, response forms a null, because 1/2λ is 180° out of phase and the sources cancel. You don't want that path-length difference to be 1/2λ, or an odd multiple of 1/2λ.

One way around this, however, is to place sound sources so that some drivers are constructive where others are destructive. This is called dense interference, and it's another way to smooth the sound field when the number of sound sources is high, or when the distances involved are necessarily large with respect to wavelength. That ventures a little off the subject, but they're related issues. Here are some other links that might be of interest:
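To put rough numbers on that 1/2λ cancellation, here's a minimal sketch (not from Wayne's post): it sums two ideal, equal-level sources whose path lengths to the listener differ by a fixed amount, and shows the null where that difference reaches half a wavelength. The 10 cm figure and the function name are purely illustrative.

```python
import numpy as np

C = 343.0  # approximate speed of sound in air, m/s

def summed_level_db(freq_hz, path_difference_m):
    """Relative level (dB) when two equal, coherent sources reach the
    listener with a path-length difference of path_difference_m."""
    phase = 2 * np.pi * freq_hz * path_difference_m / C   # phase offset, radians
    magnitude = abs(1 + np.exp(-1j * phase))               # 2 when in phase, 0 at 180 degrees
    magnitude = max(magnitude, 1e-6)                       # floor so the null prints as a finite dB value
    return 20 * np.log10(magnitude / 2)                    # 0 dB = perfect summing

# Illustrative case: acoustic path lengths that differ by 10 cm
diff = 0.10
for f in (343, 858, 1715, 3430):   # about 0.1, 0.25, 0.5 and 1.0 wavelengths at 10 cm
    wavelengths = diff * f / C
    print(f"{f:5d} Hz  ({wavelengths:.2f} wavelengths): {summed_level_db(f, diff):8.1f} dB")
```

Running it shows the response go from essentially flat, to -3 dB at 1/4λ, to a deep null at 1/2λ, and back in phase at a full wavelength, which is the comb-filter behaviour described above.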
Re: Time delay revisited [message #23179 is a reply to message #23178]
Mon, 11 September 2006 18:03
Marlboro
Messages: 403 Registered: May 2009
Illuminati (1st Degree)
Wayne, the instruction manual for my Rane AC23 goes to great lengths to describe exactly how to manage time delay, through three different methods. But I think it assumes a point source, not an array. Is that kind of delay correction simply impossible for arrays? Marlboro
Re: Time delay revisited [message #23180 is a reply to message #23179]
Mon, 11 September 2006 20:03
Wayne Parham
Messages: 18786 Registered: January 2001
Illuminati (33rd Degree)
Arrays are different from point sources. A line array keeps the same distance relationships between each driver and the listener as you move perpendicular to the line, but movement parallel to the line changes those relationships. The array designer does this on purpose, using dense interference to provide directional control and to smooth response along the plane of choice.

There are a few different modes that any (finite) array works through. The first is where wavelengths are so long with respect to the array size that it acts as a point source. Next is the range of frequencies above that, where the array acts as a line source but frequency is still low enough that center-to-center spacing is less than 1/4λ; that's where summing is best. Then there is a transition range where spacing is greater than optimal, but still not so large that the drivers are several wavelengths apart; that's where lobing starts, but response is probably still usable. And above that is a range where the elements of the array are several wavelengths apart and no longer summing at all.

Arrays act differently in each of these modes. In some frequency ranges you'll treat them just like point sources; in other ranges, you'll treat them more like averages. This is where experience comes into play, and I'll defer to Jim Griffin because arrays are his specialty. But you can get started by looking through some of the links I provided. There is information there about arrays, sound source interactions, phase, summing, and so on.
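As a rough illustration of those ranges, here's a back-of-the-envelope sketch (not from the thread). It converts an array's overall length and center-to-center spacing into approximate boundary frequencies; the example dimensions and the "several wavelengths" cutoff of 3 are assumptions made for the demo, not recommendations.

```python
C = 343.0  # approximate speed of sound in air, m/s

def print_array_modes(array_length_m, spacing_m, separate_at_wavelengths=3.0):
    """Rough boundary frequencies for the behaviour ranges described above."""
    print(f"acts roughly as a point source below  {C / array_length_m:6.0f} Hz  (wavelength longer than the array)")
    print(f"good line-source summing up to        {C / (4 * spacing_m):6.0f} Hz  (spacing under 1/4 wavelength)")
    print(f"lobing / transition range starts near {C / (2 * spacing_m):6.0f} Hz  (spacing reaches 1/2 wavelength)")
    print(f"drivers essentially independent above {separate_at_wavelengths * C / spacing_m:6.0f} Hz  (spacing of several wavelengths)")

# Hypothetical example: eight drivers on 15 cm centers, roughly 1.05 m tall
print_array_modes(array_length_m=1.05, spacing_m=0.15)
```

For that hypothetical array the boundaries come out around 330 Hz, 570 Hz, 1.1 kHz, and 6.9 kHz, which shows why the same array has to be treated as a point source in one band and as an interfering line of sources in another.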