
10-12-2016, 06:10 AM

#1

Human being with feelings
Join Date: Mar 2016
Posts: 176

How to plot filter's frequency response using WDL_fft?
Hi there,
I'm trying to plot (mostly because I'm trying to learn DSP and such) the frequency response of a basic low-pass filter I'm using in my VST (as you can see, I'm at the very basics); I'd like to visually show what it will do to my audio source.
This is the code I've written:
Code:
WDL_fft_init();
WDL_FFT_COMPLEX buf[bufsize];
for (int i = 0; i < bufsize; i++) {
    // impulse
    double in = 0.0;
    if (i == 0) {
        in = 1.0;
    }
    buf[i].re = mFilterFreq.Process(in);
    buf[i].im = 0.0;
}
WDL_fft(buf, bufsize, 0);
for (int i = 0; i < bufsize; i++) {
    int sorted_idx = WDL_fft_permute(bufsize, i);
    WDL_FFT_REAL re = buf[sorted_idx].re;
    WDL_FFT_REAL im = buf[sorted_idx].im;
    // complex to magnitude/phase
    double magnitude = sqrt(re * re + im * im) / (bufsize / 2.0);
    // double phase = atan2(im, re);
    double x = mRECT.L + step * i;
    double y = magnitude * monitorHeight;
    pGraphics->DrawLine(&COLOR_RED, (float)(x), (float)(mRECT.B - y), (float)(x), (float)(mRECT.B), 0, false);
}
I send an impulse through my filter, I get the buffer (bufsize = 8192 samples) from it (the filter's impulse response), and I try to retrieve the magnitudes of the first 8192 bins. From what I've learned, the 4096th bin will express a frequency of 22050 Hz, since f = k * samplerate / 8192.
But unfortunately, it doesn't work.
Setting my filter around 70 Hz (i.e. the "a" value of the filter = 0.189118), it returns, for each bin, a magnitude of 0.040527338919, more or less.
Which is very low and "flat" (since all bins get that magnitude).
Where am I wrong?
Thanks to everyone who can help me with this
Last edited by Nowhk; 10-12-2016 at 08:37 AM.



10-12-2016, 08:38 AM

#2

Human being with feelings
Join Date: Mar 2016
Posts: 176

Not sure why, but if I change:
Code:
double magnitude = sqrt(re * re + im * im) / (bufsize / 2.0);
with:
Code:
double magnitude = sqrt(re * re + im * im);
it seems to work as expected.
Why does it work without "normalization"? Shouldn't the FFT return "non-normalized" complex values?



11-02-2016, 04:40 PM

#3

Human being with feelings
Join Date: Oct 2016
Posts: 15

The scaling of the FFT (and the Fourier transform in general) is a matter of convention, not something well defined. Sometimes the forward transform has a 1/N term, sometimes it doesn't and the inverse does, and sometimes both have a 1/sqrt(N) term. I'm not sure how it's implemented here, but that's probably the reason.



11-03-2016, 05:48 AM

#4

Human being with feelings
Join Date: Mar 2016
Posts: 176

I see! Thank you for the info!



12-22-2016, 11:00 PM

#5

Human being with feelings
Join Date: Dec 2016
Posts: 36

Quote:
Originally Posted by Nowhk
I see! Thank you for the info!

I hope you don't mind my asking, but is this code in the constructor of your main class, or defined in a new class?
I'm still a relative novice and am trying to adapt bits of code to a dynamic range plugin



12-23-2016, 11:35 AM

#6

Human being with feelings
Join Date: Mar 2016
Posts: 176

Quote:
Originally Posted by CaptnWillie
I hope you don't mind my asking, but is this code in the constructor of your main class, or defined in a new class?
I'm still a relative novice and am trying to adapt bits of code to a dynamic range plugin

In this case, I placed that code within the Draw function (defined by extending an IControl class). Hope it helps.



12-24-2016, 11:58 AM

#7

Human being with feelings
Join Date: Dec 2016
Posts: 36

Quote:
Originally Posted by Nowhk
In this case, I placed that code within the Draw function (defined by extending an IControl class). Hope it helps.

Thanks, I've gotten that far. The Draw function is defined by the IControl class, but where are the fft and bufsize handles defined? I'm getting errors, so I'm wondering where they come from. Draw itself seems to work fine



12-27-2016, 03:25 AM

#8

Human being with feelings
Join Date: Mar 2016
Posts: 176

Quote:
Originally Posted by CaptnWillie
Thanks, I've gotten that far. The Draw function is defined by the IControl class, but where are the fft and bufsize handles defined? I'm getting errors, so I'm wondering where they come from. Draw itself seems to work fine

You have to include fft.h and check that fft.c is recognized by the project (try adding it to the project inside your IDE). fft.h/c are included within the WDL folder (I'm using WDL-OL).



01-05-2017, 05:35 PM

#9

Human being with feelings
Join Date: Dec 2015
Posts: 164

Just a tip for the general idea of obtaining the frequency response:
Taking the FFT of an impulse response isn't ideal. It'll work fine for a filter with a short impulse response like this one, but the exact solution is to obtain the response from the coefficients directly. That solves the problem (for some filters) that the impulse response is too long and gets truncated by the FFT. You can take the FFT of the numerator coefficients and the denominator coefficients and divide. But it's fairly simple to evaluate the response on the unit circle in the z plane, and that has the advantage that you can pick exactly the frequency points that you want to evaluate, unlike the FFT, which gives a power-of-2 number of data points, evenly spaced. Picking frequencies makes things like log frequency plots much easier.
You can read about it here:
http://www.earlevel.com/main/2016/12...encyresponse/
In the case of the one-pole filter you linked to, "a = 0.99f; b = 1.f - a;", "a" corresponds to the feedback coefficient through the unit delay. The code rolls a minus sign at the summation into the coefficient, so it's really -0.99 if you use the classic direct form notation. "b" corresponds to the feedforward, in this case a value of 0.01.
You can plug these value into my frequency response widget to see what the response should look like:
http://www.earlevel.com/main/2016/12...ponsegrapher/
Because I use the convention that "a" is feedforward and "b" is feedback (read my latest article), you need to swap. And because the coefficients are assumed to be completely general (not necessarily normalized to the output), you need to include a value of 1 as the first b coefficient. So, put "0.01" in the left-hand field, "1.0, -0.99" into the right-hand field, and click the graph to accept them and draw it.



01-09-2017, 02:21 AM

#10

Human being with feelings
Join Date: Mar 2016
Posts: 176

Very nice, thank you for the amazing work! I'll try later with RBJ filters http://www.musicdsp.org/files/AudioEQCookbook.txt
Now: what if in my low-pass filter I've added the resonance suggested on Martin's blog http://www.martinfinke.de/blog/arti...ns013filter/ ?
At the moment my code is:
Code:
void LowPass_OnePole_SecondOrder::CalculateCoefficients() {
    a0 = 1.0 - exp(-2.0 * PI * mCutOff / mSampleRate);
    mFeedbackAmount = mQ + mQ / (1.0 - a0);
}

double LowPass_OnePole_SecondOrder::Process(double input) {
    mOut0 += a0 * (input - mOut0 + mFeedbackAmount * (mOut0 - mOut1));
    mOut1 += a0 * (mOut0 - mOut1);
    return mOut1;
}
Which a0/b0 values must be used? Is the feedback amount a1? I don't think so: that way your graph outputs strange things



01-09-2017, 11:50 AM

#11

Human being with feelings
Join Date: Dec 2015
Posts: 164

Quote:
Originally Posted by Nowhk
Very nice, thank you for the amazing work! I'll try later with RBJ filters http://www.musicdsp.org/files/AudioEQCookbook.txt
Now: what if in my low-pass filter I've added the resonance suggested on Martin's blog http://www.martinfinke.de/blog/arti...ns013filter/ ?
At the moment my code is:
Code:
void LowPass_OnePole_SecondOrder::CalculateCoefficients() {
    a0 = 1.0 - exp(-2.0 * PI * mCutOff / mSampleRate);
    mFeedbackAmount = mQ + mQ / (1.0 - a0);
}

double LowPass_OnePole_SecondOrder::Process(double input) {
    mOut0 += a0 * (input - mOut0 + mFeedbackAmount * (mOut0 - mOut1));
    mOut1 += a0 * (mOut0 - mOut1);
    return mOut1;
}
Which a0/b0 values must be used? Is the feedback amount a1? I don't think so: that way your graph outputs strange things

I don't understand what you are trying to do—you seem to be trying to extend the one-pole filter to second order? You don't get resonance with a single pole, it takes at least two. Martin's blog looks like he's using a state variable filter configuration (second order), which has resonance control. Fundamentally, it's a decent place to start for a synth filter (or any filter you might want to sweep using an LFO or envelope). However, the naive implementation is only stable from DC to about a sixth of the sample rate.
My website has C++ code for biquads—the coefficient calculations produce the same values as rbj's, just a different derivation. But if it's a synth filter you want, the direct form biquads are not a good way to go. You might want to start with Andy Simper's trapezoidal-integrated state variable filter:
https://cytomic.com/index.php?q=technicalpapers



01-10-2017, 02:26 AM

#12

Human being with feelings
Join Date: Mar 2016
Posts: 176

Quote:
Originally Posted by earlevel
I don't understand what you are trying to do—you seem to be trying to extend the onepole filter to second order? You don't get resonance with a single pole, it takes at least two. Martin's blog looks like he's using a state variable filter configuration (second order), which has resonance control.

Well, in fact what I'm doing is a VST FX with a 3-band EQ. Thus, I'm using 3 RBJ peak filters in series.
Since I'm a newbie at this stuff, I started using that basic one-pole filter above (which should be an EMA filter) to easily study it, apply it, take the FFT, and plot the frequency response graph.
Things later evolved, so I added another order to it (i.e. two EMAs in series) and added that resonance stuff (which, if I understand your words correctly, makes it the SVF filter written about by Martin).
That's all: I just asked how to place the coefficients of this "hybrid" SVF filter into your "evaluating the filter frequency response" code/article to get the exact response, but I think that's not possible with that filter. It was a sort of "homework" task, to learn more. Anyway, evaluating the frequency response of that SVF doesn't matter anymore if I'll use RBJ.
Quote:
Originally Posted by earlevel
But if it's a synth filter you want, the direct form biquads are not a good way to go. You might want to start with Andy Simper's trapezoidalintegrated state variable filter

Since it will be an FX/EQ, RBJ filters should be good for my purpose, right?
Quote:
Originally Posted by earlevel
My website has C++ code for biquads—the coefficient calculations produce the same values as rbj's, just a different derivation.

What do you mean by "just a different derivation"? Don't only the coefficients matter in the end? If they are equal, why should I use one or the other?



01-10-2017, 04:37 PM

#13

Human being with feelings
Join Date: Dec 2015
Posts: 164

Quote:
Originally Posted by Nowhk
Well, in fact what I'm doing is a VST FX with a 3-band EQ. Thus, I'm using 3 RBJ peak filters in series.
Since I'm a newbie at this stuff, I started using that basic one-pole filter above (which should be an EMA filter) to easily study it, apply it, take the FFT, and plot the frequency response graph.
Things later evolved, so I added another order to it (i.e. two EMAs in series) and added that resonance stuff (which, if I understand your words correctly, makes it the SVF filter written about by Martin).
That's all: I just asked how to place the coefficients of this "hybrid" SVF filter into your "evaluating the filter frequency response" code/article to get the exact response, but I think that's not possible with that filter. It was a sort of "homework" task, to learn more. Anyway, evaluating the frequency response of that SVF doesn't matter anymore if I'll use RBJ.

You were putting two one-pole filters in series with feedback? No, that's not the same as a state variable filter. And even if you used a state variable filter (two coefficients), you couldn't use the same coefficients as those calculated for a direct form biquad (five coefficients).
Quote:
Since it will be an FX/EQ, RBJ filters should be good for my purpose, right?

When you say you're using three rbj peak filters in series, I guess you mean that you're now using three direct form biquads? (I recommend direct form II transposed, if you want direct form with floating point calculations.) Along with rbj's calculations for the coefficients? Yes, that is suitable for the task.
Quote:
What do you mean by "just a different derivation"? Don't only the coefficients matter in the end? If they are equal, why should I use one or the other?

I mean exactly that—I have biquad coefficient calculations, rbj has different calculations. They only differ in the derivation—they produce the same numbers.



01-11-2017, 03:06 AM

#14

Human being with feelings
Join Date: Mar 2016
Posts: 176

Quote:
Originally Posted by earlevel
You were putting two one-pole filters in series with feedback? No, that's not the same as a state variable filter.

I see. So which kind of filter have I (uhm, in fact, Martin) made? A "hybrid" one-pole second-order filter?
Quote:
Originally Posted by earlevel
And even if you used a state variable filter (two coefficients), you couldn't use the same coefficients as those calculated for a direct form biquad (five coefficients).

Ok, I'll remember in the future that if I use one of these SVF filters I'll need to find a proper "evaluating filter frequency response" function
Quote:
Originally Posted by earlevel
When you say you're using three rbj peak filters in series, I guess you mean that you're now using three direct form biquads? (I recommend direct form II transposed, if you want direct form with floating point calculations.) Along with rbj's calculations for the coefficients? Yes, that is suitable for the task.

Not really sure what you're asking me here, hehe. I started learning filters 1 month ago, reading Understanding Digital Signal Processing (3rd Edition) as a support book. Since I'm using RBJ filters here, I think I'm using "Direct Form 1", right? Or at least, he seems to write that in his audio cookbook: "The most straight forward implementation would be the "Direct Form 1"
(Eq 2)". Is it a "bad" form for audio? Just curious: why doesn't he use direct form II transposed?
Quote:
Originally Posted by earlevel
I mean exactly that—I have biquad coefficient calculations, rbj has different calculations. They only differ in the derivation—they produce the same numbers.

Maybe that's related to what we were saying above? That you are using direct form II transposed instead of direct form 1 (as RBJ does)?
Thanks for all the help/replies/support you are giving me. I'm learning a lot from your suggestions!



01-11-2017, 11:49 AM

#15

Human being with feelings
Join Date: Dec 2015
Posts: 164

Quote:
Originally Posted by Nowhk
Since I'm using RBJ filters here, I think I'm using "Direct Form 1", right? Or at least, he seems to write that in his audio cookbook: "The most straight forward implementation would be the "Direct Form 1"
(Eq 2)". Is it a "bad" form for audio? Just curious: why doesn't he use direct form II transposed?
Maybe that's related to what we were saying above? That you are using direct form II transposed instead of direct form 1 (as RBJ does)?

Robert and I are old 56K DSP programmers, which is done in assembly language and with a matching pair of specialized, fixed-point accumulators. It's most convenient and efficient to do your work in a single accumulator, and these accumulators have extra bits to allow temporary overflow to accommodate DSP functions like filtering. They are built to do efficient FIRs and direct form I filters. The fixed-point math produces exact results (24-bit samples and coefficients, 56-bit accumulator), even though you'll probably truncate to 24 bits when the operation is completed. Note that the block diagram of a direct form I filter has exactly one summation—this matches well with a single accumulator. (The 56k has two, but you use one at a time.)
Floating point is different, and high-level languages too. You essentially have unlimited accumulators with C, and floating point numbers and calculations are approximations, in general. So, you have a different set of priorities. It's not important to use a single accumulator (summing variable). Instead, the priority is to reduce the inherent floating point error, and minimize resources. Direct form I uses two sets of delay elements, four total. So, let's use direct form II, and drop that to two elements (this is the "canonical" form—the minimum number of delay elements). But if we transpose the filter, it has better floating point characteristics, so transposed direct form II is better. (By better characteristics, I mean that floating point is more accurate when adding numbers of similar size.)
So, Robert is making that suggestion based on his experience. Direct form I will work fine with floating point, but you're using double the delay elements needlessly.



01-11-2017, 12:04 PM

#16

Human being with feelings
Join Date: Dec 2015
Posts: 164

Quote:
Originally Posted by Nowhk
I see. So which kind of filter have I (uhm, in fact, Martin) made? A "hybrid" one-pole second-order filter?

A one-pole filter can only be first order (two-pole second order, etc.). I think you're seeing the delay elements with feedback in Martin's filter and thinking that they are two one-pole filters with feedback. They aren't—they are integrators.
Here, I found a block diagram of an analog state variable filter:
http://www.electronicstutorials.ws/...102.gif?x98918
See how it's summing bandpass and lowpass feedback? Here's the basic digital version, from my site:
http://www.earlevel.com/main/2003/03...riablefilter/
Note that the lowpass feedback is fixed at 1, and bandpass feedback controls the peaking (resonance).
Anyway, I haven't seen your modified filter, maybe it's the same as Martin's, maybe not—I'm just responding to your comments.
This state variable filter (generally attributed to Hal Chamberlin, from his book Musical Applications of Microprocessors) works pretty well, but only to about a sixth of the sample rate. I think I mentioned Andy Simper's trapezoidal-integrated version that is far better, and only slightly more complicated. If you really want a synth filter, start with that.



01-12-2017, 04:25 AM

#17

Human being with feelings
Join Date: Mar 2016
Posts: 176

Quote:
Originally Posted by earlevel
Robert and I are old 56K DSP programmers, which is done in assembly language and with a matching pair of specialized, fixed-point accumulators. It's most convenient and efficient to do your work in a single accumulator, and these accumulators have extra bits to allow temporary overflow to accommodate DSP functions like filtering. They are built to do efficient FIRs and direct form I filters. The fixed-point math produces exact results (24-bit samples and coefficients, 56-bit accumulator), even though you'll probably truncate to 24 bits when the operation is completed. Note that the block diagram of a direct form I filter has exactly one summation—this matches well with a single accumulator. (The 56k has two, but you use one at a time.)
Floating point is different, and high-level languages too. You essentially have unlimited accumulators with C, and floating point numbers and calculations are approximations, in general. So, you have a different set of priorities. It's not important to use a single accumulator (summing variable). Instead, the priority is to reduce the inherent floating point error, and minimize resources. Direct form I uses two sets of delay elements, four total. So, let's use direct form II, and drop that to two elements (this is the "canonical" form—the minimum number of delay elements). But if we transpose the filter, it has better floating point characteristics, so transposed direct form II is better. (By better characteristics, I mean that floating point is more accurate when adding numbers of similar size.)
So, Robert is making that suggestion based on his experience. Direct form I will work fine with floating point, but you're using double the delay elements needlessly.

I have to be honest with you: unfortunately I don't quite catch what you are trying to tell me in the reply above. I'm not so expert right now. But I trust you: if TDF2 is "better" in C++, I think I'll use it.
But in noob words: is it "better" because there are fewer rounding errors? You said that both (your biquad, which I presume is TDF2, and the DF1 by RBJ) produce the same coefficient values; so if they are the same, no (different) rounding errors will occur. Or is it just a matter of faster CPU calculations/memory allocations? Excuse my ignorance...
Quote:
Originally Posted by earlevel
Anyway, I haven't seen your modified filter, maybe it's the same as Martin's, maybe not—I'm just responding to your comments.

I didn't modify any filter; I'm really just using the one by Martin. I called it "second order" (which seems to be wrong at this point, sorry) because I stacked two one-pole filters (first order, thus 6 dB/octave) in series (becoming 12 dB/octave):
Code:
buf0 += calculatedCutoff * (inputValue - buf0 + feedbackAmount * (buf0 - buf1));
buf1 += calculatedCutoff * (buf0 - buf1);
What would you call this filter?



01-12-2017, 04:14 PM

#18

Human being with feelings
Join Date: Dec 2015
Posts: 164

Quote:
Originally Posted by Nowhk
I have to be honest with you: unfortunately I don't quite catch what you are trying to tell me in the reply above. I'm not so expert right now. But I trust you: if TDF2 is "better" in C++, I think I'll use it.

It's not important, but I'll give a try at explaining why the calculation order can be important in floating point. First, try a biquad calculator (like mine, here: http://www.earlevel.com/main/2013/10...calculatorv2/). Put in a very small frequency value, like 0.01. See how the a (feedforward) coefficients are tiny numbers (e-12, e-13)? And the feedback numbers are much greater in magnitude (1, 2)? Floating point has two major flaws. One is that while a multiply generates basically double the number of significant digits (.9 x .9 = .81), the floating point result is always the same size (a 32-bit float times a 32-bit float gives a 32-bit float result). It just discards the extra bits (floating point numbers are approximations, in general). Also, when you add two numbers, you lose precision. If you add two numbers of similar magnitude, you might lose one bit (least significant), but the real problem is when they are very far apart. Imagine a theoretical decimal-based processor that allowed three digits of mantissa. So, you could have 1.23, 123000, 0.123, 0.00000123 (but you could not have 0.1234—it would just truncate to 0.123). If you add 123000 + 0.0123, the result is 123000 (the exact result is 123000.0123, but since the format can't store that, you lose those least significant digits). Now, let's say you have this calculation:
x + y - x
where x is 123000, and y is 0.123, in our hypothetical floating point system. If you execute the equation in the order given, the result is 0. That's off by 0.123. But if you rearrange it:
x - x + y
the result is 0.123—the exact solution.
Transposing direct form II gives intermediate results that are closer numbers, less error.
Quote:
But in noob words: is it "better" because there are fewer rounding errors? You said that both (your biquad, which I presume is TDF2, and the DF1 by RBJ) produce the same coefficient values; so if they are the same, no (different) rounding errors will occur. Or is it just a matter of faster CPU calculations/memory allocations? Excuse my ignorance...

You're conflating two differences–coefficient calculation, and filter structure.
Robert suggests direct form I, for fixed point DSP and for floating point. It's not a bad suggestion—I agree 100% with the fixed point DSP format. For floating point, we can get rid of half the delay elements, so direct form II is the better way to go. But we want the transposed DFII for better numerical qualities. Fixed and float math have different strengths and weaknesses—DFI matches up well with fixed (particularly the augmented accumulator implementations of DSPs like the 56k), TDFII matches best for float.
As for the coefficient differences, Robert likes to break tan into sin and cos. His reasoning is less error in a fixed point processor (56k was the target of his original derivation) near 0. The difference is not meaningful for modern floating point processors. Either method produces accurate coefficients.
Quote:
I didn't modify any filter; I'm really just using the one by Martin. I called it "second order" (which seems to be wrong at this point, sorry) because I stacked two one-pole filters (first order, thus 6 dB/octave) in series (becoming 12 dB/octave):
Code:
buf0 += calculatedCutoff * (inputValue - buf0 + feedbackAmount * (buf0 - buf1));
buf1 += calculatedCutoff * (buf0 - buf1);
What would you call this filter?

Er, "broke" comes to mind. I even coded it real quick in MathCad to make sure I wasn't missing something. It has output resembling lowpass at some settings, passband gain varies at different settings, it blows up at others, and you can see that the feedback is dependent on cutoff, so it's not even parameterized like you'd want it. But it's pretty close, architecturally, to a miswired Chamberlin state variable. Note that the SVF is not two series one-pole filters with feedback—those are integrators (one-pole lowpass filters are sometimes called "leaky integrators", due to the multipliers that bleed the integrators—the SVF has no such internal multiplies).
http://www.earlevel.com/main/2003/03...riablefilter/
Last edited by earlevel; 01-12-2017 at 11:08 PM.
Reason: typos



01-13-2017, 07:06 AM

#19

Human being with feelings
Join Date: Mar 2016
Posts: 176

Thanks for the replies you are giving to me, awesome!!!
Quote:
Originally Posted by earlevel
I'll give a try at explaining why the calculation order can be important in floating point.

Now I've got what you mean. And since I'm using C++ and floating-point math (double/float), I see why TDF2 will produce the same result using fewer delays and fewer errors! Even if, with modern processors, DF1 should also be very accurate, as you stated above.
So I've taken your Biquad C++ code and written my own C++ class. It worked like a charm on the first try! Very nice. Now I'll try your evaluating-frequency-response code with this TDF2, avoiding the impulse->FFT method (which I've noticed also introduces lots of ripple at higher Q).
However, that's not the end: I have some further questions about your article (which I hope could also improve your post):
1 - You wrote: "For this filter, we’ll have a filter type (lowpass, highpass, bandpass, notch, peak, lowshelf, and highshelf), frequency, Q, and peak gain (used only for the low and high shelf types)." I think that's a typo, because you also use peak gain for the Peak filter. Or maybe it was just taken for granted
2 - Does it automatically work if the sample rate is 48000 Hz? Or should I change something in the filter itself? You only take care of the sample rate when passing the freq (dividing by the sample rate).
3 - Q is between 0.1 and 10.0. With a Peak filter I can just write 0.1-10.0 on my plugin knob label. But when using it as a bandwidth filter, it would be more suitable to write octaves as the label. Which conversion do you suggest? Would the Rane conversion mirror the math of your filter?
4 - If I separately process the left/right signal through the filter, I need to create 2 filters instead of one. But since they are filtering with the same parameters, instead of calculating coefficients twice, does it make sense (after calculating the coefficients of pFilterLeft) to just take its coefficient values and copy them to pFilterRight? Or should I also reset some buffer? Something like this:
Code:
void Biquad_TDF2::CopyCoefficients(Biquad_TDF2 &src, Biquad_TDF2 &dest) {
    dest.a0 = src.a0;
    dest.a1 = src.a1;
    dest.a2 = src.a2;
    dest.b1 = src.b1;
    dest.b2 = src.b2;
}
5 - What if I also need an all-pass filter? RBJ gives one in his DF1 implementation. Maybe you can complete yours with this one too
Again, thank you very much man!
Last edited by Nowhk; 01-13-2017 at 07:29 AM.



01-13-2017, 11:40 AM

#20

Human being with feelings
Join Date: Dec 2015
Posts: 164

Quote:
Originally Posted by Nowhk
1 - You wrote: "For this filter, we’ll have a filter type (lowpass, highpass, bandpass, notch, peak, lowshelf, and highshelf), frequency, Q, and peak gain (used only for the low and high shelf types)." I think that's a typo, because you also use peak gain for the Peak filter. Or maybe it was just taken for granted

Good catch, fixed.
Quote:
2 - Does it automatically work if the sample rate is 48000 Hz? Or should I change something in the filter itself? You only take care of the sample rate when passing the freq (dividing by the sample rate).

Everything is relative (to the sample rate) in filters. I always use "normalized frequency"—1.0 is the sample rate, 0.5 is the Nyquist frequency, 0.0 is DC, etc.
Quote:
3 - Q is between 0.1 and 10.0. With a Peak filter I can just write 0.1-10.0 on my plugin knob label. But when using it as a bandwidth filter, it would be more suitable to write octaves as the label. Which conversion do you suggest? Would the Rane conversion mirror the math of your filter?

I'd suggest looking at other EQ plugins. I think you'll generally find Q in audio, though. Think about it...you have a filter with bandwidth 100 Hz. Is that wide, or narrow? If the center is at 6 kHz, it's narrow. If it's at 120 Hz, it's huge. And if you're sliding the center frequency around, looking to take out a resonance, perhaps, you don't want the sharpness of that filter changing as you move it around.
Quote:
4 - If I separately process the left/right signal through the filter, I need to create 2 filters instead of one. But since they are filtering with the same parameters, instead of calculating coefficients twice, does it make sense (after calculating the coefficients of pFilterLeft) to just take its coefficient values and copy them to pFilterRight? Or should I also reset some buffer?...

I suggest a stereo (or multichannel) filter. The filter would have just one set of coefficients (and obviously one set of methods to set frequency/Q/type), but two (or more) sets of delay pairs. You could have a separate left/right process routine, or a single one that accepts an index (or both). Your interface would treat it as a single filter, but your audio processing routine would view it as two processes.
Quote:
5 - What if I also need an all-pass filter? RBJ gives one in his DF1 implementation. Maybe you can complete yours with this one too

Yes, I omitted it partly because I was keeping things simple and not bringing phase into the picture (and plotting it, etc.). Some day.



01-14-2017, 04:27 AM

#21

Human being with feelings
Join Date: Mar 2016
Posts: 176

Quote:
Originally Posted by earlevel
Everything is relative (to the sample rate) in filters. I always use "normalized frequency"—1.0 is the sample rate, 0.5 is the Nyquist frequency, 0.0 is DC, etc.

Nice, so we are already ok!
Quote:
Originally Posted by earlevel
Is that wide, or narrow? If the center is at 6 kHz, it's narrow. If it's at 120 Hz, it's huge. And if you're sliding the center frequency around, looking to take out a resonance, perhaps, you don't want the sharpness of that filter changing as you move it around.

Uhm, but that should also be true for Q (since it's fc / (f2 - f1), considered at -3 dB from fc). It's a matter of the "log" graphics problem, which I think is compensated for by the graph plotting. No?
Quote:
Originally Posted by earlevel
I suggest a stereo (or multichannel) filter. The filter would have just one set of coefficients (and obviously one set of methods to set frequency/Q/type), but two (or more) sets of delay pairs. You could have a separate left/right process routine, or a single one that accepts an index (or both). Your interface would treat it as a single filter, but you audio processing routine would view it as two processes.

Not sure (now that I've finally implemented the Biquad TDF2 filter) if I feel ready to immediately learn a new filter (stereo, as you said). My main idea was:
Code:
pFilter_Band1_Left->CalculateCoefficients();
pFilter_Band1_Right->CopyCoefficients(pFilter_Band1_Left, pFilter_Band1_Right);
...
outputLeft = pFilter_Band1_Left->Process(left) * gain;
outputRight = pFilter_Band1_Right->Process(right) * gain;
Is it so "terrible" in terms of wasting resources? Maybe I can easily add a stereo mode to your Biquad implementation? (I don't think so)
Quote:
Originally Posted by earlevel
Yes, I omitted it partly because I was keeping things simple and not bringing phase into the picture (and plotting it, etc.). Some day.

Nice, no worries Thank you again!



01-14-2017, 12:21 PM

#22

Human being with feelings
Join Date: Dec 2015
Posts: 164

Quote:
Originally Posted by Nowhk
Uhm, but that should also be true for Q (since it's fC/(f2 - f1), measured at -3 dB around fC). It's a matter of the graph's "log" scale, which I think is compensated for by the plotting. No?

What I'm getting at is audio utility. A given Q sounds relatively the same regardless of frequency. Bandwidth does not. I'm not talking about the relative merits in plotting. Note that analog parametric EQs on mixing boards aren't set by bandwidth. Maybe we're talking about different things.
Quote:
Not sure (now that I've finally implemented the Biquad TDF2 filter) if I feel ready to immediately learn a new filter (stereo, as you said). My main idea was...
Is it so "terrible" in terms of wasting resources? Maybe can I easily add a stereo mode within your Biquad implementation? (I don't think so)

Of course it's not terrible—a big mistake is worrying about getting things just right; better to get things done, good enough for the job.
But just to show you what I mean, take a look at my biquad source code at http://www.earlevel.com/main/2012/11...csourcecode/
Here I'll change from using z1, z2 to z1L, z2L, z1R, z2R—left and right sets of delays. I'll change from using one process method to a left and right version. This is a direct replacement for the corresponding code in the .h file:
Code:
class Biquad {
public:
    Biquad();
    Biquad(int type, double Fc, double Q, double peakGainDB);
    ~Biquad();
    void setType(int type);
    void setQ(double Q);
    void setFc(double Fc);
    void setPeakGain(double peakGainDB);
    void setBiquad(int type, double Fc, double Q, double peakGain);
    float processL(float in); //*** changed
    float processR(float in); //*** changed

protected:
    void calcBiquad(void);

    int type;
    double a0, a1, a2, b1, b2;
    double Fc, Q, peakGain;
    double z1L, z2L; //*** changed
    double z1R, z2R; //*** changed
};

inline float Biquad::processL(float in) { //*** changed
    double out = in * a0 + z1L;
    z1L = in * a1 + z2L - b1 * out;
    z2L = in * a2 - b2 * out;
    return out;
}

inline float Biquad::processR(float in) { //*** changed
    double out = in * a0 + z1R;
    z1R = in * a1 + z2R - b1 * out;
    z2R = in * a2 - b2 * out;
    return out;
}
In the .cpp file, there are two initialization lines of "z1 = z2 = 0;". Replace them with "z1L = z2L = z1R = z2R = 0;". The biquad is now stereo. The only difference is that you'll use myFilter->processL on the left sample, myFilter->processR on the right. Of course, you could make a single process function with an index value instead of two process functions, and easily extend it to more than two channels. All channels have the same filter settings. (And yes, that's transposed direct form II.)
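For the "single process function with an index" variant, a minimal sketch (class and member names here are my own invention, not from earlevel's library) could look like this, reusing the same TDF2 update:

```cpp
#include <cassert>

// Sketch: one shared coefficient set, one delay pair per channel (TDF2).
// MultiBiquad and kMaxChannels are illustrative names, not earlevel's API.
class MultiBiquad {
public:
    static const int kMaxChannels = 2;
    MultiBiquad() : a0(1.0), a1(0.0), a2(0.0), b1(0.0), b2(0.0) {
        for (int c = 0; c < kMaxChannels; c++)
            z1[c] = z2[c] = 0.0;
    }
    // Same transposed direct form II update as processL/processR,
    // but the channel index selects which delay pair to use.
    double process(double in, int ch) {
        double out = in * a0 + z1[ch];
        z1[ch] = in * a1 + z2[ch] - b1 * out;
        z2[ch] = in * a2 - b2 * out;
        return out;
    }
    double a0, a1, a2, b1, b2; // shared by all channels
private:
    double z1[kMaxChannels], z2[kMaxChannels];
};
```

In the audio loop you'd call `filter.process(left, 0)` and `filter.process(right, 1)`; extending to more channels just means enlarging the two delay arrays.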



01-16-2017, 03:28 AM

#23

Human being with feelings
Join Date: Mar 2016
Posts: 176

Quote:
Originally Posted by earlevel
What I'm getting at is audio utility. A given Q sounds relatively the same regardless of frequency. Bandwidth does not. I'm not talking about the relative merits in plotting. Note that analog parametric EQs on mixing boards aren't set by bandwidth. Maybe we're talking about different things.

Sorry, I don't get the point here if you are talking about "audio". Both Q and BW should sound the same (as audio) at any fC.
You said: "you have a filter with bandwidth 100 Hz. Is that wide, or narrow? If the center is at 6 kHz, it's narrow. If it's at 120 Hz, it's huge".
But narrow/huge refers to the plot (visually), not the audio. In both cases, the audio cut should always be 100 Hz. Maybe, since it's log, the cut will be more or less noticeable, but it's always 100 Hz. Why is it not the same for Q? deltaF (f2 - f1 at -3 dB) is always fixed at 100 Hz as well :O
For example: in Kontakt EQs, the "Q" is expressed as BW (OC, octaves). The plot (and the audio as well) won't become "narrow/huge" if I change fC:
Quote:
Originally Posted by earlevel
Here I'll change from using z1, z2 to z1L, z2L, z1R, z2R—left and right sets of delays. I'll change from using one process method to a left and right version. This is a direct replacement for the corresponding code in the .h file
In the .cpp file, there are two initialization lines of "z1 = z2 = 0;". Replace them with "z1L = z2L = z1R = z2R = 0;". The biquad is now stereo. The only difference is that you'll use myFilter->processL on the left sample, myFilter->processR on the right. Of course, you could make a single process function with an index value instead of two process functions, and easily extend it to more than two channels. All channels have the same filter settings. (And yes, that's transposed direct form II.)

Awesome! It has the same computational complexity as my example, but it will save memory since I'll use just a single instance of the Filter object. Thank you very much.
Last edited by Nowhk; 01-16-2017 at 09:18 AM.



01-16-2017, 10:59 AM

#24

Human being with feelings
Join Date: Dec 2015
Posts: 164

I didn't catch that you were talking about octaves, I thought you were asking about bandwidth in Hz. You actually said, "would be more suitable write octave in hz as label". I don't know what you mean by octave in Hz. I think you mean just bandwidth in octaves. My filters use Q.



01-16-2017, 11:27 AM

#25

Human being with feelings
Join Date: Mar 2016
Posts: 176

Quote:
Originally Posted by earlevel
I didn't catch that you were talking about octaves, I thought you were asking about bandwidth in Hz. You actually said, "would be more suitable write octave in hz as label". I don't know what you mean by octave in Hz. I think you mean just bandwidth in octaves. My filters use Q.

Oh sorry, my fault. I meant octaves, which are related to frequencies (i.e. Hz). That's all
I see your filter uses Q, thus the question remains: is there a suitable conversion from BW (in octaves) to Q (as linear) that works nicely with your filter?



01-16-2017, 12:28 PM

#26

Human being with feelings
Join Date: Dec 2015
Posts: 164

Yes, the Rane conversion you cited should work.



01-17-2017, 08:46 AM

#27

Human being with feelings
Join Date: Mar 2016
Posts: 176

Quote:
Originally Posted by earlevel
Yes, the Rane conversion you cited should work.

Perfect. If anybody else is interested, the formula is easy (from here, a simpler formula by RBJ):
Code:
double BandwidthToQ(double value) {
    value = pow(2.0, value);
    return sqrt(value) / (value - 1);
}
You give the BW value as input (i.e. 3.0 = 3 octaves) and it returns the Q for your filters.
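For a quick sanity check of the conversion, here are a couple of reference points (the snippet just restates the function above in a self-contained form; I verified the two values against the formula itself):

```cpp
#include <cassert>
#include <cmath>

// Rane/RBJ conversion restated: Q = sqrt(2^BW) / (2^BW - 1).
double BandwidthToQ(double bw) {
    double v = std::pow(2.0, bw);
    return std::sqrt(v) / (v - 1.0);
}
// 1 octave  -> Q = sqrt(2)/1 ~= 1.414
// 2 octaves -> Q = 2/3       ~= 0.667
```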
I'm testing your filter right now. If I run into some trouble I'll let you know.
Thanks again for the HUGE help you are giving me <3



01-19-2017, 07:50 AM

#28

Human being with feelings
Join Date: Mar 2016
Posts: 176

Here we go, with another question (if you can help me again)
When I cascade 2 (or 3) of your Biquad TDF2 filters (making something like a 3-band EQ), how would you evaluate the filter frequency response?
Tale did something similar in his plugin using RBJ filters.
The "# direct evaluation of coefficients at a given angular frequency" will change, I guess? Do I just need to stack them (so coefs[] holds the coefficients of the 3 filters)?



01-19-2017, 11:18 AM

#29

Human being with feelings
Join Date: Dec 2015
Posts: 164

Quote:
Originally Posted by Nowhk
When I cascade 2 (or 3) of your Biquad TDF2 filters (making something like a 3-band EQ), how would you evaluate the filter frequency response?
Tale did something similar in his plugin using RBJ filters.
The "# direct evaluation of coefficients at a given angular frequency" will change, I guess? Do I just need to stack them (so coefs[] holds the coefficients of the 3 filters)?

First, I just want to reiterate that "[my] Biquad TDF2" and "RBJ filter" are essentially indistinguishable in this conversation. You probably understand this, but I just want to make sure we're framing this right. We're just talking about direct form filters, which have a particular set of coefficients.
That said, when you have two filters in series, the transfer functions multiply. Frequency responses multiply.
Think about it in terms of frequency response: Say that you have two lowpass filters. Both have unity gain at 100 Hz. The frequency response there would be 1 x 1 = 1. And let's say that both had a magnitude response of 0.5 at 2000 Hz. You'd get 0.5 x 0.5 = 0.25. No surprises—if one filter cuts the response in half, and the second cuts that in half, it's now a quarter—this is exactly what you'd expect, but this little exercise gives us a little confidence we're on the right track.
So, for a biquad, the transfer function is H(z) = (a0 + a1*z^-1 + a2*z^-2) / (b0 + b1*z^-1 + b2*z^-2). Note that the z terms range from z^0 to z^-2 for a biquad. For two biquads with different coefficients, and therefore two different transfer functions, you'd have to multiply the two together, collect all those terms, and simplify so that you end up with z terms from z^0 to z^-4, and use those.
More practically, just evaluate the response of each biquad and multiply those responses together.
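A sketch of the "evaluate each and multiply" route (function names are mine; the coefficient convention assumed here is numerator a0..a2 over denominator 1 + b1 z^-1 + b2 z^-2, matching the biquad code above):

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

// Response of one biquad at normalized angular frequency w (radians/sample),
// i.e. H(e^{jw}) with numerator a0..a2 and denominator 1 + b1 z^-1 + b2 z^-2.
std::complex<double> biquadResponse(double w, const std::array<double, 5>& c) {
    std::complex<double> zm1 = std::polar(1.0, -w); // z^-1 on the unit circle
    std::complex<double> zm2 = zm1 * zm1;           // z^-2
    return (c[0] + c[1] * zm1 + c[2] * zm2) / (1.0 + c[3] * zm1 + c[4] * zm2);
}

// Cascade: the complex responses multiply; take the magnitude at the end.
double cascadeMagnitude(double w,
                        const std::vector<std::array<double, 5>>& stages) {
    std::complex<double> h(1.0, 0.0);
    for (const auto& s : stages)
        h *= biquadResponse(w, s);
    return std::abs(h);
}
```

With two stages that each pass 0.5 of the signal, the cascade magnitude comes out 0.25, matching the 0.5 x 0.5 example above.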



01-19-2017, 12:17 PM

#30

Human being with feelings
Join Date: Mar 2016
Posts: 176

Quote:
Originally Posted by earlevel
More practically, just evaluate the response of each biquad and multiply those responses together.

So, basically, do I only need to multiply the "magdB" (calculated at each frequency I'm looking at) of each filter?
That looks easy. I'll try. Nice again dude!



01-19-2017, 12:39 PM

#31

Human being with feelings
Join Date: Dec 2015
Posts: 164

Quote:
Originally Posted by Nowhk
So, basically, do I only need to multiply the "magdB" (calculated at each frequency I'm looking at) of each filter?

Well, if it's dB then it's a logarithm, and adding log magnitudes is the equivalent of multiplying magnitudes (take my 0.5 x 0.5 = 0.25 example—equivalent to -6 dB + -6 dB = -12 dB). Or multiply the complex results of the frequency responses, then take magnitude and phase.
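The equivalence is just log arithmetic; a tiny helper pair (my own hypothetical names) makes it concrete:

```cpp
#include <cassert>
#include <cmath>

// dB <-> linear magnitude. Multiplying magnitudes adds their dB values,
// since log(a * b) = log(a) + log(b).
double toDb(double mag)  { return 20.0 * std::log10(mag); }
double fromDb(double db) { return std::pow(10.0, db / 20.0); }
```

toDb(0.5) is about -6.02 dB, so summing it twice gives about -12.04 dB, the same as toDb(0.25).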



01-27-2017, 03:17 AM

#32

Human being with feelings
Join Date: Mar 2016
Posts: 176

Quote:
Originally Posted by earlevel
Well, if it's dB then it's a logarithm, and adding log magnitudes is the equivalent of multiplying magnitudes (take my 0.5 x 0.5 = 0.25 example—equivalent to -6 dB + -6 dB = -12 dB). Or multiply the complex results of the frequency responses, then take magnitude and phase.

Awesome man!!! It works perfectly, summing magdB. It looks like an almost finished task for me right now.
I've only noticed that sometimes it "messes up" the response graph, drawing noise merged with the response, when I move the knobs (fc, Q or gain). I notice this more if I automate the parameters (even at low speed, such as a 1 Hz LFO).
Not sure if this also affects the audio or it's just a Cairo collateral effect.
Did you ever notice this, maybe?
Thanks again for the huge help, as usual



01-31-2017, 01:42 AM

#33

Human being with feelings
Join Date: Mar 2016
Posts: 176

Hey boss, may I ask a further question?
Usually, when I change/filter the signal in some way, I apply a sort of smoothing to the knob/modulation source.
Otherwise, when the GUI param value changes, I can hear clicks in the audio.
I think you know what I'm talking about: smoothing a param. I'm using a pretty basic (one-pole) filter in my VST plugin, which processes this way:
Code:
inline double Process(double input) {
    z1L += a0 * (input - z1L);
    return z1L;
}
Using a0 = 0.01 gives me a nice smooth.
Well, my question is: what about changing the Freq/Q/Gain of your Biquad filter? Do I need to apply 3 different param smoothers, one for each knob?
That won't really be a problem. The problem is that I'd need to calculate coefficients for every "slight" smoothed value.
For example: if I change the gain from 0.6 to 0.7, it will calculate coefficients for the next 100 samples, stressing the CPU heavily.
How do you usually deal with this? It's not a Biquad filter problem, I know: it's more a common DSP problem. But I believe you are the right person to ask this right here
Can you help me?
Thanks again!



01-31-2017, 02:54 PM

#34

Human being with feelings
Join Date: Dec 2015
Posts: 164

The solution is pretty much as you guessed, in general—smooth the parameters by processing their filters and calculating fresh coefficients. If that's a problem, there are things you can do to lower the processing required. You can lower the "control rate" by updating the smoothing filters and calculations every 10 samples, for instance. Or, use a parameterized filter—where the frequency, Q, and gain don't interact (a state variable filter, or a Moog-style filter made of one-poles and feedback). First, that means you don't need to recalculate Q when changing only the frequency. Also, it makes it more practical to smooth the coefficients instead of smoothing the parameters and calculating coefficients every time. (That is, when the frequency knob moves, calculate the new frequency coefficient and run that through a smoother that's processed in the audio routine—then use the smoothed coefficient output directly, avoiding the calculation at audio rate.)
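A skeleton of the lowered "control rate" idea (all names here are mine, and the smoother is the same one-pole form as the Process snippet earlier in the thread):

```cpp
#include <cassert>

// Sketch: advance the parameter smoother and flag a coefficient
// recalculation only once every kCtrlRate samples.
struct SmoothedParam {
    double target  = 2000.0; // where the GUI knob currently sits
    double current = 1000.0; // smoothed value fed to the filter
    double coef    = 0.1;    // one-pole smoothing coefficient (at control rate)
    int    counter = 0;
    static const int kCtrlRate = 10;

    // Call once per sample; returns true when the filter coefficients
    // should be recalculated (i.e. every kCtrlRate samples).
    bool tick() {
        if (counter-- > 0)
            return false;
        counter = kCtrlRate - 1;
        current += coef * (target - current);
        return true;
    }
};
```

In the audio loop you'd write something like `if (fc.tick()) filter.setFc(fc.current);` before processing each sample, so the biquad recalculation runs at 1/10th of the sample rate.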



02-03-2017, 06:05 AM

#35

Human being with feelings
Join Date: Mar 2016
Posts: 176

Thanks again for the support and replies.
Quote:
Originally Posted by earlevel
Also, it makes it more practical to smooth the coefficients instead of smoothing the parameters and calculating coefficients every time. (That is, when the frequency knob moves, calculate the new frequency coefficient and run that through a smoother that's processed in the audio routine—then use the smoothed coefficient output directly, avoiding the calculation at audio rate.)

Well, can I do it with the Biquad filter? I mean: can I just smooth the calculated coefficients (a0 to b2) for the Biquad filter too? Or does it work (for some mathematical reason) only with the SVF or Moog-style filters you suggested?
If I can do it, I'll smooth only the coefficients, at a 1/8 control rate (i.e. every 8 samples).
Last edited by Nowhk; 02-03-2017 at 06:11 AM.



02-03-2017, 01:07 PM

#36

Human being with feelings
Join Date: Dec 2015
Posts: 164

Quote:
Originally Posted by Nowhk
Well, can I do it with the Biquad filter? I mean: can I just smooth the calculated coefficients (a0 to b2) for the Biquad filter too? Or does it work (for some mathematical reason) only with the SVF or Moog-style filters you suggested?
If I can do it, I'll smooth only the coefficients, at a 1/8 control rate (i.e. every 8 samples).

Yes, you can smooth the coefficients. You can find research on the subject online, but basically, if you smooth the coefficients linearly, you're guaranteed not to become unstable (that is, at a given point in time the filter may not have exactly the expected frequency response, but it won't blow up). The coefficient curves themselves aren't linear, but in a nutshell it will probably work for your purposes. You can cut down the number of smoothing operations depending on the filter type. For instance, a lowpass filter always has a0 = a2, so you can skip one smoothing operation.
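A sketch of linear coefficient smoothing (names are mine; five coefficients as in the biquad earlier in the thread, without the a0 = a2 shortcut, for clarity):

```cpp
#include <cassert>
#include <cmath>

// Ramp each of the five biquad coefficients linearly from its current
// value to a new target over a fixed number of samples. Illustrative only.
struct CoefRamp {
    double cur[5] = {1.0, 0.0, 0.0, 0.0, 0.0}; // a0, a1, a2, b1, b2
    double inc[5] = {0.0, 0.0, 0.0, 0.0, 0.0};
    int    remaining = 0;

    void setTarget(const double (&tgt)[5], int steps) {
        for (int i = 0; i < 5; i++)
            inc[i] = (tgt[i] - cur[i]) / steps;
        remaining = steps;
    }
    void tick() { // call once per sample, before the filter update
        if (remaining <= 0)
            return;
        for (int i = 0; i < 5; i++)
            cur[i] += inc[i];
        remaining--;
    }
};
```

Between control-rate updates the filter runs on intermediate coefficients; per the stability note above, the response in between isn't exactly what the parameters say, but the filter won't blow up.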



