Wednesday, 30 July 2014

65 days of static @ sub89 Reading with Nordic Giants

Thank you to my wonderful wife for getting me tickets and coming along (against her own taste) to see 65dos yesterday in Reading. Great to hear so much of Wild Light, my favourite album of the year and fantastic to see live. Full of guitars and synths.

Set list according to setlist.fm
  1. AOD  (encore) 
Support was from Nordic Giants, who I also enjoyed immensely... check them out. They were selling their own branded ale as merch, and there was great video and anime supporting material. My wife did initially wonder what we'd walked into with the headmasks and strobe/drumming.

Tuesday, 29 July 2014

Digital Filters in C#/Silverlight (Part 8 - Bessel)

I didn't want to finish this little digression into digital filtering without taking a look at the last of the 'classic' analog filters, so here we are. Bessel filters are slightly different from the other analog filters we've looked at so far in a couple of respects.

Firstly, the digital implementation isn't designed to mimic the amplitude response of the analog filter, because Bessel designs are intended to give an acceptable amplitude response whilst maintaining the most linear phase and delay characteristics possible.

Secondly, while the polynomial used to define these filters is easy to compute, there isn't a fixed algorithm for generating the poles as we have seen with the other filters. Let's take a look at the polynomial bit first. We're not going to go into a deep maths dive here, although the Bessel polynomials and related Bessel functions are a fascinating topic (Bessel functions are used to model the deformations of vibrating 2D membranes, like a drum or tabla for example). The Bessel polynomials look like this for the first few orders:

$$ \begin{align}
y_1(x) & = x + 1 \\
y_2(x) & = 3x^2 + 3x + 1 \\
y_3(x) & = 15x^3 + 15x^2 + 6x + 1 \\
y_4(x) & = 105x^4 + 105x^3 + 45x^2 + 10x + 1 \\
\end{align} $$

However, in electronic engineering we more usually work with what is known as the reverse Bessel polynomial, which has the same coefficients in reverse order and looks like this:

$$ \begin{align}
\theta_1(x) & = x + 1 \\
\theta_2(x) & = x^2 + 3x + 3 \\
\theta_3(x) & = x^3 + 6x^2 + 15x + 15 \\
\theta_4(x) & = x^4 + 10x^3 + 45x^2 + 105x + 105 \\
\end{align} $$

Where this can be generally described as:

$$ \theta_n(x) = x^ny_n\left(\frac {1}{x}\right) = \sum^n_{k=0} \frac {(2n-k)!}{(n-k)!k!} \frac {x^k}{2^{n-k}} $$

Very nice... now how about a formula for the poles, and we can get right on and make our filter? Unfortunately there isn't one! The usual method is to generate the polynomial and then use a root-finding algorithm like Jenkins-Traub, Laguerre's method or good old Newton-Raphson. However, the poles for a large number of orders are readily available in many places, so rather than writing a lot of root-finding code we'll just put the poles for orders up to 10 into a function.
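As an aside, if you did want to go the algorithmic route, a minimal sketch of generating the reverse Bessel coefficients from the closed-form sum above (which you could then hand to a root-finder such as Laguerre's method) might look like this; the method name is just for illustration:

        // Coefficients of the reverse Bessel polynomial theta_n(x), lowest power first:
        // c[k] = (2n-k)! / ((n-k)! k! 2^(n-k)), computed iteratively to avoid huge factorials
        public static double[] reverseBesselCoefficients(int n)
        {
            double[] c = new double[n + 1];
            c[n] = 1.0; // the leading x^n coefficient is always 1

            for (int k = n - 1; k >= 0; k--)
            {
                // ratio c[k] / c[k+1] = (2n - k)(k + 1) / (2(n - k))
                c[k] = c[k + 1] * (2.0 * n - k) * (k + 1) / (2.0 * (n - k));
            }

            return c;
        }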

So, here's the code with the roots tabulated for orders up to 10:


        public List<Complex> getPoles(int order)
        {
            Debug.Assert(order <= 10);
            List<Complex> poles = null;

            switch (order)
            {
                case 1:
                    poles = new List<Complex>() { new Complex(-1, 0) };
                    break;
                case 2:
                    poles = new List<Complex>() { new Complex(-1.5, 0.8660254038), 
                                                  new Complex(-1.5, -0.8660254038) };
                    break;
                case 3:
                    poles = new List<Complex>() { new Complex(-1.8389073227, 1.7543809598),
                                                  new Complex(-2.3221853546,0),
                                                  new Complex(-1.8389073227, -1.7543809598) };
                    break;
                case 4:
                    poles = new List<Complex>() { new Complex(-2.1037893972, 2.6574180419),
                                                  new Complex(-2.8962106028 , 0.8672341289),
                                                  new Complex(-2.8962106028 , -0.8672341289),
                                                  new Complex(-2.1037893972, -2.6574180419) };
                    break;
                case 5:
                    poles = new List<Complex>() { new Complex(-3.646738595, 0),
                                                  new Complex(-2.324674303, 3.57102292),
                                                  new Complex(-3.351956399, 1.742661416),
                                                  new Complex(-3.351956399, -1.742661416),
                                                  new Complex(-2.324674303, -3.57102292) };
                    break;
                case 6:
                    poles = new List<Complex>() { new Complex(-2.515932248, 4.492672954),
                                                  new Complex(-3.735708356, 2.626272311),
                                                  new Complex(-4.248359396, 0.867509673),
                                                  new Complex(-4.248359396, -0.867509673),
                                                  new Complex(-3.735708356, -2.626272311),
                                                  new Complex(-2.515932248, -4.492672954) };
                    break;
                case 7:
                    poles = new List<Complex>() { new Complex(-4.971786859, 0),
                                                  new Complex(-2.685676879, 5.420694131),
                                                  new Complex(-4.070139164, 3.517174048),
                                                  new Complex(-4.758290528, 1.739286061),
                                                  new Complex(-4.758290528, -1.739286061),
                                                  new Complex(-4.070139164, -3.517174048),
                                                  new Complex(-2.685676879, -5.420694131) };
                    break;
                case 8:
                    poles = new List<Complex>() { new Complex(-2.838983949, 6.353911299),
                                                  new Complex(-4.368289217, 4.414442501),
                                                  new Complex(-5.204840791, 2.616175153),
                                                  new Complex(-5.587886043, 0.867614445),
                                                  new Complex(-5.587886043, -0.867614445),
                                                  new Complex(-5.204840791, -2.616175153),
                                                  new Complex(-4.368289217, -4.414442501),
                                                  new Complex(-2.838983949, -6.353911299) };
                    break;
                case 9:
                    poles = new List<Complex>() { new Complex(-6.297019182, 0),
                                                  new Complex(-2.979260798, 7.291463688),
                                                  new Complex(-4.638439887, 5.317271675),
                                                  new Complex(-5.60442182, 3.498156918),
                                                  new Complex(-6.129367904, 1.737848384),
                                                  new Complex(-6.129367904, -1.737848384),
                                                  new Complex(-5.60442182, -3.498156918),
                                                  new Complex(-4.638439887, -5.317271675),
                                                  new Complex(-2.979260798, -7.291463688) };
                    break;
                case 10:
                    poles = new List<Complex>() { new Complex(-3.108916234, 8.232699459),
                                                  new Complex(-4.886219567, 6.224985483),
                                                  new Complex(-5.967528329, 4.384947189),
                                                  new Complex(-6.615290966, 2.611567921),
                                                  new Complex(-6.922044905, 0.867665196),
                                                  new Complex(-6.922044905, -0.867665196),
                                                  new Complex(-6.615290966, -2.611567921),
                                                  new Complex(-5.967528329, -4.384947189),
                                                  new Complex(-4.886219567, -6.224985483),
                                                  new Complex(-3.108916234, -8.232699459) };
                    break;
            }


            return poles;
        }

Which is pretty tedious, but works nicely. As a short digression, there are a number of vague posts around describing algorithmic calculations for the poles, but the detail is too scant and, although intrigued, I wasn't able to track down a reliable description or bring it to a conclusion myself. Diagrammatically the poles look as below when plotted on the s-plane:

If you want to check that the poles really do correspond to the polynomial, just put them through the Polynomial.expand function we developed earlier, which provides a useful cross-check.
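For example, a quick sketch of that cross-check (assuming the Dump helper from the Butterworth post is to hand) using the unscaled order-3 poles, which should expand back to the reverse Bessel coefficients 1, 6, 15, 15 to within rounding:

            // unscaled order-3 Bessel poles
            List<Complex> poles3 = new List<Complex>() { new Complex(-2.3221853546, 0),
                                                         new Complex(-1.8389073227, 1.7543809598),
                                                         new Complex(-1.8389073227, -1.7543809598) };

            // expand the factored form back into polynomial coefficients (highest power first)
            Dump("theta3 coefficients ", Polynomial.expand(poles3), false);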

Great, so we now have the poles. Let's have a look at the amplitude frequency response for this analog prototype. We can do this by putting some numbers for the frequency (in radians) into the magnitude equation calculated from the analog transfer function (for a low-pass filter this is the Bessel polynomial in the denominator and is a poles-only equation). Take a look back at one of the previous posts where we covered this. The magnitude equations for Bessel filters can be found here. They look like this for the first few orders:

$$ \begin{align}
|H_2(\omega)| &= \frac {3}{\sqrt{\omega^4+3\omega^2+9}} \\
|H_3(\omega)| &= \frac {15}{\sqrt{\omega^6+6\omega^4+45\omega^2+225}} \\
|H_4(\omega)| &= \frac {105}{\sqrt{\omega^8+10\omega^6+135\omega^4+1575\omega^2+11025}}\\
\end{align}$$
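Rather than expanding these by hand, a minimal sketch of evaluating the magnitudes numerically, assuming the reverse Bessel coefficients for the chosen order are available (lowest power first, for example from the reverseBesselCoefficients sketch above):

        // |H_n(w)| = theta_n(0) / |theta_n(jw)| for the analog low-pass prototype
        // coeffs holds the reverse Bessel coefficients, lowest power first
        public static double besselMagnitude(double[] coeffs, double w)
        {
            Complex jw = new Complex(0, w);
            Complex denominator = Complex.Zero;
            Complex power = Complex.One;

            for (int k = 0; k < coeffs.Length; k++)
            {
                denominator += coeffs[k] * power; // accumulate theta_n(jw)
                power *= jw;
            }

            return coeffs[0] / denominator.Magnitude;
        }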

As the coefficients ramp up in size pretty quickly, I've calculated them out in tabular form for orders up to 10:

Giving the following nice-looking graphs:

These look wonderful. However, we have a problem (and hence the reason for describing the magnitude response equations): the cut-off frequency shifts with each order, unlike the Butterworth and Chebyshev responses where it stays in roughly the same place. So we need some way of scaling the poles to normalise the cut-off frequency before shifting into the digital domain.

As the poles have been tabulated rather than calculated, and as this normalisation requires some additional computation, rather than doing it in code I put the magnitude response equations into a spreadsheet and did a 'goal-seek' on the frequency to get a magnitude response of 0.5 (note this is the half-amplitude point, i.e. -6dB; the more conventional -3dB point would be a magnitude of 1/√2). This gives the following frequencies (in radians):

Order 1: 1.731939
Order 2: 1.977607
Order 3: 2.422182
Order 4: 2.886229
Order 5: 3.324236
Order 6: 3.725753
Order 7: 4.090078
Order 8: 4.423721
Order 9: 4.732941
Order 10: 5.022755

We can now use these to scale the poles before returning them, so the getPoles function is adjusted to scale (divide) by these values (I'll put the code at the end).

Next, our usual low-pass filter (lpf) function can be reinstated. In this case the pre-warping needs to be removed, because rather than the bilinear transformation we're going to use the matched Z-transform. There is a lot of literature on this, so I'm not going to delve into that topic in this post.
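For reference, the matched Z-transform simply maps each analog pole or zero \( s_k \) directly into the z-plane via

$$ z_k = e^{s_k T} $$

where \( T \) is the sample period (effectively 1 here, since the poles have already been scaled by the normalised frequency); the exponential in the code below is exactly this.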

The matched Z-transform implementation comes again from Christian d'Heureuse's Java version of Tony Fisher's C code and has been added to the ComplexPlane class:


        public ComplexPlane MatchedZTransform()
        {
            ComplexPlane zplane = new ComplexPlane();

            zplane.poles = new List<Complex>();
            zplane.zeros = new List<Complex>();

            for (int i = 0; i < poles.Count; i++)
            {
                // z = e^s expressed in polar form: magnitude e^Re(s), angle Im(s)
                zplane.poles.Add(Complex.FromPolarCoordinates(Math.Exp(poles[i].Real),poles[i].Imaginary));
            }

            for (int i = 0; i < zeros.Count; i++)
            {
                // z = e^s expressed in polar form: magnitude e^Re(s), angle Im(s)
                zplane.zeros.Add(Complex.FromPolarCoordinates(Math.Exp(zeros[i].Real),zeros[i].Imaginary));
            }

            return zplane;
        }


We can now generate some filters, which when plotted look like this:

And finally here's the code:



using System;
using System.Diagnostics;
using System.Numerics;
using System.Collections.Generic;

// Hondrou implementation of a Bessel filter building on the STK framework
//
// This approach uses the techniques from Tony Fisher at York in mkfilter
// http://www-users.cs.york.ac.uk/~fisher/mkfilter/
// and builds on the Java implementation by Christian d'Heureuse
// www.source-code.biz, www.inventec.ch/chdh
//

namespace AgSynth
{
    public class Bessel : Iir
    {

        public List<Complex> getPoles(int order)
        {
            Debug.Assert(order <= 10);
            List<Complex> poles = null;

            switch (order)
            {
                case 1:
                    poles = new List<Complex>() { new Complex(-1, 0) };
                    poles = Polynomial.multiply(poles, 1 / 1.731939);
                    break;
                case 2:
                    poles = new List<Complex>() { new Complex(-1.5, 0.8660254038), 
                                                  new Complex(-1.5, -0.8660254038) };
                    poles = Polynomial.multiply(poles, 1 / 1.977607);
                    break;
                case 3:
                    poles = new List<Complex>() { new Complex(-1.8389073227, 1.7543809598),
                                                  new Complex(-2.3221853546,0),
                                                  new Complex(-1.8389073227, -1.7543809598) };
                    poles = Polynomial.multiply(poles, 1 / 2.422182);
                    break;
                case 4:
                    poles = new List<Complex>() { new Complex(-2.1037893972, 2.6574180419),
                                                  new Complex(-2.8962106028 , 0.8672341289),
                                                  new Complex(-2.8962106028 , -0.8672341289),
                                                  new Complex(-2.1037893972, -2.6574180419) };
                    poles = Polynomial.multiply(poles, 1 / 2.886229);
                    break;
                case 5:
                    poles = new List<Complex>() { new Complex(-3.646738595, 0),
                                                  new Complex(-2.324674303, 3.57102292),
                                                  new Complex(-3.351956399, 1.742661416),
                                                  new Complex(-3.351956399, -1.742661416),
                                                  new Complex(-2.324674303, -3.57102292) };
                    poles = Polynomial.multiply(poles, 1 / 3.324236);
                    break;
                case 6:
                    poles = new List<Complex>() { new Complex(-2.515932248, 4.492672954),
                                                  new Complex(-3.735708356, 2.626272311),
                                                  new Complex(-4.248359396, 0.867509673),
                                                  new Complex(-4.248359396, -0.867509673),
                                                  new Complex(-3.735708356, -2.626272311),
                                                  new Complex(-2.515932248, -4.492672954) };
                    poles = Polynomial.multiply(poles, 1 / 3.725753);
                    break;
                case 7:
                    poles = new List<Complex>() { new Complex(-4.971786859, 0),
                                                  new Complex(-2.685676879, 5.420694131),
                                                  new Complex(-4.070139164, 3.517174048),
                                                  new Complex(-4.758290528, 1.739286061),
                                                  new Complex(-4.758290528, -1.739286061),
                                                  new Complex(-4.070139164, -3.517174048),
                                                  new Complex(-2.685676879, -5.420694131) };
                    poles = Polynomial.multiply(poles, 1 / 4.090078);
                    break;
                case 8:
                    poles = new List<Complex>() { new Complex(-2.838983949, 6.353911299),
                                                  new Complex(-4.368289217, 4.414442501),
                                                  new Complex(-5.204840791, 2.616175153),
                                                  new Complex(-5.587886043, 0.867614445),
                                                  new Complex(-5.587886043, -0.867614445),
                                                  new Complex(-5.204840791, -2.616175153),
                                                  new Complex(-4.368289217, -4.414442501),
                                                  new Complex(-2.838983949, -6.353911299) };
                    poles = Polynomial.multiply(poles, 1 / 4.423721);
                    break;
                case 9:
                    poles = new List<Complex>() { new Complex(-6.297019182, 0),
                                                  new Complex(-2.979260798, 7.291463688),
                                                  new Complex(-4.638439887, 5.317271675),
                                                  new Complex(-5.60442182, 3.498156918),
                                                  new Complex(-6.129367904, 1.737848384),
                                                  new Complex(-6.129367904, -1.737848384),
                                                  new Complex(-5.60442182, -3.498156918),
                                                  new Complex(-4.638439887, -5.317271675),
                                                  new Complex(-2.979260798, -7.291463688) };
                    poles = Polynomial.multiply(poles, 1 / 4.732941);
                    break;
                case 10:
                    poles = new List<Complex>() { new Complex(-3.108916234, 8.232699459),
                                                  new Complex(-4.886219567, 6.224985483),
                                                  new Complex(-5.967528329, 4.384947189),
                                                  new Complex(-6.615290966, 2.611567921),
                                                  new Complex(-6.922044905, 0.867665196),
                                                  new Complex(-6.922044905, -0.867665196),
                                                  new Complex(-6.615290966, -2.611567921),
                                                  new Complex(-5.967528329, -4.384947189),
                                                  new Complex(-4.886219567, -6.224985483),
                                                  new Complex(-3.108916234, -8.232699459) };
                    poles = Polynomial.multiply(poles, 1 / 5.022755);
                    break;
            }


            return poles;
        }

        

        // fcf is the cutoff frequency in Hz
        public void lpf(double fcf, int order)
        {
            double nfcf = fcf / PcmStreamSource.SampleRate; // normalised corner frequency

            Debug.Assert(nfcf > 0 && nfcf <= 0.5); // greater than 0 and less than Nyquist

            List<Complex> poles = getPoles(order);

            double w1 = 2 * Math.PI * nfcf;

            ComplexPlane splane = new ComplexPlane();

            splane.poles = Polynomial.multiply(poles, w1);
            splane.zeros = new List<Complex>();

            ComplexPlane zplane = splane.MatchedZTransform();

            TransferFunction transferfunction = zplane.getTransferFunction();

            Coefficients coeffs = transferfunction.getCoefficients();

            double gain = transferfunction.getGainAt(Complex.One); // DC Gain

            // normalise the gain by dividing coeffs.b by gain
            coeffs.scaleGain(gain);

            setNumerator(coeffs.b);
            setDenominator(coeffs.a);
        }

    }
}

Sunday, 27 July 2014

Martyn Ware - My Life in 20 Synths @ WOMAD

Had a fantastic day yesterday at WOMAD in the baking sun. One of the highlights was hearing Martyn Ware, of The Human League and Heaven 17 fame, give a talk on his life in music through synths. It was really fascinating, and the high point at the end was a live rendition of a few bars of Being Boiled played on the original synths. You can see him doing this previously at another event on YouTube.

Friday, 25 July 2014

Digital Filters in C#/Silverlight (Part 7 - ChebyshevII)

Hot on the heels of ChebyshevI, for completeness let's also include the ChebyshevII filter. I'm not going to jump into the maths again or even try to draw the geometric positioning of the poles and zeros today, so we'll just have to make do with some code and graphs.

The simple bit is creating another class called Cheby2 and adding in the lpf function and getPoles using a suitable algorithm. Thankfully not too much work is required and Vincent Falco has already done a nice job in his Collection of Useful C++ Classes for DSP. In his code implementation he creates ComplexConjugate pairs and also Pole-Zero pairs, so we need to do a bit of additional work. We also need to modify the lpf function and add a new function in the Cheby2 class to generate some zeros as the ChebyshevII design requires them.
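In outline (this is what the calculation below boils down to), the ChebyshevII poles are simply the reciprocals of the corresponding ChebyshevI poles (with \( \epsilon \) derived from the stopband ripple), and the zeros sit on the imaginary axis:

$$ p^{II}_k = \frac {1}{p^{I}_k} \qquad z_k = \frac {j}{\cos\left(\frac {k\pi}{2n}\right)}, \quad k = 1, 3, 5, \ldots $$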

Just for completeness, in case you're browsing other code (including Vincent's), I've put the ChebyshevI algorithm in the more usual non-geometric calculation form below:


            // Cheby1 algorithm in more usual form (sinhA should be -sinh(a) here to keep the poles in the left half-plane)
            for (int i = 0; i < order / 2; ++i)
            {
                int k = 2 * i + 1 - order;
                double x = sinhA * Math.Cos(k * Math.PI / (2.0 * (double)order));
                double y = coshA * Math.Sin(k * Math.PI / (2.0 * (double)order));

                Complex pole = new Complex(x, y);
                poles.Add(pole);
                poles.Add(Complex.Conjugate(pole));
            }

And here's the Cheby2 class:


using System;
using System.Diagnostics;
using System.Numerics;
using System.Collections.Generic;

// Hondrou implementation of a Chebyshev2 filter building on the STK framework
//
// This approach uses the techniques from Tony Fisher at York in mkfilter
// http://www-users.cs.york.ac.uk/~fisher/mkfilter/
// and builds on the Java implementation by Christian d'Heureuse
// www.source-code.biz, www.inventec.ch/chdh
//
// The Chebyshev2 algorithm comes from the calculations by Vincent Falco in
// his Collection of Useful C++ Classes for DSP
// https://github.com/vinniefalco/DSPFilters 

namespace AgSynth
{
    public class Cheby2 : Iir
    {
        // asinh isn't provided by System.Math, so use the same helper as in Cheby1
        private static double asinh(double x)
        {
            return Math.Log(x + Math.Sqrt(1.0 + x * x));
        }

        // ripple is stopband ripple in this case, but same calculation
        public List<Complex> getPoles(int order, double ripple)
        {
            Debug.Assert(ripple <= 0.0); // ripple must be negative

            // 10 is correct because we need the square
            double rip = Math.Pow(10.0, -ripple / 10.0); 
            double eps = Math.Sqrt(rip - 1.0);

            double a = asinh(1.0 / eps) / (double)order;

            Debug.Assert(a >= 0.0);

            double sinhA = -Math.Sinh(a);
            double coshA = Math.Cosh(a);

            List<Complex> poles = new List<Complex>();

            int k = 1;
            for (int i = order/2; --i >= 0; k+=2)
            {
                double x = sinhA * Math.Cos((k - order) * Math.PI / (2.0 * (double)order));
                double y = coshA * Math.Sin((k - order) * Math.PI / (2.0 * (double)order));

                double d2 = x * x + y * y;
                Complex pole = new Complex(x/d2, y/d2);
                poles.Add(pole);
                poles.Add(Complex.Conjugate(pole));
            }

            if (order % 2 != 0) //odd
            {
                poles.Add(Complex.FromPolarCoordinates(1 / sinhA, 0));
            }

            return poles;
        }

        public List<Complex> getZeros(int order, double ripple)
        {
            List<Complex> zeros = new List<Complex>();

            int k = 1;
            for (int i = order / 2; --i >= 0; k += 2)
            {
                double im = 1 / Math.Cos(k * Math.PI / (2.0 * (double)order));

                Complex zero = new Complex(0, im);
                zeros.Add(zero);
                zeros.Add(Complex.Conjugate(zero));
            }
            
            return zeros;
        }

        // fcf is the cutoff frequency in Hz
        public void lpf(double fcf, double ripple, int order)
        {
            double nfcf = fcf / PcmStreamSource.SampleRate; // normalised corner frequency

            Debug.Assert(nfcf > 0 && nfcf <= 0.5); // greater than 0 and less than Nyquist

            List<Complex> poles = getPoles(order, ripple);
            List<Complex> zeros = getZeros(order, ripple);

            // get warped freq
            double w1 = 2 * Math.PI * Math.Tan(Math.PI * nfcf) / Math.PI;

            ComplexPlane splane = new ComplexPlane();

            splane.poles = Polynomial.multiply(poles, w1);
            splane.zeros = Polynomial.multiply(zeros, w1);

            ComplexPlane zplane = splane.BilinearTransform();

            TransferFunction transferfunction = zplane.getTransferFunction();

            Coefficients coeffs = transferfunction.getCoefficients();

            double gain = transferfunction.getGainAt(Complex.One); // DC Gain

            // normalise the gain by dividing coeffs.b by gain
            coeffs.scaleGain(gain);

            setNumerator(coeffs.b);
            setDenominator(coeffs.a);
        }

    }
}

Generating some responses using this with the same -0.5 dB ripple as before for n=2,3,4:

The ripple can be seen in the stopband.

Thursday, 24 July 2014

Digital Filters in C#/Silverlight (Part 6 - ChebyshevI)

Having got the Butterworth filter under our belt, it makes obvious sense to get cracking and implement a ChebyshevI filter design as well, as the design technique follows on pretty much directly from the established groundwork. We're not going to lurch back into theory except to explain how the ChebyshevI design comes about.
As we saw earlier, the Butterworth design places poles onto the s-plane around a unit circle in an ordered pattern. The ChebyshevI design places them on an ellipse in a similar configuration, which produces a steeper cutoff characteristic but introduces ripples into the passband. The amount of ripple is adjusted by the 'squashing' of the ellipse towards the imaginary axis, and the Butterworth design can be seen as the limiting case of the ChebyshevI design where the ellipse becomes a circle (i.e. as the ripple parameter tends to 0).

The s-plane placement therefore looks something like this:

Where \( \alpha_{max} \) is the ripple and:

$$ a = \frac {sinh^{-1}\left(\frac {1}{\epsilon}\right)}{n} $$
$$ \epsilon = \sqrt {10^\frac {\alpha_{max}}{10} - 1} $$
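As a quick worked example, for the -0.5 ripple (in dB) used in the plots below and n = 3:

$$ \epsilon = \sqrt {10^{0.05} - 1} \approx 0.349 \qquad a = \frac {\sinh^{-1}(1/0.349)}{3} \approx 0.591 $$

giving \( \sinh a \approx 0.626 \) and \( \cosh a \approx 1.180 \), which places the poles at roughly \( -0.626 \) and \( -0.313 \pm 1.022j \) before frequency scaling.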

The code makes use of  Guillemin's algorithm for pole placement which shifts the pole positions geometrically from the Butterworth positions rather than calculating them directly from the Chebyshev polynomial.

So, this makes the coding pretty easy. The lpf, hpf, bpf, bsf functions can all remain pretty much the same, except we need to create a new getPoles function that takes a parameter for ripple. I'm not going to show the whole class as it should be pretty much cut-and-paste, but here's the new getPoles function for Cheby1:


using System;
using System.Diagnostics;
using System.Numerics;
using System.Collections.Generic;

// Hondrou implementation of a Chebyshev1 filter building on the STK framework
//
// This approach uses the techniques from Tony Fischer at York in mkfilter
// http://www-users.cs.york.ac.uk/~fisher/mkfilter/
// and builds on the Java implementation by Christian d'Heureuse
// www.source-code.biz, www.inventec.ch/chdh

namespace AgSynth
{
    public class Cheby1 : Iir
    {
        private static double asinh(double x)
        {
            return Math.Log(x + Math.Sqrt(1.0 + x * x));
        }

        public List<Complex> getPoles(int order, double ripple)
        {
            Debug.Assert(ripple <= 0.0); // ripple must be negative

            double rip = Math.Pow(10.0, -ripple / 10.0); // 10 is correct because we need the square
            double eps = Math.Sqrt(rip - 1.0);

            double a = asinh(1.0 / eps) / order;

            Debug.Assert(a >= 0.0);

            double sinhY = Math.Sinh(a);
            double coshY = Math.Cosh(a);

            Complex[] poles = new Complex[order];

            for (int i = 0; i < order; i++)
            {
                double theta = (order / 2.0 + 0.5 + i) * Math.PI / order;
                Complex circle = Complex.FromPolarCoordinates(1, theta);
                poles[i] = new Complex(circle.Real * sinhY, circle.Imaginary * coshY);
            }

            return new List<Complex>(poles);
        }

        // the rest of the lpf, hpf, bpf, bsf functions go here
    }
}

And here are the example responses using the same values as previously used with the Butterworth design, and with the ripple set to -0.5.

Lowpass (lpf)

Bandpass (bpf)

Bandstop (bsf)




Digital Filters in C#/Silverlight (Part 5b - more Butter)

Well, the last post was fun. One last little tidy-up and we're ready to move on. In the lpf function we need to finish tying things up by setting the coefficients into the Iir filter as follows:

            setNumerator(coeffs.b);
            setDenominator(coeffs.a);

With that done, we can now make the usual other complementary functions - hpf, bpf, bsf for high-pass, bandpass and bandstop. I'm not going to go into the theory of making these prototype functions and pole/zero placement in this post, but will come on to that later. Let's keep with code for the moment.
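For reference, the standard analog frequency transformations of the low-pass prototype that the code below implements are:

$$ \text{LP} \to \text{HP}: \; s \to \frac {\omega_1}{s} \qquad \text{LP} \to \text{BP}: \; s \to \frac {s^2 + \omega_0^2}{s\,\Delta\omega} \qquad \text{LP} \to \text{BS}: \; s \to \frac {s\,\Delta\omega}{s^2 + \omega_0^2} $$

where \( \omega_0 = \sqrt{\omega_1 \omega_2} \) and \( \Delta\omega = \omega_2 - \omega_1 \) for the band filters.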

High-pass (hpf)
Following a similar procedure to that for lpf, the function is realised as follows:

        // fcf is the cutoff frequency in Hz
        public void hpf(double fcf, int order)
        {
            double nfcf = fcf / PcmStreamSource.SampleRate; // normalised corner frequency

            Debug.Assert(nfcf > 0 && nfcf <= 0.5); // greater than 0 and less than Nyquist

            List<Complex> poles = getPoles(order);

            // get warped freq
            double w1 = 2 * Math.PI * Math.Tan(Math.PI * nfcf) / Math.PI;

            ComplexPlane splane = new ComplexPlane();
            splane.zeros = new List<Complex>();
            splane.poles = new List<Complex>(new Complex[order]);

            for (int i = 0; i < order; i++)
            {
                splane.poles[i] = w1 / poles[i];
                splane.zeros.Add(new Complex(0, 0));
            }

            ComplexPlane zplane = splane.BilinearTransform();

            TransferFunction transferfunction = zplane.getTransferFunction();

            Coefficients coeffs = transferfunction.getCoefficients();

            double gain = transferfunction.getGainAt(new Complex(-1,0)); //gain at Nyquist

            // normalise the gain by dividing coeffs.b by gain
            coeffs.scaleGain(gain);

            setNumerator(coeffs.b);
            setDenominator(coeffs.a);
        }

This generates some good-looking sample graphs for different orders with the corner frequency again at 10kHz.

Bandpass (bpf)
This time we need to set two frequencies for the upper and lower frequencies

        // fcf1 and fcf2 are the two cutoff frequencies in Hz. fcf1 < fcf2
        public void bpf(double fcf1, double fcf2, int order)
        {
            double nfcf1 = fcf1 / PcmStreamSource.SampleRate; // normalised corner frequency
            double nfcf2 = fcf2 / PcmStreamSource.SampleRate; // normalised corner frequency

            Debug.Assert(nfcf1 > 0 && nfcf1 <= 0.5); // greater than 0 and less than Nyquist
            Debug.Assert(nfcf2 > 0 && nfcf2 <= 0.5); // greater than 0 and less than Nyquist
            Debug.Assert(fcf1 < fcf2);

            List<Complex> poles = getPoles(order);

            // get warped freq
            double w1 = 2 * Math.PI * Math.Tan(Math.PI * nfcf1) / Math.PI;
            double w2 = 2 * Math.PI * Math.Tan(Math.PI * nfcf2) / Math.PI;

            double w0 = Math.Sqrt(w1 * w2);
            double bw = w2 - w1;

            ComplexPlane splane = new ComplexPlane();

            splane.poles = new List<Complex>(new Complex[order * 2]);
            splane.zeros = new List<Complex>();

            for (int i = 0; i < order; i++)
            {
                Complex hba = poles[i] * (bw / 2.0);
                Complex temp = Complex.Sqrt(1 - Complex.Pow(w0 / hba, 2));
                splane.poles[i] = hba * (1 + temp);
                splane.poles[order + i] = hba * (1 - temp);
                splane.zeros.Add(new Complex(0, 0));
            }

            ComplexPlane zplane = splane.BilinearTransform();

            TransferFunction transferfunction = zplane.getTransferFunction();

            Coefficients coeffs = transferfunction.getCoefficients();

            double centrefreq = (nfcf1 + nfcf2) / 2.0;
            Complex w = Complex.FromPolarCoordinates(1, 2 * Math.PI * centrefreq);
            double gain = transferfunction.getGainAt(w); // gain at centre freq

            // normalise the gain by dividing coeffs.b by gain
            coeffs.scaleGain(gain);

            setNumerator(coeffs.b);
            setDenominator(coeffs.a);
        }

Which looks like this for a bandpass with the lower frequency at 8kHz and the upper frequency at 12kHz.

Note, this has double the number of poles/zeros for the same order of filter compared to lpf or hpf.
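This doubling comes directly from the band-pass transformation: each prototype pole \( p \) is mapped to the two roots of

$$ s^2 - p\,\Delta\omega\, s + \omega_0^2 = 0 $$

which is what the hba and temp lines in the loop compute via the quadratic formula.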

Bandstop (bsf)
And finally a bandstop function:


        // fcf1 and fcf2 are the two cutoff frequencies in Hz. fcf1 < fcf2
        public void bsf(double fcf1, double fcf2, int order)
        {
            double nfcf1 = fcf1 / PcmStreamSource.SampleRate; // normalised corner frequency
            double nfcf2 = fcf2 / PcmStreamSource.SampleRate; // normalised corner frequency

            Debug.Assert(nfcf1 > 0 && nfcf1 <= 0.5); // greater than 0 and less than Nyquist
            Debug.Assert(nfcf2 > 0 && nfcf2 <= 0.5); // greater than 0 and less than Nyquist
            Debug.Assert(fcf1 < fcf2);

            List<Complex> poles = getPoles(order);

            // get warped freq
            double w1 = 2 * Math.PI * Math.Tan(Math.PI * nfcf1) / Math.PI;
            double w2 = 2 * Math.PI * Math.Tan(Math.PI * nfcf2) / Math.PI;

            double w0 = Math.Sqrt(w1 * w2);
            double bw = w2 - w1;

            ComplexPlane splane = new ComplexPlane();

            splane.poles = new List<Complex>(new Complex[order * 2]);
            splane.zeros = new List<Complex>(new Complex[order * 2]);

            for (int i = 0; i < order; i++)
            {
                Complex hba = (bw / 2.0)  / poles[i];
                Complex temp = Complex.Sqrt(1 - Complex.Pow(w0 / hba, 2));
                splane.poles[i] = hba * (1 + temp);
                splane.poles[order + i] = hba * (1 - temp);

                // 2n zeros at +-w0
                splane.zeros[i] = new Complex(0, w0);
                splane.zeros[order + i] = new Complex(0, -w0);
            }

            ComplexPlane zplane = splane.BilinearTransform();

            TransferFunction transferfunction = zplane.getTransferFunction();

            Coefficients coeffs = transferfunction.getCoefficients();

            double dcgain = transferfunction.getGainAt(Complex.One); // gain at dc
            double hfgain = transferfunction.getGainAt(new Complex(-1,0)); // gain at hf
            double gain = Math.Sqrt(dcgain * hfgain);

            // normalise the gain by dividing coeffs.b by gain
            coeffs.scaleGain(gain);

            setNumerator(coeffs.b);
            setDenominator(coeffs.a);
        }

Which looks like this:

Note, this has double the number of poles/zeros for the same order of filter compared to lpf or hpf.

That's a full complement of useful filter functions with the Butterworth characteristic.


Digital Filters in C#/Silverlight (Part 5a - added Butter)

We're there! Time to get stuck into some code. I'm going to walk through this bit-by-bit rather than dumping a whole lot of code so we can examine each of the functions, link it to the theory and also build a set of tools from which we can go on further if wanted.

First of all remember back a couple of weeks we were playing with the STK filters? The idea is that we're going to fit into that framework, so I'm going to create a new Butterworth filter called Butter and sub-class from Iir:

using System;
using System.Diagnostics;
using System.Numerics;
using System.Collections.Generic;

// Hondrou implementation of a Butterworth filter building on the STK framework
//
// This approach uses the techniques from Tony Fisher at York in mkfilter
// http://www-users.cs.york.ac.uk/~fisher/mkfilter/
// and builds on the Java implementation by Christian d'Heureuse
// www.source-code.biz, www.inventec.ch/chdh

namespace AgSynth
{
    public class Butter : Iir
    {
        
    }
}

First of all, it's important to give credit when building on the work of others. In this case the theoretical approach follows that described in a C implementation called mkfilter by Tony Fisher at York. More specifically, we're going to build some C# code flavoured by Christian d'Heureuse's Java implementation of this approach. We'll depart from both implementations by trying to organise things into a set of more general functions that can be used to build further filter designs, in the spirit of STK, which was developed to help understand audio synthesis and digital filtering rather than to produce optimised code.

Second point to note: we're using the C# System.Numerics library to bring in the Complex class for implementing some of the maths. If you haven't already, you'll need to add the reference to your project (it works under Silverlight too). The really good thing here is that the 'out-of-the-box' Complex class pretty much matches Christian's Complex class, and where there are differences they're pretty easy to resolve.

So, first function, let's jump in at the deep end and make a method to generate those Complex poles for the Butterworth polynomial.


        public List<Complex> getPoles(int order)
        {
            Complex[] poles = new Complex[order];

            for (int i = 0; i < order; i++)
            {
                double theta = (order / 2.0 + 0.5 + i) * Math.PI / order;
                poles[i] = Complex.FromPolarCoordinates(1, theta);
            }

            return new List<Complex>(poles);
        }
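This places the poles evenly spaced around the left half of the unit circle; explicitly, the loop above computes

$$ s_k = \exp\left(j\pi \frac {2k + n + 1}{2n}\right), \qquad k = 0, 1, \ldots, n-1 $$

for an order-n filter.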

Pretty easy! I'm going to continue to use List<Complex> rather than Complex[] for no other particular reason than I used List<double> for the a,b coefficients in the Filter class. It means a couple of times we'll need to use Complex[] to set arrays, but all the methods will use List<Complex>.

It's worth checking this, so using some test code below to dump out the list of Complex numbers:


        private void Dump(string message, List<Complex> cs, bool polar)
        {
            String text = message;

            foreach (Complex c in cs)
            {
                if (polar)
                {
                    text += String.Format("[r={0:0.00} p={1:0.00}] ", c.Magnitude, c.Phase);
                }
                else
                {
                    text += String.Format("{0:0.00}+i{1:0.00} ", c.Real, c.Imaginary);
                }
            }
            Debug.WriteLine(text);
        }

        private void test()
        {
            Butter b = new Butter();

            Dump("n=1 ",b.getPoles(1),false);
            Dump("n=2 ",b.getPoles(2),false);
            Dump("n=3 ",b.getPoles(3),false);
            Dump("n=4 ",b.getPoles(4),false);
        }

Gives this result:

n=1 -1.00+i0.00
n=2 -0.71+i0.71 -0.71+i-0.71
n=3 -0.50+i0.87 -1.00+i0.00 -0.50+i-0.87
n=4 -0.38+i0.92 -0.92+i0.38 -0.92+i-0.38 -0.38+i-0.92


Or in polar format:

n=1 [r=1.00 p=3.14]
n=2 [r=1.00 p=2.36] [r=1.00 p=-2.36]
n=3 [r=1.00 p=2.09] [r=1.00 p=3.14] [r=1.00 p=-2.09]
n=4 [r=1.00 p=1.96] [r=1.00 p=2.75] [r=1.00 p=-2.75] [r=1.00 p=-1.96]


Where the angle is in radians. Which matches what we would expect.

Now, let's create a new function in the Butter class to make a low-pass filter with a specified corner frequency (fcf) and order. This is the design function that we'll be working through and then building on so I'm going to do it bit-by-bit. To be consistent with the rest of the classes fcf is specified in Hz referenced to the SampleRate (which is 44100). We're going to first calculate the normalised corner frequency, check it and then use it to pre-warp the poles in preparation for a bilinear transform.
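For reference, the pre-warped corner frequency the code below computes is

$$ \omega_1 = 2\tan\left(\pi \frac {f_c}{f_s}\right) $$

(with the sample period normalised to 1), which is the standard bilinear-transform relationship between the desired digital corner frequency and the analog prototype frequency; the expression 2 * Math.PI * Math.Tan(Math.PI * nfcf) / Math.PI is just this written out.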


         // fcf is the cutoff frequency in Hz
        public void lpf(double fcf, int order)
        {
            double nfcf = fcf / PcmStreamSource.SampleRate; // normalised corner frequency

            Debug.Assert(nfcf > 0 && nfcf <= 0.5); // greater than 0 and less than Nyquist

            List<Complex> poles = getPoles(order);

            // get warped freq
            double w1 = 2 * Math.PI * Math.Tan(Math.PI * nfcf) / Math.PI;

            List<Complex> warpedpoles = Polynomial.multiply(poles, w1);

            // ..... more to come

The Polynomial class is a helper class that we'll progressively add functions to. While it's tempting to make this a class that replaces List&lt;Complex&gt;, it doesn't particularly add much aesthetically and makes a whole lot more code, so we're going to use it as a bag of static helper functions aimed at working with a list of complex numbers that represents a polynomial. I'm not really going to distinguish here whether they are a list of coefficients or roots.

Multiply is pretty easy, it scales each of the complex numbers:


        public class Polynomial
        {
            public static List<Complex> multiply(List<Complex> a, double d)
            {
                Complex[] result = new Complex[a.Count];

                for (int i = 0; i < a.Count; i++)
                {
                    result[i] = a[i] * d;
                }

                return new List<Complex>(result);
            }
        }

We'll add to this class progressively as the functions are needed.

Great! So we now have a set of pre-warped poles that effectively represent our s-plane, so I'm going to create a class to represent the ComplexPlane:


    public class ComplexPlane
    {
        public List<Complex> poles;
        public List<Complex> zeros;
    }

And we'll update the lpf function to create it. In the case of a lpf as we've seen, the transfer function in the s-plane does not have any zeros, so let's plug that in as well. So, from the pre-warping it now looks like this:


            // get warped freq
            double w1 = 2 * Math.PI * Math.Tan(Math.PI * nfcf) / Math.PI;

            ComplexPlane splane = new ComplexPlane();

            splane.poles = Polynomial.multiply(poles,w1);
            splane.zeros = new List<Complex>();

            ComplexPlane zplane = splane.BilinearTransform();

Where we've already added the next step, which is a function in our ComplexPlane class to transform it and generate another ComplexPlane object which is the zplane. The BilinearTransform method is below, and as I promised to not do more maths in this post, please look at the previous post explaining the method and we'll maybe come back to this again if needed. For now, we get a set of poles + zeros in the zplane.

First of all, here's the function in ComplexPlane:

        public ComplexPlane BilinearTransform()
        {
            ComplexPlane zplane = new ComplexPlane();

            zplane.poles = Polynomial.BilinearTransform(poles);
            List<Complex> a = Polynomial.BilinearTransform(zeros);

            // extend zeros with -1 to the number of poles
            zplane.zeros = Polynomial.extend(a, poles.Count, new Complex(-1,0));

            return zplane;
        }

Which requires these supporting functions in Polynomial:


        public static List<Complex> BilinearTransform(List<Complex> a)
        {
            Complex[] a2 = new Complex[a.Count];

            for (int i = 0; i < a.Count; i++)
            {
                a2[i] = doBilinearTransform(a[i]);
            }

            return new List<Complex>(a2);
        }

        private static Complex doBilinearTransform(Complex x)
        {
            return ((2.0 + x) / (2.0 - x));
        }
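For reference, doBilinearTransform is just the bilinear map

$$ z = \frac {2 + s}{2 - s} $$

(equivalently \( s = 2\frac {z - 1}{z + 1} \) with the sample period normalised to 1), which is consistent with the \( \omega_1 = 2\tan(\pi f_c/f_s) \) pre-warping used in lpf.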

And the extend function which adds a number of Complex roots to an existing root (pole or zero) representation of a factored polynomial. In this case the Bilinear Transformation adds a number of zeros:


         public static List<Complex> extend(List<Complex> a, int n, Complex fill)
        {
            List<Complex> result = null;

            if (a.Count >= n)
            {
                result = a;
            }
            else
            {
                Complex[] a2 = new Complex[n];

                for (int i = 0; i < a.Count; i++)
                {
                    a2[i] = a[i];
                }
                for (int i = a.Count; i < n; i++)
                {
                    a2[i] = fill;
                }

                result = new List<Complex>(a2);
            }

            return result;
        }

Wow! That's a lot easier than the maths. We're now on the downhill run.

From the z-plane root representation we really want to move this into a transfer function and calculate some coefficients, so we're going to create a new class TransferFunction and be able to transform a ComplexPlane object into a TransferFunction object. The Transfer function in this case only has real coefficient values in comparison to the Complex roots in the ComplexPlane.

    public class TransferFunction
    {
        public List<double> numerator;
        public List<double> denominator;
    }

And a function in the ComplexPlane object to generate a TransferFunction object:


        public TransferFunction getTransferFunction()
        {
            List<Complex> numeratorComplexCoeff = Polynomial.expand(zeros);
            List<Complex> denominatorComplexCoeff = Polynomial.expand(poles);

            TransferFunction tf = new TransferFunction();
            double eps = 1e-10;

            tf.numerator = Polynomial.toDouble(numeratorComplexCoeff, eps);
            tf.denominator = Polynomial.toDouble(denominatorComplexCoeff, eps);

            return tf;
        }

This uses two supporting functions in Polynomial. First expand, which takes the roots of a polynomial and expands them into a list of Complex coefficients of a polynomial.


        public static List<Complex> expand(List<Complex> zeros)
        {
            List<Complex> result = null;
            int n = zeros.Count;

            if (n == 0)
            {
                result = new List<Complex>() { Complex.One };
            }
            else
            {
                List<Complex> a = new List<Complex>() { Complex.One, Complex.Negate(zeros[0]) };
                for (int i = 1; i < n; i++)
                {
                    List<Complex> a2 = new List<Complex> { Complex.One, Complex.Negate(zeros[i]) };
                    a = multiply(a, a2);
                }
                result = a;
            }
                
            return result;
        }

The expand function itself makes use of a Polynomial function to multiply two List&lt;Complex&gt; together:


        public static List<Complex> multiply(List<Complex> a1, List<Complex> a2)
        {
            int n1 = a1.Count - 1;
            int n2 = a2.Count - 1;
            int n3 = n1 + n2;

            //List<Complex> a3 = new List<Complex>(n3+1);
            Complex[] a3 = new Complex[n3 + 1];

            for (int i = 0; i <= n3; i++)
            {
                Complex t = Complex.Zero;
                int p1 = Math.Max(0, i - n2);
                int p2 = Math.Min(n1, i);

                for (int j = p1; j <= p2; j++)
                {
                    Complex tmul = a1[n1 - j] * a2[n2 - i + j];
                    t = t + tmul;
                }
                a3[n3 - i] = t;
            }

            return new List<Complex>(a3);
        }

As this expansion function is pretty interesting and important in itself, it's worth checking to see that it is doing what we'd expect. So, let's take the factored representation of the poles before pre-warping and expand them as a check which should give us the Butterworth polynomial coefficients:


            Dump("n=1 ", Polynomial.expand(b.getPoles(1)), true);
            Dump("n=2 ", Polynomial.expand(b.getPoles(2)), true);
            Dump("n=3 ", Polynomial.expand(b.getPoles(3)), true);
            Dump("n=4 ", Polynomial.expand(b.getPoles(4)), true);

This gives the following polar values:

n=1 [r=1.00 p=0.00] [r=1.00 p=0.00]
n=2 [r=1.00 p=0.00] [r=1.41 p=0.00] [r=1.00 p=0.00]
n=3 [r=1.00 p=0.00] [r=2.00 p=0.00] [r=2.00 p=0.00] [r=1.00 p=0.00]
n=4 [r=1.00 p=0.00] [r=2.61 p=0.00] [r=3.41 p=0.00] [r=2.61 p=0.00] [r=1.00 p=0.00]

Which correspond with the polynomial coefficients described previously. Great! It's worth noting that, as we would hope, the angle in all cases is zero, meaning that these are purely real coefficients. This leads us on to the second function, which converts the Complex polynomial coefficients into real values. The detail here is to compensate for the fact that, with floating-point arithmetic, Math.Sin(Math.PI) does not return exactly 0. The same is true for Excel by the way, just try it! SIN(PI()) gives the value 1.22515E-16.


        // Converts complex array into real double array, verifies that all imaginary parts
        // are equal or below eps
        public static List<double> toDouble(List<Complex> a, double eps)
        {
            double[] a2 = new double[a.Count];

            for (int i = 0; i < a.Count; i++)
            {
                double absIm = Math.Abs(a[i].Imaginary);
                Debug.Assert(absIm < eps && absIm < Math.Abs(a[i].Real * eps));
                a2[i] = a[i].Real;
            }

            return new List<double>(a2);
        }

We're very, very nearly there. Now that we have our transfer function in the z domain with real coefficients, we need to deal with the huge constant created in our approach to the bilinear transform. The easiest way to do this is to side-step it completely, by considering that we will want to normalise the amplitude at a particular frequency; we can then use that to scale the coefficients appropriately. So, let's add a function to TransferFunction to enable the calculation of gain at a particular frequency.


        public double getGainAt(Complex w)
        {
            Debug.Assert(numerator.Count > 0);

            Complex sumn = new Complex(numerator[0], 0);
            for (int i = 1; i < numerator.Count; i++)
            {
                sumn = (sumn * w) + numerator[i];
            }

            Debug.Assert(denominator.Count > 0);

            Complex sumd = new Complex(denominator[0], 0);
            for (int i = 1; i < denominator.Count; i++)
            {
                sumd = (sumd * w) + denominator[i];
            }

            return (sumn / sumd).Magnitude;
        }
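For example, a usage sketch for reading off the gain at an arbitrary frequency f in Hz, expressed as a point on the unit circle (more on why in a moment):

            double f = 5000.0; // frequency of interest in Hz
            Complex z = Complex.FromPolarCoordinates(1, 2 * Math.PI * f / PcmStreamSource.SampleRate);
            double gainAtF = transferfunction.getGainAt(z);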

Note that, as this is a z-domain transfer function, the frequency is given as a Complex number representing a rotating frequency vector. In the z-plane, π radians is the Nyquist frequency and corresponds to a vector of -1 (which is why we only consider points in the top half of the plane: frequencies in the bottom half are above the Nyquist frequency, and this also explains the aliasing symmetry and the need to have conjugate pairs). Anyway, moving swiftly on...

And finally we can add a function to get the filter design coefficients from the transfer function, which effectively is returning a scaled version of the same transfer function.


    public class Coefficients
    {
        public List<double> a;
        public List<double> b;
    }

And the function of TransferFunction:


        public Coefficients getCoefficients()
        {
            double scale = denominator[0];

            Coefficients coeffs = new Coefficients();
            coeffs.a = new List<double>();
            coeffs.b = new List<double>();

            coeffs.a.Add(1.0); // a[0]
            for (int i = 1; i < denominator.Count; i++)
            {
                coeffs.a.Add(denominator[i] / scale);
            }
            for (int i = 0; i < numerator.Count; i++)
            {
                coeffs.b.Add(numerator[i] / scale);
            }

            return coeffs;
        }

All that needs to be done is scaling those values by gain, so let's add a function to Coefficients:


    public class Coefficients
    {
        public List<double> a;
        public List<double> b;

        public void scaleGain(double gain)
        {
            for (int i = 0; i < b.Count; i++)
            {
                b[i] = b[i] / gain;
            }
        }
    }

Which gives us our completed lpf function in the new Butter filter.


        // fcf is the cutoff frequency in Hz
        public void lpf(double fcf, int order)
        {
            double nfcf = fcf / PcmStreamSource.SampleRate; // normalised corner frequency

            Debug.Assert(nfcf > 0 && nfcf <= 0.5); // greater than 0 and less than Nyquist

            List<Complex> poles = getPoles(order);

            // get warped freq
            double w1 = 2 * Math.PI * Math.Tan(Math.PI * nfcf) / Math.PI;

            ComplexPlane splane = new ComplexPlane();

            splane.poles = Polynomial.multiply(poles,w1);
            splane.zeros = new List<Complex>();

            ComplexPlane zplane = splane.BilinearTransform();

            TransferFunction transferfunction = zplane.getTransferFunction();

            Coefficients coeffs = transferfunction.getCoefficients();

            double gain = transferfunction.getGainAt(Complex.One); // DC Gain

            // normalise the gain by dividing coeffs.b by gain
            coeffs.scaleGain(gain);
        }

et voilà, we have a new Butter filter that can produce low-pass designs.

So, let's take a look at the results. Generating some values for the following:

            Butter b = new Butter();

            b.lpf(10000, 2);
            b.lpf(10000, 3);
            b.lpf(10000, 4);

Plugging them into the Excel spreadsheet to graph the response, we get the following coefficients:



And a very nice looking graph:


Notice how the corner frequency is at 10kHz which is where the curves intersect and where the gain = \( \frac {1}{\sqrt 2} \) as expected.