Given vectors $a = (a_0, a_1, \ldots, a_{n-1})$ and $b = (b_0, b_1, \ldots, b_{n-1})$ we want to compute the convolution $c = a * b$, where
$$c_k = \sum_{i + j = k} a_i b_j \quad \text{for } 0 \le k \le 2n - 2.$$
If you were to just do those computations directly, it would take you $O(n^2)$ time, as there are $O(n)$ entries of $c$ to compute and each is a sum of up to $n$ terms. However, this can be improved to $O(n \log(n))$.
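As a baseline, a direct implementation of this definition runs in quadratic time. The sketch below is mine (the name `convolve_naive` and the example values are not from the notes):

```python
def convolve_naive(a, b):
    """Directly compute c_k = sum over i + j = k of a_i * b_j; O(n^2) time."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# Example: (1 + 2x) * (3 + 4x) = 3 + 10x + 8x^2
print(convolve_naive([1, 2], [3, 4]))  # [3, 10, 8]
```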
## Applications
When you want to reduce noise or add a visual effect to some data, a common approach is to take an average of nearby terms.
### Mean filter
You replace $a_i$ by the average of the closest $2m + 1$ terms for some $m$, i.e.
$$a_i' = \frac{1}{2m + 1} \sum_{j = -m}^{m} a_{i + j}.$$
This is the same as calculating the convolution $a * f$ where $f$ is the uniform filter with $2m + 1$ entries, each equal to $\frac{1}{2m+1}$.
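A small self-contained check (values and variable names are mine, and boundary entries are ignored) that averaging nearby terms really is a convolution:

```python
m = 1                                         # window half-width, 2m + 1 = 3 terms
a = [4.0, 8.0, 6.0, 2.0, 10.0]

# Direct averaging of the 2m + 1 nearest terms (interior points only):
averaged = [sum(a[i - m:i + m + 1]) / (2 * m + 1) for i in range(m, len(a) - m)]

# The same numbers via convolution with the uniform filter f:
f = [1 / (2 * m + 1)] * (2 * m + 1)
full = [sum(a[i] * f[k - i] for i in range(len(a)) if 0 <= k - i < len(f))
        for k in range(len(a) + len(f) - 1)]

print(averaged)            # [6.0, 5.33..., 6.0]
print(full[2 * m:len(a)])  # the fully-overlapping entries match: [6.0, 5.33..., 6.0]
```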
### Gaussian filtering
Similarly you could take a Gaussian distribution instead of using a uniform one and switch out the uniform weights for
$$f_j = \frac{1}{Z} e^{-j^2}$$
with $Z$ being a normalisation constant.
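Concretely, the only change from the mean filter is the kernel; here is a minimal sketch of the weights $f_j = \frac{1}{Z} e^{-j^2}$ (variable names and the choice $m = 2$ are mine):

```python
import math

m = 2
weights = [math.exp(-j * j) for j in range(-m, m + 1)]   # e^{-j^2} for j = -m, ..., m
Z = sum(weights)                                         # normalisation constant
f = [w / Z for w in weights]                             # Gaussian filter, sums to 1
print([round(w, 3) for w in f])   # [0.01, 0.208, 0.564, 0.208, 0.01]
```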
### Gaussian blur
There are 2-dimensional analogues of this used to generate a blur effect in games.
## Polynomial representations
Any polynomial $A(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_{n-1} x^{n-1}$ has two natural representations:
- The coefficients $a_0, a_1, \ldots, a_{n-1}$, or
- the values $A(x_1), A(x_2), \ldots, A(x_n)$ for some $n$ distinct points $x_1, x_2, \ldots, x_n$.
### Lemma
A polynomial of degree $n - 1$ is uniquely determined by its values at any $n$ distinct points.
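As a quick sanity check of the lemma (a sketch using NumPy's polynomial fit, not part of the notes): three values pin down a degree-2 polynomial exactly.

```python
import numpy as np

xs = np.array([0.0, 1.0, 2.0])
ys = np.array([1.0, 2.0, 5.0])          # the values of 1 + x^2 at those points
coeffs = np.polyfit(xs, ys, deg=2)      # the unique degree-2 polynomial through them
print(np.round(coeffs, 6))              # ~[1. 0. 1.], recovering x^2 + 0x + 1
```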
The fast Fourier transform is just a nice way of going between these two representations, though it gets to determine which points $x_i$ it uses.
## Why polynomials are quick to multiply when represented by values
Given $A(x)$ and $B(x)$, both of degree at most $n - 1$, then to uniquely determine $C(x) = A(x) B(x)$ (which has degree at most $2n - 2$) we just calculate $C(x_i) = A(x_i) B(x_i)$ at $2n - 1$ distinct points $x_1, \ldots, x_{2n - 1}$.
Note here we used $2n - 1$ points. The plan of attack will be to convert polynomials from coefficients into values using fast Fourier transforms, multiply them pointwise, then transform back using the same technique.
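In the value representation the multiplication step really is a single pointwise pass; a minimal sketch (the function name and example are mine):

```python
def multiply_values(A_vals, B_vals):
    """Pointwise products: the values of C = A*B at the shared evaluation points."""
    return [av * bv for av, bv in zip(A_vals, B_vals)]

# A(x) = 1 + x and B(x) = 1 - x evaluated at x = 0, 1, 2:
A_vals = [1, 2, 3]
B_vals = [1, 0, -1]
print(multiply_values(A_vals, B_vals))  # [1, 0, -3], the values of C(x) = 1 - x^2
```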
## Picking the points
We get to pick the points we want to evaluate our polynomial at. So we will pick points such that $x_{n/2 + i} = -x_i$ for $1 \le i \le n/2$. That way we know
$$x_i^{2k} = x_{n/2 + i}^{2k}$$
and
$$x_i^{2k + 1} = -x_{n/2 + i}^{2k + 1}.$$
So it is meaningful to split up the polynomial into even and odd terms, defining
$$A_{even}(y) = a_0 + a_2 y + a_4 y^2 + \cdots \quad \text{and} \quad A_{odd}(y) = a_1 + a_3 y + a_5 y^2 + \cdots$$
so we have $A(x) = A_{even}(x^2) + x A_{odd}(x^2)$. This will begin our divide and conquer approach.
We have only reduced the degree of $A_{even}$ and $A_{odd}$ (each has degree at most $n/2 - 1$). We have not reduced the number of points we need to evaluate $A_{even}$ or $A_{odd}$ at. Though this is where our choice of points comes in.
Note that
$$A(x_i) = A_{even}(x_i^2) + x_i A_{odd}(x_i^2) \quad \text{and} \quad A(x_{n/2 + i}) = A(-x_i) = A_{even}(x_i^2) - x_i A_{odd}(x_i^2),$$
so given $A_{even}(x_i^2)$ and $A_{odd}(x_i^2)$ for $1 \le i \le n/2$ we get $A(x_i)$ for $1 \le i \le n$.
This halves the number of points we need to consider.
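A small numeric check of this even/odd split (the helper `eval_poly` and the example coefficients are mine):

```python
def eval_poly(coeffs, x):
    """Horner evaluation of a_0 + a_1 x + ... + a_{n-1} x^{n-1}."""
    result = 0
    for c in reversed(coeffs):
        result = result * x + c
    return result

a = [5, 3, 2, 4]                  # A(x) = 5 + 3x + 2x^2 + 4x^3
a_even, a_odd = a[0::2], a[1::2]  # A_even(y) = 5 + 2y, A_odd(y) = 3 + 4y

x = 2.0
print(eval_poly(a, x),
      eval_poly(a_even, x * x) + x * eval_poly(a_odd, x * x))   # both 51.0
# At -x only the sign of the odd part flips:
print(eval_poly(a, -x),
      eval_poly(a_even, x * x) - x * eval_poly(a_odd, x * x))   # both -25.0
```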
### Summary
To get $A(x)$ of degree $n - 1$ at the $n$ points $\pm x_1, \ldots, \pm x_{n/2}$ we
1. Define $A_{even}$ and $A_{odd}$ as above, each of degree at most $n/2 - 1$.
2. Recursively evaluate $A_{even}$ and $A_{odd}$ at the $n/2$ points $x_1^2, \ldots, x_{n/2}^2$.
3. In $O(n)$ time, we get $A$ at $\pm x_1, \ldots, \pm x_{n/2}$ via $A(\pm x_i) = A_{even}(x_i^2) \pm x_i A_{odd}(x_i^2)$.
This has running time
$$T(n) = 2 T(n/2) + O(n)$$
which by the master theorem is $O(n \log(n))$.
We also need the squared points $x_1^2, \ldots, x_{n/2}^2$ to come in $\pm$ pairs so the recursion can continue, which will start to get interesting!
## Complex numbers
To achieve this we will need to look at the polynomials over the complex numbers, which we denote as $\mathbb{C}$.
To work with complex numbers remember we set $i = \sqrt{-1}$ and think of them as sums $a + bi$ for $a, b \in \mathbb{R}$.
These can be represented as points $(a, b)$ in the plane, or we can represent them using polar coordinates $r$ and $\theta$, where
$$a = r \cos(\theta) \quad \text{and} \quad b = r \sin(\theta).$$
This can also be summarised using Euler's formula by saying
$$a + bi = r e^{i \theta}.$$
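A quick numeric check of the rectangular/polar correspondence, using Python's built-in `cmath` and `math` modules (the example value is mine):

```python
import cmath, math

z = 1 + 1j
r, theta = abs(z), cmath.phase(z)                  # r = sqrt(2), theta = pi/4
print(r * math.cos(theta), r * math.sin(theta))    # ~1.0 ~1.0, recovering a and b
print(r * cmath.exp(1j * theta))                   # ~(1+1j), Euler's formula
```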
### Roots of unity
The $n$'th roots of unity are those $z \in \mathbb{C}$ such that $z^n = 1$. For $n = 2$ these are simply $\pm 1$, however for $n = 4$ they are $\pm 1$ and $\pm i$.
More generally these are $e^{2 \pi i j / n}$ for $0 \le j \le n - 1$. In the notation of this course set $\omega_n = e^{2 \pi i / n}$ and express the roots of unity as $\omega_n^j$ for $0 \le j \le n - 1$.
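For instance, the 8th roots of unity can be listed directly (a small sketch; `omega` stands for $\omega_n = e^{2 \pi i / n}$):

```python
import cmath

n = 8
omega = cmath.exp(2j * cmath.pi / n)               # omega_n = e^{2*pi*i/n}
roots = [omega ** j for j in range(n)]
print(all(abs(z ** n - 1) < 1e-9 for z in roots))  # True: each root satisfies z^n = 1
```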
``` pseudocode
FFT(a, w):
    input: coefficients a = (a_0, a_1, ..., a_{n-1}) for polynomial A(x),
           where n is a power of 2 and w is an n'th root of unity
    output: A(w^0), A(w^1), A(w^2), ..., A(w^{n-1})

    if n = 1: return A(1)
    Let A_even = (a_0, a_2, ..., a_{n-2}) and A_odd = (a_1, a_3, ..., a_{n-1})
    (A_even(w^0), A_even(w^2), ..., A_even(w^{n-2})) = FFT(A_even, w^2)
    (A_odd(w^0),  A_odd(w^2),  ..., A_odd(w^{n-2}))  = FFT(A_odd, w^2)
    Set, for 0 <= j < n/2:
        A(w^j)         = A_even(w^{2j}) + w^j A_odd(w^{2j})
        A(w^{n/2 + j}) = A_even(w^{2j}) - w^j A_odd(w^{2j})
    return (A(w^0), A(w^1), ..., A(w^{n-1}))
```
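A hedged, runnable translation of this pseudocode into Python might look like the following; it assumes `n` is a power of 2 and `w` is a primitive $n$'th root of unity, and the example at the end evaluates $1 + 2x + 3x^2 + 4x^3$ at the 4th roots of unity.

```python
import cmath

def fft(a, w):
    """Evaluate A(x), with coefficient list a, at w^0, w^1, ..., w^{n-1}.
    Assumes n = len(a) is a power of 2 and w is a primitive n'th root of unity."""
    n = len(a)
    if n == 1:
        return [a[0]]                              # A(1) = a_0
    even = fft(a[0::2], w * w)                     # A_even at (w^2)^0, ..., (w^2)^{n/2-1}
    odd = fft(a[1::2], w * w)                      # A_odd at the same points
    out = [0] * n
    wj = 1                                         # running power w^j
    for j in range(n // 2):
        out[j] = even[j] + wj * odd[j]             # A(w^j)
        out[j + n // 2] = even[j] - wj * odd[j]    # A(w^{n/2 + j})
        wj *= w
    return out

# A(x) = 1 + 2x + 3x^2 + 4x^3 at the 4th roots of unity 1, i, -1, -i:
w4 = cmath.exp(2j * cmath.pi / 4)
print([complex(round(z.real, 9), round(z.imag, 9)) for z in fft([1, 2, 3, 4], w4)])
# [(10+0j), (-2-2j), (-2+0j), (-2+2j)]
```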
If $T(n)$ is the run time of our algorithm, let's analyse this. The steps to split up the polynomial and glue the answers back together take $O(n)$ each. Then we make two recursive calls that both take $T(n/2)$, so the run time is $T(n) = 2T(n/2) + O(n) = O(n \log(n))$, like we said above.
## How do we go backwards?
### Linear algebra of FFT
The linear algebra of what we are doing here is important for understanding how we compute the inverse.
For a point $x_j$ we have
$$A(x_j) = a_0 + a_1 x_j + a_2 x_j^2 + \cdots + a_{n-1} x_j^{n-1}.$$
So to compute it for the $n$ points $\omega_n^0, \omega_n^1, \ldots, \omega_n^{n-1}$ we do this via matrices using the following form
$$\begin{pmatrix} A(\omega_n^0) \\ A(\omega_n^1) \\ \vdots \\ A(\omega_n^{n-1}) \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \omega_n & \omega_n^2 & \cdots & \omega_n^{n-1} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & \omega_n^{n-1} & \omega_n^{2(n-1)} & \cdots & \omega_n^{(n-1)(n-1)} \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_{n-1} \end{pmatrix}$$
where we denote the matrix as $M_n(\omega_n)$, so $A = M_n(\omega_n) a$. So computing the inverse of the FFT is simply calculating $a = M_n(\omega_n)^{-1} A$.
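To see the matrix form concretely, here is a NumPy sketch (the helper name `vandermonde` is mine) that builds $M_n(\omega_n)$ and checks $A = M_n(\omega_n) a$ against direct evaluation:

```python
import numpy as np

def vandermonde(n, w):
    """M_n(w): the n x n matrix whose (j, k) entry is w^{jk}."""
    return np.array([[w ** (j * k) for k in range(n)] for j in range(n)])

n = 4
omega = np.exp(2j * np.pi / n)                     # omega_n
a = np.array([1, 2, 3, 4], dtype=complex)          # coefficients of A(x)

A_vals = vandermonde(n, omega) @ a                 # (A(omega^0), ..., A(omega^{n-1}))
direct = [sum(a[k] * omega ** (j * k) for k in range(n)) for j in range(n)]
print(np.allclose(A_vals, direct))                 # True
```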
### Lemma
$$M_n(\omega_n)^{-1} = \frac{1}{n} M_n \left ( \omega_n^{-1} \right )$$
### Proof
Let's examine $M_n(\omega_n^{-1}) M_n(\omega_n)$. For the $(j, k)$'th entry of this product we have
$$\begin{align*} \left ( M_n(\omega_n^{-1}) M_n(\omega_n) \right )_{j,k} & = \sum_{i=0}^{n-1} \omega_n^{-ji} \, \omega_n^{ik} \\ & = \sum_{i=0}^{n-1} \omega_n^{i(k-j)} \\ & = \sum_{i=0}^{n-1} \left ( \omega_n^{k-j} \right )^i \end{align*}$$
Which splits into cases: if $j = k$ then $\omega_n^{k-j} = 1$ and we have this sum is $n$. Whereas if $j \not = k$, from the [[Sum of roots of unity|sum of roots of unity]] we have this sum is $0$. Therefore
$$ M_n(\omega_n^{-1}) M_n(\omega_n) = n I_n$$
giving us the desired result. $\square$

Now the problem once again is of the form $n a = M_n(\omega_n^{n-1}) A$, where $\omega_n^{n-1} = \omega_n^{-1}$ is itself an $n$'th root of unity, so we can use the [[Fast Fourier Transform|FFT]] algorithm detailed before to compute the inverse of the [[Fast Fourier Transform|FFT]].

## Pseudocode for Polynomial Multiply

``` pseudocode
Poly-Mult(a, b):
    input: coefficients a = (a_0, a_1, ..., a_{n-1}) for polynomial A(x)
           and coefficients b = (b_0, b_1, ..., b_{n-1}) for polynomial B(x)
    output: coefficients c = (c_0, c_1, ..., c_{2n-2}) for polynomial C(x) = A(x) B(x)

    Let w_{2n} be the first 2n'th root of unity (pad a and b with zeros to length 2n)
    (r_0, r_1, ..., r_{2n-1}) = FFT(a, w_{2n})
    (s_0, s_1, ..., s_{2n-1}) = FFT(b, w_{2n})
    for j = 0 -> 2n - 1:
        t_j = r_j x s_j
    (c_0, c_1, ..., c_{2n-1}) = 1/(2n) FFT(t, w_{2n}^{2n-1})
    return (c_0, c_1, ..., c_{2n-2})
```

The running time of this algorithm is $O(2n \log(2n)) = O(n \log(n))$ for the three FFT steps, then $O(n)$ for calculating the entries $t_j$, giving that it runs in $O(n \log(n))$.
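Putting the pieces together, here is a hedged runnable version of this polynomial multiply (the recursive `fft` is the same sketch as before, repeated so the example is self-contained; function names and example values are mine):

```python
import cmath

def fft(a, w):
    """Evaluate the polynomial with coefficient list a at w^0, ..., w^{n-1} (n a power of 2)."""
    n = len(a)
    if n == 1:
        return [a[0]]
    even, odd = fft(a[0::2], w * w), fft(a[1::2], w * w)
    out = [0] * n
    for j in range(n // 2):
        out[j] = even[j] + w ** j * odd[j]
        out[j + n // 2] = even[j] - w ** j * odd[j]
    return out

def poly_multiply(a, b):
    """Coefficients of C(x) = A(x) B(x); assumes len(a) == len(b) == n with n a power of 2."""
    n = len(a)
    size = 2 * n
    w = cmath.exp(2j * cmath.pi / size)              # w_{2n}, a primitive 2n'th root of unity
    r = fft(list(a) + [0] * n, w)                    # pad to length 2n, evaluate A
    s = fft(list(b) + [0] * n, w)                    # pad to length 2n, evaluate B
    t = [rj * sj for rj, sj in zip(r, s)]            # pointwise products C(w_{2n}^j)
    c = fft(t, 1 / w)                                # multiply by M_{2n}(w^{-1}) ...
    return [round((cj / size).real, 9) for cj in c]  # ... and divide by 2n

# (1 + 2x + 3x^2 + 4x^3)^2 = 1 + 4x + 10x^2 + 20x^3 + 25x^4 + 24x^5 + 16x^6
print(poly_multiply([1, 2, 3, 4], [1, 2, 3, 4]))
# [1.0, 4.0, 10.0, 20.0, 25.0, 24.0, 16.0, 0.0]   (trailing zero for c_{2n-1})
```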