Motivated by the excellent work of Bill Davis and Jerry Uhl's Differential Equations & Mathematica [1], we present in detail several little-known applications of the fast discrete Fourier transform (DFT), also known as the FFT. Namely, we first examine the use of the FFT in multiplying univariate polynomials and integers and in approximating functions with sines and cosines (also known as the fast Fourier fit, or FFF). We then examine the use of the FFF in solving differential equations with Laplace transforms and in rediscovering trigonometric identities.
Introduction
We begin with a review of the basic definitions needed.
Definition 1 (DFT). Let $R$ be a ring, $n \in \mathbb{N}_{\geq 1}$, and $\omega \in R$ a primitive $n$th root of unity; that is, $\omega^n = 1$ and $\omega^{n/p} - 1$ is not a zero divisor (or, $\omega^{n/p} \neq 1$) for any prime divisor $p$ of $n$. We represent the polynomial $f(x) = \sum_{j=0}^{n-1} f_j x^j \in R[x]$, of degree less than $n$, by the coefficient list, in reverse order, $F = (f_0, f_1, \ldots, f_{n-1})$. The $R$-linear map $\mathrm{DFT}_\omega : R^n \to R^n$, which maps $F$ to $\left(f(1), f(\omega), f(\omega^2), \ldots, f(\omega^{n-1})\right)$, is called the discrete Fourier transform (DFT).
In other words, the DFT is a special multipoint evaluation at the powers $1, \omega, \omega^2, \ldots, \omega^{n-1}$ of a primitive $n$th root of unity $\omega$. The fast implementation of the DFT is known as the fast DFT, or simply as the FFT; it can be performed in time $O(n \log n)$. Details can be found in the literature [2]. Keeping it simple, we mention in passing that the inverse DFT is defined as the problem of interpolation at the powers of $\omega$ and is easily solved; indeed, $\mathrm{DFT}_\omega^{-1} = \frac{1}{n}\,\mathrm{DFT}_{\omega^{-1}}$.
In Mathematica the map and its inverse are implemented for the complex numbers (with $\omega = e^{2\pi i/n}$) by the functions Fourier and InverseFourier, both of which use the FFT. So, for example, the definition is verified by
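Here is a minimal check of this equivalence (the coefficient list and $n = 4$ are our own illustrative choices). Mathematica's Fourier scales by $1/\sqrt{n}$ and uses $\omega = e^{2\pi i/n}$, so multiplying by $\sqrt{n}$ recovers the values $f(\omega^k)$:

n = 4;
cf = {1, 2, 3, 4};  (* f(x) = 1 + 2 x + 3 x^2 + 4 x^3, lowest degree first *)
f[x_] := cf . x^Range[0, n - 1];
omega = Exp[2 Pi I/n];
Chop[Sqrt[n] Fourier[cf] - Table[f[omega^k], {k, 0, n - 1}]]
(* {0, 0, 0, 0} *)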
There is an equivalence between cyclic convolution and polynomial multiplication in the ring $R[x]/(x^n - 1)$. Please note, for the given polynomials $f = \sum_{j=0}^{n-1} f_j x^j$ and $g = \sum_{j=0}^{n-1} g_j x^j$, that the $k$th coefficient of their product, $fg$, is $\sum_{i+j=k} f_i\, g_j$, whereas the corresponding $k$th coefficient of their cyclic convolution, $h = f *_n g$, is $h_k = \sum_{i+j \equiv k \ (\mathrm{mod}\ n)} f_i\, g_j$, and hence $h \equiv fg \bmod (x^n - 1)$. Moreover, if $\deg f + \deg g < n$, then $f *_n g = fg$. We will exploit this equivalence to develop a fast polynomial multiplication algorithm. The following theorem holds.

Theorem 1. With $R$, $n$, and $\omega$ as in Definition 1, and $f, g \in R[x]$ of degree less than $n$, we have $\mathrm{DFT}_\omega(f *_n g) = \mathrm{DFT}_\omega(f) \cdot \mathrm{DFT}_\omega(g)$, where the product on the right is taken pointwise.
Example 1. Let $n = 4$ and let $f$ and $g$ be two polynomials of degree less than 4. Then the cyclic convolution $f *_4 g$ of the polynomials $f$ and $g$ (or the cyclic convolution of the vectors of their coefficients) is the polynomial $fg \bmod (x^4 - 1)$, which can be computed in any of the following ways:
- Use Mathematica's function ListConvolve.
- Take successive inner products of the first row of the table with each one of the following rows; note that we have reversed the order of $g$ and appended its first 3 entries.
- Use the formula $f *_n g = \mathrm{DFT}_\omega^{-1}\left(\mathrm{DFT}_\omega(f) \cdot \mathrm{DFT}_\omega(g)\right)$; see the sketch following this list.
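For hypothetical vectors $u = (1, 2, 3, 4)$ and $v = (5, 6, 7, 8)$ (our own illustrative choices), a sketch of all three methods, which agree, is:

n = 4;
u = {1, 2, 3, 4};
v = {5, 6, 7, 8};

(* Method 1: ListConvolve with the cyclic alignment {1, 1}. *)
c1 = ListConvolve[u, v, {1, 1}]
(* {66, 68, 66, 60} *)

(* Method 2: inner products of u with cyclic shifts of the reversed v. *)
c2 = Table[u . RotateRight[Reverse[v], s], {s, 1, n}]

(* Method 3: the DFT formula; the factor Sqrt[n] compensates for the
   1/Sqrt[n] scaling that Fourier and InverseFourier each apply. *)
c3 = Round[Chop[Sqrt[n] InverseFourier[Fourier[N[u]] Fourier[N[v]]]]]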
Fast Fourier Transform for Fast Polynomial and Integer Multiplication
We begin by discussing a topic that is well known and much talked about, but for which there is little, if any, “hands-on” experience.
It is well known that a polynomial $f(x) = \sum_{j=0}^{n-1} f_j x^j$ of degree less than $n$ over an integral domain $D$, such as the integers or the rationals, can be represented either by its list of coefficients $(f_0, f_1, \ldots, f_{n-1})$, taken in reverse order here, or by a list of its values $(f(x_0), f(x_1), \ldots, f(x_{n-1}))$ at $n$ distinct points $x_0, x_1, \ldots, x_{n-1}$, where for $k = 0, 1, \ldots, n-1$ we have $x_k = \omega^k$; $\omega$ is a primitive $n$th root of unity.
The reason for considering the value representation is that multiplication in that representation is easy. To wit, if $(f(x_0), \ldots, f(x_{n-1}))$ and $(g(x_0), \ldots, g(x_{n-1}))$ are the values of two polynomials $f$ and $g$, evaluated at $n$ distinct points, with $n > \deg(fg)$, then the values of the product $fg$ at those points are $(f(x_0)g(x_0), \ldots, f(x_{n-1})g(x_{n-1}))$. Hence, the cost of polynomial multiplication in the value representation is linear in the degree, whereas in the list of coefficients representation we do not know how to multiply in linear time.
Therefore, a fast way of doing multipoint evaluation and interpolation leads to a fast polynomial multiplication algorithm. Namely, evaluate the two input polynomials, multiply the results pointwise, and interpolate to get the product polynomial.
The multipoint evaluation is performed with FFT as implemented by the function Fourier, whereas interpolation is performed with the inverse FFT, implemented by the function InverseFourier.
Example 2. Suppose we are given two polynomials $f$ and $g$, whose product we want to compute.
We will now compute this product using the FFT. Keeping in mind that the FFT works best for inputs whose length is a power of 2, we consider the degree of the product to be less than $8 = 2^3$.
Having fixed the value of $n = 8$, we then form the lists of coefficients of $f$ and $g$, padding them with zeros until their lengths equal 8.
We next apply Fourier to these two lists and pointwise multiply the results.
Recall, from Definition 1 and the verification following it, that what we have done here is equivalent, within a scaling factor, to evaluating each polynomial at the points $\omega^k$ (where $\omega = e^{2\pi i/8}$ and $k = 0, 1, \ldots, 7$) and pointwise multiplying the results.
Interpolating the result with InverseFourier and taking care of the scaling factor, we obtain the coefficients of the product polynomial.
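Collecting the steps for hypothetical inputs (the polynomials below are our own illustrative choices, satisfying $\deg(fg) < 8$):

Clear[f, g];
f = 1 + 2 x + 3 x^2 + 4 x^3;
g = 5 + 6 x + 7 x^2 + 8 x^3;
n = 8;  (* deg(f g) = 6 < 8 = 2^3 *)
fList = PadRight[CoefficientList[f, x], n]   (* {1, 2, 3, 4, 0, 0, 0, 0} *)
gList = PadRight[CoefficientList[g, x], n];
values = Fourier[fList] Fourier[gList];      (* pointwise multiplication *)
prodCoeffs = Round[Chop[Sqrt[n] InverseFourier[values]]]
(* {5, 16, 34, 60, 61, 52, 32, 0} *)
prodCoeffs . x^Range[0, n - 1]
(* 5 + 16 x + 34 x^2 + 60 x^3 + 61 x^4 + 52 x^5 + 32 x^6 *)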
This is exactly what we obtained with the classical multiplication.
These ideas can be incorporated in an algorithm to do just polynomial multiplication. However, in order to avoid duplication of code, since integer FFT multiplication is very similar, we implement the function generalFFTMultiply, which will be used in both cases. This function is written in such a way that it computes, in reverse order, either the coefficients of the product of two polynomials with integer coefficients, or the integer digits (to a certain base $b$) of the product of two integers.
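A minimal sketch of such a generalFFTMultiply (the name is from the text; the body is our own reconstruction, with lists taken lowest term first):

(* Multiply two coefficient (or digit) lists via FFT: pad to a power of 2,
   transform, multiply pointwise, transform back, and undo the scaling. *)
generalFFTMultiply[u_List, v_List] :=
  Module[{m, uu, vv},
    m = 2^Ceiling[Log[2, Length[u] + Length[v] - 1]];
    uu = PadRight[N[u], m];
    vv = PadRight[N[v], m];
    Round[Chop[Sqrt[m] InverseFourier[Fourier[uu] Fourier[vv]]]]]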
So, to multiply the polynomials $f$ and $g$ we define the function
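A matching sketch (again, the name polyFFTMultiply is from the text; the body is our reconstruction):

polyFFTMultiply[p_, q_, x_] :=
  Module[{c = generalFFTMultiply[CoefficientList[p, x], CoefficientList[q, x]]},
    c . x^Range[0, Length[c] - 1]]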
and their product is computed as shown below.
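With the hypothetical $f$ and $g$ from Example 2, the sketch gives:

polyFFTMultiply[f, g, x]
(* 5 + 16 x + 34 x^2 + 60 x^3 + 61 x^4 + 52 x^5 + 32 x^6 *)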
The cost of doing polynomial multiplication this way is $O(n \log n)$ operations, which is the cost of computing the FFT and its inverse. This is a big improvement over the $O(n^2)$ cost of the classical algorithm.
Before we move on to integer multiplication, it is worth mentioning that ListConvolve also gives us, in reverse order, the coefficient list of the product $fg$.
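For instance, with the same hypothetical $f$ and $g$; the alignment {1, -1} requests all overlaps and the final 0 pads with zeros:

ListConvolve[CoefficientList[f, x], CoefficientList[g, x], {1, -1}, 0]
(* {5, 16, 34, 60, 61, 52, 32} *)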
We next present the integer multiplication algorithm using FFT.
As we know, every integer can be represented as a “polynomial” in some base $b$; that is, for an integer $u$ we have $u = \sum_{j=0}^{n-1} u_j\, b^j$, with digits $0 \leq u_j < b$. Therefore, integer multiplication can be considered as polynomial multiplication, where in the final result we replace the variable by the base $b$.
Adjusting polyFFTMultiply accordingly we obtain this function.
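A sketch in the same spirit (the name intFFTMultiply and the base $b = 10$ are our assumptions):

(* Multiply two integers by convolving their base-10 digit lists with
   generalFFTMultiply and then evaluating at b = 10, which also resolves
   the carries exactly. *)
intFFTMultiply[u_Integer, v_Integer] :=
  Module[{d = generalFFTMultiply[Reverse[IntegerDigits[u]],
                                 Reverse[IntegerDigits[v]]]},
    d . 10^Range[0, Length[d] - 1]]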
Then the product of the integers 123456789 and 987654321 is computed as follows.
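Using the sketch above, the result (which exact integer arithmetic confirms) is:

intFFTMultiply[123456789, 987654321]
(* 121932631112635269 *)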
Fast Fourier Transform Is the Basis of Fast Fourier Fit
We next turn our attention to the problem of FFF, that is, the problem of approximating functions with sines and/or cosines.
The FFF problem has attracted the attention of some of the best scientific minds of all time. Gauss came up with an FFF algorithm as early as 1805; it was published posthumously in 1866. The modern version of the FFF is due to John Tukey and his cohorts at IBM and Princeton [4].
We will be using the function FastFourierFit taken from Bill Davis and Jerry Uhl’s Differential Equations & Mathematica [1] to compute the approximating complex trigonometric polynomials mentioned in Definition 3.
The code works as follows: the functions jump and Fvalues produce a list of equally spaced data points off the plot of the function between $0$ and $L$. Then, the function numtab creates a list of integers from 1 to $n$, which is used by coeffs to concatenate two lists. The first of these lists is the Fourier transform (taken in reversed order) of the first $n$ points, while the second list is the inverse Fourier transform (with the first element removed) of the same points. The list generated by coeffs has a total of $2n - 1$ points.
Finally, the function FastFourierFit takes the dot product of the list generated by FourierFitters and the list concatenated by coeffs. (All numbers in the list with magnitude below a small threshold are rounded to 0.)
FastFourierFit takes four arguments: the first one is the periodic function or, in general, the list of data points that we want to fit; the second argument is the period $L$ of the function; the third argument is the number $n$ that determines the equally spaced data points; and the last argument is the variable we want to use. Note that FastFourierFit uses the built-in functions Fourier and InverseFourier, with computational cost $O(n \log n)$.
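Since the original listing from [1] is not shown here, the following is our own sketch consistent with the description above (the name fastFourierFitSketch, the sampling at $2n$ points, and the scaling details are all assumptions):

(* Sample F at 2 n equally spaced points on [0, L) and build the fit
   Sum over k from -(n-1) to n-1 of c_k Exp[2 Pi I k t/L]; the estimates
   of c_k for k >= 0 come from InverseFourier, those for k < 0 from Fourier. *)
fastFourierFitSketch[F_, L_, n_, t_] :=
  Module[{m = 2 n, data, pos, neg, ck},
    data = Table[N[F[L j/m]], {j, 0, m - 1}];
    pos = Take[InverseFourier[data], n]/Sqrt[m];  (* c_0, c_1, ..., c_(n-1) *)
    neg = Take[Fourier[data], n]/Sqrt[m];         (* c_0, c_(-1), ..., c_(-(n-1)) *)
    ck = Join[Reverse[Rest[neg]], pos];
    Chop[ck . Table[Exp[2 Pi I k t/L], {k, -(n - 1), n - 1}]]]

For example, fastFourierFitSketch[Cos, 2 Pi, 2, t] returns 0.5 E^(-I t) + 0.5 E^(I t), that is, $\cos(t)$: a single sampled frequency is recovered exactly.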
Example 3. To see how the function FastFourierFit is used, consider a periodic function $f(t)$ with period $L$. A plot is given in Figure 1.
Figure 1. The periodic function $f(t)$.
Approximating $f$ with FastFourierFit, we obtain
or its real (noncomplex) version
Note that the coefficients of the complex fit and of its real version satisfy the relations mentioned in Definition 3. Moreover, $f$ has a pure cosine fit. This was expected because the function is even; that is, for the function $f$, defined on the extended interval $[-L, L]$, we have $f(-t) = f(t)$. See also its plot in Figure 1. Later on we will meet odd functions as well, with $f(-t) = -f(t)$; those have pure sine fits.
The function $f$ and its approximation are plotted together in Figure 2. As we see, FastFourierFit picks equally spaced data points off the plot of $f$ between $0$ and $L$; it then tries to fit these points with a combination of complex exponentials.
Figure 2. The dashed red plot is that of the approximating function.
As we mentioned before, the coefficients of the approximating polynomial in Definition 3 are computed using the FFT, incorporated in the function FastFourierFit. Another way of computing those coefficients is to use the integrals

$$c_k = \frac{1}{L} \int_0^L f(t)\, e^{-\frac{2\pi i k t}{L}}\, dt,$$
which results in the integral Fourier fit.
This formula for the coefficients is obtained if we assume that, for a fixed $n$, the function $f(t)$ is being approximated by the function

$$g(t) = \sum_{k=-n}^{n} c_k\, e^{\frac{2\pi i k t}{L}},$$

where $c_k \in \mathbb{C}$, and we set $f(t) = g(t)$. Then, for each $m$ with $-n \leq m \leq n$, we will definitely have

$$\int_0^L f(t)\, e^{-\frac{2\pi i m t}{L}}\, dt = \sum_{k=-n}^{n} c_k \int_0^L e^{\frac{2\pi i (k-m) t}{L}}\, dt.$$

But

$$\int_0^L e^{\frac{2\pi i (k-m) t}{L}}\, dt = \begin{cases} L, & k = m,\\ 0, & k \neq m, \end{cases}$$

and, hence, the formula for the coefficients.
The two approximations resulting from the FFF and the integral Fourier fit are fairly close, and almost identical for large values of $n$.
The disadvantage of the integral Fourier fit is that the integrals that need to be computed are sometimes very hard, impractical even for numerical integration. Nonetheless, the method is useful for hand computations, whereas doing the FFF by hand is completely out of the question.
The advantage of the integral Fourier fit is that, in theoretical situations, it provides a specific formula to work with. However, after the theory is developed and calculations begin, people switch to the FFF.
Recapping, note that FastFourierFit is a “double” approximation. It first uses sines and cosines to approximate a continuous periodic function and then uses the discrete Fourier transform to approximate the integrals involving these trigonometric polynomials, in effect replacing numerical integration by sampling.
Fast Fourier Fit Meets Laplace Transform
We recall that the Laplace transform of a given function $f(t)$ is another function, $F(s) = \int_0^\infty f(t)\, e^{-st}\, dt$. The functions appropriate for the Laplace transform are all functions $f(t)$ with the property that $f(t)\, e^{-st} \to 0$ as $t \to \infty$ for large positive $s$. The functions $\sin(t)$, $\cos(t)$, $t^n$, and $e^{at}$, as well as any quotient of polynomials, are all appropriate candidates for the Laplace transform.
For instance, here is the Laplace transform of one such function.
Indeed:
This is Mathematica's way of saying that if $s$ is real and sufficiently large, then the Laplace transform of the function in question is given by the displayed formula. On the other hand, if Mathematica is given the Laplace transform of a function $f(t)$, it can often recover the formula for $f(t)$:
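As a hypothetical instance, here are the Laplace transform of $\sin(t)$ and its inversion:

LaplaceTransform[Sin[t], t, s]
(* 1/(1 + s^2) *)
InverseLaplaceTransform[1/(1 + s^2), s, t]
(* Sin[t] *)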
Laplace transforms are used in solving differential equations by algebraic means. Suppose, for example, that we are given a differential equation $y''(t) + b\,y'(t) + c\,y(t) = f(t)$, with starting values $y(0)$ and $y'(0)$. The solution of this differential equation can be found algebraically if we replace all the functions involved in it by their Laplace transforms. In this way, we obtain the equation

$$\left(s^2\, Y(s) - s\, y(0) - y'(0)\right) + b \left(s\, Y(s) - y(0)\right) + c\, Y(s) = F(s),$$

and solve it for the Laplace transform of $y(t)$ to obtain the formula

$$Y(s) = \frac{F(s) + (s + b)\, y(0) + y'(0)}{s^2 + b\, s + c}.$$

This tells us that if $Y(s)$ and $F(s)$ are the Laplace transforms of the functions $y(t)$ and $f(t)$, respectively, then $Y(s)$ is completely determined by $F(s)$ and the starting values. The solution of the differential equation can then be obtained by taking the inverse Laplace transform of $Y(s)$, which is possible in many cases.
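Here is a sketch of this algebraic procedure on a hypothetical instance, $y'' + 2y' + 5y = \sin(t)$ with $y(0) = 0$ and $y'(0) = 1$ (equation and starting values are our own choices):

Y = LaplaceTransform[y[t], t, s];
lhs = LaplaceTransform[y''[t] + 2 y'[t] + 5 y[t], t, s] /. {y[0] -> 0, y'[0] -> 1};
Ysol = First[Solve[lhs == LaplaceTransform[Sin[t], t, s], Y]];
(* recover the solution by the inverse Laplace transform *)
ysol[t_] = Simplify[InverseLaplaceTransform[Y /. Ysol, s, t]]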
In this section we combine FastFourierFit and Laplace transforms to come up with good approximate formulas for periodically forced oscillators. To our knowledge, save for the work by Bill Davis and Jerry Uhl [1], this topic is totally absent from textbooks on differential equations! As a matter of fact, Fourier transforms, when discussed at all, appear only when dealing with the heat and wave equations [5].
Recall that a differential equation of the form $y''(t) + b\,y'(t) + c\,y(t) = f(t)$, with given starting values $y(0)$ and $y'(0)$, and with $f$ a periodic function, can be solved either by evaluating a convolution integral of $f$ or by taking its Laplace transform. However, in both cases, it may happen that the integrals involving $f$ are too complicated and Mathematica (or any other similar computer algebra package) cannot handle them.
What we want to do then is to first find a good FFF of $f$ (using sines and/or cosines) and then to use either of the methods mentioned to get an approximate formula for the solution. That is, instead of solving the original equation, we will be solving the differential equation $y''(t) + b\,y'(t) + c\,y(t) = f_{\mathrm{approx}}(t)$ with the starting values $y(0)$ and $y'(0)$.
Example 4. Let us say that we have to solve a differential equation of the above form, with given starting values $y(0)$ and $y'(0)$. The periodic forcing function $f(t)$ can be seen in Figure 3.
Figure 3. The periodic forcing function $f(t)$.
It is impossible to find an exact solution of this equation: Mathematica's built-in function DSolve bogs down because the integrals are too complicated.
As mentioned earlier, what we do in such cases is to first find fApproximationReal[t], a good FFF of $f$.
Then, we easily obtain $Y(s)$, the Laplace transform of the approximate formula of the solution of the equation with the given starting values.
Finally, the formula for the approximate solution is obtained using the inverse Laplace transform.
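To give the flavor of the whole pipeline, here is a sketch on a hypothetical oscillator (the forcing function, the coefficients, and the starting values are our own choices; fastFourierFitSketch is the sketch given earlier):

forcing[u_] := Exp[Sin[u]^3];   (* hypothetical periodic forcing, period 2 Pi *)
fApprox[t_] = fastFourierFitSketch[forcing, 2 Pi, 8, t];
Y = LaplaceTransform[y[t], t, s];
lhs = LaplaceTransform[y''[t] + 2 y'[t] + 5 y[t], t, s] /. {y[0] -> 0, y'[0] -> 1};
Ysol = First[Solve[lhs == LaplaceTransform[fApprox[t], t, s], Y]];
yApprox[t_] = Chop[InverseLaplaceTransform[Y /. Ysol, s, t]];
(* compare with a purely numerical solution of the original equation *)
numSol = First[NDSolve[{y''[t] + 2 y'[t] + 5 y[t] == forcing[t],
    y[0] == 0, y'[0] == 1}, y, {t, 0, 20}]];
Plot[{y[t] /. numSol, Re[yApprox[t]]}, {t, 0, 20},
  PlotStyle -> {Automatic, {Red, Dashed}}]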
In Figure 4 we compare the plot of the “unknown” solution obtained by NDSolve with the plot of the approximate formula for the solution (red, thicker dashed line). They are identical!
Figure 4. The red dashed line is the approximate solution.
Fast Fourier Fit for Discovering Trigonometric Identities
Another interesting application of FastFourierFit is in helping us “discover” trigonometric identities. Again, to our knowledge, this is mentioned only in the exercises of the work by Bill Davis and Jerry Uhl [1].
We know, for example, trigonometric identities such as $\sin^2(t) = \frac{1}{2}\left(1 - \cos(2t)\right)$. Suppose for the moment that such an identity is unknown to us and that we are faced with an expression like $\sin^2(t)$. How can we simplify it? Of course we can use Mathematica's built-in function TrigReduce,
but let us write our own trigIdentityFinder function using FFF.
Our function trigIdentityFinder is based on FastFourierFit, which is used to approximate the given expression for various values of $n$, until the result no longer changes. The final result is then the desired identity. So we have
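A sketch consistent with this description, built on the fastFourierFitSketch from above (the stabilization loop and the rationalization of the coefficients are our assumptions):

trigIdentityFinder[expr_, t_] :=
  Module[{fit, old, new, n = 1},
    (* fit the expression, convert to real trig form, and clean the numbers *)
    fit[m_] := Chop[ComplexExpand[
        fastFourierFitSketch[Function[tau, expr /. t -> tau], 2 Pi, m, t]]] /.
      x_Real :> Rationalize[x, 10^-6];
    old = fit[n]; new = fit[n + 1];
    (* increase the number of sample points until the fit stabilizes *)
    While[new =!= old && n < 16, n++; old = new; new = fit[n + 1]];
    new]

For instance, trigIdentityFinder[Sin[t]^3, t] stabilizes at (3 Sin[t])/4 - Sin[3 t]/4, recovering the identity $\sin^3(t) = \frac{3 \sin(t) - \sin(3t)}{4}$.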
and the required identity for the problem at hand is found in 11 iterations.
We end this subject with one more problem from [1], comparing our identity with the result obtained from Mathematica.
Conclusions
Our goal has been to put together several difficult-to-access applications of the fast Fourier transform (FFT) for use in the classroom. Hopefully, the programs provided here will be of help for experimentation and further development.
Acknowledgments
We would like to thank the two anonymous referees for their most helpful comments, which improved our presentation.
References
[1] B. Davis and J. Uhl, Differential Equations & Mathematica, Gahanna, OH: Math Everywhere, Inc., 1999. Part of the Calculus & Mathematica series of books.
[2] H. J. Weaver, Applications of Discrete and Continuous Fourier Analysis, New York: John Wiley & Sons, 1983.
[3] W. Strampp, V. G. Ganzha, and E. Vorozhtsov, Höhere Mathematik mit Mathematica, Band 4: Funktionentheorie, Fouriertransformationen und Laplacetransformationen, Braunschweig/Wiesbaden: Vieweg Lehrbuch Computeralgebra, 1997.
[4] D. K. Kahaner, C. Moler, and S. Nash, Numerical Methods and Software, Englewood Cliffs, NJ: Prentice Hall, 1989.
[5] W. E. Boyce and R. C. DiPrima, Elementary Differential Equations and Boundary Value Problems, 6th ed., New York: John Wiley & Sons, 1997.
A. G. Akritas, J. Uhl, and P. S. Vigklas, “On Some Applications of the Fast Discrete Fourier Transform,” The Mathematica Journal, 2011. dx.doi.org/doi:10.3888/tmj.11.1-2.
About the Authors
Alkiviadis G. Akritas taught at the University of Kansas for twenty years before he moved to Greece, where he has been teaching and doing research in the Department of Computer and Communication Engineering at the University of Thessaly, in Volos, since 1998. His research interests are in the field of symbolic and algebraic computations (a field in which he has published extensively) and in using computer algebra systems to improve the teaching of mathematics. Based on Vincent’s theorem of 1836, Akritas has developed the two fastest methods for isolating the real roots of polynomial equations; these methods have been incorporated, respectively, in the computer algebra systems Maple and Mathematica.
Jerry Uhl is a professor of mathematics at the University of Illinois at Urbana-Champaign. He is the author or coauthor of a number of research papers. During the 1980s, Uhl served as real analysis editor of the research journal Proceedings of the American Mathematical Society. He also served one term as managing editor of the same journal, as well as one term on the Council of the American Mathematical Society. Since 1988, Uhl has devoted nearly all his energies to Calculus&Mathematica. In 1998, he received an award for distinguished teaching from the Mathematical Association of America.
Panagiotis S. Vigklas is a Ph.D. student in the Department of Computer and Communication Engineering at the University of Thessaly, in Volos. He is currently working on his dissertation under the supervision of A. G. Akritas.
Alkiviadis G. Akritas
University of Thessaly
Department of Computer and Communication Engineering
37 Glavani & 28th October
GR-38221, Volos
Greece
akritas@uth.gr
inf-server.inf.uth.gr/~akritas/index.html
Jerry Uhl
University of Illinois at Urbana-Champaign
Department of Mathematics
273 Altgeld Hall (mc 382)
1409 W. Green
Urbana, IL 61801
USA
juhl@cm.math.uiuc.edu
Panagiotis S. Vigklas
University of Thessaly
Department of Computer and Communication Engineering
37 Glavani & 28th October
GR-38221, Volos
Greece
pviglas@uth.gr