
Last update to this page: 15 February 2000
French Version of this page, translated by Jean Debord (many thanks!)
IMPORTANT NOTE for MATLAB users!
fft.zip - [last update: 15 March 1998.]
This source code contains one implementation for arrays of type float and one for arrays of double. Both functions can perform the transform or the inverse transform, depending on a flag passed as a parameter.
fftdom.cpp - This is a sample C++ function which shows how to find the peak power frequency in the output arrays produced by the FFT. This function returns the frequency in Hz of the part of the frequency spectrum which has maximum power. Note that the returned value can actually be between the integer multiples of the Fourier base frequency f0 = samplingRate / numSamples. This code calculates a weighted average of the power spectrum around the peak frequency, to more accurately determine the true peak frequency. This technique is similar to calculating the center of gravity of a horizontal beam with continuously varying linear density along its length.
fourier.pas - [last update: 11 December 1996.] Contains fft to perform the forward transform, and ifft for the inverse transform.
testfft.pas - This is a little test program for fourier.pas, which also serves as a demo of how to use the code.
testfft.zip - This zip file contains testfft.exe and egavga.bgi. Download this if you want to run the testfft program but do not have access to Borland's Turbo Pascal compiler.
vbfft.bas - [last update: 1 August 1999.]
fftdll.zip - [last update: 1 August 1999.]
This ZIP file contains a DLL which can be linked into a VB project, along with the C source code to the DLL. Instructions are included in the file. Be sure to expand with subdirectory information when you unzip this file (e.g. 'pkunzip -d fftdll').
Please note: I, Don Cross, do not know anything about Visual Basic. If you have any questions or comments about this VB code, please email Murphy McCauley at MurphyMc@Concentric.NET, or visit his web site. Whenever Murphy has a new version, he will get it to me and I will post it here. Thanks!
adafft.zip - [last update: 13 December 1999.]
adafft.tar.gz - [last update: 13 December 1999.]

The most common type of digital audio recording is called pulse code modulation (PCM). Pulse code modulation is what compact discs and most WAV files use. In PCM recording hardware, a microphone converts a varying air pressure (sound waves) into a varying voltage. Then an analog-to-digital converter measures (samples) the voltage at regular intervals of time. For example, in a compact disc audio recording, there are exactly 44,100 samples taken every second. Each sampled voltage gets converted into a 16-bit integer. A CD contains two channels of data: one for the left ear and one for the right ear, to produce stereo. The two channels are independent recordings placed "side by side" on the compact disc. (Actually, the data for the left and right channels alternate... left, right, left, right, ... like marching feet.)
The data that results from a PCM recording is a function of time. It often amazes people that a sequence of millions of integers on a compact disc recording can yield music and speech. People tend to wonder, "How can a stream of numbers sound like an entire orchestra?" It seems magical, and it is! Yet the magic is not in the digital recording; it's in your ear and your brain. To understand why this is true, imagine that you could place a microscopic movie camera in your ear to film your ear drum in slow motion. Suppose the movie camera was so fast that it could take a picture every 1/44,100 of a second. Also, suppose that the images this camera captured on film were so crisp and sharp that you could discern 65,536 (64K) distinct positions of the ear drum's surface as it moved back and forth in response to incoming sound waves. If you used this hypothetical technology to film your ear drum while listening to your best friend saying your name, then took the resulting movie and wrote down the numeric position of your ear drum in every frame of the movie, you would have a digital PCM recording. If you could later make your ear drum move back and forth in accordance with the thousands of numbers you had written down, you would hear your friend's voice saying your name exactly as it sounded the first time. It really doesn't matter what the sound is - your friend, a crowded party, a symphony - the concept still holds. When you hear more than one thing at a time, all the distinct sounds are physically mixed together in your ears as a single pattern of varying air pressure. Your ears and your brain work together to analyze this signal back into separate auditory sensations. It's literally all in your head!
Of course, we cannot do an infinite summation of any kind on a real computer, so we have to settle for a finite set of sines and cosines. It turns out that this is easy to do for a digitally sampled input, when we stipulate that there will be the same number of frequency output samples as there are time input samples. Also, we are fortunate that all digital audio recordings have a finite length. We can pretend that the function x(t) is periodic, and that the period is the same as the length of the recording. In other words, imagine the recording repeating forever, and call this repeating function x(t). The duration of the repeated section defines the base frequency f0 in the equations above. In other words, f0 = samplingRate / N, where N is the number of samples in the recording.
As a concrete example, if you are using a sampling rate of 44100 samples/second, and the length of your recording is 1024 samples, the amount of time represented by the recording is 1024 / 44100 = 0.02322 seconds, so the base frequency f0 will be 1 / 0.02322 = 43.07 Hz. If you process these 1024 samples with the FFT, the output will be the sine and cosine coefficients ak and bk for the frequencies 43.07Hz, 2*43.07Hz, 3*43.07Hz, etc. To verify that the transform is functioning correctly, you could then generate all the sines and cosines at these frequencies, multiply them by their respective ak and bk coefficients, add these all together, and you will get your original recording back! It's a bit spooky that this actually works!
The realOut array holds the coefficients of the cosine waves in the Fourier formula.
The imagOut array holds the coefficients of the sine waves in the Fourier formula.
The output arrays (realOut and imagOut) are a little bit strange, because they contain both positive and negative frequencies. Both positive and negative frequencies are necessary for the math to work when the inputs are complex-valued (i.e. when at least one of the inputs has a non-zero imaginary component). Most of the time, the FFT is used for strictly real-valued inputs, and this is especially the case in digital audio analysis. The FFT, when fed real-valued inputs, gives outputs whose positive and negative frequencies are redundant. It turns out that they are complex conjugates of each other, meaning that their real parts are equal and their imaginary parts are negatives of each other. If your inputs are all real-valued, you can get all the frequency information you need just by looking at the first half of the output arrays.
The first element of the output arrays, realOut[0] and imagOut[0], contains the average value of all the input samples. For the output indices i = 1, 2, 3, ..., n/2, the value of the frequency expressed in Hz is f = samplingRate * i / n. The negative frequency counterpart of every positive frequency index i = 1, 2, 3, ..., n/2 - 1, is i' = n - i. Here's an example. Suppose the sampling rate is 44,100 Hz, and we are using buffers of 1024 complex numbers for both the inputs and outputs. The frequency at i = 1 would be (44,100 Hz) * 1 / 1024 = 43.07 Hz. The negative frequency counterpart would be at i = 1024 - 1 = 1023 (i.e. the last slot in the array), with a frequency of -43.07 Hz. Likewise, i = 17 would correspond to a frequency of (44,100 Hz) * 17 / 1024 = 732.13 Hz, while i = 1024 - 17 = 1007 would correspond to -732.13 Hz, etc.
To convert the pair of values (realOut[i], imagOut[i]) into a magnitude and phase angle, use the following code:

    #include <math.h>
    double magnitude = sqrt( realOut[i]*realOut[i] + imagOut[i]*imagOut[i] );
    double angle = atan2( imagOut[i], realOut[i] );

If you are interested in doing the inverse conversion, from magnitude and angle to real and imaginary, use the following code:

    #include <math.h>
    double real = magnitude * cos(angle);
    double imag = magnitude * sin(angle);

CalcFrequency.
The main problem with using the FFT for processing sounds is that the digital recordings must be broken up into chunks of n samples, where n always has to be an integer power of 2. One would first take the FFT of a block, process the FFT output array (e.g. zero out all frequency samples outside a certain range of frequencies), then perform the inverse FFT to get a filtered time-domain signal back. When the audio is broken up into chunks like this and processed with the FFT, the filtered result will have discontinuities which cause a clicking sound in the output at each chunk boundary. For example, if the recording has a sampling rate of 44,100 Hz, and the blocks have a size n = 1024, then there will be an audible click every 1024 / (44,100 Hz) = 0.0232 seconds, which is extremely annoying to say the least.
I have had some success getting rid of the discontinuities using the following method. Assume the buffer size is n = 2^N. On the first iteration, read n samples from the input audio, do the FFT, processing, and IFFT, and keep the resulting time data in a second buffer. Then shift the second half of the original buffer into the first half (remember that n is a power of 2, so it is divisible by 2), and read n/2 fresh samples into the second half of the buffer. Do the same FFT, processing, IFFT. Now do a linear fade-out on the second half of the old output buffer saved from the first (FFT, processing, IFFT) pass, multiplying each sample by a value that varies from 1 (for sample number n/2) down to 0 (for sample number n - 1). Do a linear fade-in on the first half of the new output buffer (going linearly from 0 to 1), and add the two halves together to get a smooth transition in the output. Note that the areas surrounding each discontinuity are virtually erased from the output, yet a consistent volume level is maintained. This technique works best when the processing does not disturb the phase information of the frequency spectrum. For example, a bandpass filter will work very well, but you may encounter distortion with pitch shifting.
Here is an example C++ program that implements the preceding method. Notice especially how the functions FadeMix and ShiftData are called from main.
If you really want to do clean sounding algorithmic filters on digital audio, you should check out time-domain filters (also known as linear filters), which process the input audio samples one at a time, instead of processing blocks of samples.