Digital Audio Effects

Guide: Prof. V.M. Gadre Advisor: Ritesh Kolte

   
Chetan Rao Abhishek Badki SatyaPrakash Pareek 

Contents:

Introduction

Section A: Getting familiar with the tools
  DSP Kit overview
  CCS: usage and programming
  Matlab vs CCS

Section B: Digital Audio Effects
  DAFX
  Delay based effects
    Echo
    Chorus
    Flanging
    Reverberation

  Filtering based effects
    Equalizer
      Using cascade technique - peak and shelving filters
      Using band filters
    Wah wah effect

  Modulation based effects
    Ring modulation

  Other effects
    Distortion

Section C: References
Section D: Appendix

Getting familiar with the tools
DSP Kit overview

Fig. 5510 DSP Starter Kit (DSK)

The primary features of the DSK are:
• 200 MHz TMS320VC5510 DSP
• AIC23 Stereo Codec
• Four Position User DIP Switch and Four User LEDs
• On-board Flash and SDRAM

Digital Signal Processor: The TMS320VC5510 DSP is the heart of the system.

Codec: Codec stands for coder/decoder. The DSK includes an on-board codec called the AIC23. The job of the AIC23 is to code analog input samples into a digital format for the DSP to process, and then to decode data coming out of the DSP to generate the processed analog output.


Flash and SDRAM: The 5510 has a significant amount of internal memory, so typical applications will have all code and data on-chip. But when external accesses are necessary, it uses a 32-bit wide external memory interface. The DSK includes an external non-volatile Flash chip to store boot code and an external SDRAM to serve as an example of how to include external memories in your own system.

CPLD: The DSK implements the logic necessary to tie the board components together in a complex programmable logic device (CPLD). In addition to random glue logic, the CPLD implements a set of 4 software programmable registers that can be used to access the on-board LEDs and DIP switches as well as control the daughter card interface.

JTAG emulator: The 5510 DSK includes a special device called a JTAG emulator on-board that can directly access the register and memory state of the 5510 chip through a standardized JTAG interface port. When a user wants to monitor the progress of his program, Code Composer sends commands to the emulator through its USB host interface to check on any data the user is interested in.

Four Position User DIP Switch and Four User LEDs: The DSK has 4 light emitting diodes (LEDs) and 4 DIP (Dual-In-Line Package) switches that allow users to interact with programs through simple LED displays and user input on the switches.

There are many software packages that can be used for creating audio effects:
• MATLAB
• CCS: Code Composer Studio
• LabVIEW

We have used CCS for programming and MATLAB for finding filter coefficients.


CCS: Usage and Programming

CODE COMPOSER STUDIO: CCS

CCS is Texas Instruments' software development tool. It consists of:
• An assembler
• A C compiler

The Code Composer IDE is the piece you see when you run Code Composer. It consists of an editor for creating source code, a project manager to identify the source files and options necessary for your programs, and an integrated source level debugger that lets you examine the behavior of your program while it is running. The IDE is responsible for calling other components such as the compiler and assembler, so developers don't have to deal with the hassle of running each tool manually.

Code Composer Studio provides integrated program management using projects. A project keeps track of all information that is needed to build a target program or library. A project records:
• Filenames of source code and object libraries
• Compiler, assembler, and linker options
• Include file dependencies

Program management is most easily accomplished using the Project View window. The Project View window displays the entire contents of the project, organized by the types of files associated with the project. All project operations can be performed from within the Project View window.

The project environment speeds development time by providing a variety of commands for building your project. Use the Compile File command to compile an individual source file. The Rebuild All command forces all files to be compiled. If the project contains many source files and only a few of the files have been edited since the project was last built, use the Incremental Build command to recompile only the files that have changed.

Code Composer Studio also allows you to collect execution statistics about specific areas in your code. This is called profiling, and it gives you immediate feedback on your application's performance and lets you optimize your code. You can determine, for instance, how much

CPU time algorithms use. You can also profile other processor events, such as the number of branches, subroutine calls, or interrupts taken.

Signal flow on the DSK: The audio signal which is given at the LINE IN/MICROPHONE terminal of the DSK is converted into samples by the ADC. These samples are modified by the written program in the DSP. The modified samples are then converted by the DAC and given to the HEADPHONE and LINE OUT terminals. This is how the input is processed and the various effects are generated.

Things to be aware of:
• The DSK is a different system than a PC; when a program is recompiled in Code Composer on the PC, it must be specifically loaded onto the 5510 on the DSK.
• When a program in Code Composer is run, it simply starts executing at the current program counter. To restart the program, the program counter must be reset using Debug -> Restart, or the program must be re-loaded, which sets the program counter implicitly.
• After a program starts running it continues running on the DSP indefinitely. To stop it, use Debug -> Halt.

To start using Code Composer Studio, we first have to power the DSK and connect it to the computer through the USB port. Then we start the CCS software. While starting, the software recognizes the DSK through USB and allows us to work with it using instructions.

Basic body of the program: For beginners, a sample skeleton is provided, which includes all the basic instructions to properly interface with and load a program on the DSK. To open it, click Project -> Open and open sample_test.pjt in the sample_skeleton folder. Open the C file in the source folder in the left-hand side box; it will be similar to the sample_test_skeleton given in the Appendix. The program is written in C. It uses the AIC23 codec module of the 5510 DSK Board Support Library to read data in and write data out through the AIC23 codec and serial port. The file also contains pre-calculated sine wave data, which is commented out; the sine wave data is stored in an array called sinetable and can be used for the generation of sine waves of different frequencies. The different sampling frequencies available are also listed inside comments. The codec operates at 48 kHz

by default, but here we have changed the sampling frequency to 24 kHz.

Fig. Sampling and quantizing by the ADC, digital audio effects (DAFX), and reconstruction by the DAC

The DSP is configured using the DSP configuration tool. Settings for this example are stored in a configuration file called sample_test.cdb. At compile time, Code Composer will autogenerate DSP/BIOS related files based on these settings. The name of the generated header is taken from sample_test.cdb by adding cfg.h; it contains the results of the auto generation and must be included for proper operation:

#include "sample_testcfg.h"

To use the BSL (Board Support Library), we have to write this instruction in the program:

#include "dsk5510.h"

To use the AIC23 codec module:

#include "dsk5510_aic23.h"

To set the length of the sine wave table:

#define SINE_TABLE_SIZE 48

The pre-generated sine wave data consists of 16-bit signed samples, written in hexadecimal. For example, the decimal number 25995 is represented in hex format as 0x658B.

Int16 sinetable[SINE_TABLE_SIZE] =

{ 0x0000, 0x10b4, 0x2120, 0x30fb, 0x3fff, 0x4dea, 0x5a81, 0x658b, ... };

Frequency definitions:

#define DSK5510_AIC23_FREQ_8KHZ  1
#define DSK5510_AIC23_FREQ_16KHZ 2
#define DSK5510_AIC23_FREQ_24KHZ 3

Codec configuration settings (used to increase or decrease the input or output volume):

DSK5510_AIC23_Config config = {
    0x0017,  /* 0 DSK5510_AIC23_LEFTINVOL  Left line input channel volume  */
    0x0017,  /* 1 DSK5510_AIC23_RIGHTINVOL Right line input channel volume */
    0x00d8,  /* 2 DSK5510_AIC23_LEFTHPVOL  Left channel headphone volume   */
    0x00d8,  /* 3 DSK5510_AIC23_RIGHTHPVOL Right channel headphone volume  */
};

Effect of sampling on frequency response: The sampling frequency must be at least twice as high as the highest frequency that we wish to reproduce (Nyquist criterion), because we must have at least 1 data point for each half cycle of the audio waveform. The highest frequency that we can record with a sampling rate of 8 kHz is 4000 Hz; hence at an 8 kHz sampling rate we lose most of the high frequencies, and the quality of the output is lower compared to the 16 kHz sampling frequency. At a sampling rate of 44 kHz we can record up to 22 kHz, but the filters used in the D/A conversion process have a very steep slope at 20 kHz which will allow nothing higher than 20 kHz to get through. Larger sample rates may technically sound better, but higher sampling rates require twice as much hard drive space and twice as much CPU processing power; hence, to save space and to allow for more DSP processing, low sampling rates are used in recording sound.

The main code routine is where we initialize the BSL, read samples, perform operations on the samples, and write the samples back:

void main()

{
    DSK5510_AIC23_CodecHandle hCodec;
    Int16 leftsample, rightsample;

To initialize the board support library (it must be called first, before any other BSL function):

    DSK5510_init();

To start the codec:

    hCodec = DSK5510_AIC23_openCodec(0, &config);

To set the sampling frequency of the codec to 24 kHz:

    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_24KHZ);

The above statement can also be written as DSK5510_AIC23_setFreq(hCodec, 3), but for this we have to include the frequency definitions.

    while(TRUE)
    {

(This while loop is an infinite loop. It starts when the program is loaded and run, and stops when the program is halted.)

To read a sample from the left channel and a sample from the right channel:

        while (!DSK5510_AIC23_read16(hCodec, &leftsample));
        while (!DSK5510_AIC23_read16(hCodec, &rightsample));

There are mainly two types of jack connectors, mono and stereo:

             Tip              Ring              Sleeve
Balanced     Positive / Hot   Negative / Cold   Ground
Stereo       Left channel     Right channel     Ground

Mono has only one channel, hence in this case rightsample and leftsample will be the same; in the case of stereo there are two channels, and rightsample and leftsample may or may not be the same depending on the input.

        /*-------------------------------------
          Enter signal processing code here, which will be loaded
          on to the DSK for processing the signal on the chip

        ---------------------------------------*/

To send the content of leftsample to the left channel and the content of rightsample to the right channel:

        while (!DSK5510_AIC23_write16(hCodec, leftsample));
        while (!DSK5510_AIC23_write16(hCodec, rightsample));
    }

To close the codec:

    DSK5510_AIC23_closeCodec(hCodec);
}
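For a first test, a processing step can be dropped into the marked region between the reads and the writes. A minimal sketch, halving the volume of both channels (the gain value here is only an illustration, not part of the original skeleton):

        /* example processing step: halve the volume of both channels */
        leftsample  = leftsample  >> 1;   /* arithmetic shift divides by 2 */
        rightsample = rightsample >> 1;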

Matlab vs CCS

Comparison of Matlab and CCS: Signal processing code can be written in CCS (in the C programming language) and in Matlab. Both methods have their advantages and shortcomings. Here we compare both schemes.

Method of processing: Matlab takes an audio file as input (using the wavread function) and converts it into an array. It then runs the processing code on this array and writes the result into a second array. The second array is written into an audio file and returned to the user, so all the processing is done offline. In CCS, audio files are processed online: the audio files are played using a media player and simultaneously we can hear the processed sound. A predefined number of samples are stored and processed at a time.

Memory concerns: Matlab stores all the samples in an array that takes a lot of memory space. For example, a typical one minute audio file (sampling frequency 16 kHz) is stored in a 9,60,000-element array. In CCS, very few elements are stored (of the order of hundreds), taking very little memory.

Simplicity of code: Matlab code is very simple; if the required difference equation is known, the Matlab code is a slight modification of it. In CCS a little more thought is required: how CCS stores and processes the audio samples has to be known, and the C code includes many for loops, making the code a bit more complex.

We have used CCS for most of the audio processing.

Digital Audio Effects

Basics of digital audio signals: An audio signal consists of variations in air pressure as a function of time, so it represents a continuous-time signal x(t). This signal is converted into a voltage signal by means of some hardware, say a microphone. This analog signal is difficult to process as it is (for various reasons); to process it on a computer, it needs to be converted to a corresponding digital signal. The process includes discretization and then quantization of the analog signal, and involves the use of an Analog to Digital Converter (ADC). The ADC takes samples of the analog signal after fixed time intervals; this process is called sampling. The sampling frequency is chosen very carefully, since for the reconstruction of digital signals the sampling frequency must be at least twice the maximum frequency present in the original signal (Nyquist criterion). The process is depicted in the following figures:

Fig. An utterance of the vowel "a" in analog, discrete-time and digital format
Fig. Sampling at 4 kHz and quantization on 4 bits (16 levels)

After sampling, the signal is passed through a quantizer, in which the discrete-time signal x[n] ∈ R is approximated by a digital signal xd[n] ∈ A with only a finite set A of possible levels. The number of possible representation levels in set A is hardware defined, typically 2^b, where b is the number of bits in a word. Typical WAVE files are sampled at 44.1 kHz with a resolution of 16 bits. The digital signal thus produced is then passed through a system which processes it. The output signal is finally converted back to an analog signal using a DAC.

Digital Audio Effects (DAFX): Audio effects are used by every individual involved in the generation of music signals. They start with special playing tricks by musicians (almost all audio effects were first played by musicians, either accidentally or knowingly), merge with the use of special microphone techniques, and migrate to effect processors for synthesizing, recording, production and broadcasting of music signals. The most important and popular ones are Digital Audio Effects (DAFX). DAFX are boxes or software tools which take input audio signals or sounds, modify them according to some sound control parameters, and deliver output audio signals or sounds. Input and output signals are monitored by loudspeakers, headphones or some visual representation such as the time signal, the signal level and the spectrum (see figure). The settings of the control parameters are often done by sound engineers or musicians.

Audio effects are, in layman's words, sound modifications. Properties of sound can be changed both in the analog and the digital domain. The digital audio effects are basically digital signal processing, so knowledge of DSP is required for understanding the algorithms of DAFX. Not all sound modifications are very useful as audio effects; the modifications which alter the properties of sound such that it appears to be some other natural sound are generally useful. The most important task is to set the control parameters according to the modifications we want to achieve. The most basic changes can be thought of in the time domain: changing the sound levels of some particular samples and changing the pitch are some examples. Similarly, in the frequency domain we can filter some frequencies, enhancing or diminishing particular frequency bands.

Fig. A DAFX unit: the input signal is modified according to control parameters to produce the output signal; both input and output are monitored by the listener through acoustic and visual representations

Here we mainly focus on time domain signal processing and classify DAFX into the following categories:
1. Simple effects
2. Delay based effects
3. Filtering based effects
4. Modulation-demodulation based effects
5. Others

Some simple effects: Using only C programming and the CCS software we can create simple effects which do not use any filters. These programs are best for understanding how CCS works, so that the user can get a feel of the tools being used. Such a program sends the samples to the two headphone channels according to a function for each channel (that is, a function which modifies leftsample and rightsample). In the pendulum effect (Section D - Appendix), for example, the function used is an exponential function. Similarly, for the clockwise-anticlockwise effect, a fixed number of samples is sent to the right speaker, then the next set is sent to both, and then the next samples are sent to the left speaker; this process is clockwise, and the anticlockwise effect is then produced in the same manner in reverse.

Delay Based Digital Audio Effects

Introduction: Delays can be experienced in acoustical spaces. A sound wave reflected by a wall will be superimposed on the sound wave at the source. If the wall is far away, such as a cliff, we will hear an echo; if the wall is close to us, we will notice the reflections through a modification of the sound colour. Equivalents of these acoustical phenomena have been implemented as signal processing units. Other delay based effects are doubling, chorus, vibrato, flanging etc., which will be discussed later.

The input signal is delayed by a given time duration. The effect will be audible only when the processed signal is combined with the input signal, which acts as a reference signal. The time delay can be constant or varying with respect to time. This effect has two tuning parameters:
• The amount of delay time (Γ)
• The relative depth of the delayed signal to that of the reference signal (g)

These parameters determine the type of audio effect observed at the output. The number of samples delayed is

M = Γ * fs

where M is the number of samples delayed, Γ the amount of time delay and fs the sampling frequency. The basic structure for delay based effects is the comb filter.

Most Basic Delay Structure - FIR Comb Filter: The network that simulates a single delay is called the FIR comb filter. The difference equation is given by:

y(n) = x(n) + g x(n - M)

Hence, the transfer function is given by:

H(z) = 1 + g z^(-M)

Fig. Block diagram for the FIR comb filter
Fig. Frequency response of the FIR comb filter

The time response of this filter is made up of the direct signal and the delayed version. The frequency response shows notches at regular frequencies and looks like a comb; that is why this type of filter is called a comb filter. The gain varies between 1 + g and 1 - g. For positive values of g, the filter amplifies all frequencies that are multiples of 1/Γ and attenuates all frequencies that lie in between; for negative values of g, the filter attenuates frequencies that are multiples of 1/Γ and amplifies those that lie in between. For example, for a delay of 1 millisecond and a positive value of g, the filter will amplify the frequencies 1000 Hz, 2000 Hz, etc., while it will attenuate the frequencies 500 Hz, 1500 Hz, 2500 Hz, etc.

Thus, the FIR comb filter has an effect both in the time and the frequency domain. Our ear is more sensitive to one aspect or the other according to the range in which the time delay is set. For large values of Γ, we can hear a delayed signal that is distinct from the direct signal. For small values of Γ, our ear can no longer isolate the time events but can notice the spectral effect of the comb: the peaks produced by the comb are so close to each other that we can barely identify the spectral effect.

Implementation of delay based effects: For online processing of the signal, the input signal, as it is accepted at the input port, has to be stored in an array. This stored signal has to be shifted to allow the next sample at the input port to be stored: at every sample time n, the newest input sample is accepted into the left-hand side of the delay line (sample 0), while the oldest sample is discarded off the right-hand side. That action defines the delay line. The array so formed contains the delayed signals and hence is called the delay line. The length of the delay line defines the maximum delay that can be achieved in this way.

Consider the delay line delayline[2048], represented by an array. The delay-line output is rarely the last sample; it is taken somewhere within the body of the delay line, so the output is shown above the block as an arbitrary tap. If we want a delay of i samples, the delay line has to be tapped at the i-th sample to get the desired delayed output.

Fig. Representation of delayline[2048] with an arbitrary tap at index i

As mentioned earlier, the given parameters determine the type of audio effect at the output. To realize doubling or echo, the amount of time delay has to be constant over the time interval. To realize chorus or flanging, the amount of time delay has to be varied around an average value with a signal of low frequency, such as 1 Hz; this external signal is called the low frequency oscillator (LFO).

If i is constant, the delay of the output signal is constant. To get a time varying delay, i has to be varied as a function of n, which results in a fractional delay. This is explained further while discussing the chorus and flanging effects.

The implementation of the above delay line in CCS is as follows:

/* important note: all the elements of the arrays delayline1 and
   delayline2 have to be initialized with zero */
/* the following loop has to be executed for each input sample */
/* the leftsample and rightsample are processed separately */
for (k = 2047; k > 0; k--)
{
    delayline1[k] = delayline1[k-1];
    delayline2[k] = delayline2[k-1];
}
delayline1[0] = leftsample;
delayline2[0] = rightsample;
leftsample  = delayline1[i];
rightsample = delayline2[i];

DOUBLING and ECHO:

Theory: Doubling and echo are constant delay effects. Doubling involves a quick repetition of the reference signal; the amount of time delay is constant, in the range 10 to 25 milliseconds. If the delay is greater than 50 milliseconds, we hear an echo. The difference equation, transfer function and block diagram for the constant delay effects are shown below:

y(n) = x(n) + g x(n - M)

H(z) = 1 + g z^(-M)

Fig. Block diagram for constant delay based effects

Implementation:
• Set the sampling frequency appropriately according to the input signal.
• Calculate the number of samples to be delayed as M = Γ * fs, where Γ is the amount of delay time (10 - 25 milliseconds for doubling, greater than 50 milliseconds for echo).
• The number of elements of the delay line has to be greater than or equal to M.
• The value of g is to be chosen appropriately, to determine the relative depth of the delayed signal to that of the reference signal.
• The input sample has to be combined with the delayed sample to get the output sample, as sketched below.
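As a sketch, the echo combination inside the codec loop might look as follows, reusing the delayline1 array and the shifting loop shown earlier (the tap index and gain here are illustrative assumptions, not fixed values):

        /* echo: y(n) = x(n) + g * x(n - M), here with g = 0.5.
           With fs = 24 kHz, i = 2000 gives a delay of about 83 ms (> 50 ms). */
        int k, i = 2000;
        for (k = 2047; k > 0; k--)
            delayline1[k] = delayline1[k-1];
        delayline1[0] = leftsample;
        /* the sum may need scaling to avoid overflowing the 16-bit range */
        leftsample = leftsample + (delayline1[i] >> 1);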

CHORUS, VIBRATO AND FLANGING:

Theory: Chorus comprises the combination of the reference signal and its delayed and pitch modulated version. This delayed and pitch modulated version can be obtained by varying the delay time around its average value with the help of a low frequency oscillator (LFO). The delay time for chorus is in the range 20-30 milliseconds.

Vibrato comprises only the delayed and pitch modulated version of the input signal; the delay time is in the range 20-30 milliseconds and is varied with the help of the LFO. Chorus is thus the combination of vibrato and the input signal.

When a car is passing by, we hear a pitch deviation due to the Doppler effect. This pitch variation is due to the fact that the distance between the source and our ears is being varied. Varying the distance is, for our application, equivalent to varying the time delay. If we keep varying the time delay periodically, we will produce a periodic pitch variation.

Like chorus, flanging is also created by mixing the reference signal with a slightly delayed copy of itself, where the delay time is constantly changing. The delay for flanging usually ranges from 1 millisecond to 10 milliseconds. From the point of view of implementation, the only difference between chorus and flanging is the delay range. As discussed earlier, if the amount of time delay is small, our ear can no longer isolate the time events but can notice the spectral effect of the comb filter. Flanging therefore has a very characteristic sound, generally referred to as a "whooshing" sound. This aspect of flanging is covered separately in a later section.

The difference equation, transfer function and block diagram for the time varying delay effects are as follows:

y(n) = x(n) + g x(n - M(n))

H(z) = 1 + g z^(-M(n))

Fig. Block diagram for time varying delay based effects

To understand how the pitch is changed, picture the delay as a recording device. It is storing an exact copy of the input signal as it arrives, much like a cassette recorder, and it then outputs that a little later, at the same rate. To increase the amount of delay, you want a longer segment of the signal to be stored in the delay before it is played back. To do this, you read out of the delay line at a slower rate than it is being written (the recording rate is unchanged, so more of the signal is being stored). Reading back at a slower rate is just like dragging your fingers on the wheel of the cassette, which we know lowers the pitch. Similarly, to reduce the delay time, we can just read back faster, analogous to speeding up a playing cassette, which increases the pitch.

Fractional delay line: The changing delay time will require delay times that are not integer multiples of the sampling period (while the input signal is sampled at multiples of this sampling period). That is, there is a need for fractional delay. The computation of fractional delay requires a delay line interpolation technique. This way, the effective delay is not discretized, thus avoiding signal discontinuities when the desired delay time is continuously swept or modulated. The most common methods for interpolation are linear interpolation and all pass interpolation.

Implementation of vibrato using delay line interpolation: The desired output v(n) (vibrato) dynamically points, via i.frac, to a place between two discrete samples. The index i, an integer, is defined as the current computed whole relative index into our delay line, relative to the beginning of the delay line. The integer i requires computation because we want it modulated by the LFO w(n), oscillating as a function of discrete time n. The integer range of i, ± CHORUS_WIDTH, is centered about the nominal tap point into the delay line, NOMINAL_DELAY, the fixed positive integer tap center.

i.frac = i + frac
i.frac = NOMINAL_DELAY + CHORUS_WIDTH * w(n)

For linear interpolation:
v(n) = frac * delayline[i + 1] + (1 - frac) * delayline[i]

For all-pass interpolation:
v(n) = delayline[i + 1] + (1 - frac) * delayline[i] - (1 - frac) * v(n - 1)
22   

Fig. Implementation of the vibrato effect using interpolation of the delay line: the tap center NOMINAL_DELAY is modulated over a range of 2 * CHORUS_WIDTH within delayline[2048], and the fractional position i.frac falls between samples i and i + 1, weighted by frac and 1 - frac
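A sketch of the interpolated tap read in C is given below; floating point is used for clarity, and i_frac, delayline and v_prev (holding v(n-1) for the all pass case) are assumed to be maintained elsewhere:

    /* fractional delay-line read at position i.frac = i + frac */
    int   i    = (int)i_frac;          /* integer part            */
    float frac = i_frac - i;           /* fractional part, 0 .. 1 */
    float v;

    /* linear interpolation between samples i and i + 1 */
    v = frac * delayline[i + 1] + (1.0f - frac) * delayline[i];

    /* all pass interpolation would instead use:
       v = delayline[i + 1] + (1.0f - frac) * delayline[i]
           - (1.0f - frac) * v_prev;                             */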

Parameters:

LFO waveform: The LFO waveform shows how the delay changes over time. When the waveform reaches a maximum, the delay is at its largest value. When the waveform is increasing, the total delay time is increasing; since the rate of storing the input in the delay line is unchanged, this corresponds to reading the output at a slower rate and hence lowers the pitch. Similarly, when the waveform is decreasing, the total delay time is decreasing; since the rate of storing the input is unchanged, the output is read at a faster rate, which raises the pitch. Refer [ ] for the derivation of the pitch change ratio. Some of the commonly used waveforms are sinusoidal, triangular, logarithmic and saw tooth.

The following points are worth noting:
• For the sinusoidal waveform, the pitch change ratio varies sinusoidally with time and is proportional to the modulation frequency and the sample period.
• The pitch change ratio is piecewise constant in the case of a triangular waveform.
• To get a constant pitch change ratio, the LFO waveform has to be linear. Unfortunately, i.frac will eventually pass one or the other delay line boundary, so this technique cannot be used indefinitely.

NOMINAL_DELAY: This is the average value of time delay required to implement the effect. This delay value should be within the desired delay range.

CHORUS_WIDTH: The amount of pitch modulation introduced by the chorus is related to how quickly the LFO waveform changes: the steepest portions of the waveform produce a large amount of pitch modulation, while the relatively flat portions have very little or no effect on the pitch. We can use this view to understand how CHORUS_WIDTH varies the pitch. If we increase CHORUS_WIDTH, we are effectively stretching the waveform vertically, which makes it steeper, and thus the pitch is altered more. This value is to be chosen so that the net delay stays within the specified ranges.

Implementation:
• Set the sampling frequency appropriately according to the input signal.
• Set the value of NOMINAL_DELAY midway between the specified delay range.
• Set the value of CHORUS_WIDTH so that the net delay does not exceed the specified delay range.
• Select an appropriate LFO waveform whose frequency (F) is less than 3 Hz.
• Find the number of samples (MAX_COUNT) each period of the waveform covers: MAX_COUNT = fs / F.
• Compute the maximum and minimum delays: MAX_DELAY = NOMINAL_DELAY + CHORUS_WIDTH and MIN_DELAY = NOMINAL_DELAY - CHORUS_WIDTH.
• Find the number of samples that correspond to MAX_DELAY and MIN_DELAY: max = MAX_DELAY * fs and min = MIN_DELAY * fs.
• The value of g is to be chosen appropriately, to determine the relative depth of the delayed signal to that of the reference signal.

A sketch of the LFO generation is given below.
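This sketch generates a triangular LFO and the modulated tap position; the constants shown are illustrative, with MAX_COUNT following the formula above for fs = 24 kHz and F = 1 Hz:

    #define MAX_COUNT 24000                  /* fs / F */
    static int count = 0;
    float w, i_frac;

    /* triangular wave: w sweeps -1 .. +1 .. -1 once per period */
    if (count < MAX_COUNT / 2)
        w = -1.0f + 4.0f * count / MAX_COUNT;
    else
        w =  3.0f - 4.0f * count / MAX_COUNT;
    count = (count + 1) % MAX_COUNT;

    /* modulated tap position, as in the text */
    i_frac = NOMINAL_DELAY + CHORUS_WIDTH * w;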

Spectral analysis of flanging: As already mentioned, the delay time ranges for the flanging effect are small; hence our ear can no longer isolate the time events but can notice the spectral effect of the comb filter. We know that, for g > 0, there are M peaks in the frequency response, centered about the frequencies

Ω = (2 * π * k) / M, for k = 0, 1, 2, ..., M - 1.

Between these peaks there are M notches, at intervals of fs / M Hz. As the delay time (Γ) is variable for the flanging effect, the value of M is also variable. As M changes over time, the peaks and notches of the comb response are compressed and expanded. The spectrum of a sound passing through the flanger is thus accentuated and de-accentuated by frequency region in a time-varying manner. Due to this, we hear the characteristic 'whooshing' sound of flanging.

REVERBERATION:

Theory: Reverberation occurs when copies of an audio signal reach the ear with different delays and different amplitudes, after taking different paths and having bounced against surrounding objects. Its effect on the overall sound that reaches the listener depends on the room or environment in which the sound is played. Reverb is a time-invariant effect, and time-invariant systems can be completely characterized by their impulse response. The impulse response tells everything about the room. The reason this works is that an impulse is, in its ideal form, an instantaneous sound that carries equal energy at all frequencies. What comes back, in the form of reverberation, is the room's response to that instantaneous, all-frequency

The first acceptable form 26    .burst. The late reverberation. EDR(t. this method is computationally extremely expensive. But. Fig. and perceived as discrete echoes. 1998). Late reverberation is characterized by a dense collection of echoes. This concept. The early reflections. a dense collection of echoes travelling in all directions. The late reverberation in an artificial reverberation should have sufficient echo density in the time domain and sufficient density of maxima in the frequency domain. To properly simulate the late reverberation. If we have a fixed w0. The early reflections and late reverberation have different physical and perceptual properties. travelling in all directions. Plotting the impulse response of natural acoustic spaces. we observe normally an exponentially decaying late reverberation. normally the first sound to arrive to the listener’s ears. in a typical reverberation pattern we can distinguish three main parts: • • • The direct sound. EDR(t. usually showing an exponentially decaying curve. formalized by Jot (1992) is known as the energy decay relief. Hence. the most convenient way to obtain the reverberation effect is by building a digital filter that will simulate the impulse response of a room. both of which are functions of frequency. Impulse response of a concert hall Thus. caused by the reflection of the sound off large nearby surfaces. It is also possible to represent the reverberant pattern of a room as a function of time and frequency. For a fixed t0. EDR(t0. The time required for the reverberation level to decay 60 dB below the initial level is defined as the reverberation time (Tr). w). w0) gives the decaying energy curve at frequency w0 (Gardner. it is important to consider carefully the frequency response envelope and the reverberation time. produced by a very large number of reflected waves. w) gives the energy of each frequency at this moment.

of digital device to produce an artificial reverberator is Schroeder's reverberator.

Fig. Block diagram for Schroeder's reverberator: four comb filters in parallel, followed by two all pass filters in series

Comb filters in Schroeder's reverberator: The comb filter used here is a combination of an IIR filter and an FIR filter. The difference equation and transfer function are:

y(n) = x(n - M) + g y(n - M)

H(z) = z^(-M) / (1 - g z^(-M))

Fig. Frequency response of Schroeder's comb filter

Fig. Impulse response of Schroeder's comb filter

The impulse response consists of impulses with equal spacing Γ (the amount of time delay) and exponentially decaying magnitudes; the decay rate depends on the feedback factor g. There are two points of view on the comb filter: in the time domain it acts as a signal repeater, and in the frequency domain it acts as a multimodal resonator. The frequency response is characterized by a series of peaks equally spaced at 0, 1/Γ, 2/Γ, etc., and the peak height is given by the feedback factor.

Two density criteria for the impulse response should be satisfied to give a natural sounding reverb: we should have sufficient echo density in the time domain and sufficient density of maxima in the frequency domain. By increasing the delay time (Γ), the density of maxima in the frequency domain can be increased, but the impulse density falls. On the other side, for short delay times (Γ) we have a dense impulse response, but the density of maxima in the frequency domain falls. Hence, we use larger Γ with a dense frequency response and connect such filters in parallel to obtain a dense impulse response in the time domain. The total number of pulses produced is the sum of the pulses produced by the individual comb filters.

If the delay line loop times have common divisors, pulses will coincide, producing increased amplitude, resulting in distinct echoes and an audible frequency bias. Choose loop times that are relatively prime to each other so that the decay is smooth. Schroeder suggested that the delays of the comb filters should be chosen such that the ratio of the largest to the smallest is about 1.5 (in particular between 30 and 45 milliseconds).

The gains g of the comb filters are adjusted to obtain the desired reverberation time. If Tr is the reverberation time (in seconds) and fs the sampling frequency (in Hertz), we have:

g = 10^(-3M / (fs Tr))

Typical concert halls have reverb times ranging from 1.5 to 3 seconds. The Schroeder reverb time is about equal to the longest of the 4 comb filter reverb times.

All pass filters in Schroeder's reverberator: Unlike a comb filter, the all pass filter passes signals of all frequencies equally; that is, the amplitudes of the frequency components are not changed by the filter. The all pass filter does, however, have a substantial effect on the phase of individual signal components, that is, on the time it takes for frequency components to get through the filter. This makes it ideal for modelling frequency dispersion. The difference equation and transfer function are:

y(n) = -g x(n) + x(n - M) + g y(n - M)

H(z) = (-g + z^(-M)) / (1 - g z^(-M))

The frequency response and impulse response of this filter are shown below.

Fig. Frequency response of Schroeder's all pass filter
Fig. Impulse response of Schroeder's all pass filter

When placed in series (cascade), the impulse response of one unit triggers the response of the next, producing a much denser response; the number of pulses produced is the product of the numbers of pulses produced by the individual units. The first all pass turns each comb echo into a string of echoes, and the second all pass adds another layer of echoes. The proposed all pass delays are about 5 milliseconds and 1.7 milliseconds, and both all pass gains are adjusted to 0.7.

Implementation:
• Consider the Schroeder reverberator (figure [ ]). The sampling frequency is selected according to the input signal.
• The comb filter delay times are taken in the range .03 - .05 seconds and relatively prime to one another (e.g. .031, .037, .041, .043). The number of samples (M) to be delayed in each filter is calculated by multiplying the delay time (Γ) by the sampling frequency (fs).
• The gain of each comb filter is adjusted according to the formula g = 10^(-3M / (fs Tr)) to give a reverb time in the range 1.5 - 3 seconds.
• A common delay line is used for all the comb filters. The outputs of the comb filters are added; this sum serves as input for the delay line of the first all pass filter. The output of the first all pass filter serves as input for the delay line of the second all pass filter. Two separate delay lines are used for the two all pass filters.
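A per-sample sketch of one comb section and one all pass section follows; floating point circular buffers are assumed for clarity, and the buffer lengths and positions are maintained by the caller:

    /* comb: the buffer holds w(n) = x(n) + g*w(n-M); the output
       y(n) = w(n-M) then satisfies y(n) = x(n-M) + g*y(n-M)     */
    float comb(float x, float *buf, int M, float g, int *pos)
    {
        float y = buf[*pos];          /* w(n - M)             */
        buf[*pos] = x + g * y;        /* store w(n)           */
        *pos = (*pos + 1) % M;        /* advance circular tap */
        return y;
    }

    /* all pass: with w(n) = x(n) + g*w(n-M), the output
       y(n) = -g*w(n) + w(n-M) realizes H(z) = (-g + z^-M)/(1 - g z^-M) */
    float allpass(float x, float *buf, int M, float g, int *pos)
    {
        float wM = buf[*pos];         /* w(n - M) */
        float w  = x + g * wM;
        buf[*pos] = w;
        *pos = (*pos + 1) % M;
        return -g * w + wM;
    }

The four comb outputs are summed and the sum is passed through the two all pass sections in series, as in the block diagram.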

Filtering based effects

Filter Design: Filter design in itself is a very vast topic, but here we use it for our purpose in a very limited way. We design the filter in Matlab, get the coefficients of the transfer function, and then code accordingly in the C language. For designing the filters in Matlab, we have used the filter design and analysis tool (fdatool). At the first interface we select the type of filter (highpass, lowpass or bandpass), its design method (IIR or FIR / Butterworth or Chebyshev etc.) and the filter parameters (such as cut-off frequency, gain and ripple magnitude). After the selection of the filter parameters, click on 'Design Filter'. Now the filter has been designed. To view the coefficients, go to Edit -> Convert to Single Section. This gives us the filter coefficients and scale factors; the coefficients shown are those which we use in our C code. We have used FIR filters in our designs.

A typical transfer function of a second order IIR filter is of the form:

H(z) = Y(z) / X(z) = (b0 + b1 z^(-1) + b2 z^(-2)) / (1 + a1 z^(-1) + a2 z^(-2))

so that

Y(z) (1 + a1 z^(-1) + a2 z^(-2)) = X(z) (b0 + b1 z^(-1) + b2 z^(-2))

Taking the inverse z-transform on both sides:

y(n) + a1 y(n-1) + a2 y(n-2) = b0 x(n) + b1 x(n-1) + b2 x(n-2)

which gives

y(n) = -∑ ak y(n-k) + ∑ bk x(n-k)

The output depends both on the inputs and on the previous outputs. We use the coefficients of this filter in our C code. This can also be explained using block diagrams, as follows:

Fig. A typical second order filter - block diagram

This process can be used for the transfer function of a filter of any order. This design method is known as Direct Form 1 and is the basic design; Direct Form 2 is more efficient but more complex, so we restrict ourselves to the use of Direct Form 1. The process is shown using C code (here the feedback coefficients a[i] are assumed to be stored with their signs inverted, so that they can be accumulated by addition):

x[2] = leftsample;              // the current value goes to x[2] and is then
                                // looped and sent to lower indices continuously
w = 0;
for (i = 0; i <= order; i++)
    w += x[order-i] * b[i];     // all numerator coefficients are multiplied
y[2] = 0;
for (i = 1; i <= order; i++)
    y[2] += y[order-i] * a[i];  // denominator coefficients are multiplied
y[2] += w;
leftsample = y[2];              // y[2] is the value given as output
for (i = 0; i < order; i++)
    x[i] = x[i+1];
for (i = 0; i < order; i++)     // similar process of storage for the y array
    y[i] = y[i+1];

EQUALIZER:

Introduction: The equalizer is a very well known audio effect; as the name suggests, it tries to equalize the effects of all frequencies. In most audio players it can easily be found as the bass and treble knobs. In this effect different frequencies are treated differently: some frequencies are boosted, while

some other frequencies are cut and others remain unaffected. That is, for different frequencies the filters have different gains, which may be positive (boost, if gain > 1), negative (cut) or zero. If low frequencies are boosted, the effect is called bass; if high frequencies are boosted, the effect is called treble.

Implementation: The equalizer can be implemented in two ways:
1. Parallel filters
2. Series filters (cascade design)

Parallel Filter Equalizer: In this design method, three filters are connected in parallel as shown in the figure. The three filters are of low pass, band pass and high pass nature. All these filters are controlled independently, i.e. the gains and cut-off frequencies of one filter are independent of the parameters of the other two. The parameters of the three filters are set in such a way that no frequencies have zero gain, which means that the pass bands of two adjacent filters must have some frequencies in common. One such example is: low pass: 500 Hz; band pass: 450 - 2000 Hz; high pass: 1900 Hz.

To get the equalizer effect, the gains of the three filters could be set accordingly, but this would involve a change of filter coefficients for every minute adjustment. So instead we design all the filters for unity gain and then multiply by suitable gain factors at the time of addition. The output is then the weighted average of the three filtered signals: if the gain factor triplet is (10, 0.5, 0.5), the output will be (10 LP + 0.5 BP + 0.5 HP) / (10 + 0.5 + 0.5). For getting bass, the low pass gain 'a' should be high; similarly, for treble the high pass gain should be high, as the sketch below illustrates.

Fig. Magnitude responses of the lowpass, bandpass and highpass filters
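A sketch of this mixing step, assuming lp, bp and hp hold the three unity-gain filter outputs for the current sample and a, b, c are the chosen gain factors:

    /* weighted average of the three branches; a high 'a' gives bass,
       a high 'c' (the high pass gain) gives treble */
    float a = 10.0f, b = 0.5f, c = 0.5f;    /* example triplet */
    float out = (a * lp + b * bp + c * hp) / (a + b + c);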

Fig. Parallel equalizer: the input feeds the low pass, band pass and high pass filters, whose outputs are weighted by the gains a, b and c and summed

Series (Cascade) Equalizer: Apart from the parallel connection of band filters, there is another method of realizing an equalizer. First and second order shelving and peak filters are connected in series and are controlled independently. The first filter is a low pass shelving filter, followed by several peak filters and a high pass shelving filter. Special types of filters, peak filters and shelving filters, are used for this purpose. The mathematics of shelving and peak filters is very deep; here we briefly explain the characteristics and the formulae for the coefficients of these filters.

Fig. Series of peak filters: LP shelving filter, several peak filters, HP shelving filter

Fig. Series equalizer

Shelving Filters: There are two types of shelving filters: the low pass shelving filter and the high pass shelving filter. They boost or cut low or high frequency bands. The desired band of frequencies is cut or boosted with respect to unity gain; the unaffected part has unity gain. The parameters of the filter are the cut-off frequency fc and the gain G. There are two designs for shelving filters, depending on the order of the filters. Both first and second order filters are shown in the figure given below:

Fig. First and second order filters

First Order Design: The transfer function for the first order shelving filters is as follows:

H(z) = 1 + (H0 / 2) [1 ± A(z)]

where A(z) is the all pass transfer function

A(z) = (z^(-1) + aB/C) / (1 + aB/C z^(-1))

and

H0 = V0 - 1, with V0 = 10^(G/20), G = pass band gain in dB.

The + sign is to be used for low shelving and the - sign for high shelving. The cut-off frequency parameter aB for boost and aC for cut can be calculated as follows:

aB = [tan(πfc/fs) - 1] / [tan(πfc/fs) + 1]

aC = [tan(πfc/fs) - V0] / [tan(πfc/fs) + V0]

Second Order Design: The second order filters can be realized similarly. The coefficients to be used in the transfer function are given in the table below, where K = tan(πfc/fs), fc is the cut-off frequency and fs is the sampling frequency.

Table: Second order shelving filter design
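A sketch of the first order design above in C, for the boost case of the low shelving filter (fc, fs and G are the design parameters; x1 and ap1 are assumed to carry the all pass state between samples):

    #include <math.h>

    /* design-time coefficients */
    float V0 = powf(10.0f, G / 20.0f);
    float H0 = V0 - 1.0f;
    float t  = tanf(M_PI * fc / fs);
    float aB = (t - 1.0f) / (t + 1.0f);

    /* per sample: A(z) = (z^-1 + aB) / (1 + aB z^-1), then
       y(n) from H(z) = 1 + (H0/2) * (1 + A(z))               */
    float ap = aB * x + x1 - aB * ap1;       /* all pass output */
    float y  = x + 0.5f * H0 * (x + ap);     /* shelving output */
    x1  = x;                                 /* update the state */
    ap1 = ap;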

Table: Peak filter design

Parameter setting in an equalizer: The parameters to be set are the cut-off frequencies of the two shelving filters and the peak frequencies of the several peak filters. The frequencies can be chosen by the user as per the effect required. A typical octave equalizer has the cut-off frequencies 31.25 Hz (low pass shelving); 62.5 Hz, 125 Hz, 250 Hz, 500 Hz, 1000 Hz, 2000 Hz, 4000 Hz and 8000 Hz (peak filters); and 16000 Hz (high pass shelving). This is just an example. The gains of the filters are set as per the effect required: in the case of bass, the gain of the low pass shelving filter is kept relatively higher than the other filters' gains, and the gains are set similarly for the other cases.

WAH WAH:

Theory: The wah wah effect is a time varying filter effect; time-varying filters are filters whose coefficients change with time. The wah-wah effect is produced mostly by foot-controlled signal processors containing a bandpass filter with a variable central resonant frequency and a small bandwidth. Moving the pedal back and forth changes the bandpass centre frequency.

The wah audio effect produces an impression of motion in pitch. The effect amplifies a small band of frequencies, and as time passes, the amplified band shifts toward higher frequencies. Over time, the wah effect creates the perception that the input signal is raising its pitch; in other words, the wah effect merely accentuates a different section of frequencies at a different point in time. For example, the centre frequency of a filter that moves from lower to higher frequencies causes the wah audio effect (see the graph below).

In our implementation, a rotation of filter coefficients is used. Sets of samples are passed through a lowpass (LP), bandpass (BP) or highpass (HP) filter: the first set of samples passes through the LP filter, the next set through a BP filter, the next through the HP filter, and then again in the opposite order. Each set of filter coefficients corresponds to a band pass around a specific centre frequency. At the end of each passage through a filter, the original sample is added. The effect created is a WAH WAH sound.

Fig. Wah Wah effect implementation

The wah code focuses on loading a different set of filter coefficients at increments of time. By controlling when these filter coefficients act upon the input signal, the wah effect can be produced. This is done by first creating all of the filter coefficients in MATLAB. As the filter coefficients change, the pass band travels towards the higher frequencies; this motion creates the illusion that the pitch is changing, which is essentially the wah effect.

Fig. Frequency response of an ideal bandpass filter

Using the wah wah effect we can also produce a wind effect: we simply provide white noise as the input to the wah wah effect producer. For a more prominent and realistic effect, the white noise can be passed through the wah wah effect producer twice.

Implementation:
• Set the sampling frequency appropriately according to the input signal. Note that the filter coefficients are designed for a specific sampling frequency; hence the sampling frequency has to be set equal to the one used during the design of the filters.
• Decide the central frequencies of the BP filters and accordingly find the coefficients using MATLAB. The band pass filter coefficients were generated in MATLAB.
• The filters are designed and the samples are passed through the filters in the order LP, BP1, BP2, ..., BPn, HP and then back again in the reverse order.

• For the manner in which the samples are passed through the filters, a program is written which increments or decrements the filter number according to the value of m, which changes when the first filter or the last filter is reached, as sketched below.
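A sketch of this rotation logic (N, f and the block length are illustrative; coeffs[f] would select the coefficient set used to filter the current block of samples):

    #define N 10                /* filters: LP, BP1 .. BP8, HP         */
    static int f = 0;           /* index of the coefficient set in use */
    static int m = 0;           /* sweep direction: 0 = up, 1 = down   */

    /* after each block of samples has been filtered with coeffs[f]: */
    if (m == 0) {
        f++;
        if (f == N - 1) m = 1;  /* last filter reached: sweep back down */
    } else {
        f--;
        if (f == 0) m = 0;      /* first filter reached: sweep up again */
    }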

Ring modulation 41    . greater than 50 milliseconds for echo ) The number of elements of the delay line has to be greater than or equal to M. The input signal x(n) (called as the modulator) is multiplied by a sinusoid m(n) (called as the Carrier) with carrier frequency fc. They consisted of a ring of diodes which were in the shape of a circle. The name ‘Ring Modulator’ comes from the way the original Ring Modulators were created. The combination of two waveforms mixed together to create a new waveform is Ring Modulation.m(n) or [(Input 1) .Modulation based effects: RING MODULATION: Theory : Modulation is the process by which parameters of a sinusoidal signal (amplitude. The value of g is to be chosen appropriately. (Input 2)]   Fig. Especially the variation of control parameters for filters or delay lines can be regarded as an amplitude modulation or phase modulation of the audio signal. or ring. The input sample has to be combined with the delayed sample to get the output sample. to determine the relative depth of the delayed signal to that of the reference signal. Γ = amount of delay • • • time (10 – 25 milliseconds for doubling. In the field of audio processing modulation techniques are mainly used with very low frequency sinusoids. Output: y(n)=x(n). frequency and phase) are modified or varied by an audio signal. Implementation: • • Set the sampling frequency appropriately according to input signal To calculate the number of samples to be delayed M = Γ * fs where.

the spectrum of the output y(n) is made of two copies of the input spectrum: the lower sideband (LSB) and the upper sideband (USB). The LSB is reversed in frequency, and both sidebands are centered around fc. When the carrier and the modulator are sine waves of frequency fc and fx respectively, we hear the sum and difference frequencies fc + fx and fc - fx.

Fig. Ring modulation of a signal x(n) by a sinusoidal carrier signal m(n): the spectrum of the modulator is shifted around the carrier frequency, giving the LSB and USB
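A sketch of ring modulation on the DSK, reusing the sinetable array from the skeleton as the carrier (with SINE_TABLE_SIZE = 48 and fs = 24 kHz this carrier is 500 Hz; Int32 is assumed to be the BSL's 32-bit integer type, and the 15-bit shift rescales the product back to 16 bits):

        static int k = 0;
        Int32 prod;

        /* y(n) = x(n) * m(n) */
        prod = (Int32)leftsample * sinetable[k];
        leftsample = (Int16)(prod >> 15);
        k = (k + 1) % SINE_TABLE_SIZE;       /* step the carrier */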

Other effects:

DISTORTION:

Theory: Distortion is an effect which is often applied to electric guitars, though it is not limited to any one instrument. The word distortion refers to any aberration of the waveform of an electronic circuit's output signal from its input signal. It can be accomplished by electronically altering the dynamic range (compression) or by clipping the input signal. Typically, in the context of musical instrument amplification, it refers to various forms of clipping, which is the truncation of the part of an input signal that exceeds certain voltage limits.

When we play a signal on a speaker, the resulting sound consists of a number of frequencies with various amplitudes. If we play a 100 Hz sine wave through a speaker, the fundamental frequency is 100 Hz, but we may also see a response at other frequencies, typically a given order higher than the fundamental. Assuming a fundamental frequency of 100 Hz, the second harmonic is 200 Hz, the third harmonic is 300 Hz, the fourth is 400 Hz, and so on. Typically, the amplitude of these harmonics decreases as the order of the harmonic increases: second harmonic distortion is often higher than third, third is higher than fourth, etc. The presence of any of these additional harmonics is considered distortion, as they were not present in the original signal. If we were to increase the gain until we have fully clipped the signal, the result would be the fundamental frequency (100 Hz) and its higher order harmonics; this effect adds additional harmonics and overtones to the signal, creating a richer sound.

Realizing distortion in Matlab is fairly simple, and this effect doesn't require knowledge of deep mathematics. All we need is to distort the original (input) audio signal. Distortion can be done in any arbitrary manner; in the time domain, distorting the amplitude is the best and easiest thing we can do. As soon as we have the input signal converted into a digital signal, we have an array to process, and we know the maximum and minimum of the sequence. We will modify the amplitudes of the discrete samples coming in. (Similar distortions can also be realized in the frequency domain by playing with the frequency content.) We have obtained the desired effect mainly using the following two methods:

Clipping: The input signal is clipped so that only low amplitude values remain in the output sequence. A threshold can be defined with respect to the maximum of all the input amplitudes; i.e., if we clip with a factor of 0.5, all the amplitudes below half of the maximum remain as they are, but all higher amplitudes are clipped to half the maximum. Clipping is itself a form of distortion. The following is the Matlab code for clipping, and it is self explanatory:

function [new] = clip(x, f, t)
% t is the threshold fraction
d = max(x);
new = x;
for i = 1:1:length(x)
    if (abs(x(i)) > t*d)
        if (x(i) > 0)
            new(i) = t*d;
        end
        if (x(i) < 0)
            new(i) = -1*t*d;
        end
    end
end
wavwrite(new, f, 'clipped.wav')

Parabolic Distortion: The effect is similar to compression but slightly different. The motive is the same as in clipping, but in clipping the transfer function is not smooth; hence the noise is greater in the case of clipping. Parabolic distortion does not clip at a threshold; instead, the increase for higher amplitudes is gradual. It is called parabolic distortion because the transfer function (the plot of y[n] vs. x[n]) looks like a parabola. Here we try to keep the transfer function smooth (we have chosen a parabola; the user can choose any other similar transfer function).

Fig. Graph of y^2 = x

For its realization, all the positive values (x[n] > 0) in the input signal are replaced by sqrt(4*a*x[n]) and all the negative values are replaced by -sqrt(4*a*(-x[n])). Hence the output is proportional to the square root of the input; the parameter 'a' decides the rate of rise for high amplitudes, so it should be chosen as per requirement. The code for the same is shown below, and it is self explanatory:

function [new] = distortion(x, f, a)
new = x;
for i = 1:1:length(x)
    if (x(i) >= 0)
        new(i) = sqrt(4*a*x(i));
    end
    if (x(i) < 0)
        new(i) = -1*sqrt(4*a*-1*x(i));
    end
end
wavwrite(new, f, 'distortion.wav')

Implementation:
• Set the sampling frequency appropriately according to the input signal.
• Depending on the amplitude of the sample, the output is limited if the input exceeds some fixed value, which implies the output is clipped; this condition is given by the parabola equation in the program.
• The output is proportional to the square root of the input.
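The same kind of hard clipping can also be done per sample in C on the DSK; a minimal sketch, with the threshold expressed directly as a sample value (the constant is illustrative):

        /* hard clipping: limit samples to +/- THRESH */
        #define THRESH 8000          /* about a quarter of full scale */
        if (leftsample >  THRESH) leftsample =  THRESH;
        if (leftsample < -THRESH) leftsample = -THRESH;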

Section C: References

• "Physical Audio Signal Processing", Julius O. Smith III
• "DAFX - Digital Audio Effects", Udo Zölzer (ed.), John Wiley & Sons, Ltd.
• "Effect Design Part 2: Delay-Line Modulation and Chorus", Jon Dattorro
• "Computational Acoustic Modeling with Digital Delay", Julius O. Smith III
• "Comparative Performance Analysis of Artificial Reverberation Algorithms", Norbert Toma, Marina Dana Ţopa, Victor Popescu, Erwin Szopos
• "Optimization of delay lines in Schroeder's reverberator structure", Ing. Bohumil Bohunicky
• "Matlab Implementation of Reverberation Algorithms", José R. Beltrán, Fernando A. Beltrán
• "Delay Effects: Flanging, Phasing, Chorus, Artificial Reverb", Tamara Smyth
• "Signal and Noise in Programming Language", P. J. Plauger
• Harmony Central, effects explained: http://www.harmony-central.com/Effects/effects-explained.html
• www.music.mcgill.ca: Audio Effects in MATLAB
• Hydrogenaudio forums: http://www.hydrogenaudio.org/forums/index.php?act=ST&f=1&t=4949
• Buzzle: http://www.buzzle.com/articles/audio-effects-compression-ring-modulation.html
• dsprelated.com: http://www.dsprelated.com/groups/c55x/1.php and http://www.dsprelated.com/groups/code-comp/1.php
• Wikipedia: http://en.wikipedia.org/wiki/Audio_effects and http://en.wikipedia.org/wiki/Guitar_effects

Section D: Appendix

SOME SIMPLE EFFECTS:

Clock-anticlock-wise effect:

/* The two channel gains are cross-faded in opposite directions on every
   pass, so the sound appears to circle around the listener, first one
   way and then the other. */

double i;
Int16 leftsample, rightsample;
int m;

void main()
{
    DSK5510_AIC23_CodecHandle hCodec;

    /* Initialize the board support library, must be called first */
    DSK5510_init();

    /* Start the codec */
    hCodec = DSK5510_AIC23_openCodec(0, &config);

    /* Set sampling frequency of the codec to 24KHz */
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_24KHZ);

    while (TRUE)
    {
        for (i = 1; i < 15000; i++)   /* start at 1: 1000/i below needs i != 0 */
        {
            while (!DSK5510_AIC23_read16(hCodec, &leftsample));
            while (!DSK5510_AIC23_read16(hCodec, &rightsample));

            if (m == 0)
            {
                leftsample  = i/10000 * leftsample;   /* left fades in   */
                rightsample = rightsample * 1000/i;   /* right fades out */
            }
            else
            {
                leftsample  = leftsample * 1000/i;
                rightsample = rightsample * i/10000;
            }

            while (!DSK5510_AIC23_write16(hCodec, 2*leftsample));
            while (!DSK5510_AIC23_write16(hCodec, 2*rightsample));
        }

        for (i = 15000; i > 0; i--)   /* same cross-fade, swept backwards */
        {
            while (!DSK5510_AIC23_read16(hCodec, &leftsample));
            while (!DSK5510_AIC23_read16(hCodec, &rightsample));

            if (m == 0)
            {
                leftsample  = i/10000 * leftsample;
                rightsample = rightsample * 1000/i;
            }
            else
            {
                leftsample  = leftsample * 1000/i;
                rightsample = rightsample * i/10000;
            }

            while (!DSK5510_AIC23_write16(hCodec, 2*leftsample));
            while (!DSK5510_AIC23_write16(hCodec, 2*rightsample));
        }

        if (m == 0) m = 1;   /* swap the roles of the channels each cycle */
        else        m = 0;
    }

    /* Close the codec */
    DSK5510_AIC23_closeCodec(hCodec);
}

Pendulum effect:

/* Same cross-fade as above, but without the extra gain of two; the
   sound swings between the two speakers like a pendulum. */

double i;
Int16 leftsample, rightsample;
int m;

void main()
{
    DSK5510_AIC23_CodecHandle hCodec;

    /* Initialize the board support library, must be called first */
    DSK5510_init();

    /* Start the codec */
    hCodec = DSK5510_AIC23_openCodec(0, &config);

    /* Set sampling frequency of the codec to 24KHz */
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_24KHZ);

    while (TRUE)
    {
        for (i = 1; i < 15000; i++)
        {
            while (!DSK5510_AIC23_read16(hCodec, &leftsample));
            while (!DSK5510_AIC23_read16(hCodec, &rightsample));

            if (m == 0)
            {
                leftsample  = i/10000 * leftsample;
                rightsample = rightsample * 1000/i;
            }
            else
            {
                leftsample  = leftsample * 1000/i;
                rightsample = rightsample * i/10000;
            }

            while (!DSK5510_AIC23_write16(hCodec, leftsample));
            while (!DSK5510_AIC23_write16(hCodec, rightsample));
        }

        for (i = 15000; i > 0; i--)
        {
            while (!DSK5510_AIC23_read16(hCodec, &leftsample));
            while (!DSK5510_AIC23_read16(hCodec, &rightsample));

            if (m == 0)
            {
                leftsample  = i/10000 * leftsample;
                rightsample = rightsample * 1000/i;
            }
            else
            {
                leftsample  = leftsample * 1000/i;
                rightsample = rightsample * i/10000;
            }

            while (!DSK5510_AIC23_write16(hCodec, leftsample));
            while (!DSK5510_AIC23_write16(hCodec, rightsample));
        }

        if (m == 0) m = 1;
        else        m = 0;
    }

    /* Close the codec */
    DSK5510_AIC23_closeCodec(hCodec);
}
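Either panning effect can be auditioned offline before porting to the kit. The MATLAB sketch below is an illustration under assumed parameters (a triangular gain with a 2 s period and a placeholder file name), not a line-by-line translation of the C code above.

% Offline sketch of the pendulum pan: a triangular gain g(n) in [0,1]
% fades the signal between the left and right channels.
[x, fs] = wavread('speech.wav');          % placeholder input file
x = x(:,1);                               % use one channel
n = (0:length(x)-1)';
period = 2*fs;                            % one full swing every 2 seconds
g = abs(mod(n, period) - period/2) / (period/2);
y = [g .* x, (1-g) .* x];                 % left fades in as right fades out
wavwrite(y, fs, 'pendulum.wav');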

CHEBYSHEV FILTER:

/* Eighth-order Chebyshev low-pass IIR filter in direct form I. The
   coefficient arrays b[] and a[] were computed beforehand in MATLAB;
   a[1..8] are negated once so the recursion can simply accumulate. */

#include "sample_testcfg.h"
#include "dsk5510.h"
#include "dsk5510_aic23.h"

void main()
{
    DSK5510_AIC23_CodecHandle hCodec;
    Int16 leftsample, rightsample;
    int i;
    float w = 0;
    float x[9] = {0,0,0,0,0,0,0,0,0};   /* input history  */
    float y[9] = {0,0,0,0,0,0,0,0,0};   /* output history */
    float b[9] = { 0.0000,  0.0001,  0.0002,   0.0004,  0.0005,
                   0.0004,  0.0002,  0.0001,   0.0000 };
    float a[9] = { 1.0000, -6.4320, 17.7955, -29.6111, 33.3858,
                 -25.3028, 12.1340, -3.0641,   0.4860 };

    for (i = 1; i < 9; i++)
        a[i] = -a[i];

    DSK5510_init();
    hCodec = DSK5510_AIC23_openCodec(0, &config);
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_8KHZ);

    while (TRUE)
    {
        while (!DSK5510_AIC23_read16(hCodec, &leftsample));
        while (!DSK5510_AIC23_read16(hCodec, &rightsample));

        for (i = 0; i < 8; i++) x[i] = x[i+1];   /* shift the delay lines */
        for (i = 0; i < 8; i++) y[i] = y[i+1];
        x[8] = leftsample;

        w = 0;                        /* y[n] = sum(b[i]x[n-i]) - sum(a[i]y[n-i]) */
        for (i = 0; i < 9; i++)
            w += b[i] * x[8-i];
        y[8] = 0;
        for (i = 1; i < 9; i++)
            y[8] += a[i] * y[8-i];
        y[8] += w;

        while (!DSK5510_AIC23_write16(hCodec, y[8]));
        while (!DSK5510_AIC23_write16(hCodec, y[8]));
    }

    DSK5510_AIC23_closeCodec(hCodec);
}
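The coefficient tables b[] and a[] above were computed beforehand in MATLAB. The sketch below shows one assumed way of producing such an eighth-order design; the 500 Hz cutoff and the 1 dB passband ripple are illustrative values, not necessarily the exact ones used in the listing.

% Illustrative design of an 8th-order Chebyshev type-I low-pass filter
% for fs = 8 kHz. Cutoff and ripple are example values only.
fs = 8000;
fc = 500;                           % assumed cutoff frequency in Hz
[b, a] = cheby1(8, 1, fc/(fs/2));   % order 8, 1 dB passband ripple
% b and a correspond to the arrays pasted into the C program above.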

DIP INTERFACING:

Header file:

#include "dsk5510_dip.h"

Inside the main():

/* Each DIP switch selects a different codec sampling frequency. */
DSK5510_DIP_init();
if (DSK5510_DIP_get(0) == 0)
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_8KHZ);
else if (DSK5510_DIP_get(1) == 0)
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_16KHZ);
else if (DSK5510_DIP_get(2) == 0)
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_24KHZ);
else
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_16KHZ);

ECHO:

/* Feedback echo with a DIP-selectable output. The delay line x[] holds
   800 samples (50 ms at 16 kHz); the tap x[320] gives 20 ms "doubling". */

void main()
{
    DSK5510_AIC23_CodecHandle hCodec;
    Int16 leftsample, rightsample;
    int i, j = 0, k = 0;
    float y = 0;
    float x[801];                     /* one extra slot so x[800] is valid */

    for (i = 0; i <= 800; i++) x[i] = 0;

    DSK5510_init();
    DSK5510_DIP_init();
    hCodec = DSK5510_AIC23_openCodec(0, &config);
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_16KHZ);

    while (TRUE)
    {
        while (!DSK5510_AIC23_read16(hCodec, &leftsample));
        while (!DSK5510_AIC23_read16(hCodec, &rightsample));

        if (k == 0)
            y = leftsample + 0.65 * x[800];   /* 50 milliseconds delay - echo */
        if (k == 1)
            y = leftsample + x[320];          /* 20 milliseconds delay - doubling */

        for (i = 800; i > 0; i--)             /* shift the delay line */
            x[i] = x[i-1];
        x[0] = leftsample + y;                /* feed the echo back in */

        leftsample = y;

        if (DSK5510_DIP_get(0) == 0)          /* switch 0: near-mute output */
        {
            while (!DSK5510_AIC23_write16(hCodec, 0.005 * leftsample));
            while (!DSK5510_AIC23_write16(hCodec, 0.005 * leftsample));
        }
        else if (DSK5510_DIP_get(1) == 0)     /* switch 1: pass-through mix */
        {
            while (!DSK5510_AIC23_write16(hCodec, leftsample));
            while (!DSK5510_AIC23_write16(hCodec, rightsample));
        }
        else if (DSK5510_DIP_get(2) == 0)     /* switch 2: echo at unity gain */
        {
            while (!DSK5510_AIC23_write16(hCodec, leftsample));
            while (!DSK5510_AIC23_write16(hCodec, leftsample));
        }
        else                                  /* default: amplified echo */
        {
            while (!DSK5510_AIC23_write16(hCodec, 10 * y));
            while (!DSK5510_AIC23_write16(hCodec, 10 * y));
        }
    }

    DSK5510_AIC23_closeCodec(hCodec);
}

DISTORTION:

/* Square-root transfer curve applied sample by sample: the output rises
   quickly for small inputs and flattens for large ones, as in the
   parabolic distortion of Section B. */

#include <math.h>

void main()
{
    DSK5510_AIC23_CodecHandle hCodec;
    Int16 leftsample, rightsample;
    float y;

    DSK5510_init();
    hCodec = DSK5510_AIC23_openCodec(0, &config);
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_16KHZ);

    while (TRUE)
    {
        while (!DSK5510_AIC23_read16(hCodec, &leftsample));
        while (!DSK5510_AIC23_read16(hCodec, &rightsample));

        if (leftsample >= 0)                  /* mirror the curve for the   */
            y = sqrt(4.0 * leftsample);       /* negative half so sqrt()    */
        else                                  /* never sees a negative      */
            y = -sqrt(-4.0 * leftsample);     /* argument                   */

        while (!DSK5510_AIC23_write16(hCodec, 10 * y));
        while (!DSK5510_AIC23_write16(hCodec, 10 * y));
    }

    DSK5510_AIC23_closeCodec(hCodec);
}

EQUALISER: PARALLEL IMPLEMENTATION

/* Three filters (low-, band- and high-pass) run in parallel on the same
   input; their outputs are recombined with DIP-selectable weights, so
   each switch position cuts or boosts a different part of the spectrum. */

void main()
{
    DSK5510_AIC23_CodecHandle hCodec;
    Int16 leftsample, rightsample;
    int i, j = 0, k = 0;
    int order = 2, filter = 3;
    float w = 0, out = 0;

    /* samp freq=16k, order=2, filter=3, low pass cut off-800Hz,
       bandpass cut off-800-4000Hz, high pass cut off-4000Hz */
    float z[9] = { 0.0200,   0.0401,  0.0200,
                   0.4208,   0.0000, -0.4208,
                   0.46515, -0.9303,  0.46515 };
    float d[9] = { 1.0000,  -1.5610,  0.64135,
                   1.0000,  -0.8416,  0.15838,
                   1.0000,  -0.6202,  0.2404 };
    float b[3][3], a[3][3], x[3][3], y[3][3];
    float coff[3][3], sum[3];
    float low[3]  = {10, 1, 0.1};   /* band weights for the three presets */
    float mid[3]  = {5, 1, 1};
    float high[3] = {0.1, 1, 10};

    for (i = 0; i < 3; i++)
    {
        sum[i] = low[i] + mid[i] + high[i];
        coff[i][0] = low[i]/sum[i];      /* normalise the weights of each preset */
        coff[i][1] = mid[i]/sum[i];
        coff[i][2] = high[i]/sum[i];
    }

    k = 0;
    for (i = 0; i < filter; i++)
    {
        for (j = 0; j < order+1; j++)
        {
            b[i][j] = z[k];
            a[i][j] = -d[k];             /* negate once for the recursion below */
            x[i][j] = 0;
            y[i][j] = 0;
            k++;
        }
    }

    DSK5510_init();
    DSK5510_DIP_init();
    hCodec = DSK5510_AIC23_openCodec(0, &config);
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_16KHZ);

    while (TRUE)
    {
        while (!DSK5510_AIC23_read16(hCodec, &leftsample));
        while (!DSK5510_AIC23_read16(hCodec, &rightsample));

        if (DSK5510_DIP_get(0) == 0)         /* DIP interfacing to change the   */
            k = 0;                            /* filter type: CUT/BOOST LP/MP/HP */
        else if (DSK5510_DIP_get(1) == 0)
            k = 1;
        else if (DSK5510_DIP_get(2) == 0)
            k = 2;
        else
            k = 0;

        out = 0;
        for (j = 0; j < filter; j++)
        {
            for (i = 0; i < order; i++) x[j][i] = x[j][i+1];
            for (i = 0; i < order; i++) y[j][i] = y[j][i+1];
            x[j][order] = leftsample;

            w = 0;                            /* direct form 1 to find output */
            for (i = 0; i < order+1; i++)
                w += (b[j][i] * x[j][order-i]);
            y[j][order] = 0;
            for (i = 1; i < order+1; i++)
                y[j][order] += a[j][i] * y[j][order-i];
            y[j][order] += w;

            out += coff[k][j] * y[j][order];  /* weighted sum of the three bands */
        }

        leftsample = out;
        while (!DSK5510_AIC23_write16(hCodec, leftsample));
        while (!DSK5510_AIC23_write16(hCodec, leftsample));
    }

    DSK5510_AIC23_closeCodec(hCodec);
}
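Band coefficients of this shape can be produced in MATLAB, as sketched below; the Butterworth designs and the cutoff normalization are assumptions made for illustration, with the cutoffs taken from the comment in the listing above.

% Illustrative MATLAB design of the three parallel bands at fs = 16 kHz.
fs = 16000;
[bl, al] = butter(2, 800/(fs/2));           % low pass, 800 Hz
[bb, ab] = butter(1, [800 4000]/(fs/2));    % band pass, 800-4000 Hz
[bh, ah] = butter(2, 4000/(fs/2), 'high');  % high pass, 4000 Hz
z = [bl bb bh];    % concatenated numerators   -> z[9] in the C code
d = [al ab ah];    % concatenated denominators -> d[9] in the C code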

ECHO:

/* Simple feed-forward echo: an 80 ms tap (1280 samples at 16 kHz) or a
   20 ms doubling tap, chosen with the flag k. */

void main()
{
    DSK5510_AIC23_CodecHandle hCodec;
    Int16 leftsample, rightsample;
    int i, j = 0, k = 0;
    float y;
    float x[1281];                    /* one extra slot so x[1280] is valid */

    for (i = 0; i <= 1280; i++) x[i] = 0;

    DSK5510_init();
    hCodec = DSK5510_AIC23_openCodec(0, &config);
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_16KHZ);

    while (TRUE)
    {
        while (!DSK5510_AIC23_read16(hCodec, &leftsample));
        while (!DSK5510_AIC23_read16(hCodec, &rightsample));

        if (k == 0)
            y = leftsample + 0.65 * x[1280];  /* 80 milliseconds delay - echo */
        if (k == 1)
            y = leftsample + x[320];          /* 20 milliseconds delay - doubling */

        for (i = 1280; i > 0; i--)            /* shift the delay line */
            x[i] = x[i-1];
        x[0] = leftsample;

        leftsample = y;
        while (!DSK5510_AIC23_write16(hCodec, leftsample));
        while (!DSK5510_AIC23_write16(hCodec, leftsample));
    }

    DSK5510_AIC23_closeCodec(hCodec);
}

FLANGING:

/* Flanger: the input is mixed with a copy of itself whose delay sweeps
   slowly between 20 and 160 samples; frac interpolates between the two
   nearest taps so the sweep sounds smooth. */

void main()
{
    DSK5510_AIC23_CodecHandle hCodec;
    Int16 leftsample, rightsample;
    int i, j = 0, k = 20, m = 1;
    float y, frac = 0;
    float x[162];                     /* taps x[0]..x[161] are used */

    for (i = 0; i < 162; i++) x[i] = 0;

    DSK5510_init();
    hCodec = DSK5510_AIC23_openCodec(0, &config);
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_16KHZ);

    while (TRUE)
    {
        while (!DSK5510_AIC23_read16(hCodec, &leftsample));
        while (!DSK5510_AIC23_read16(hCodec, &rightsample));

        for (i = 160; i > 0; i--)
            x[i] = x[i-1];
        x[0] = leftsample;

        y = leftsample + frac * x[k] + (1-frac) * x[k+1];

        j++;
        frac = j/600.0;               /* 600.0, not 600: avoid integer division */
        if (j == 600)                 /* every 600 samples move the delay a step */
        {
            j = 0;
            if (m == 1) k++;
            if (m == 0) k--;
            if (k == 160) m = 0;      /* reverse the sweep at both ends */
            if (k == 20)  m = 1;
        }

        while (!DSK5510_AIC23_write16(hCodec, y));
        while (!DSK5510_AIC23_write16(hCodec, y));
    }

    DSK5510_AIC23_closeCodec(hCodec);
}
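An offline MATLAB counterpart of this flanger is sketched below. The sinusoidal sweep and the file name are assumptions for illustration; the kit version above sweeps the delay in a triangular fashion instead.

% Minimal MATLAB flanger sketch: the delay is swept sinusoidally between
% 20 and 160 samples, with linear interpolation between adjacent taps.
[x, fs] = wavread('speech.wav');          % placeholder input file
x = x(:,1);
y = zeros(size(x));
for n = 162:length(x)
    d = 90 + 70*sin(2*pi*0.5*n/fs);       % delay in samples, 0.5 Hz sweep
    k = floor(d); frac = d - k;           % integer and fractional parts
    y(n) = x(n) + (1-frac)*x(n-k) + frac*x(n-k-1);
end
wavwrite(y/max(abs(y)), fs, 'flanged.wav');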

TREMBLING:

/* Trembling effect: every 30 samples the gain steps through 1.0, 0.8,
   0.6, 0.4, 0.2 and repeats, so the level shakes at an audible rate. */

void main()
{
    DSK5510_AIC23_CodecHandle hCodec;
    Int16 leftsample, rightsample;
    int j = 0, k = 0;

    DSK5510_init();
    hCodec = DSK5510_AIC23_openCodec(0, &config);
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_8KHZ);

    while (TRUE)
    {
        while (!DSK5510_AIC23_read16(hCodec, &leftsample));
        while (!DSK5510_AIC23_read16(hCodec, &rightsample));

        if (k == 0)      leftsample = leftsample;
        else if (k == 1) leftsample = 0.8 * leftsample;
        else if (k == 2) leftsample = 0.6 * leftsample;
        else if (k == 3) leftsample = 0.4 * leftsample;
        else if (k == 4) leftsample = 0.2 * leftsample;

        j++;
        if (j == 30)                  /* change the gain step every 30 samples */
        {
            j = 0;
            k++;
            k = k % 5;
        }

        while (!DSK5510_AIC23_write16(hCodec, leftsample));
        while (!DSK5510_AIC23_write16(hCodec, leftsample));
    }

    DSK5510_AIC23_closeCodec(hCodec);
}

WAH-WAH:

/* Wah-wah: six second-order band-pass sections with rising centre
   frequencies; every 1000 samples the signal is routed through the next
   section, sweeping up and then back down. Coefficients are from MATLAB. */

void main()
{
    DSK5510_AIC23_CodecHandle hCodec;
    Int16 leftsample, rightsample;
    int i, j = 0, k = 0, m = 0, s = 0;
    int order = 2, filter = 6, samples = 1000;
    double w = 0;
    double x[3] = {0, 0, 0};
    double y[3] = {0, 0, 0};
    double b[6][3], a[6][3];
    /* six sections, three coefficients each */
    double z[18] = {
        0.052992251962794, 0.000000000000000, -0.052992251962794,
        0.072959657268267, 0.000000000000000, -0.072959657268267,
        0.136728735997319, 0.000000000000000, -0.136728735997319,
        0.165910681040351, 0.000000000000000, -0.165910681040351,
        0.193599605930034, 0.000000000000000, -0.193599605930034,
        0.292893218813453, 0.585786437626905,  0.292893218813453 };
    double d[18] = {
        1.000000000000000, -1.836916476010566, 0.894015496074412,
        1.000000000000000, -1.768788101059395, 0.854080685463466,
        1.000000000000000, -1.490469659645659, 0.726542528005361,
        1.000000000000000, -1.387199211860068, 0.668178637919299,
        1.000000000000000, -1.136728735997319, 0.612800788139932,
        1.000000000000000,  0.000000000000000, 0.171572875253810 };

    k = 0;
    for (i = 0; i < filter; i++)
    {
        for (j = 0; j < order+1; j++)
        {
            b[i][j] = z[k];
            a[i][j] = -d[k];   /* negate once so the recursion can accumulate */
            k++;
        }
    }
    j = 0;

    DSK5510_init();
    hCodec = DSK5510_AIC23_openCodec(0, &config);
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_16KHZ);

    while (TRUE)
    {
        while (!DSK5510_AIC23_read16(hCodec, &leftsample));
        while (!DSK5510_AIC23_read16(hCodec, &rightsample));

        for (i = 0; i < order; i++) x[i] = x[i+1];   /* shift the delay lines */
        for (i = 0; i < order; i++) y[i] = y[i+1];
        x[order] = leftsample;

        /* direct form I through the currently selected band filter j */
        w = 0;
        for (i = 0; i < order+1; i++)
            w += (b[j][i] * x[order-i]);
        y[order] = 0;
        for (i = 1; i < order+1; i++)
            y[order] += a[j][i] * y[order-i];
        y[order] += w;

        leftsample = y[order];

        s++;
        if (s == samples)          /* every 1000 samples move to the next band */
        {
            s = 0;
            if (j == filter-1) m = 1;
            if (j == 0)        m = 0;
            if (m == 0) j++;
            if (m == 1) j--;
        }

        while (!DSK5510_AIC23_write16(hCodec, 0.05 * leftsample));
        while (!DSK5510_AIC23_write16(hCodec, 0.05 * leftsample));
    }

    DSK5510_AIC23_closeCodec(hCodec);
}
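Coefficient tables shaped like z[18] and d[18] in the wah-wah listing can be generated in MATLAB as sketched below; the six centre frequencies and the roughly one-octave bandwidths are assumed, illustrative values rather than the exact design used above.

% Illustrative generation of six second-order band-pass sections for a
% wah-wah sweep at fs = 16 kHz. Centre frequencies are example values.
fs = 16000;
fc = [600 750 1300 1400 1900 4000];           % assumed sweep centres in Hz
z = []; d = [];
for i = 1:6
    [b, a] = butter(1, [0.8 1.25]*fc(i)/(fs/2));
    z = [z b];     % three numerator coefficients per section -> z[18]
    d = [d a];     % three denominator coefficients per section -> d[18]
end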
