
First bachelor thesis

"Introduction to VST Plug-in Implementation by means of a Schroeder’s Reverberator Sound Effect"

Completed with the aim of graduating with a


Bakkalaureat (FH) in Telecommunications and Media
from the St. Pölten University of Applied Sciences Telecommunications and Media degree course

under the supervision of

DI (FH) Matthias Husinsky

Completed by

Julian Rubisch
tm041063

St. Pölten, on ……………….. Signature:


Declaration

I declare that

- this thesis is all my own work, that the list of sources and aids is complete and that I
have received no other unfair assistance of any kind,

- this thesis has not been assessed either in Austria or abroad before and has not been
submitted for any other examination paper.

This piece of work corresponds to the work assessed by the appraiser.

............................... .........................................
Place, Date Signature
Table of Contents

Abstract.................................................................................................................................................................. 3
1. Introduction....................................................................................................................................................... 4
1.1 Motivation............................................................................................................................................ 5
1.2 About this Research Paper ................................................................................................................... 6
2 Background & Theory....................................................................................................................................... 7
2.1 Physical Aspects of Reverberation ............................................................................................................... 7
2.1.1 Plane Waves – Frequency Domain....................................................................................................... 7
2.1.2 Spherical Waves – Time Domain ......................................................................................................... 9
2.2 Reverberation Algorithms – Schroeder and Moorer..................................................................................... 9
3. The VST SDK v2.4 .......................................................................................................................................... 13
3.1 Effect Plug-in Technologies for Software Sequencers ............................................................................... 13
3.1.1 Definition: VST Plug-ins.................................................................................................................... 13
3.1.2 Real-Time and Non-Real-Time Software Effects............................................................................... 14
3.1.3 Contemporary Plug-in Formats .......................................................................................................... 14
3.1.4 Virtual Instruments ............................................................................................................................. 15
3.2 Overview of the Framework....................................................................................................................... 16
3.3 Class Hierarchy .......................................................................................................................................... 17
3.3.1 AudioEffect ........................................................................................................................................ 17
3.3.2 AudioEffectX ..................................................................................................................................... 21
4. GUI Implementation....................................................................................................................................... 25
4.1 Approaches................................................................................................................................................. 25
4.2 The AEffectGUIEditor & CControlListener classes .................................................................................. 25
4.3 Useful GUI Elements ................................................................................................................................. 26
5. Example: Schroeder’s Reverberation Plug-in .............................................................................................. 28
5.1 Plug-in Design............................................................................................................................................ 28
5.2 MATLAB Implementation......................................................................................................................... 29
5.3 Plug-in Implementation .............................................................................................................................. 31
5.3.1 Schroeder Algorithm .......................................................................................................................... 31
5.3.2 GUI Design......................................................................................................................................... 35
6. Discussion of Results & Conclusion............................................................................................................... 40
7. References ........................................................................................................................................................ 42
7.1 Bibliography............................................................................................................................................... 42
7.2 List of URLs............................................................................................................................................... 43
8. List of Figures & Listings ............................................................................................................................... 44
Appendix A – CD-ROM Contents ..................................................................................................................... 45

Abstract

With the rapid increase in computer resources over the last decade, real-time applications such as sound editing and processing have become increasingly popular.
This paper focuses on the Virtual Studio Technology (VST), introduced by Steinberg Media Technologies GmbH in 1996, which for the first time provided a platform for sound effect developers to implement their own effect plug-ins. Subsequently, in 1999, version 2.0 of VST added a standard framework for software instruments, so-called VST instruments.
While the task of designing digital signal processing algorithms can indeed be most challenging, the VST API reduces the complexity of implementing a sound effect to a minimum. It is therefore also a major objective of this paper to show the simple structure of the VST API, as well as the possibilities it offers for designing a graphical user interface (GUI) for any given plug-in.
To achieve this, a Schroeder’s reverberator is realized by means of the VST software development kit (VST SDK), illustrating the major issues that have to be considered when implementing such a sound effect. A brief introduction to the basic physical and acoustic theory of reverberation is also given.

1. Introduction

Audio production systems have experienced two major enhancements from the 1990s until today, pushed forward by the rapid development and decreasing cost of high-end computer systems: Hard Disk Recording and MIDI Programming.
The former has gained significance only within the last few years, because hardware prerequisites, especially the required hard disk space, limited its usability for home use, whereas the latter was introduced in 1983 by the MIDI Manufacturers Association, when the first hardware sequencers and synthesizers became available.1

Steinberg Cubase was one product among others that was first introduced as a MIDI software sequencer but eventually adopted many Hard Disk Recording functions as well. In software sequencing programs, MIDI control information is assembled and then played back, in the early years only via external hardware synthesizers; as computer processing power increased during the last decade, software synthesizers became a reasonable alternative.
In 1996, Steinberg Media Technologies GmbH therefore recognized the need for a standard for digital audio effects and created the Virtual Studio Technology (VST) interface, an open standard allowing audio programmers to implement their own digital audio effect processors and sound synthesizers, so-called “plug-ins”; in 1999 it was adapted to meet the requirements of virtual instruments as well.2

This VST Software Development Kit (VST SDK) primarily consists of a framework written in C++ and enables multi-platform programming. Essentially, it is used to apply an arbitrary algorithm to the data (sound samples or MIDI control data) that the plug-in reads from its input, which is connected to the sequencer’s output. The result of this operation is then sent back to the sequencing software, which routes it either to the computer’s sound card or on to further processing according to the user’s settings.
It is also possible to create custom graphical user interfaces (GUIs) by means of the included VSTGUI library, which provides various types of controls for GUI design.

1 see Raffaseder, 2002, p. 133
2 see http://www.steinberg.de/Steinberg/Company/default5b09.html?Langue_ID=7, 4.3.2006, and http://www.steinberg.de/docloader_sb136c.html?DocLink=/webvideo/Steinberg/Support/doc/glossary_en.htm&templ=200&Langue_ID=7, 4.3.2006

As mentioned above, VST plug-ins serve two major purposes: as a software instrument driven by MIDI control information, or as an audio effect that processes sound samples provided by the underlying software. This paper will provide an insight into both fields, but the implementation example will focus on a plug-in used as a spatial sound effect, namely reverberation.

1.1 Motivation

The reasons to write a research paper on such a complex field as audio algorithms and their implementation are manifold: on the one hand, it affords an opportunity to deal with digital audio processing; on the other hand, it is also a chance to improve one’s programming skills.

As there are many sound editing products available, and as the music industry needs many different sound effect algorithms, it is essential to narrow this huge field of research down to a suitable, well-defined scope. Therefore, the first decision was to select a plug-in for implementation that is complex enough to be a research challenge, but also simple enough not to draw the focus onto its underlying theory and digital signal processing algorithms, allowing instead for a distinct depiction of VST programming and a close view of the framework that supports it.

The sound effect of choice here is reverberation: the underlying theory is by no means simple, yet the usual way to synthesize the effect is a grid of filters, which is not too complicated to put into effect.

Finally, the most important motive for writing this paper was to offer a thorough tutorial for planning and realizing a VST-based software plug-in, covering the process of finding and researching an apt algorithm as well as putting it into practice via the VST SDK framework.

1.2 About this Research Paper

Objective of this Research Paper

This research paper is meant to give a comprehensive introduction to programming a VST plug-in by means of a practical description of how to implement a Schroeder’s reverberation algorithm. To achieve this, it will also deal with the basics of signal processing and spatial sound effects and their respective MATLAB3 implementations, upon which this algorithm rests.

Structure

First, to provide the basis for the implementation of the example plug-in, the physical aspects of reverberation will be illustrated. The second chapter will then close with a description of some fundamental reverberation algorithms, covering especially Schroeder’s and Moorer’s.
The following section gives an introduction to the fundamentals of software sequencing and effect plug-in technologies, with particular emphasis on Steinberg’s Virtual Studio Technology and how it interconnects with sequencing applications. The VST SDK will be illustrated by means of the classes and methods it consists of, and chapter 4 will give an overview of how to design and apply a graphical user interface (GUI) using the methods and classes supplied by the VSTGUI framework. Finally, a short example of actual VST programming will be given in chapter 5, where the above-mentioned methods for designing a VST plug-in and its GUI will be used to point out the difference between the underlying DSP approach (by means of a MATLAB implementation) and the actual C++ algorithm, and it will be outlined how to bridge this gap.
Chapter 6 contains a discussion of the results, the conclusion of this paper and prospects for further research.

3 http://www.mathworks.com/

2 Background & Theory

2.1 Physical Aspects of Reverberation

Predominantly, reverberation can be summed up as the outcome of acoustic waves propagating in enclosed rooms. The most efficient way to describe the effects of reflected waves is to take a (hypothetical) point source of sound and place it in a rectangular room. It can be shown that in the far field of the source the spreading waves can be approximated as plane waves, whereas in the near field the spherical component of the waves’ movement must not be neglected.4

2.1.1 Plane Waves – Frequency Domain

As described above, viewed from a point in the room far away from the source of sound, the
acoustic waves can be considered to be plane.

The acoustics of an enclosed space can be described as a superposition of normal modes, which are standing waves that can be set up in the gas filling the box5.

Thus, it becomes a simple task to explain the behavior of the room’s acoustics in the frequency domain if all sound waves in it are approximated as plane waves. The frequencies of the normal modes can be calculated by solving the differential equation describing the room:

f = (c/2) · √((nx/lx)² + (ny/ly)² + (nz/lz)²)   (1)

The triplet [lx, ly, lz] represents the dimensions of the enclosure, while nx, ny, nz are non-negative integers describing the normal modes. Each normal mode can also be characterized by a plane wave moving along the vector

4 see Zölzer et al., 2002, p. 170 f.
5 Zölzer et al., 2002, p. 171

v = (nx/lx, ny/ly, nz/lz).   (2)

As indicated above, each normal mode is characterized by a triplet of coefficients. It can easily be concluded that multiples of one specific triplet yield normal frequencies that are multiples of a fundamental frequency f0. Therefore, this harmonic series of normal frequencies can be replicated by means of a comb filter, implemented by a delay line with

d = 1/f0   (3)6
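As a numerical illustration of equations (1) and (3), the following sketch computes a normal-mode frequency for given room dimensions and the corresponding comb-filter delay. The speed of sound c = 343 m/s and the room dimensions in the usage note are assumed values, not taken from the text.

```cpp
#include <cassert>
#include <cmath>

const double c = 343.0;  // assumed speed of sound in air, m/s

// Frequency of the normal mode (nx, ny, nz) of a rectangular room with
// dimensions lx x ly x lz in metres, following equation (1).
double modeFrequency(int nx, int ny, int nz, double lx, double ly, double lz) {
    double s = std::pow(nx / lx, 2.0) + std::pow(ny / ly, 2.0) + std::pow(nz / lz, 2.0);
    return (c / 2.0) * std::sqrt(s);
}

// Delay-line length in seconds that makes a comb filter replicate the
// harmonic series of a fundamental f0, following equation (3).
double combDelay(double f0) {
    return 1.0 / f0;
}
```

For a hypothetical 5 m × 4 m × 3 m room, the (1, 0, 0) axial mode lies at 343 / (2 · 5) = 34.3 Hz, and a comb filter replicating its harmonic series needs a delay of 1/34.3 s, i.e. roughly 29 ms.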

Fig. 2.1 Comb filter structure

Fig. 2.1 depicts the mode of operation of a feedforward comb filter: an input signal is delayed and then added back to the original signal, producing constructive and destructive interference that creates notches in the frequency spectrum which model the spatial impression of the sound. Fig. 2.2 shows such an output.

Fig. 2.2 Comb filter frequency response
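The feedforward comb structure of Fig. 2.1 can be sketched in a few lines of C++. This is an illustrative stand-alone version, not the plug-in code developed later; the delay m (in samples) and the gain g are hypothetical parameters.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Feedforward comb filter, Fig. 2.1: y(n) = x(n) + g * x(n - m).
std::vector<float> feedforwardComb(const std::vector<float>& x,
                                   std::size_t m, float g) {
    std::vector<float> y(x.size(), 0.0f);
    for (std::size_t n = 0; n < x.size(); ++n) {
        y[n] = x[n];                 // direct path
        if (n >= m)
            y[n] += g * x[n - m];    // delayed copy added to the dry signal
    }
    return y;
}
```

Feeding a unit impulse through the filter yields the impulse response {1, 0, …, g, 0, …}, whose magnitude spectrum is the comb of Fig. 2.2.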

6 see Zölzer et al., 2002, p. 171 f.

2.1.2 Spherical Waves – Time Domain

Another approach to describing spatial acoustics deals with acoustic rays, which are analogous to light rays in optics, and their reflections. The ray’s intensity decreases inversely proportional to the square of the distance from the source, since the acoustic energy is preserved along the ray’s path while the area taken up by the wave front increases as the square of the distance.

Smooth and rigid surfaces produce mirror reflections; the angle of the incoming ray formed
with the normal on the surface is then equal to that of the outgoing ray. Generally, reflection
is connected to filtering, depending on the nature and material of the surface. This can be
expressed by a complex reflection function

R = (Z·cos θ − 1) / (Z·cos θ + 1)   (4),

where θ stands for the angle of incidence and Z represents the surface’s characteristic impedance, which rises with the rigidity of the wall. Consequently, in the case of a perfectly rigid wall, Z would approach infinity and the reflection function R would be unity at all frequencies and angles.7
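For a real-valued impedance Z, equation (4) can be evaluated directly; the following small sketch (function name assumed) only illustrates the limit behaviour described above.

```cpp
#include <cassert>
#include <cmath>

// Reflection function of equation (4) for a real characteristic
// impedance Z and angle of incidence theta (in radians).
double reflection(double Z, double theta) {
    double zc = Z * std::cos(theta);
    return (zc - 1.0) / (zc + 1.0);
}
```

As Z grows towards infinity, reflection(Z, theta) approaches 1 for any angle of incidence below 90 degrees, matching the perfectly rigid wall discussed above.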

Finally, it can be stated that sound waves hitting surfaces may result in diffusion of the sound
with increasing roughness of the wall, and that the acoustic absorptance of a surface depends
on its material and the frequency of the incoming signal. In fact, for most materials this will
lead to a higher level of attenuation at higher frequencies than at lower ones.8

2.2 Reverberation Algorithms – Schroeder and Moorer

In the early 1960s, Manfred Schroeder at Bell Laboratories began his fundamental work on spatial acoustics and artificial reverberation. His main suggestion was to use recursive comb filters and delay-based allpass filters as an inexpensive method to simulate the sound effect produced by echoes in enclosures, i.e. reverberation.

7 see Zölzer et al., 2002, p. 172 ff.
8 see Raffaseder, 2002, p. 74

The allpass filter structure is illustrated in Fig. 2.3, where A(z) takes the place of a delay line.
Thus, the output is given by the difference equation

y(n) = -g·x(n) + x(n-m) + g·y(n-m)   (5)

where m is the amount of samples used to generate the delay and g is the effect’s gain.

Fig. 2.3 Allpass filter structure

In the following years, this assembly became a typical component of artificial reverberation tools, because of the general assumption that it does not affect the timbre of the output sound. Yet this only holds as long as the delay generated by the effect remains shorter than the human ear’s integration time (about 50 ms). Above this boundary, time-domain effects considerably influence the coloration of the output sound.9
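The difference equation (5) translates almost literally into code. The following stand-alone sketch processes a whole buffer at once; the delay m and gain g are hypothetical parameters, not values prescribed by the text.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Schroeder allpass section, equation (5):
// y(n) = -g * x(n) + x(n - m) + g * y(n - m)
std::vector<float> allpass(const std::vector<float>& x, std::size_t m, float g) {
    std::vector<float> y(x.size(), 0.0f);
    for (std::size_t n = 0; n < x.size(); ++n) {
        y[n] = -g * x[n];                      // direct (inverted) path
        if (n >= m)
            y[n] += x[n - m] + g * y[n - m];   // delayed input plus feedback
    }
    return y;
}
```

For an impulse input, the response starts with -g and is followed by decaying echoes every m samples, while the magnitude spectrum stays flat, which is what makes the structure an allpass.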

Andy Moorer extended Schroeder’s work by adding tapped delay lines, comb and allpass filters to the latter’s simpler structure. In particular, it was recognized that early reflections have a great influence on the human perception of reverberation and can be modeled precisely using FIR filters. Typically, these filters are implemented as tapped delay lines, meaning that the original signal is delayed and weighted many times and the results summed to a single output. In Moorer’s design, this signal feeds a line of allpass filters and a group of parallel comb filters; moreover, the simple gain of the delay lines is replaced by lowpass filters simulating the sound losses of air absorption and reflections.

9 see Zölzer et al., 2002, p. 177

Fig. 2.4 Moorer’s reverberator

Fig. 2.4 illustrates the general architecture of Moorer’s reverberator, commencing with a tapped delay line (a) to model the early reflections generated by the spatial conditions specified by the user. Its output is routed directly to the effect’s output on the one hand and to a delayed, attenuated diffuse reverberator (b) on the other. The reverberator’s output is delayed so that the last early-echo samples coming from (a) reach the output before the first samples coming out of (b). In Moorer’s preferred structure, the diffuse reverberator is implemented as a group of six parallel comb filters (representing six walls where the sound is reflected), each with a first-order lowpass filter, followed by a single allpass filter. Moorer suggests setting the allpass delay to 6 ms and the allpass coefficient to 0.7.
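One building block of the diffuse reverberator (b) can be sketched as a recursive comb filter with a one-pole lowpass in its feedback path. This is a simplified stand-alone illustration, not Moorer’s exact filter; the parameter names m, g and the lowpass coefficient a are assumptions.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Recursive comb filter with a first-order lowpass in the feedback path,
// modelling the frequency-dependent losses of air and wall absorption.
std::vector<float> lowpassComb(const std::vector<float>& x,
                               std::size_t m, float g, float a) {
    std::vector<float> y(x.size(), 0.0f);
    float lp = 0.0f;  // one-pole lowpass state
    for (std::size_t n = 0; n < x.size(); ++n) {
        float fb = (n >= m) ? y[n - m] : 0.0f;  // delayed output
        lp = (1.0f - a) * fb + a * lp;          // a = 0 disables the lowpass
        y[n] = x[n] + g * lp;                   // feedback with gain g
    }
    return y;
}
```

With a = 0 the structure degenerates into a plain recursive comb filter; increasing a damps high frequencies faster on every round trip, mimicking the behaviour of real rooms.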

As pointed out above, allpass filters do not affect the frequency spectrum of the input signal, but at longer delays they can render the output signal metallic and rough. The sound decay is reproduced by the lowpass filters and the feedback attenuation coefficients of the comb filters. To achieve a desired decay time Td (defined for an attenuation of 60 dB, which is W.C. Sabine’s definition of reverberation time10), the comb filters’ gain parameters

10 see Zölzer et al., 2002, p. 175

have to be adjusted according to the following formula, where fs represents the sampling rate
and mi is the number of delayed samples:

gi = 10^(-3·mi / (Td·fs))   (6)

As indicated by Moorer, the comb filters’ delay times should be spread over a range between 50 and 80 ms.11
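Equation (6) can be wrapped in a small helper. The numerical example in the usage note assumes a 44.1 kHz sampling rate, which is not specified in the text.

```cpp
#include <cassert>
#include <cmath>

// Comb filter feedback gain for a desired decay time Td (seconds, -60 dB),
// sampling rate fs (Hz) and delay length mi (samples), equation (6):
// gi = 10^(-3 * mi / (Td * fs))
double combGain(double mi, double Td, double fs) {
    return std::pow(10.0, -3.0 * mi / (Td * fs));
}
```

For example, at fs = 44100 Hz a 50 ms comb (mi = 2205 samples) with Td = 1 s needs gi = 10^(-0.15), roughly 0.708; longer decay times push the gains closer to 1.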

11 see Zölzer et al., 2002, p. 178 ff.

3. The VST SDK v2.4

3.1 Effect Plug-in Technologies for Software Sequencers

3.1.1 Definition: VST Plug-ins

Essentially, a VST Plug-in is a pure audio processing component, and not an audio application: It is a component that is utilized within a host application. This host application provides the audio streams that are processed by the plug-in's code.12

In other words, the host application is responsible for providing all the basic functionalities,
like audio input and output (nowadays on most systems represented by the Audio Stream
Input/Output (ASIO) architecture13) through an audio interface, the possibility to read and
write files from and to a hard disk, and waveform editing14.

By compiling the source code into a binary and deploying the resulting file into a directory specified by the host application (or, as on Microsoft Windows, by the operating system’s registry15), the plug-in becomes available for use within the application, whose functionality is thereby extended by the plug-in’s algorithms. For this to work, the host application and the plug-in only need to agree on the format in which they convey audio samples and control data (such as parameters changed by the user while the plug-in is executed) to one another16.

As for the use of VST plug-ins on different platforms, it has to be mentioned that the source code is platform independent; the binary compiled from it, however, depends on the platform architecture:

12 Steinberg, VST SDK 2.4 Documentation, 2006, D:\VST SDK\vstsdk2.4\doc\html\intro.html
13 see Pastl, 2003, p. 41
14 see Ekeroot, 2003, p. 10
15 see Steinberg, VST SDK 2.4 Documentation, 2006, D:\VST SDK\vstsdk2.4\doc\html\intro.html
16 see Ekeroot, 2003, p. 10

• On the Windows platform, a VST Plug-In is a multi-threaded DLL (Dynamic
Link Library). A standard (default) folder for the VST Plug-Ins is defined in
the registry under
"HKEY_LOCAL_MACHINE\SOFTWARE\VST\VSTPluginsPath".
• On Mac OS X, a VST Plug-In is a Bundle. You define the Plug-In's name in the
plist.info file with the CFBundleName key.
• On BeOS and SGI (under MOTIF, UNIX), a VST Plug-In is a Shared Library. 17

3.1.2 Real-Time and Non-Real-Time Software Effects

Regarding their computing behavior, two types of effect plug-ins have to be distinguished:

Real-Time Plug-ins
process the audio samples that the sequencer provides one after another and send them back to the output continuously. Thus, the sound editor can manipulate all the parameters the effect offers while the sound file is played back, and so shape the plug-in’s output as desired.

Recalculating Plug-ins
apply their algorithm to an audio file so as to create a new copy. The original file remains untouched so that it can be used elsewhere in the audio set-up. The advantage of this procedure over real-time effects is that it saves computer resources (CPU time, memory etc.); on the other hand, the output of the sound effect cannot be heard until the computing process has finished.18

3.1.3 Contemporary Plug-in Formats

Several audio manufacturers have responded to the newly arisen need for highly sophisticated digital audio effects and developed their own formats alongside VST. The most widespread are:

17 Steinberg, VST SDK 2.4 Documentation, 2006, D:\VST SDK\vstsdk2.4\doc\html\intro.html
18 see Borne Claude, http://www.macmusic.org/articles/view.php/lang/EN/id/62?vRmtQjpAznOhMaS=1, 5.4.2003

AudioSuite and RTAS:
The AudioSuite format was introduced by DigiDesign for ProTools, Audio Logic and Peak, at first only as a recalculating process. Later, the format was extended to meet real-time audio processing requirements as well and named Real-Time AudioSuite (RTAS).

AudioUnit (AU):
This format was established by Apple and intended to be usable by all audio applications using Mac OS X’s CoreAudio.

DirectX:
is the common standard for audio/video-applications on the Microsoft Windows platform.

MAS (Mark of the Unicorn Audio System):
was created by Mark Of The Unicorn (MOTU) as their native standard; the resulting effects can be used either in real-time or in recalculating mode.

Premiere:
This system was set up by Adobe to support its video editing software and is also recognized
by Logic Audio, Digital Performer, Studio Vision in recalculation mode, and by Peak in real-
time mode.

TDM (Time Division Multiplexing)19:
The TDM format is used by DigiDesign’s ProTools audio editing software and depends on the dedicated DSP hardware featured on DigiDesign’s sound cards. These plug-ins are thus of professional audio quality.20

3.1.4 Virtual Instruments

The idea behind virtual instruments (VSTi) was to replace bulky hardware synthesizers and
samplers with software applications designed to serve the same purpose. They can be

19 see http://en.wikipedia.org/wiki/TDM, 14.6.2006
20 see Borne Claude, http://www.macmusic.org/articles/view.php/lang/EN/id/62?vRmtQjpAznOhMaS=1, 5.4.2003

constructed in a modular way, so that oscillators, filters and envelopes can be combined
easily, as well as designed to simulate famous hardware instruments. Due to the rapid
development of computer resources, these VSTis can nowadays be used even on consumer
PCs, controlled by a MIDI interface, and are thus an important piece of a semi-professional
home studio.21

3.2 Overview of the Framework

Basically, the VST SDK is an object-oriented framework written in C++ that enables the development of VST plug-ins. Although the Application Programming Interface (API) consists of many classes, only two of them are essential to the programmer, as they handle the transfer of audio samples from the host application to the plug-in and vice versa: AudioEffect, the base class, already included in the VST SDK 1.0, and AudioEffectX, which was introduced along with version 2.0 of the API.

Abstractly speaking, this API lies on top of the host application (e.g. Cubase SX) and frees the
programmer from linking his plug-in directly to the audio hardware’s drivers. Since the
VSTSDK shares information (that is, audio samples and control settings changed by the user)
with the host application in a well-defined way, the developer is able to concentrate fully on
the realization of the complex DSP algorithms.22

21 see Borne Claude, http://www.macmusic.org/articles/view.php/lang/EN/id/62?vRmtQjpAznOhMaS=1, 5.4.2003
22 see Ekeroot, 2003, p. 24

3.3 Class Hierarchy

Fig. 3.1 illustrates that AudioEffectX extends the base class, AudioEffect, and thus inherits all
of its public and protected attributes and methods, of which the most important will be
described and explained in the following two sections.

3.3.1 AudioEffect

The following code samples are taken from the class declaration in audioeffect.h. Some
methods and attributes have been omitted since they are not directly relevant for plug-in
implementation, but may be of a certain use to some programmers, such as converter
methods.23

Protected Attributes

As these variables are marked as protected, they may be directly accessed from classes that
extend the AudioEffect class.
audioMasterCallback audioMaster;
the host callback, which is the plug-in’s internal representation of the host application.

float sampleRate;
the current sample rate as defined by the host.

23 see Steinberg, VST SDK 2.4 Documentation, 2006, D:\VST SDK\vstsdk2.4\doc\html\class_audio_effect.html

VstInt32 numPrograms;
the number of programs (i.e., presets) the plug-in provides.

VstInt32 curProgram;
an integer index of the program currently loaded.

VstInt32 numParams;
the number of parameters that can be changed by the user.

Public Methods

Constructor/Destructor

AudioEffect (audioMasterCallback audioMaster, VstInt32 numPrograms, VstInt32 numParams);
Constructor; initializes an instance of AudioEffect, setting the host callback, the number of programs and the number of parameters (see above).

virtual ~AudioEffect ();
Destructor; destroys an AudioEffect instance.

State Transitions

The following methods cover changes in the state of the plug-in, such as whether it is
bypassed or switched to active, or if project settings (such as the sample rate) have been
changed.

virtual void suspend () {}
method called when the plug-in is switched off.

virtual void resume () {}
method called when the plug-in is switched on.

virtual void setSampleRate (float sampleRate) { this->sampleRate =
sampleRate; }
sets the sample rate if changed by the host application. This is only carried out when the plug-
in is in suspend state.

virtual float getSampleRate () { return sampleRate; }
returns the current sample rate.

Processing

The following two functions are the core methods of the VST SDK v2.4 concerning signal processing. Until version 2.0 of the API, the method process() had to be implemented; in version 2.4 it is deprecated. Instead, it is now mandatory to implement processReplacing(), whereas processDoubleReplacing() is optional.

virtual void processReplacing (float** inputs, float** outputs, VstInt32 sampleFrames) {}
processes 32-bit floating-point (single-precision) audio samples. The method takes an array of pointers to the input data (inputs), applies the signal processing algorithm and writes the outcome to the output buffers (outputs); it is always called in resume state. Mostly, this will be carried out by a loop (while, for) iterating over every sample frame.

virtual void processDoubleReplacing (double** inputs, double** outputs, VstInt32 sampleFrames) {}
performs the same actions as processReplacing(), but with double-precision floating-point audio samples.
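A typical processReplacing() body is a per-channel loop over the sample frames. Since the SDK headers are not reproduced here, the following stand-alone sketch mimics that shape with a free function and an explicit numChannels parameter (in a real plug-in the channel count is known from the plug-in’s configuration); it simply applies a gain.

```cpp
#include <cassert>

// Shape of a typical processReplacing() implementation: for each channel,
// read from the input buffer and overwrite the output buffer ("replacing").
void processGain(float** inputs, float** outputs, int sampleFrames,
                 int numChannels, float gain) {
    for (int ch = 0; ch < numChannels; ++ch) {
        const float* in = inputs[ch];
        float* out = outputs[ch];
        for (int i = 0; i < sampleFrames; ++i)
            out[i] = in[i] * gain;  // the actual DSP algorithm goes here
    }
}
```

In an actual plug-in the same loop would live inside the AudioEffectX subclass, reading parameter values from member variables set via setParameter().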

Parameter Settings

In the subsequent section, methods intended for use when parameter changes take place will be described.

virtual void setParameter (VstInt32 index, float value) {}
sets the value of the plug-in parameter specified by index.

virtual float getParameter (VstInt32 index) { return 0; }
returns the value of the plug-in parameter specified by index.

virtual void setParameterAutomated (VstInt32 index, float value);
is called when a parameter is changed that should be automated by the host application.

virtual void getParameterLabel (VstInt32 index, char* label) {}
label is to be filled with the unit in which the parameter specified by index is measured (e.g. dB, milliseconds, …).

virtual void getParameterDisplay (VstInt32 index, char* text) {}
text is to be filled with a string representing the current value of the parameter specified by index.

virtual void getParameterName (VstInt32 index, char* text) {}
text is to be filled with the name of the parameter specified by index (e.g. “Gain”, “Attack Time”, …).
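The interplay of these methods can be sketched without the SDK: VST parameters travel between host and plug-in as normalized floats in [0, 1], and the plug-in maps them to its internal range and renders display strings. The struct below is a hypothetical stand-in, not an AudioEffect subclass; the 0..2 gain range is an assumption.

```cpp
#include <cassert>
#include <cstdio>
#include <cstring>

// Hypothetical plug-in state with a single "gain" parameter.
struct GainParam {
    float value;  // normalized parameter, 0..1

    GainParam() : value(0.5f) {}                  // 0.5 maps to a gain of 1.0

    void setParameter(float v) { value = v; }     // host -> plug-in
    float getParameter() const { return value; }  // plug-in -> host
    float gain() const { return 2.0f * value; }   // internal range 0..2

    void getParameterDisplay(char* text) const {  // value as display string
        std::sprintf(text, "%.2f", gain());
    }
    void getParameterLabel(char* label) const {   // unit string
        std::strcpy(label, "x");
    }
};
```

The host only ever sees the normalized value; how it maps to dB, milliseconds or a plain factor is entirely up to the plug-in.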

Program Settings

These functions are responsible for the handling of plug-in “programs”; these are presets
which can be selected by the user.

virtual VstInt32 getProgram () {}

returns the integer index of the current preset/program.

virtual void setProgram (VstInt32 program) {}

sets the program according to the program index.

virtual void getProgramName (char* name) {}

fills the character array name with the current program’s name.

virtual void setProgramName (char* name) {}

sets the name attribute of the current program to name.

3.3.2 AudioEffectX

The AudioEffectX class and its methods were added when VST 2.0 was released in 1999.
Most of them deal with the handling of MIDI events, providing an interface for implementing
virtual instruments (VSTis), i.e. software synthesizers. Again, some minor functions have
been left out in order to highlight the most significant parts of the API.24

Public Methods

Parameter Automation

The functions described below deal with user-defined automation of parameters controlled by
the sequencing software.

virtual bool canParameterBeAutomated (VstInt32 index) {}

returns true if the parameter defined by index can be automated by the host application.

virtual bool beginEdit (VstInt32 index);

is called before setParameterAutomated() (i.e. on mouse-down on the parameter’s control)
to tell the host that it should start recording automation data for the parameter specified by
index.

virtual bool endEdit (VstInt32 index);

is called after setParameterAutomated() (on mouse-up on the parameter’s control) and tells
the host that this parameter is no longer subject to changes by the user.
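The intended calling order of these methods during one edit gesture can be illustrated with a small mock. The SketchHost class below is not part of the SDK; it merely records the calls a fader control would make between mouse-down and mouse-up:

```cpp
#include <string>
#include <vector>

// Mock host that records automation-related calls in order.
struct SketchHost {
    std::vector<std::string> log;
    void beginEdit (int)                    { log.push_back("beginEdit"); }
    void setParameterAutomated (int, float) { log.push_back("setParameterAutomated"); }
    void endEdit (int)                      { log.push_back("endEdit"); }
};

// One complete edit gesture on a control: mouse-down, drag, mouse-up.
void editGesture (SketchHost& host, int index, float newValue)
{
    host.beginEdit (index);                       // mouse-down: start recording automation
    host.setParameterAutomated (index, newValue); // drag: transmit the new value
    host.endEdit (index);                         // mouse-up: stop recording
}
```

The essential point is the bracketing: every setParameterAutomated() call belongs inside a beginEdit()/endEdit() pair, so the host knows when a continuous automation gesture starts and ends.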
24 see Steinberg, VST SDK 2.4 Documentation, 2006, D:\VST SDK\vstsdk2.4\doc\html\class_audio_effect_x.html

Input/Output Information & Settings

The following methods concern changes to the input/output configuration negotiated with the
host application, as well as the bypass functionality of the plug-in.

virtual bool ioChanged ();

is called by the plug-in when the number of inputs or outputs (numInputs, numOutputs) or the
initial delay (initialDelay) has changed; it returns true if the host has accepted the change.

virtual VstInt32 getInputLatency ();
virtual VstInt32 getOutputLatency ();

These two methods return the ASIO input and output latency values. While input latency may
be of little concern to the developer, output latency can be quite relevant for real-time
plug-in issues: it determines the number of samples that pass between the moment audio
samples enter the plug-in and the moment processing has finished and they are sent back to
the host.

virtual bool setBypass (bool onOff) { return false; }

returns true if the plug-in supports SoftBypass, which allows the process to be called even if
the plug-in is bypassed. This is extremely useful if the plug-in should be able to maintain a
processing state even when turned off, e.g. surround decoders/encoders. Moreover, this
feature may be automated by the host.

Speaker Arrangement

The methods setSpeakerArrangement(), getSpeakerArrangement(), matchArrangement() and
copySpeaker() handle the plug-in’s specific settings concerning surround (and other) mixes.
Above all, they control which speaker arrangements are in fact supported by the plug-in, how
they can be matched, and how single speakers can be manipulated.
Additionally, there are two helper functions, allocateArrangement() and
deallocateArrangement(), which take care of the memory-specific issues (assignment and
release of memory for speaker arrangements).

MIDI Handling

These functions handle the reception of MIDI events and how they are processed by the
plug-in. Unsurprisingly, they are only applicable if the plug-in under development is a
software synthesizer (VSTi).

virtual VstInt32 getNumMidiInputChannels () {} and
virtual VstInt32 getNumMidiOutputChannels () {}

determine the number of available MIDI input or output channels.

virtual VstInt32 processEvents (VstEvents* events) {}

is called when the plug-in receives MIDI events from the host application.

bool sendVstEventsToHost (VstEvents* events);

sends MIDI events (events) back to the host.
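A processEvents() implementation typically walks the received event list and inspects the raw MIDI bytes. The sketch below uses a deliberately simplified stand-in for the SDK's event structure (just the three MIDI data bytes), so it compiles without the SDK headers; the structure name and layout are assumptions made for this illustration:

```cpp
// Simplified stand-in for a MIDI event: status byte plus two data bytes.
struct SketchMidiEvent {
    unsigned char status;  // e.g. 0x90 = note-on on MIDI channel 1
    unsigned char data1;   // note number
    unsigned char data2;   // velocity
};

// Returns the note number if the event is a note-on with non-zero
// velocity (a note-on with velocity 0 counts as note-off), else -1.
int noteOnNumber (const SketchMidiEvent& e)
{
    bool isNoteOn = (e.status & 0xF0) == 0x90 && e.data2 > 0;
    return isNoteOn ? e.data1 : -1;
}
```

A software synthesizer would use such a check to trigger a voice for the decoded note number; effect plug-ins usually ignore these events entirely.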

Plug-in Properties

Some methods returning properties of the plug-in in use should also be mentioned, but they
will not be covered in detail. Within the AudioEffectX class, it is possible to request the
effect’s name (getEffectName()), the product name (getProductString()), the vendor
name (getVendorString()), the vendor-specific version (getVendorVersion()) and also to
query whether the plug-in is a software synthesizer or not (isSynth()).

Namespaces (as listed in audioeffectx.cpp)

HostCanDos

Here, the host’s abilities to communicate with the plug-in are listed, such as whether the host
supports offline editing, whether it can send MIDI or VST events to the plug-in, or whether
the host accepts changes of input/output-settings (e.g. sample rate) from the plug-in.25

PlugCanDos

Similar to the HostCanDos namespace, this section contains the plug-in’s capabilities to
interconnect with the host application. It includes variables to indicate whether the plug-in is
able to receive and send VST as well as MIDI events, or whether the plug-in supports soft-
bypassing and offline functions.26

25 see Steinberg, VST SDK 2.4 Documentation, 2006, D:\VST SDK\vstsdk2.4\doc\html\namespace_host_can_dos.html
26 see Steinberg, VST SDK 2.4 Documentation, 2006, D:\VST SDK\vstsdk2.4\doc\html\namespace_plug_can_dos.html

4. GUI Implementation

4.1 Approaches

In principle, there are three approaches to developing a VST graphical user interface:

1. If no GUI is implemented by the programmer, the host provides a default interface,
consisting only of faders, set up using the parameter names and values defined by the
plug-in itself.
2. An editor, derived from the class AEffEditor, is used to apply arbitrary elements, like
knobs, faders and so on, to the GUI. The main disadvantage of this technique is that no
platform-independent GUIs can be designed.
3. The most frequently used method is to employ the VSTGUI libraries, which contain a class
named AEffGUIEditor. Here, as above, several interface elements are provided, yet their
use is less complicated as regards platform independence, since the libraries take care of
this task.27

4.2 The AEffGUIEditor & CControlListener classes

These classes are implemented in the files AEffectGUIEditor.h, AEffectGUIEditor.cpp
and vstcontrols.h. Any GUI editor to be implemented has to extend both classes.
Basically, there are three major tasks to bear in mind when designing and implementing a
GUI for a plug-in:

First, one method of the base effect class has to be overridden, namely setParameter().
Within this function, the index of the changed parameter determines which GUI control is
affected, and the parameter’s value is assigned to the respective control.

Furthermore, CControlListener::valueChanged(CDrawContext *pContext, CControl *pControl)
has to be implemented, so that changes in the GUI also take effect on the actual values of the
plug-in’s parameters.

27 see Pastl, 2003, p. 50 f.

Finally, it should be mentioned that, since AEffGUIEditor also inherits methods and
attributes from AEffEditor, there are some methods dealing with event processing, such
as open(void* ptr) or close(), which have to be considered as well. A detailed example of how
this is put into action will be given in chapter 5.

4.3 Useful GUI Elements

Essentially, nearly all GUI elements available to the designer are derived from the class
CReferenceCounter. In addition, CFileSelector, CPoint and CRect can be helpful. A
simplified inheritance diagram is shown in Fig. 4.1 (several classes have been omitted):

Fig. 4.1: Simplified inheritance diagram of the VSTGUI-framework

In most cases, planning a GUI includes the following tasks: a CRect object is defined to act
as a container for the subsequent elements. Afterwards, a CFrame object is created with the
dimensions of the CRect object, providing space for a background image (defined as a
CBitmap) and the various controls that are assigned to the frame.

All of the GUI elements and controls shown in Fig. 4.1 are implemented in the files
vstgui.h, vstgui.cpp, vstcontrols.h, and vstcontrols.cpp. Detailed documentation on
the methods and attributes implemented in the CControl subclasses and other classes can be
obtained from the VSTGUI documentation included on the CD-ROM. Moreover, several
classes and their behaviors will be depicted in chapter 5.

5. Example: Schroeder’s Reverberation Plug-in

5.1 Plug-in Design

As mentioned earlier, Schroeder’s reverberator was chosen for implementation because of its
fairly simple structure, as shown in Fig. 5.1.

Fig. 5.1 Schroeder’s reverberator structure

As shown in section 2.1.1, a comb filter represents an efficient means of modeling the
frequencies of the normal modes of an enclosure. Usually, four to six normal modes are
considered sufficient to reproduce the spatial sound impression of a room, especially if it has
the shape of a cuboid. Although it may seem logical to cover as many normal modes as
possible by using many comb filters, this idea is generally rejected because their effects
would counteract each other, leading to an increasingly homogeneous frequency spectrum,
which would in turn conceal the desired effect of reverberation. The most significant
disadvantage of using comb filters alone must not be omitted either: since the number of
reflections hitting the receiver increases as time progresses, the density of peaks in the
reverberator’s response ought to grow as well. However, this is not the case with traditional
comb filters.28

28 see Green, 2003, p. 4

The second filter structure used in the Schroeder reverberation effect, the allpass filter, by
definition has no effect on the magnitude spectrum, yet it affects the phase of the signal
considerably. The result gives the sound an impression of remoteness, which is why this
filter is connected in series with the output of the parallel bank of comb filters, rendering the
sound effect more realistic.29
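For reference, the transfer functions of the two building blocks can be sketched as follows (a delay of D samples and gain g; these generic forms are consistent with the MATLAB filters in section 5.2, where the 1-based coefficient index d corresponds to a delay of D = d − 1 samples):

```latex
% Feedback comb filter: resonance peaks spaced at multiples of f_s / D
H_{\mathrm{comb}}(z) = \frac{1}{1 - g\,z^{-D}}

% Allpass filter: |H_{ap}(e^{j\omega})| = 1 at all frequencies, phase only
H_{\mathrm{ap}}(z) = \frac{-g + z^{-D}}{1 - g\,z^{-D}}
```

The allpass numerator is the mirrored denominator, which is exactly what makes its magnitude response flat while its phase response remains frequency-dependent.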

5.2 MATLAB Implementation

Within MATLAB, a numerical computing environment equipped with a function plotter and a
GUI development engine, it becomes rather easy to find and optimize algorithms for a given
problem. In this particular case, what comes to the developer’s aid is MATLAB’s filter
function, which makes it possible to filter input data with both finite impulse response (FIR)
and infinite impulse response (IIR) processes.

y = filter(b,a,X) filters the data in vector X with the filter described by numerator
coefficient vector b and denominator coefficient vector a. If a(1) is not equal to 1,
filter normalizes the filter coefficients by a(1). If a(1) equals 0, filter returns an
error.30
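To clarify what filter(b,a,X) actually computes, here is a small C++ re-implementation of the same direct-form difference equation (a sketch written for this text, not MATLAB code), using 0-based indices: a[0]·y[n] = Σ_k b[k]·x[n−k] − Σ_{k≥1} a[k]·y[n−k].

```cpp
#include <vector>

// Direct-form IIR/FIR filter equivalent to MATLAB's y = filter(b, a, x),
// with 0-based indexing: a[0]*y[n] = sum_k b[k]*x[n-k] - sum_{k>=1} a[k]*y[n-k].
std::vector<double> filterSketch (const std::vector<double>& b,
                                  const std::vector<double>& a,
                                  const std::vector<double>& x)
{
    std::vector<double> y (x.size(), 0.0);
    for (std::size_t n = 0; n < x.size(); ++n) {
        double acc = 0.0;
        for (std::size_t k = 0; k < b.size() && k <= n; ++k)
            acc += b[k] * x[n - k];      // feedforward (FIR) part
        for (std::size_t k = 1; k < a.size() && k <= n; ++k)
            acc -= a[k] * y[n - k];      // feedback (IIR) part
        y[n] = acc / a[0];               // normalization by a(1), as in MATLAB
    }
    return y;
}
```

For example, b = {1} and a = {1, −0.5} realize y[n] = x[n] + 0.5·y[n−1], so a unit impulse produces the decaying sequence 1, 0.5, 0.25, …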

Thus, it becomes a simple task to design the aforementioned comb and allpass filters. The
following MATLAB listings were all taken from David Green, “Reverberation Filters”, 2003:

% Comb filter
% x - input signal
% d - depth of comb filter
% g - attenuation per iteration

function [y] = comb ( x, d, g )

a(1) = 1;
a(round(d)) = -g;
b(1) = 1;

y = filter( b, a, x );

Listing 5.2.1: comb.m – here, a comb filter is implemented as a simple recursive (IIR) algorithm

29 see Green, 2003, p. 7
30 MATLAB Documentation, MATLAB Function Reference: filter

% Allpass filter
% x - input signal
% d - delay depth
% g - attenuation factor

function [y] = allpass ( x, d, g )

a(1) = 1;
a(round(d)) = -g;
b(1) = -g;
b(round(d)) = 1;

y = filter ( b, a, x );

Listing 5.2.2: allpass.m – this allpass filter has both an FIR and an IIR component

% Schroeder Reverberator

function [y] = schroeder (x)

y1 = comb (x, 0.0297 * 44100, 0.9);
y2 = comb (x, 0.0371 * 44100, 0.9);
y3 = comb (x, 0.0411 * 44100, 0.9);
y4 = comb (x, 0.0437 * 44100, 0.9);

ya = (y1 + y2 + y3 + y4)/4;

ya1 = allpass (ya, 0.005*44100, 0.8);
y = allpass (ya1, 0.0017*44100, 0.8);

Listing 5.2.3: schroeder.m

In Listing 5.2.3, all the comb and allpass filters are combined according to the block diagram
shown above. The vector variable x represents an input of (in this case audio) samples, which
is sent through four comb filter functions. Afterwards, the output signals are averaged and
fed into a series of two allpass filters. Finally, the result of these filters’ calculations is
sent to the output.

The filters’ delay depths are computed using an assumed sample rate of 44100 Hz. The values
for delays and attenuation coefficients were all obtained from David Green, “Reverberation
Filters”, 2003.
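As a worked example (added here for illustration), the first comb filter's delay length in samples follows from its delay time of 0.0297 s at the assumed sample rate:

```latex
d_1 = \mathrm{round}(0.0297 \cdot 44100\,\mathrm{Hz}^{-1} \cdot \mathrm{Hz}) = \mathrm{round}(1309.77) = 1310 \text{ samples}
```

The other three comb delays (0.0371 s, 0.0411 s, 0.0437 s) are computed the same way; Schroeder chose mutually prime delay lengths so that the comb filters' echo patterns do not coincide.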

5.3 Plug-in Implementation

5.3.1 Schroeder Algorithm

First, it was necessary to implement a class named “Schroeder”, extending the AudioEffectX
class and thus inheriting all of its attributes and methods. For brevity, the schroeder.h
header file has been left out of this description, since the most significant methods declared
in it are outlined in the file schroeder.cpp in Listing 5.3.1.

Schroeder::Schroeder (audioMasterCallback audioMaster)
	: AudioEffectX (audioMaster, 1, 3)
{
	setNumInputs (1);        // mono in
	setNumOutputs (1);       // mono out
	setUniqueID ('Schr');    // identify
	canProcessReplacing ();  // supports replacing output

	fRoomSize = 1.0;     // roomSize is initialized as 100%
	fAttenuation = 1.0;  // attenuation (allpass filter gain) is initialized as 100%
	fDryWet = 0.5;       // dry-wet mix is initialized as 50%

	vst_strncpy (programName, "Default", kVstMaxProgNameLen);  // default program name

	editor = new SchroederEditor (this);  // initialize the GUI editor

	resume();  // perform filter initializations
}

Listing 5.3.1: schroeder.cpp – constructor

In the constructor, mainly variable initializations are performed. Also, the GUI editor (which
will be described in the following section) is called.

31
void Schroeder::resume ()
{
	// perform filters’ buffer initializations

	// initialize the buffer length
	bufferLengthC1 = (int) (0.0297*fRoomSize*getSampleRate());

	// allocate an array of float with the length defined above
	bufferStartC1 = new float[bufferLengthC1];

	// define a pointer to the end of the buffer array
	bufferEndC1 = bufferStartC1 + bufferLengthC1;

	// set all array values to 0.0
	for(int i = 0; i < bufferLengthC1; i++) {
		bufferStartC1[i] = 0.0;
	}

	// set the read pointer to the start of the buffer array
	readPtrC1 = bufferStartC1;

	/* ... code omitted ... */
}
Listing 5.3.2: schroeder.cpp – method resume()

As an example, one comb filter’s buffer initialization is shown in Listing 5.3.2. First, the
length of the buffer is set; then an array of floating-point numbers with this length is
allocated. Afterwards, a pointer is set to mark where in memory the buffer’s ending address
is located. Lastly, all values in the buffer are set to zero, and the read pointer is set to
the start of the buffer array. Of course, these actions are performed for all the filters used
in this plug-in.

void Schroeder::setParameter (VstInt32 index, float value)
{
	switch(index) {
		case kRoomSize: fRoomSize = value; break;
		case kAttenuation: fAttenuation = value; break;
		case kDryWet: fDryWet = value; break;
	}

	if (editor)
		((AEffGUIEditor*)editor)->setParameter (index, value);
}
Listing 5.3.3: schroeder.cpp – method setParameter()

Listing 5.3.3 depicts how the program handles parameter changes: the index passed to the
method is compared against the parameters’ indices (which have been defined in an
enumeration in the schroeder.h header file), and according to this switch condition the
respective parameter’s value is changed.

float Schroeder::getParameter (VstInt32 index)
{
	switch(index) {
		case kRoomSize: return fRoomSize;
		case kAttenuation: return fAttenuation;
		case kDryWet: return fDryWet;
	}
	return 0.f;  // unknown index
}
Listing 5.3.4: schroeder.cpp – method getParameter()

In this method, simply the requested parameter’s current value is returned to the caller.

void Schroeder::processReplacing (float** inputs, float** outputs, VstInt32 sampleFrames)
{
	float* in = inputs[0];
	float* out = outputs[0];

	float y, y1, y2, y3, y4, y5, y6;

	while (--sampleFrames >= 0)
	{
		// this temporary input variable is needed, because 4 parallel
		// comb filters are used
		float tempC = *in++;

		/* Comb Filter #1 */

		// read the delayed value from the buffer
		y1 = *readPtrC1;

		// perform the recursive (feedback) filter function
		*readPtrC1 = (*readPtrC1 * 0.9 + tempC);

		// if the read pointer reaches the buffer's end, reset it
		if(++readPtrC1 >= bufferEndC1) {
			readPtrC1 = bufferStartC1;
		}

		/* ... comb filters #2-4 omitted ... */

		// sum all comb filters' outputs and divide by their number
		y5 = (y1 + y2 + y3 + y4)/4;

		/* Allpass Filter #1 */

		// define a temporary variable, because this filter has a
		// feedback loop
		float tempA1;

		// read the delayed value from the buffer
		tempA1 = *readPtrA1;

		// update the delay line (feedback part)
		*readPtrA1 = 0.8 * tempA1 + y5;

		// compute the output (feedforward part)
		y6 = tempA1 - 0.8 * *readPtrA1;

		// if the read pointer has reached the buffer's end, reset it
		if(++readPtrA1 >= bufferEndA1) {
			readPtrA1 = bufferStartA1;
		}

		/* ... allpass filter #2 omitted ... */

		// signal output, with dry/wet mix included
		*out++ = y * fDryWet + tempC * (1 - fDryWet);
	}
}
Listing 5.3.5: schroeder.cpp – method processReplacing()

This function represents the actual core of the plug-in. The actions performed in it will be
explained step by step here.

First, input and output pointers are assigned to temporary variables, as well as some other
floating-point variables are initialized to hold the filters’ outputs in a later part of the
program.

Thereafter, a while loop iterates over all sample frames passed to the function. This is also
where the actual algorithm starts: again, a temporary variable is needed, because the comb
filters are all fed with the same input signal, which has to be stored separately before the
signal is split into four paths.

One comb filter and one allpass filter algorithm are outlined here, since the rest of the
program simply reproduces the signal flow according to the block diagram.

Within the comb filter’s algorithm, first the buffer’s current value is assigned to the
filter’s output; then the buffer’s value is recalculated using the recursive filter function.
The read pointer is then incremented until it reaches the buffer’s end, whereupon it is reset
to the buffer’s starting point. Thus, the comb filter function is realized in C++.
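The circular-buffer technique just described can be isolated into a self-contained sketch (a free-standing rewrite for illustration; delay length and gain are passed in as constructor arguments instead of being member variables of the effect class). The process() method returns the delayed sample and updates the delay line with the feedback recursion w[n] = g·w[n−d] + x[n]:

```cpp
#include <vector>

// One feedback comb filter realized with a circular buffer, mirroring
// the plug-in's per-sample steps: read the delayed value, write back
// the feedback recursion, advance (and wrap) the read pointer.
class CombSketch {
public:
    CombSketch (int delaySamples, float gain)
        : buffer (delaySamples, 0.0f), pos (0), g (gain) {}

    float process (float in)
    {
        float y = buffer[pos];        // output = delayed sample w[n-d]
        buffer[pos] = y * g + in;     // feedback: w[n] = g*w[n-d] + x[n]
        if (++pos >= buffer.size())   // wrap at the buffer's end
            pos = 0;
        return y;
    }

private:
    std::vector<float> buffer;
    std::size_t pos;
    float g;
};
```

Feeding a unit impulse through a comb with d = 3 and g = 0.5 yields echoes of decaying amplitude at n = 3, 6, 9, …, which is the characteristic comb filter impulse response.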

Similarly, the allpass filter’s output is computed: first, the buffer’s current value is copied
into another temporary variable, which is necessary because this filter has both a feedforward
and a feedback loop. The feedforward and feedback functions are then carried out, producing
an output which is assigned to y6. As above, the read pointer is incremented by 1 on every
iteration of the while loop, and reset once it reaches the buffer’s end.

Finally, a dry/wet mix is applied to the output, so that the user can decide how large a
portion of the effect signal is added to the original signal.

Basically, these filter algorithms were taken from
http://www.harmony-central.com/Computer/Programming/Code/comb.html and
http://www.harmony-central.com/Computer/Programming/Code/allpass.html, and adapted by
the author to fit the problem at hand.

5.3.2 GUI Design

As for the GUI design, the class SchroederEditor needs to extend both AEffGUIEditor and
CControlListener in order to inherit their methods, the most important of which are explained
here.

enum {
	// bitmaps
	kBackgroundId = 128,
	kFaderBackgroundId = 129,
	kFaderHandleId = 130,

	// positions
	kFaderX = 180,
	kFaderY = 95,

	kFaderInc = 40,

	kDisplayX = 340,
	kDisplayY = 100,
	kDisplayXWidth = 30,
	kDisplayHeight = 14
};

Listing 5.3.6: schroedereditor.cpp – enumeration: several parameters, such as bitmap IDs and fader positions, are defined.

SchroederEditor::SchroederEditor (AudioEffect *effect)
: AEffGUIEditor (effect)
{
/* ... code omitted ... */

hBackground = new CBitmap (kBackgroundId);

// init the size of the plugin


rect.left = 0;
rect.top = 0;
rect.right = (short)hBackground->getWidth ();
rect.bottom = (short)hBackground->getHeight ();
}
Listing 5.3.7: schroedereditor.cpp – constructor

In the constructor, basically the background image is loaded and the size of the viewbox is
defined.

SchroederEditor::~SchroederEditor ()
{
if (hBackground)
hBackground->forget ();
hBackground = 0;
}
Listing 5.3.8: schroedereditor.cpp – destructor

Here, the memory used by the background image is freed.

bool SchroederEditor::open (void *ptr)
{
	AEffGUIEditor::open (ptr);

	// load bitmaps
	CBitmap* hFaderHandle = new CBitmap (kFaderHandleId);
	CBitmap* hFaderBackground = new CBitmap (kFaderBackgroundId);

	// initialize the background frame
	CRect size (0, 0, hBackground->getWidth (), hBackground->getHeight ());
	CFrame* lFrame = new CFrame (size, ptr, this);
	lFrame->setBackground (hBackground);

	// initialize the faders
	long minPos = kFaderX;
	long maxPos = kFaderX + hFaderBackground->getWidth() - hFaderHandle->getWidth();

	CPoint point (0, 0);

	// RoomSize
	size (kFaderX, kFaderY,
	      kFaderX + hFaderBackground->getWidth(),
	      kFaderY + hFaderBackground->getHeight());
	roomsizeFader = new CHorizontalSlider (size, this, kRoomSize,
	      minPos, maxPos, hFaderHandle, hFaderBackground, point, kLeft);
	roomsizeFader->setValue(effect->getParameter (kRoomSize));
	roomsizeFader->setDefaultValue(0.5f);
	lFrame->addView (roomsizeFader);

	/* ... other faders omitted ... */

	// initialize the displays

	// RoomSize
	size (kDisplayX, kDisplayY,
	      kDisplayX + kDisplayXWidth,
	      kDisplayY + kDisplayHeight);
	roomsizeDisplay = new CParamDisplay (size, 0, kCenterText);
	roomsizeDisplay->setFont (kNormalFontSmall);
	roomsizeDisplay->setFontColor (kWhiteCColor);
	roomsizeDisplay->setBackColor (kBlackCColor);
	roomsizeDisplay->setFrameColor (kBlueCColor);
	roomsizeDisplay->setValue (effect->getParameter (kRoomSize));
	roomsizeDisplay->setStringConvert (percentStringConvert);
	lFrame->addView (roomsizeDisplay);

	/* ... other displays omitted ... */

	// free memory
	hFaderBackground->forget ();
	hFaderHandle->forget ();

	frame = lFrame;
	return true;
}
Listing 5.3.9: schroedereditor.cpp – method open()

First, the open() method of the superclass AEffGUIEditor has to be called. Afterwards,
bitmaps are loaded that represent the faders’ bodies and handles, and a frame is created to
hold the faders and displays. Subsequently, the faders are initialized and assigned to the
frame via lFrame->addView(). The same is done with the displays, with the only difference
that here the current value to be displayed has to be retrieved from the effect base class via
effect->getParameter(). Finally, the memory occupied by the loaded bitmaps has to be
released again.

void SchroederEditor::setParameter (VstInt32 index, float value)
{
	if (frame == 0)
		return;

	switch (index)
	{
		case kRoomSize:
			if (roomsizeFader)
				roomsizeFader->setValue (effect->getParameter (index));
			if (roomsizeDisplay)
				roomsizeDisplay->setValue (effect->getParameter (index));
			break;

		case kAttenuation:
			if (attenuationFader)
				attenuationFader->setValue (effect->getParameter (index));
			if (attenuationDisplay)
				attenuationDisplay->setValue (effect->getParameter (index));
			break;

		case kDryWet:
			if (drywetFader)
				drywetFader->setValue (effect->getParameter (index));
			break;
	}
}
Listing 5.3.10: schroedereditor.cpp – method setParameter()

This method is called by the effect class whenever a parameter value changes (e.g. through
host automation). According to the index passed to the method, the respective control is
updated with the new value.

void SchroederEditor::valueChanged (CDrawContext* context, CControl* control)
{
	long tag = control->getTag ();
	switch (tag)
	{
		case kRoomSize:
		case kAttenuation:
		case kDryWet:
			effect->setParameterAutomated (tag, control->getValue ());
			control->setDirty ();
			break;
	}
}
Listing 5.3.11: schroedereditor.cpp – method valueChanged()

38
Originally declared in CControlListener, this function is invoked whenever the user changes
a control in the GUI; it forwards the control’s new value to the effect via
setParameterAutomated(). Without this method, moving the faders and knobs would have no
effect on the plug-in’s parameter values.

Fig. 5.2 Schroeder plugin GUI

The GUI files have to be compiled along with the base classes to create a DLL under
Windows, which then has to be copied into the VSTPlugin directory specified in the Windows
Registry. Moreover, the bitmaps used in the GUI have to be included in a resource file so
that they can be compiled into the DLL.

The GUI of the Schroeder plug-in was designed after the sample GUIs included in the
VST SDK samples directory, which is also included on the CD-ROM31.

31 D:\VST SDK\vstsdk2.4\public.sdk\samples\vst2.x\adelay

6. Discussion of Results & Conclusion

The studies conducted in the sections above demonstrate that the VST SDK programming
framework provides straightforward functionality for developers to implement both audio
algorithms and well-structured GUIs. In particular, the API is set up in such a way that it
becomes fairly easy to divide the task of creating a software sound effect into various
sub-tasks, such as algorithm design and implementation, or GUI design.
It has also been shown that nowadays the construction of real-time effect plug-ins no longer
causes major difficulties for programmers, since all hardware-relevant operations, such as
input/output strategies, are taken care of by the host application and the audio drivers
respectively. Since performance is a major issue when developing real-time applications of
any kind, it should also be mentioned that development kits are available in other
programming languages such as Java or Delphi. However, they make use of a so-called
“wrapper”, which translates between the Java or Delphi code and the C++ interface, adding
another component of latency to the plug-in.32 Although this may seem negligible compared
to today’s omnipresent computing power, it must not be forgotten that as computer resources
increase, so does the complexity of the algorithms used. Therefore, developers are well
advised to use the native C++ framework, provided that they are capable of programming in
C++, to avoid performance problems.

With the emergence of many open-source communities in the 1990s, a movement of
freeware VST programmers was also born. A multitude of plug-ins is now available for
download from the web, and even though they differ widely in quality, the contributions made
by their developers should not be underestimated. Software sequencers such as Cubase SX,
Nuendo and others come with a variety of editing possibilities, yet they are limited as far as
sound effects are concerned. Therefore, sound engineers and editors have to rely on
programmers to supply them with the effects and instruments they need. On the one hand,
large studios will mostly rely on those developed by audio software companies such as
Steinberg or Native Instruments; on the other hand, small one-man studios will be grateful
for any useful freeware plug-in they come across.

32 see http://jvstwrapper.sourceforge.net/, 16.6.2006

Finally, it can be stated that when Steinberg introduced software plug-in technology, their
software architects not only provided a basis for developers around the world to take part in
improving audio algorithms, but also enhanced the value of their own software products.
While Cubase was originally planned as a MIDI sequencer when it was launched, it has
nowadays become a capable sound editing application, thanks in part to the VST plug-in
technology.
This leads to the conclusion that plug-in technologies have gained such importance, and
opened people’s minds to the potential lying within them, that it would be desirable for
programmers to unite all the plug-in formats available today into one or two major ones.
Whether this will ever be the case, or whether it remains wishful thinking, depends on the
future behavior of sound software companies around the world.

7. References

7.1 Bibliography

Ekeroot, Jonas; Implementing a parametric EQ plug-in in C++ using the multi-platform VST
specification, 2003
/references/LTU-CUPP-03044-SE.pdf
Green, David W.; Reverberation Filters, 2003
/references/reverb.pdf
MATLAB Documentation, 2005
Pastl, Wolfgang; Entwicklung eines Software-Synthesizers als VSTi auf Basis von Physical
Modeling [Development of a Software Synthesizer as a VSTi Based on Physical Modeling], 2003
Raffaseder, Hannes; Audiodesign, 2002, fbv Leipzig
Steinberg; VST SDK 2.4 Documentation, 2006 (included on CD-ROM):
/VST SDK/vstsdk2.4/doc/html/intro.html
/VST SDK/vstsdk2.4/doc/html/namespace_host_can_dos.html
/VST SDK/vstsdk2.4/doc/html/namespace_plug_can_dos.html
/VST SDK/vstsdk2.4/doc/html/class_audio_effect.html
/VST SDK/vstsdk2.4/doc/html/class_audio_effect_x.html
Zölzer, Udo et al.; DAFX, 2002, John Wiley and Sons Ltd

7.2 List of URLs

http://www.steinberg.de/Steinberg/Company/default5b09.html?Langue_ID=7, 4.3.2006
http://www.steinberg.de/docloader_sb136c.html?DocLink=/webvideo/Steinberg/Support/doc/
glossary_en.htm&templ=200&Langue_ID=7, 4.3.2006

Borne Claude,
http://www.macmusic.org/articles/view.php/lang/EN/id/62?vRmtQjpAznOhMaS=1, 5.4.2003
/references/macmusic_audio_plugins.html
http://www.harmony-central.com/Computer/Programming/Code/comb.html, 28.6.1998
/references/harmonycentral_comb.html
http://www.harmony-central.com/Computer/Programming/Code/allpass.html, 28.6.1998
/references/harmonycentral_allpass.html
http://en.wikipedia.org/wiki/TDM, 14.6.2006
/references/en_wikipedia_TDM.html
http://jvstwrapper.sourceforge.net/, 16.6.2006
/references/jvstwrapper.html

8. List of Figures & Listings

Fig. 2.1 Comb filter structure
(http://www.staffs.ac.uk/personal/engineering_and_technology/dp11/phase/comb_filter.gif)
Fig. 2.2 Comb filter frequency response, Green, 2003, p. 5
Fig. 2.3 Allpass filter structure
(http://www.difitec.com/wavepurity/homepage/img/allpass1.gif)
Fig. 2.4 Moorer’s reverberator
(http://disi.eit.uni-kl.de/skripte/audio1/audi10.pdf, /references/audi10.pdf)
Fig. 3.1 Class hierarchy of the VSTSDK-2.4 framework
Fig. 4.1 Simplified inheritance diagram of the VSTGUI-framework
Fig. 5.1 Schroeder’s reverberator structure
(http://www.music.miami.edu/programs/mue/mue2003/research/sbrowne/images/schroeder.gif)
Fig. 5.2 Schroeder plugin GUI

Listing 5.2.1 comb.m, Green, 2003, p. 12
Listing 5.2.2 allpass.m, Green, 2003, p. 12
Listing 5.2.3 schroder.m, Green, 2003, p. 15
Listing 5.3.1 schroeder.cpp – constructor
Listing 5.3.2 schroeder.cpp – method resume()
Listing 5.3.3 schroeder.cpp – method setParameter()
Listing 5.3.4 schroeder.cpp – method getParameter()
Listing 5.3.5 schroeder.cpp – method processReplacing()
Listing 5.3.6 schroedereditor.cpp – enumeration
Listing 5.3.7 schroedereditor.cpp – constructor
Listing 5.3.8 schroedereditor.cpp – destructor
Listing 5.3.9 schroedereditor.cpp – method open()
Listing 5.3.10 schroedereditor.cpp – method setParameter()
Listing 5.3.11 schroedereditor.cpp – method valueChanged()

Appendix A – CD-ROM Contents

/
• schroeder.dll – the compiled plug-in
• Introduction to VST Plug-in Implementation.pdf – this research paper
/references/
• audi10.pdf
• en_wikipedia_TDM.html
• harmonycentral_allpass.html
• harmonycentral_comb.html
• jvstwrapper.html
• LTU-CUPP-03044-SE.pdf
• macmusic_audio_plugins.html
• reverb.pdf

/MATLAB/
• allpass.m
• comb.m
• schroder.m

/VST SDK/
• ./vstsdk2.4/index.html
Path to VST Documentation
• ./vstsdk2.4/public.sdk/samples/vst2.x/schroeder_gui
Path to Schroeder source files

