The Development of Technology-Based Music 1 - Synths & MIDI
SYNTHESIZERS
Synthesizers are at the forefront of music technology today, and can be heard in film and classical music as well as pop. A synthesizer is a device that generates sounds electronically: a signal is processed to produce a unique sound which cannot be duplicated by conventional instruments.

The classic synthesizer uses a technique called 'subtractive synthesis', so called because a raw waveform (such as a sawtooth wave) is used as a starting point. Its inherent sound is rather buzzy and uninteresting, so the sound is 'sculpted' by shaping and filtering - 'subtracting' from the basic waveform.

A basic synthesizer is made up of four elements: oscillators, filters, envelopes and modulators. Even though few synthesizers nowadays are analogue, these core elements remain fundamentally the same.

OSCILLATORS provide the sound sources for the instrument. In the 1960s and 70s the sound sources were analogue waveforms. As synthesis evolved, these were gradually replaced by samples of basic waveforms or of actual instruments (as in the Korg Trinity), or by mathematically modelled reproductions of waveforms (via the process of 'physical modelling'), as in the Clavia Nord Lead. An analogue oscillator is referred to as a VCO (voltage controlled oscillator).

A FILTER shapes the tone of the sound, e.g. making it bright or dull. The type of filter used greatly influences the overall sound of a synthesizer; Moog filters, for example, are known for their warm, 'fat' sound. In older synthesizers, filters were analogue, and are often referred to as VCFs (voltage controlled filters).

The term ENVELOPE refers to the dynamic shape of a sound. An envelope commonly has four parameters, known collectively as ADSR (attack, decay, sustain, release).
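The oscillator-and-filter signal path can be sketched in a few lines of Python. This is a minimal illustration, not how real synthesizers are implemented; the sample rate and filter design are arbitrary choices for the example:

```python
SAMPLE_RATE = 44100  # samples per second (an assumed CD-quality rate)

def sawtooth(freq, n_samples, sample_rate=SAMPLE_RATE):
    """Generate a raw sawtooth wave ramping from -1.0 to 1.0 each cycle."""
    period = sample_rate / freq
    return [2.0 * ((i % period) / period) - 1.0 for i in range(n_samples)]

def one_pole_lowpass(samples, alpha):
    """Crude one-pole low-pass filter: 'subtracts' high frequencies.
    alpha near 0 gives a dull sound; alpha near 1 is nearly unfiltered."""
    out, prev = [], 0.0
    for x in samples:
        prev = prev + alpha * (x - prev)
        out.append(prev)
    return out

# 'Sculpt' one second of a buzzy 110 Hz sawtooth into a mellower tone.
raw = sawtooth(110.0, SAMPLE_RATE)
filtered = one_pole_lowpass(raw, alpha=0.1)
```

Smoothing the waveform in this way removes some of its high-frequency 'buzz', which is the essence of the subtractive approach.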
ATTACK - how long the sound takes to reach maximum. A piano's attack is fast, while a violin's attack can be very slow.
DECAY - how quickly the sound falls from maximum to the 'sustain' level.
SUSTAIN - the continuing level of loudness while the finger remains on the key.
RELEASE - how long the sound takes to die away to silence after the key is released.

There is normally one ADSR envelope for the amplifier section (or VCA), which can be used to shape the overall volume, and one for the filter (or VCF), which works the same way but affects the tone instead.

MODULATION most commonly uses an LFO (low frequency oscillator), which does not make an audible sound itself but can be used to alter or 'modulate' the oscillator (or whatever sound source is being used). This causes variations in pitch, often used to mimic vibrato. The LFO can also modulate the filter, causing changes in tone quality (e.g. the slow 'sweeping' effect often heard in dance music). Additionally, the LFO can modulate the amplifier section, often to create rapid changes in volume, resulting in a tremolo effect. The ADSR envelope can also be used to modulate the oscillator(s), filter and amplifier.

The earliest synthesizers were modular, i.e. they were not made as a self-contained unit but as separate modules.
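The four ADSR stages described above can be expressed as a simple piecewise function. This is a sketch with some simplifying assumptions: linear segments, all three time values greater than zero, and a note released only after the decay phase has finished:

```python
def adsr_level(t, attack, decay, sustain, release, note_off_time):
    """Return envelope level (0.0-1.0) at time t (seconds).
    sustain is a LEVEL (0-1); attack, decay and release are TIMES (> 0).
    Assumes note_off_time falls after the attack+decay phases."""
    if t < note_off_time:                      # key held down
        if t < attack:                         # attack: ramp 0 -> 1
            return t / attack
        if t < attack + decay:                 # decay: 1 -> sustain level
            return 1.0 - (1.0 - sustain) * (t - attack) / decay
        return sustain                         # sustain while key held
    held = t - note_off_time                   # key has been released
    if held < release:                         # release: sustain -> 0
        return sustain * (1.0 - held / release)
    return 0.0

# Fast attack, short decay, 70% sustain, half-second release, key up at 1 s:
level = adsr_level(0.5, 0.1, 0.2, 0.7, 0.5, 1.0)   # sustain phase -> 0.7
```

Multiplying each audio sample by this level shapes the volume (the VCA envelope); feeding the same curve into the filter cutoff gives the VCF envelope described above.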
FM SYNTHESIS

In the early 1980s FM (frequency modulation) synthesis arrived. Rather than starting with a raw waveform and 'sculpting' it as in subtractive synthesis, FM builds timbres from sine waves: one sine wave (the modulator) rapidly varies the frequency of another (the carrier), generating complex harmonic content. (It is related to, but distinct from, additive synthesis, which simply sums sine waves.) Probably the most famous FM synthesizer was Yamaha's DX7, first produced in 1983. FM synthesis was very good at reproducing sounds such as electric pianos and brass, but its often hard and metallic sounds were less successful at reproducing richer timbres such as strings.

In 1987 Roland brought out the D-50, which used memory-hungry samples of real instruments for the short attack portion of the sound, combined with more traditional synthesis for the much longer remaining portion. This system was known as S+S (sample and synthesis). It resulted in richer timbres that were quite realistic in their attempts to reproduce acoustic sounds. As memory has become cheaper and processors faster, many synthesizers now carry large numbers of samples on board, which can be processed through filters and special effects to produce very complex, as well as realistic, sounds.

At the time of writing, the most recent development has been 'physical modelling', also known as virtual synthesis. In a traditional synthesizer, different parts of the circuit board form the oscillator, filter, etc. Virtual modelling synthesizers are derived from mathematical models of 'real-world' sounds, or of analogue synthesizers, which are then recreated in software. A virtual synthesizer may attempt to recreate an old analogue synthesizer, including the distinctive way that the components of that particular instrument react with each other, or it may take an acoustic instrument as a starting point. The Yamaha AN1x and Clavia Nord Modular are examples of hardware virtual synthesizers; many virtual synthesis instruments are software-only, e.g. Propellerheads' Reason and Native Instruments' Reaktor.
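Two-operator FM can be illustrated in a few lines: the modulator sine wave varies the phase of the carrier. The frequencies and modulation index below are arbitrary example values, not settings from any real instrument:

```python
import math

def fm_sample(t, carrier_hz, modulator_hz, mod_index):
    """One sample of two-operator FM synthesis: a modulator sine wave
    varies the phase of the carrier. mod_index controls brightness -
    higher values spread energy into more sidebands."""
    return math.sin(2 * math.pi * carrier_hz * t
                    + mod_index * math.sin(2 * math.pi * modulator_hz * t))

# A metallic, bell-like tone: a non-integer carrier-to-modulator frequency
# ratio produces inharmonic partials (one second at 44.1 kHz).
tone = [fm_sample(i / 44100, 440.0, 615.0, 3.0) for i in range(44100)]
```

Simple integer frequency ratios give harmonic, organ- or brass-like spectra, which is one reason the DX7 excelled at electric pianos and brass.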
DIGITAL TECHNOLOGY
Digital technology opened up the way for a host of new developments in music. These were not confined to the realm of digital music playback, which was discussed in the previous chapter. The introduction of MIDI in 1983 had huge implications both for pop musicians and for consumers, opening up a new range of creative possibilities. Today's musician is able to produce an entire song from start to finish with only a computer and a microphone.
MIDI
MIDI (Musical Instrument Digital Interface) is a digital communications protocol. It became possible when manufacturers agreed on a common system in August 1983 - the MIDI 1.0 Specification - which allowed electronic instruments to control and communicate with each other. This took the form of a set of standards for hardware connections, and for messages which could then be relayed between devices. By 1985, virtually every new musical keyboard on the market had a MIDI interface.

MIDI also provides the means for electronic instruments to communicate with computers, which can then send, store and process MIDI data. This is central to the function of a sequencer: a device for inputting, editing, storing and playing back data from a musical performance. It records many details of a performance - such as duration, pitch and rhythm - which can then be edited.

Why was it developed? MIDI was perhaps the first true effort at joint development among a large number of musical instrument manufacturers: an industry standard enabling musical communication between hardware, such as synths and sequencers.

What is contained in MIDI data? It is important to remember that MIDI transmits commands; it does not transmit an audio signal. The MIDI specification includes a common language that provides information about events, such as note on and off, preset changes, sustain pedal, pitch bend, and timing information.

Binary data (just to aid understanding): computers use binary data, a base 2 numeric system, whereas we are used to a base 10 (decimal) system.
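As a small illustration of the base 2 idea, Python can convert a MIDI note number between decimal and binary:

```python
# MIDI data bytes are 7-bit values (0-127) carried in 8-bit bytes whose
# top bit is 0; status bytes have the top bit set to 1.
note_number = 60                        # the usual MIDI number for middle C
binary = format(note_number, '08b')     # base 2 representation: '00111100'
decimal = int(binary, 2)                # and back to base 10: 60
print(binary, decimal)
```

Reading right to left, each binary digit stands for a power of two, so '00111100' is 32 + 16 + 8 + 4 = 60.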
Channel Voice Messages

Whenever a MIDI instrument is played, its controllers (keyboard, pitch wheel etc.) transmit channel voice messages. There are 7 channel voice message types:

1. Note On
2. Note Off
3. Polyphonic key pressure
4. Aftertouch (channel pressure)
5. Program change
6. Control change
7. Pitch bend change

1. Note On - This message indicates the beginning of a MIDI note and consists of 3 bytes. The 1st byte (the status byte) specifies a note-on event and the channel; the 2nd byte specifies the number of the note played; the 3rd byte specifies the velocity with which the note was played.

2. Note Off - This message indicates the end of a MIDI note. The 1st byte (status byte) specifies a note-off event and channel; the 2nd byte specifies the number of the note played; the 3rd byte specifies the release velocity.

3. Polyphonic key pressure - Key pressure (aftertouch), with each key sending its own independent message.
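The 3-byte structure described above can be illustrated with a small decoder. This is a sketch covering only Note On and Note Off, and the function name is my own:

```python
def parse_channel_voice(msg):
    """Decode a 3-byte Note On / Note Off channel voice message.
    The upper nibble of the status byte gives the message type (0x9 = Note
    On, 0x8 = Note Off), the lower nibble gives the channel (0-15); the
    two data bytes that follow are the note number and the velocity."""
    status, data1, data2 = msg
    kind = {0x80: 'note-off', 0x90: 'note-on'}[status & 0xF0]
    return {'type': kind, 'channel': status & 0x0F,
            'note': data1, 'velocity': data2}

event = parse_channel_voice(bytes([0x90, 60, 100]))
# -> {'type': 'note-on', 'channel': 0, 'note': 60, 'velocity': 100}
```

In practice many devices send a Note On with velocity 0 instead of a true Note Off message; a full decoder would treat the two as equivalent.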
7. Pitch bend - These messages are transmitted whenever an instrument's pitch wheel is moved. Almost all channel voice messages assign a single data byte to a single parameter such as key number or velocity (128 values, because a data byte carries 7 usable bits: 2^7 = 128). The exception is pitch bend. If pitch bend used only 128 values, discrete steps might be heard if the bend range were large (this range is set on the instrument, not by MIDI). So the 7 data bits of the first data byte (the least significant byte, or LSB) are combined with the 7 data bits of the second data byte (the most significant byte, or MSB) to create a 14-bit value, giving pitch bend data a range of 16,384 values.

MIDI Files
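The MSB/LSB combination can be shown in a couple of lines (a sketch; the helper name is my own):

```python
def pitch_bend_value(lsb, msb):
    """Combine the two 7-bit data bytes of a pitch bend message into a
    single 14-bit value (0-16383). The MSB supplies the upper 7 bits;
    8192 (MSB 0x40, LSB 0x00) is the centre, no-bend position."""
    return (msb << 7) | lsb

assert pitch_bend_value(0x00, 0x40) == 8192   # wheel at rest
assert pitch_bend_value(0x7F, 0x7F) == 16383  # maximum upward bend
```

Shifting the MSB left by 7 bits makes room for the LSB's 7 bits, exactly as two decimal digits combine when one is multiplied by ten.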
Disadvantages of MIDI: Limited timbres - when creating MIDI files for playback on generic devices, one is limited to the General MIDI (GM) sound bank.