Glossary: Electronic Music |
A/D converter | A device that changes the continuous fluctuations in voltage from an analog device (such as a microphone) into digital information that can be stored or processed in a sampler, digital signal processor, or digital recording device. |
ADPCM | Adaptive differential pulse code modulation. An audio compression algorithm for digital audio based on describing level differences between adjacent samples. |
ADSR | Attack/decay/sustain/release, the four segments of a common type of synthesizer envelope. The controls for these four parameters determine the duration (or in the case of sustain, the height) of the segments of the envelope. See envelope |
AES/EBU | A type of digital audio connection used in professional applications. The connection is most often made over a standard XLR (mic) cable. It is a standard created by the Audio Engineering Society and the European Broadcasting Union. |
Aftertouch | A type of control data generated by pressing down on one or more keys on a synthesizer keyboard after they have reached and are resting on the keybed. See channel pressure, poly pressure. |
AIFF | Audio interchange file format. A common Macintosh audio file format. It can be mono or stereo, at sampling rates up to 48kHz. AIFF files are QuickTime compatible. |
Algorithm | A set of procedures designed to accomplish something. In the case of computer software, the procedures may appear to the user as a configuration of software components -- for example, an arrangement of operators in a Yamaha DX-series synthesizer -- or as an element (such as a reverb algorithm) that performs specific operations on the signal. |
Algorithmic composition | A type of composition in which the large outlines of the piece, or the procedures to be used in generating it, are determined by the human composer while some of the details, such as notes or rhythms, are created by a computer program using algorithmic processes. |
Aliasing | A type of distortion that occurs when digitally recording high frequencies with a low sample rate. A visual analogy can be found in video, when a car's wheels appear to spin slowly backwards while the car is quickly moving forward. Similarly, when you try to record a frequency greater than one half of the sampling rate (the Nyquist frequency), instead of hearing a high pitch you may hear a low-frequency rumble. An anti-aliasing filter can be used to remove high frequencies before recording. However, once a sound has been recorded, aliasing distortion is impossible to remove without also removing other frequencies from the sound. |
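As an illustrative sketch (the helper name is made up), the pitch at which an over-Nyquist tone folds back can be computed like this in Python:

    def aliased_frequency(f, sample_rate):
        """Return the frequency actually heard when a tone at f Hz is
        sampled at sample_rate Hz with no anti-aliasing filter."""
        nyquist = sample_rate / 2.0
        f = f % sample_rate                 # sampling is periodic in the sample rate
        return f if f <= nyquist else sample_rate - f   # fold back around Nyquist

    # A 30 kHz tone sampled at 44.1 kHz is heard at 14.1 kHz:
    print(aliased_frequency(30000, 44100))  # 14100.0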
All-notes-off | A MIDI command, recognized by some but not all synthesizers and sound modules, that causes any notes that are currently sounding to be shut off. The panic button on a synth or sequencer usually transmits all-notes-off messages on all 16 MIDI channels. |
Amplitude | The amount of a signal. Amplitude is measured by determining the amount of fluctuation in air pressure (of a sound), voltage (of an electrical signal), or numerical data (in a digital application). When the signal is in the audio range, amplitude is perceived as loudness. |
Analog | Capable of exhibiting continuous fluctuations. In an analog audio system, fluctuations in voltage correspond in a one-to-one fashion with (that is, are analogous to) the fluctuations in air pressure at the audio input or output. In an analog synthesizer, such parameters as oscillator pitch and LFO speed are typically controlled by analog control voltages rather than by digital data, and the audio signal is also an analog voltage. Compare with digital. |
Arpeggiator | A device that sequentially plays a pattern of notes over a range of the keyboard. The speed of the arpeggiation and pattern of notes are variable depending on the tempo and specified/pressed notes. |
ASIO | Audio Stream Input Output (ASIO) is a protocol for low-latency digital audio developed by Steinberg. ASIO provides an interface between an application and the sound card. Whereas Microsoft's DirectSound is typically aimed at stereo input and output for consumers, ASIO provides for the needs of musicians and sound engineers. ASIO offers a relatively simple way of accessing multiple audio inputs and outputs independently. It also provides for the synchronization of input with output in a way that is not possible with DirectSound, allowing recording studios to process their audio via software on the computer instead of using thousands of dollars' worth of separate equipment. Its main strength lies in its method of bypassing the inherently high latency of the operating system's audio mixing kernel, allowing direct, high-speed communication with the audio hardware. |
Attack | The initial period of a typical Envelope during which a sound's attribute (such as volume) increases from 0 (silence) to its maximum amount. The length of the attack determines how "soft" or "harsh" a sound is. For example, most drum or percussion sounds have a short amplitude attack time and thus have a sudden "harsh" start. A string sound usually has a long amplitude attack and thus has a "soft" start and eases in. |
Attenuator | A potentiometer (pot) that is used to lower the amplitude of the signal passing through it. The amplitude can usually be set to any value between full (no attenuation) and zero (infinite attenuation). Pots can be either rotary or linear (sliders), and can be either hardware or "virtual sliders" on a computer screen. |
Balanced | Balanced audio cables are used to reduce interference when transporting an audio signal. An unbalanced audio cable has two internal wires, whereas a balanced cable has three. In a balanced cable, two wires carry the audio signal and the third is the ground and shield. Balanced audio cables normally use XLR connectors. When sending a signal over a balanced connection, the signal is split into two copies and one of them is inverted; the receiving equipment re-inverts that copy and sums the two, so any interference picked up equally by both wires along the way cancels out. |
Bandpass Filter | A type of Filter that passes frequencies around a specific center frequency while attenuating those above and below it, resulting in a more distinctive, focused sound. |
Bandwidth | The available "opening" through which information can pass. In audio, the bandwidth of a device is the portion of the frequency spectrum that it can handle without significant degradation. In digital communications, the bandwidth is the amount of data that can be transmitted in a given period of time. |
Bank | (1) A set of patches. (2) Any related set of items, e.g., a filter bank (a set of filters that work together to process a single signal). |
Bar | A bar is a unit of time in music. A bar consists of a certain number of beats; the number depends on the time signature of the piece. For example, if a piece is in 3/4 time, every bar of that piece has three beats. |
Baud rate | Informally, the number of bits of computer information transmitted per second. MIDI transmissions have a baud rate of 31,250 (31.25 kilobaud), while modems typically have a much lower rate of 2,400, 9,600, or 14,400 baud. |
Beat | A beat is a subdivision of a measure or bar of music. The most common such division is 4/4 time, in which a measure of music is split into 4 beats. This is counted as 1-2-3-4, which is commonly heard being shouted by a drummer at the beginning of a song to establish its tempo. |
Bend | To change pitch in a continuous sliding manner, usually using a pitch-bend wheel or lever. See pitch-bend. |
Big-Endian | Refers to the most-significant-byte-first order in which the bytes of a multi-byte value (such as a 32-bit dword value) are stored. For example, a decimal value of 457,851 is represented as 0x0006FC7B in hexadecimal and would be stored in a file as: 0x00, 0x06, 0xFC, 0x7B. Many Motorola processors (Macintosh) use Big-Endian. The opposite byte ordering method is called Little-Endian. |
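A quick way to see both byte orders for the example value, using Python's standard struct module (an illustrative sketch):

    import struct

    value = 457851                           # 0x0006FC7B
    print(struct.pack(">I", value).hex())    # big-endian:    0006fc7b
    print(struct.pack("<I", value).hex())    # little-endian: 7bfc0600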
Bit | The smallest possible unit of digital information, numerically either a 1 or a 0. Digital audio is encoded in words that are usually eight, 12, or 16 bits long (the bit resolution). Each added bit represents a theoretical improvement of about 6dB in the signal-to-noise ratio. |
Bit-Depth | Often used to describe the resolution or quality of each sample in a digital audio stream. It is the number of bits (0's and 1's) used to describe the amplitude or volume of an audio signal at a specific point in time. The higher the number, the more precisely the original or intended audio signal can be (re)produced. See Digital Audio Basics for a more detailed explanation. |
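As a sketch of the arithmetic involved: each bit doubles the number of amplitude steps, and each doubling adds roughly 6 dB to the theoretical dynamic range.

    import math

    for bits in (8, 16, 24):
        steps = 2 ** bits                        # number of discrete amplitude values
        dynamic_range = 20 * math.log10(steps)   # roughly 6.02 dB per bit
        print(bits, steps, round(dynamic_range, 1))
    # 8  256       48.2
    # 16 65536     96.3
    # 24 16777216  144.5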
Bpm | Beats per minute. The usual measurement of tempo. |
Breaks | A music genre typically based around the 'breaks' or breakdowns of older records. Breaks are generally parts of tracks where the other instruments take a break and the drummer plays a solo or fill. These were then sampled and adopted by DJs and producers to create new tracks, which use the rhythm of the break. Breaks are often considered more funky and danceable than 4/4 genres like house and trance because of the human groove that comes from the sample, as opposed to a strictly quantized beat. |
Brick-wall filter | A lowpass filter at the input of an analog-to-digital converter, used to prevent frequencies above the Nyquist limit from being encoded by the converter. See Nyquist frequency, aliasing. |
Buffer | An area of memory, used for recording or editing data before it is stored in a more permanent form. |
Byte | A group of eight bits. (MIDI bytes consist of ten bits because each byte includes a start bit and a stop bit, with eight bits in the middle to convey information.) |
Card | (1) A plug-in memory device. RAM cards, which require an internal battery, can be used for storing user data, while ROM cards, which have no battery, can only be used for reading the data recorded on them by the manufacturer. (2) A circuit board that plugs into a slot in a computer. |
Carrier | A signal that is being modulated by some other signal, as in FM synthesis. |
CDDB | A huge online database of audio CD information including album, artist, song names and more. Information is added and retrieved by users of CDDB-enabled software, allowing the database to continually grow. Building on the original database, CDDB2 enables expanded album and track-by-track credits, genres, web links, segments and more. You can learn more about CDDB and CDDB2, including programming information, at www.cddb.com. |
Cent | The smallest conventional unit of pitch deviation. One hundred cents equal one half-step. |
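The relationship between a frequency ratio and cents can be sketched as follows (the helper is my own, for illustration):

    import math

    def cents(f1, f2):
        """Pitch difference between two frequencies, in cents."""
        return 1200 * math.log2(f2 / f1)

    print(round(cents(440.0, 466.16), 1))   # ~100.0 cents: one half-step (A4 to Bb4)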
Channel | An electrical signal path. In analog audio (such as a mixer), each channel consists of separate wired components. In the digital domain, channels may share wiring, and are kept separate through logical operations. MIDI provides definitions for 16 channels, which transmit not audio signals but digital control signals for triggering synthesizers and other devices. |
Channel pressure | A type of MIDI control message that is applied equally to all of the notes on a given channel; the opposite of poly pressure, in which each MIDI note has its own pressure value. Also called aftertouch, channel pressure is generated on keyboard instruments by pressing down harder on one or more keys while they are being held down. See aftertouch, poly pressure. |
Chorus | An audio effect used to "expand" or "thicken" a sound by playing multiple versions of the input signal with slightly different delays and changes in pitch simulating an ensemble of the input sound. |
Clangorous | Containing partials that are not part of the natural harmonic series. Clangorous tones often sound bell-like. |
Clipping | Clipping is the alteration of a waveform due to volume overload. Audio devices have certain limits to how far a crest or trough can go in any waveform passing through it. When that limit is breached the area of the waveform that exceeds the limit will be cut off, or clipped. |
Clock | Any of several types of timing control devices, or the periodic signals that they generate. A sequencer's internal clock is always set to some number of pulses per quarter-note (ppq), and this setting is one of the main factors that determine how precisely the sequencer can record time-dependent information. The actual clock speed is usually determined by the beats-per-minute setting. See ppq, bpm, MIDI clock. |
Clock resolution | The precision (measured in ppq) with which a sequencer can encode time-based information. |
Codec | Stands for coder/decoder. Codecs are often used by software to compress and decompress audio data. For example, most Windows computers have an ADPCM codec which many software applications use to read and write compressed audio data from ADPCM compressed WAV files. You can view the codecs installed in Windows by going to Control Panel > Multimedia > Devices Tab > Audio Compression Codecs. |
Companding | A type of signal processing in which the signal is compressed on input and expanded back to its original form on output. Digital companding allows a device to achieve a greater apparent dynamic range with fewer bits per sample word. |
Compression | (1) The process of reducing the amplitude range of an audio signal by reducing the peaks and bringing up the low levels. (2) The process of reducing a data file in size, often by noting patterns in the data and summarizing them. Some types of audio data compression are "lossy," meaning the quality of the audio is reduced. |
Continuous controller | A type of MIDI channel message that allows control changes to be made in notes that are currently sounding. See controller. |
Controller | (1) Any device -- for example, a keyboard, wind synth controller, or pitch-bend lever -- capable of producing a change in some aspect of a sound by altering the action of some other device. (2) Any of the defined MIDI data types used for controlling the ongoing quality of a sustaining tone. Strictly speaking, MIDI continuous controllers are numbered from 0 to 119 (numbers 120 through 127 are reserved for channel mode messages); in many synthesizers, the controller data category is more loosely defined to include pitch-bend and aftertouch data. |
Cross-switching | A velocity threshold effect in a synthesizer in which one sound is triggered at low velocities and another at high velocities, with an abrupt transition between the two. If the transition is smooth rather than abrupt, the effect is called crossfading rather than cross-switching. Cross-switching can also be initiated from a footswitch, LFO, or some other controller. Also called velocity switching. |
Crossfade looping | A sample-editing feature found in many samplers and most sample-editing software, in which some portion of the data at the beginning of a loop is mixed with some portion of the data at the end of the same loop, so as to produce a smoother transition between the end and the beginning when the loop plays. |
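A minimal sketch of the idea on a list of float samples (a real editor works on audio buffers, but the blending is the same):

    def crossfade_loop(samples, loop_start, loop_end, fade_len):
        """Blend the material just before loop_end with the material just
        before loop_start so the jump back to loop_start is smoother."""
        out = list(samples)
        for i in range(fade_len):
            t = i / fade_len                          # 0.0 -> 1.0 across the fade
            a = samples[loop_end - fade_len + i]      # end of the loop
            b = samples[loop_start - fade_len + i]    # material leading into the loop start
            out[loop_end - fade_len + i] = (1.0 - t) * a + t * b
        return out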
Cutoff frequency | The point in the frequency spectrum beyond which a synthesizer's filter attenuates the audio signal being sent through it. |
DAC (digital-to-analog converter) | A device that changes the sample words put out by a digital audio device into analog fluctuations in voltage that can be sent to a mixer or amplifier. All digital synthesizers, samplers, and effects devices have DACs (rhymes with fax) at their outputs to create audio signals. |
DAT | Digital Audio Tape. A tape-based digital audio recording and playback system, developed by Sony, which uses a sampling rate of 48 kHz (slightly higher than CDs, which use 44.1 kHz). |
Data dump | A packet of memory contents being transmitted from place to place (usually in the form of MIDI system-exclusive data) or stored to a RAM card. |
Daughterboard | A small circuit board that can be attached to a larger one (the motherboard), giving it new capabilities. For example, some companies manufacture daughterboards that add sampled sounds to soundcards that previously could only synthesize sounds via FM. |
DAW (Digital Audio Workstation) | A Digital Audio Workstation (DAW) generally means a computer system specifically set up for music and audio recording. A DAW can take many forms: a PC running Windows or Linux, a Mac, or a standalone box used for music making. Usually a DAW consists of a sequencer, a mixer, and some means of audio recording and playback. Some DAWs also have facilities for synthesis and effects, although these are generally not standard features. DAW is also used to describe multi-track audio/MIDI sequencer software suites. |
dB (decibel) | A unit of measurement used to indicate audio power level. Technically, a decibel is a logarithmic ratio of two numbers, which means that there is no such thing as a dB measurement of a single signal. In order to measure a signal in dB, you need to know what level it is referenced to. Commonly used reference levels are indicated by such symbols as dBm, dBV, and dBu. |
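For amplitude (voltage-like) quantities the ratio is computed as 20 times the base-10 logarithm; a sketch of the arithmetic, with an assumed reference level:

    import math

    def amplitude_to_db(level, reference):
        """Express 'level' in dB relative to 'reference' (both amplitudes)."""
        return 20 * math.log10(level / reference)

    print(amplitude_to_db(1.0, 1.0))   #  0.0 dB: equal to the reference
    print(amplitude_to_db(0.5, 1.0))   # about -6.0 dB: half the amplitude
    print(amplitude_to_db(2.0, 1.0))   # about +6.0 dB: double the amplitude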
Decay | The period of an Envelope during which a sound's attribute (such as volume) falls from its attack peak after the Attack has completed. When the attribute reaches the end of its decay, it has reached the Sustain level. |
Delay | An effect that is used to add depth or space to an audio signal by repeating the input one or more times after a brief pause of a few milliseconds to a few seconds. Delay is also often referred to as Echo. |
Detune | A control that allows one oscillator to sound a slightly different pitch than another. Verb: To change the pitch of one oscillator relative to another, producing a fuller sound. |
Digital | Using computer-type binary arithmetic operations. Digital music equipment uses microprocessors to store, retrieve, and manipulate information about sound in the form of numbers, and typically divides potentially continuous fluctuations in value (such as amplitude or pitch) into discrete quantized steps. Compare with analog. |
DirectX | This Microsoft Windows API was designed to provide software developers with direct access to low-level functions on PC peripherals. Before DirectX, programmers usually opted for the DOS environment, which was free of the limited multimedia feature set that characterized Windows for many years. |
Distortion | Distortion is casually used to describe any form of interference or unwanted portion of a signal (for example, background hiss on a recording), or any significant alteration of a signal's shape. In more technical terms, audio distortion is the result of any process or deformation of a signal which intensifies its latent harmonic content, or displaces energy from a given fundamental frequency to its harmonics. Distortion is a nonlinear process: the relationship between input and output cannot be described by a straight line, and it may also vary over time. The simplest and most common mechanism for distortion is clipping, which occurs when a signal's amplitude is restricted by a set limit (in digital signal processing) or is driven beyond the gain capacity of a processor or amplifier (in analog circuitry), effectively cutting off the tops of the sound wave. Types of distortion:
Waveshaping: A technique of distortion which explicitly alters the shape of the original waveform by forcing its amplitude to conform to the shape of a nonlinear transfer function. A waveshaper can emulate many other types of distortion.
Fuzz: Originated with the "Fuzz Face" germanium-circuit guitar pedal; used to describe a form of high-gain asymmetrical clipping, with prominent second and third harmonics and noticeable fourth and fifth harmonics. Typically also adds extra noise to the signal.
Rectification: The process of inverting the polarity of all positive or negative half-cycles of a signal. Full-wave rectification inverts the polarity of one set of half-cycles but still retains all half-cycles, and is commonly used in "octave"-style guitar distortion pedals. Half-wave rectification is a similar process, but eliminates one set of half-cycles.
Saturation: The distortion that occurs when an audio signal's amplitude exceeds the capacity of a magnetic recording tape. The resulting clipping of the signal is asymmetrical with a soft slope. Similarly, in valve amplifiers, distortion occurs when the control grid of a triode goes more positive than the cathode.
Overdrive: Effectively the same form of distortion as valve amplifier saturation. A high-gain signal is driven beyond the limits of an amplifier circuit, resulting in soft clipping.
Phase distortion: The process of modulating the position in time of a signal's wavecycles. Most commonly known as the method of synthesis used in the Casio CZ-series synthesizers, in which the clock frequency of the digital oscillator was sped up and slowed down, forcing the waveshape to compress and expand to fit within a regulated frequency.
Quantization distortion: Quantization is the conversion or representation of an analog signal by a series of discrete values. Quantization distortion results from the difference between the original amplitude and the stepped amplitude of the representative signal. This form of distortion is exploited in bit-crusher devices, resulting in a wideband noise called quantization noise.
Crossover distortion: A type of distortion that can occur in an audio amplifier, caused by a slight delay in amplification at the crossing-over point, affecting the signal's polarity transition (from positive to negative or vice versa) as it is passed from one output device to another.
Intermodulation distortion: Results when two or more signals of different frequencies are mixed together, forming additional signals at frequencies that are not, in general, at harmonic frequencies of either original signal. |
Dither | Dithering (adding dither) is the process of adding very low-level noise, usually white noise with a triangular distribution. This is useful every time a signal gets requantized. Without dither, the quantization error is correlated to the signal and will result in harmonics. With dither, the error is distributed equally to all frequencies rather than only to harmonics. Although additional noise is introduced, dither usually sounds better to the human ear. If a noise shaper is also involved, the quantization error can be pushed into regions where the human ear is less sensitive. |
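A sketch of triangular (TPDF) dither added before requantizing to 16 bits; the function is illustrative only:

    import random

    def requantize_with_dither(sample, bits=16):
        """Quantize a float sample in [-1.0, 1.0] to 'bits' bits,
        adding triangular-PDF dither of about +/- 1 LSB first."""
        steps = 2 ** (bits - 1)
        dither = random.random() - random.random()   # triangular distribution in (-1, 1)
        return round(sample * steps + dither) / steps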
Dry | Consisting entirely of the original, unprocessed sound. The output of an effects device is 100% dry when only the input signal is being heard, with none of the effects created by the processor itself. Compare with wet. |
DSP | Digital Signal Processing uses mathematics to operate on a digital signal (such as a digital audio stream) to generate some type of altered output. Broadly speaking, all changes in sound that are produced within a digital audio device, other than changes caused by simple cutting and pasting of sections of a waveform, are created through DSP. A digital reverb is a typical DSP device. DSP is used heavily in software and hardware effects processing. DSP chips are found on an increasing number of sound cards to provide extra audio processing power and help relieve the computer's CPU of this type of work. |
Dynamic voice allocation | A system found on many multitimbral synthesizers and samplers that allows voice channels to be reassigned automatically to play different notes (often with different sounds) whenever required by the musical input from the keyboard or MIDI. |
Dynamics | Dynamic range: The dynamic range of a system for producing, reproducing or storing sounds is the difference between the loudest and the quietest sound levels the system is capable of handling. The loudest is understood as the point where a certain level of distortion sets in, and the quietest is the level where noise from the system drowns out the sound information you're interested in. In the form of an equation: dynamic range in dB = 20 x log(max level / min level). Your dynamic range is finite: there is a floor and a ceiling, and everything in between is your dynamic range. The dynamic range of your own analogue hearing system (ears, brain, bones, etc.) is somewhere around 130 dB(SPL) for short periods of time. More than that and you get hurt; less than zero and you simply can't hear it, even in a silent room when you're trying hard. The dynamic range of a typical sound system is understood to be its signal-to-noise ratio. Any system will have a noise floor, the noise generated by the components of the system itself. The musical sounds or useful signal you're trying to record, store, transmit or communicate will not be distinguishable from this noise if the level of the signal does not exceed the noise level. So you measure the noise coming from the system when you're not playing any sounds, then you measure the level of a sound that just barely begins to distort (there are rules for doing this right), and, simply put, the difference between those two levels is your dynamic range. Typical values: a compact disc has a signal-to-noise ratio of about 100 dB; a condenser microphone manages about 75 dB, roughly the same as a vinyl record, which beats an FM radio transmission by about 15 dB. Normally, you would attempt to record and mix your music in the upper part of your dynamic range, to ensure that intentional dynamics in the music get enough "space" in the range, but also so musical or otherwise useful sounds are significantly louder than the system noise. In digital audio, the resolution (the number of discrete steps between minimum and maximum) of the dynamic range is determined by the bit depth of the system.
Dynamics in music: In music, dynamics are crucial in many ways. A musical piece will normally have an overall dynamic "envelope", where softer and louder parts interact to make the feel of the music more interesting. Individual bars, rhythmic patterns or figures are also dynamic in the sense that some notes are accented or played quieter than others for musical effect. Terms like piano, forte, and crescendo all indicate a composer's intentions for the dynamics of a piece of music, and these should not be ignored or taken lightly when performing, recording or mixing music. See Wikipedia: Dynamics (music).
Dynamics in the mix: Unwanted dynamics can cause problems for an engineer, and this is an area that deserves close attention. A recording of a bass guitar, for instance, will most likely have softer and louder moments, and this may be intentional. But if there are a few transient attacks or booms that are considerably louder than the rest of the track, you may be unable to place the entire track as loudly in the mix as you would like, because those transients might clip or otherwise distort, or simply overpower the mix at those points. Turning the entire track down to accommodate the few spikes that are too loud might not be a good idea, because then most of the bass track will be too low in the mix. "Riding the fader" during mixdown is a possibility (and a time-honoured mixing technique), and automating faders is another much-used technique, but dynamic processing is often a better choice than manual handling of these problems. Dynamic processing can also help you pick up passages that are too low, or suppress unwanted noise from a track when there is little or no useful information present.
Dynamic processing: Compressor, expander, limiter, and gate are all common names for dynamic processors. They are not effects in the sense of being interested in frequencies, timbre, rhythm, phatness, warmth or anything remotely musical; they are dynamic processors. They deal with levels: they turn the signal up or they turn it down, no more. |
Early reflections | A reverb algorithm whose output consists of a number of closely spaced discrete echoes, designed to mimic the bouncing of sound off of nearby walls in an acoustic space. |
Echo | A very basic effect produced by repeating a sound with a delay long enough to be heard as a separate event. It is often just called Delay and is usually used to add more depth to an audio signal without the muddiness often introduced by Reverb. |
Edit buffer | An area of memory used for making changes in the current patch. Usually the contents of the edit buffer will be lost when the instrument is switched off; a write operation is required to move the data to a more permanent area of memory for long-term storage. |
Effects | Any form of audio signal processing -- reverb, delay, chorusing, etc. |
Envelope | Used in sound synthesis to control the volume, pan, pitch or other attribute of a sound over a period of time. ADSR envelopes are the most commonly used type. They are divided into four segments: Attack, Decay, Sustain and Release. The attack segment is usually triggered by pressing a keyboard note. The envelope continues and holds the sustain level until the note is released, which causes the release segment to finish the envelope. |
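A small sketch of a linear ADSR amplitude envelope rendered as a list of levels (segment lengths in samples; real envelopes are often exponential):

    def adsr(attack, decay, sustain_level, sustain_len, release):
        """Return a list of amplitude values for one linear ADSR envelope."""
        env = []
        env += [i / attack for i in range(attack)]                           # 0 -> 1
        env += [1 - (1 - sustain_level) * i / decay for i in range(decay)]   # 1 -> sustain
        env += [sustain_level] * sustain_len                                 # hold until note-off
        env += [sustain_level * (1 - i / release) for i in range(release)]   # sustain -> 0
        return env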
Envelope generator | A device that generates an envelope. Also known as a contour generator or transient generator, because the envelope is a contour (shape) that is used to create some of the transient (changing) characteristics of the sound. See ADSR, envelope. |
Envelope tracking | A function (also called keyboard tracking, key follow, and keyboard rate scaling) that changes the length of one or more envelope segments depending on which key on the keyboard is being played. Envelope tracking is most often used to give the higher notes shorter envelopes and the lower notes longer envelopes, mimicking the response characteristics of percussion-activated acoustic instruments, such as guitar and marimba. |
Equalizer (EQ) | A device used to cut and boost individual frequencies of an audio signal using a number of filters. The name "equalizer" comes from the original application of correcting distorted audio signals to sound closer to the original source. Graphic Equalizer and Parametric Equalizer are different types of equalizers used by audio equipment and software Plug-In. |
Feedback | Feedback is a process where a signal or part of a signal is fed back onto itself and (usually) carried further on in the signal chain. A negative feedback loop is often used to control the future of a signal by comparing it with an inverted version of its recent past, while a positive feedback loop is usually an unwanted, increasing oscillation that might be damaging, for instance to ears or electronic equipment. There are many types of feedback loops in nature, electronics, mechanics and so on, but the feedback we're most often interested in when we talk about audio is the process where the output of a signal chain gets fed back onto itself, causing unwanted positive feedback loops. This quickly starts resonating at frequencies determined by the equipment used, and multiplies in level until you're deaf or the speaker blows. A typical such feedback loop is one where the output of a stage monitor is picked up by the vocal microphone which is the original source of the sound from that monitor. Once a resonant frequency takes hold, the well-known deafening howl is a fact. Feedback from a guitar amplifier via the pickups of an electric guitar back into that same amplifier is often used to stunning effect in a musical way, and is a sought-after beast that requires some skill to tame. See also Delay; Reverb; Flanger |
FFT (Fourier analysis) | Fast Fourier transform. A quick method of performing a Fourier analysis on a sound. A technique, usually performed using a DSP algorithm, that allows complex, dynamically changing audio waveforms to be described mathematically as sums of sine waves at various frequencies and amplitudes. See DSP. |
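A brief example using NumPy (assumed available) to obtain the magnitude spectrum of a test tone and locate its strongest component:

    import numpy as np

    sample_rate = 44100
    t = np.arange(sample_rate) / sample_rate       # one second of time values
    signal = np.sin(2 * np.pi * 440 * t)           # 440 Hz sine wave

    spectrum = np.abs(np.fft.rfft(signal))         # magnitude of each frequency bin
    freqs = np.fft.rfftfreq(len(signal), 1 / sample_rate)
    print(freqs[np.argmax(spectrum)])              # ~440.0: the strongest component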
File Format | The structure that defines how data is organized in a software file used to store information about a sample, musical score, etc. A standardized file format makes it possible for different software programs to share the same information. |
Filter | A function that attenuates a specific frequency range to change a sound's brightness, thickness and other qualities. A few common filter types are Low-pass Filter, High-pass Filter and Bandpass Filter. |
Flanger | An audio effect created by varying a slight delay between two identical audio signals that results in a sound similar to a jet airplane taking off or landing. |
FM (frequency modulation) | A change in the frequency (pitch) of a signal. At low modulation rates, FM is perceived as vibrato or some type of trill, depending on the shape of the modulating waveform. When the modulating wave is in the audio range (above 20Hz or so), FM is perceived as a change in tone color. FM synthesizers, commonly found on computer soundcards, create sounds using audio-range frequency modulation. |
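A minimal two-operator FM sketch: a sine carrier whose phase is swept by a sine modulator scaled by a modulation index. The frequencies and index are only illustrative.

    import math

    def fm_sample(n, sample_rate=44100, carrier=440.0, modulator=220.0, index=2.0):
        """One sample of simple two-operator FM."""
        t = n / sample_rate
        return math.sin(2 * math.pi * carrier * t
                        + index * math.sin(2 * math.pi * modulator * t))

    tone = [fm_sample(n) for n in range(44100)]   # one second of a bright FM tone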
FM synthesis | A technique in which frequency modulation (FM) is used to create complex audio waveforms. |
Formant | A resonant peak in a frequency spectrum. For example, the variable formants produced by the human vocal tract are what give vowels their characteristic sound. |
Frame | The basic unit of SMPTE time code, corresponding to one frame of a film or video image. Depending on the format used, SMPTE time can be defined with 24, 25, 30, or 29.97 frames per second. See SMPTE time code. |
FreeMIDI | A Macintosh operating system extension developed by Mark of the Unicorn that enables different programs to share MIDI data. For example, a sequencer could communicate with a librarian program to display synthesizer patch names -- rather than just numbers -- in the sequencer's editing windows. |
Full-Duplex | The ability to send and receive data simultaneously which, in digital audio terms, translates to being able to play and record audio at the same time. Many sequencing and multi-track recording programs use a sound card's full-duplex capabilities to allow recording to a new track while playing back previously recorded tracks for reference. Most modern sound cards are full-duplex, but many of the older ones are only able to record or play audio at different times. They are said to be "Half-Duplex". |
Gain | The amount of boost or attenuation of a signal. |
Gate | A gate can refer to anything that is switched on and off. A noise gate is used to mute an audio source whose volume goes below a set level. A gated saw synth is typically used in trance, in which a subtractive synth using saw oscillators has its volume or filter turned on and off in a quick, repetitive and rhythmic pattern. A noise gate is a dynamics processor by nature, see dynamics. Gate is also a term for a signal which switches on envelopes or other triggered systems within analog synthesisers. Typically this will be initiated by the action of depressing a key, but other signals may be used. |
General MIDI (GM) | A set of requirements for MIDI devices aimed at ensuring consistent playback performance on all instruments bearing the GM logo. Some of the requirements include 24-voice polyphony and a standardized group (and location) of sounds. For example, patch #17 will always be a drawbar organ sound on all General MIDI instruments. |
Glide | A function, also called portamento, in which the pitch slides smoothly from one note to the next instead of jumping over the intervening pitches. |
Graphic editing | A method of editing parameter values using graphic representations (for example, of envelope shapes) displayed on a computer screen or LCD. |
Graphic Equalizer | A type of Equalizer (EQ) that provides control over a fixed set of frequencies. Each filter provides linear cut/boost control over a fixed frequency. The number of filters on graphic equalizers ranges from three (low, mid, high) to well over eleven. While graphic equalizers generally have more filters than a Parametric Equalizer, they are less flexible, in that the individual filter frequencies are not adjustable. |
Groove | Groove refers to the rhythmic feel of a piece of music. A piece that has groove (or is groovy) has a kind of elasticity and liveliness that is considered rhythmically pleasant and even fundamentally important. Groove is created by playing some hits on an instrument (usually drums and percussion) slightly early or late compared to mathematically accurate playing in tempo. Usually the offset is subtle, but it may also be exaggerated, such as in hip-hop. The term swing is often used synonymously with groove. |
Hall Radius | Hall radius is the distance from a sound source at which the level of the diffuse ambience or reverberation from the room equals the level of the direct sound. Beyond the hall radius, the ambience is louder than the source. The physics of the room itself determines this radius, according to the following equation: hall radius = 0.141 * V/A, where V is the volume of the room and A is the absorption coefficient. You generally want microphones to be placed closer to the sound source than the hall radius, unless you're looking for a very diffuse, ambient result. Knowing the hall radius also helps you place speakers in large rooms, or place yourself relative to the speakers in any room. |
Hard disk recording | A computer-based form of tapeless recording in which incoming audio is converted into digital data and stored on a hard disk. |
Harmonic | A frequency that is a whole-number multiple of the fundamental frequency. For example, if the fundamental frequency of a sound is 440Hz, then the next two harmonics are 880Hz and 1,320Hz (1.32kHz). See overtone. |
Headroom | The amount of additional signal above the nominal input level that can be sent into or out of an electronic device before clipping distortion occurs. |
Hertz (Hz) | The unit of measurement of frequency. One Hz equals one cycle per second. The frequency range of human hearing is from 20Hz to 20kHz (20,000Hz). |
Highpass Filter | A type of Filter used to eliminate low-range frequencies resulting in a crisper sound, good for creating percussion sounds with distinctive high ranges. |
Impulse Response | An impulse response is, as its name suggests, the response of a particular musical device or a sound space to impulses of sound. This response is "recorded" and used as the basis for audio processing. In the context of this web site, an impulse response is commonly used to model the particular sonic characteristics of a space (e.g. a room) or a device (e.g. an amplifier) which is then interpreted by an effect plugin, for instance, to create a reverb effect on the audio passed to it. |
Inharmonic | Containing frequencies that are not whole-number multiples of the fundamental. See harmonic. |
Input | Each signal that goes into an electronic system is an input. For instance, an input for a home studio setup would be the signal from a Microphone, or the signal from a MIDI Keyboard. Another example: an input for a reverb plugin would be the signal that the reverb is to be applied to. |
Insert FX | An effect placed directly in the signal path of a channel, so the channel's entire signal passes through it. This is distinct from a send (or aux) effect, where a single effect channel is shared by several audio channels that each feed it part of their signal. A shared send effect is useful for effects such as Reverb, because you can route all your channels through one reverb instead of creating an effect for every channel, which is an easy way to cut down on processor usage. |
Interface | A linkage between two things. A user interface is the system of controls with which the user controls a device. Two devices are said to be interfaced when their operations are linked electronically. An interface box is often required to convert signals from one form to another. For example, in order to get MIDI data in and out of a computer, you need some type of MIDI interface hardware. This may hook to an existing port on the computer, such as the printer port, or (in the case of the IBM-PC) it may consist of a circuit board that is plugged into one of the computer's internal slots. |
IRQ | Interrupt Request level. In IBM-PCs, a setting given to peripheral devices like soundcards and CD-ROM drives that identifies them to the computer's CPU. When the peripheral needs to communicate with the CPU, it will send an interrupt with that value. Problems will result if two or more peripherals are set to the same IRQ value. |
Jungle | An urban music genre originating in London in the 1990s. It contains elements of both breaks and Jamaican music. Usually around 160 bpm, it gets its name from the city it came from being referred to as an 'urban jungle'. |
Key follow | See envelope tracking. |
Keyboard scaling | A function with which the sound can be altered smoothly across the range of the keyboard by using key number as a modulation source. Level scaling changes the loudness of the sound, while filter scaling changes its brightness. |
kHz | kilohertz (thousands of Hertz). See Hertz. |
Latency | Latency is a delay that occurs in 'realtime' audio-processing. For instance, if you load a VST instrument and press a key there will be a slight pause before the note is sounded. This is due to the time the computer needs to receive an input-signal, process it, and then output it. Therefore true realtime processing is impossible as there will always be at least a short (~1-2ms) latency. Nowadays, most DAWs have a feature called "Automatic Delay Compensation" which gets rid of the latency introduced by the plugins used on the different tracks. Of course, sound only travels at roughly a foot per millisecond, so one could argue that even acoustic instruments have latency. |
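The dominant part of this delay is usually the audio buffer; a sketch of the arithmetic:

    def buffer_latency_ms(buffer_size, sample_rate):
        """Time it takes to fill one audio buffer, in milliseconds."""
        return 1000.0 * buffer_size / sample_rate

    print(round(buffer_latency_ms(256, 44100), 1))    # ~5.8 ms
    print(round(buffer_latency_ms(1024, 44100), 1))   # ~23.2 ms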
Layering | Sounding two or more voices, each of which typically has its own timbre, from each key depression. Layering can be accomplished within a single synthesizer, or by linking two synths together via MIDI and assigning both to the same MIDI channel. |
LFO | Low-frequency oscillator. An oscillator especially devoted to applications below the audible frequency range, and typically used as a control source for modulating a sound to create vibrato, tremolo, trills, and so on. |
Librarian | A piece of computer software that allows the user to load, store and organize patches and banks of patches. A program that also allows individual patch parameters to be edited is called an editor/librarian. |
Limiter | A limiter is a type of compressor that attempts to keep its output signal absolutely below a given level. Limiters are often used for mastering. |
Little-Endian | Refers to the least-significant-byte-first order in which the bytes of a multi-byte value (such as a 32-bit dword value) are stored. For example, a decimal value of 457,851 is represented as 0x0006FC7B in hexadecimal and would be stored in a file as: 0x7B, 0xFC, 0x06, 0x00. Intel processors (PC) use Little-Endian. The opposite byte ordering method is called Big-Endian. |
Loop | A piece of material that plays over and over. In a sequencer, a loop repeats a musical phrase. In a sampler, loops are used to allow samples of finite length to be sustained indefinitely. |
Lowpass Filter | A type of Filter used to eliminate high-range frequencies resulting in a rounder sound. |
Map | A table in which input values are assigned to outputs arbitrarily by the user on an item-by-item basis. |
Mapper | A device that translates MIDI data from one form to another in real time. |
Matrix modulation | A method of connecting modulation sources to destinations in such a way that any source can be sent to any combination of destinations. |
MCI | Media control interface. A multimedia specification designed to provide control of onscreen movies and peripherals like CD-ROM drives. |
Memory | A system or device for storing information -- in the case of musical devices, information about patches, sequences, waveforms, and so on. |
Merger | A MIDI accessory that allows two incoming MIDI signals to be combined into one MIDI output. |
Microphone | A microphone is a device that transforms sound waves into an electrical signal. There are many types of microphones, but the most common microphones for music production are the dynamic microphone and the condenser microphone. Both these types use a lightweight membrane, or diaphragm, that vibrates in sympathy with the air pressure waves of sound. The difference is in the technology used to transform the movement of the diaphragm into an alternating current. In the dynamic microphone, the diaphragm is connected to a lightweight wire coil that is suspended in a magnet. The vibrations of the diaphragm cause the coil to move in the magnetic field, which induces a current in the coiled wire that is more or less analogous to the sound acting on the membrane. The dynamic microphone does not require a power supply to work and can normally "take a beating". In the condenser microphone, the diaphragm is actually one of the plates of a capacitor, and the vibrations of the sound waves cause the capacitance of this capacitor to vary. The air between the plates acts as the dielectric, and when one plate moves, the properties of the dielectric change as a result. The condenser microphone requires a voltage to be set up across this capacitor, but since this voltage can be supplied via the signal cables, this is normally the job of the receiving equipment (mixer, recording device, etc.). This voltage is called phantom power and is usually set at +48V. The condenser microphone normally responds to higher frequencies than the dynamic microphone, and often has a higher sensitivity. |
MIDI | Musical Instrument Digital Interface, a standardized method by which devices such as synthesizers, samplers, and sound cards communicate. MIDI is a specification for the types of control signals (notes, controllers, program changes, and so on) that can be sent from one electronic music device to another; a MIDI command contains the information a sound generator needs to reproduce the desired sound, but carries no audio itself. |
MIDI clock | A timing reference signal sent over a MIDI cable at the rate of 24 clock pulses per quarter-note (ppq). |
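The interval between clock pulses follows directly from the tempo; a sketch of the arithmetic:

    def midi_clock_interval_ms(bpm):
        """Milliseconds between MIDI clock pulses at 24 ppq."""
        return 60000.0 / (bpm * 24)

    print(round(midi_clock_interval_ms(120), 2))   # 20.83 ms between pulses at 120 bpm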
MIDI Interface | A hardware interface that is either inserted into one of the computer's internal expansion slots or plugged into a computer (serial/parallel) port. It allows the computer to communicate with other MIDI instruments by adding one or more MIDI input and output ports. |
MIDI Mapper | A Windows applet that automatically maps (shifts the value of) channel, program change, and note numbers. For example, a map could cause all notes coming in on MIDI channel 3 to go out on MIDI channel 7. |
MIDI mode | Any of the ways of responding to incoming MIDI data. While four modes -- omni off/poly, omni on/poly, omni off/mono, and omni on/mono -- are defined by the MIDI specification, omni on/mono is never used, and at least two other useful modes have been developed -- multi mode for multitimbral instruments and multi-mono for guitar synthesizers. |
MIDI Out/Thru | A MIDI output port that can be configured either to transmit MIDI messages generated within the unit (Out) or to retransmit messages received at the MIDI In (Thru). |
MIDI thru | There are two types of MIDI thru. One, a simple hardware connection, is found on the back panels of many synthesizers. The thru jack in this case simply duplicates whatever data is arriving at the MIDI in jack. Sequencers have a second type, called software thru. In this case, data arriving at the in jack is merged with data being played by the sequencer, and both sets of data appear in a single stream at the out (not the thru) jack. A software thru is useful because it allows you to hook a master keyboard to the sequencer's MIDI input and a tone module to its output. You can then play the keyboard and hear the tone module, and the sequencer can also send its messages directly to the tone module. |
Mixer | A hardware or software device that combines multiple audio signals into one destination signal. Mixers usually provide control over the volume and/or stereo balance of each source signal. |
Mod wheel | A controller, normally mounted at the left end of the keyboard and played with the left hand, that is used for modulation. It is typically set up to add vibrato. See modulation, vibrato. |
Modular Hosts | Modular hosts differ from conventional digital audio sequencers in that they offer a graphically editable view of a signal path (audio and/or MIDI), allowing the user to establish routings via connecting 'wires' between components. This is analogous to being able to access the rear panel of a hardware rack, and frees one from the perceived restrictions of conventional host mixers. These hosts typically load third-party effects and instruments in VST format, but can also come with proprietary components unique to that particular host. A partial list of modular hosts: AudioMulch, Buzz, energyXT, Plogue Bidule, Reaktor, and Tracktion (through rack filters). |
Modular Synthesizer | The modular synthesizer is a type of synthesizer which consists of a set of separate 'units' which generate or process audio or control information, but has no fixed connections or routing between those units. The connections between modules, and thus the overall routing of both audio and control information, are made by the user. This might be done by wires (also known as patch cables), via patch pins on a patch matrix, by drawing lines between points on an onscreen user interface, or even by drop-down menus. Modular synthesisers exist in both analog and digital forms. There are some specific differences between digital and analog modular synthesisers, although the digital ones often mimic the operation of their analog counterparts. One of the primary differences is that digital versions can often use a flexible number of instances of any given module. However, there are usually only limited possibilities for routing audio and control signals in and out of a digital modular. With analog modulars, this is often as easy as patching an extra cable, and, despite the fact that there are some differences in the type of sockets used for the connections, and that the standard for control voltages sometimes differs, using modules from completely different systems together is entirely feasible. Another difference is that digital modulars often allow the creation of one's own 'higher-level' modules out of the existing modules supplied. Most analog modulars do not differentiate between audio signals and control signals, although some digital modulars do. |
Modulation | The process of using one signal (such as an LFO, envelope, or another oscillator) to vary some aspect of a sound, such as its pitch, amplitude, or timbre. In FM synthesis, one or more operators modulate others at audio rates to add complexity and texture to a sound. Many MIDI controllers and keyboards provide a specific wheel or slider for controlling the modulation of an instrument sound (often referred to as the mod wheel). |
Modulation Matrix | A modulation matrix is a part of a synthesizer where you can view or edit modulation sources and destinations. This is a necessity for complex synthesizers with many modulators, as they could quickly become unmanageable otherwise. In an analog modular synthesizer, the modulation matrix may simply be the interconnecting patch cords, but in digital or software synthesizers there may be charts, tables, or diagrams to show the relationships between parameters. |
Module | A hardware sound generator with no attached keyboard. A module can be either physically separate or integrated into a modular synthesizer, and is designed to make some particular contribution to the process of generating electronic sound. |
Monitors | Speakers with a flat frequency response designed for music production. Monitors should be a matched pair (check the product manual) and are intended for mixing/mastering music, not for casual listening. Their purpose is to "surgically" spot mistakes and sound-element placement in the mix, rather than to make music played through them sound good. |
Mono mode | One of the basic MIDI reception modes. In mono mode, an instrument responds monophonically to all notes arriving over a specific MIDI channel. |
Monophonic | Only one note of an instrument may be played at a time. An instrument that can play many notes at once is said to be Polyphonic. Monophonic instruments usually cut off the sound of the previously played note when a new one starts. |
Monotimbral | Only one instrument sound (Timbre) may be played at a time. Older synthesizers were often monotimbral before sequencers were invented; sequencers made it useful for a single instrument to play multiple parts at once. A monotimbral synthesizer may still be able to play multiple notes of the one instrument sound simultaneously. |
MTC | MIDI time code. MTC is a way of transmitting SMPTE timing data over a MIDI cable. See SMPTE time code. |
Multi mode | A MIDI reception mode in which a multitimbral module responds to MIDI input on two or more channels and maintains musical independence between the channels, typically playing a different patch on each channel. |
Multiband Compression | Multiband compression is a process by which an audio signal is separated into different frequency bands (usually two or three). Each frequency band is then passed through a compressor individually, and the results are finally summed back together. |
Multisample | The distribution of several related samples at different pitches across the keyboard. Multisampling can provide greater realism in sample playback (wavetable) synthesis, since the individual samples don't have to be transposed over a great distance. |
Multitimbral | More than one instrument sound (Timbre) may be played at the same time. Most modern synthesisers, samplers and sound cards have this capability. A musical device that is not multitimbral is said to be Monotimbral. |
Native Processing | This describes any processing which is done by the host computer's CPU, as opposed to an outboard processing system such as Pro Tools' TDM system. In audio this most commonly takes the form of DSP for effects and soft-synths. |
Normalize | To boost the level of a waveform to its maximum amount short of clipping (distortion). This maximizes resolution and minimizes certain types of noise. |
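A sketch of peak normalization on a list of float samples (the helper is illustrative):

    def normalize(samples, target_peak=1.0):
        """Scale samples so the loudest one reaches target_peak."""
        peak = max(abs(s) for s in samples)
        if peak == 0:
            return list(samples)              # silence stays silence
        gain = target_peak / peak
        return [s * gain for s in samples]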
Nyquist frequency | The highest frequency that can be reproduced accurately when a signal is digitally encoded at a given sample rate. Theoretically, the Nyquist frequency is half of the sampling rate. For example, when a digital recording uses a sampling rate of 44.1kHz, the Nyquist frequency is 22.050kHz. If a signal being sampled contains frequency components that are above the Nyquist limit, aliasing will be introduced in the digital representation of the signal unless those frequencies are filtered out prior to digital encoding. See aliasing, brick-wall filter. |
Octave | The most basic musical interval, and the second harmonic of the natural scale, which represents a doubling of frequency. A string vibrating at 440 Hz will produce the octave, 880 Hz, if you divide it in half (for instance by pressing it down at the 12th fret on a guitar). The diatonic scales in the western harmonic system have 8 notes to the octave, hence the name. There are 12 semi-tones to an octave and in some middle-eastern scales, 24 quarter-tones. |
Omni mode | A MIDI reception mode in which a module responds to incoming MIDI channel messages no matter what their channel. |
OMS | Open Music System (formerly Opcode MIDI System). A real-time MIDI operating system for Macintosh applications (and slated to be integrated into Windows 95). OMS allows communication between different MIDI programs and hardware, so that, for example, a sequencer could interface with a librarian program to display synthesizer patch names -- rather than just numbers -- in the sequencer's editing windows. |
Operator | A term used in Yamaha's FM synthesizers to refer to the software equivalent of an oscillator, envelope generator, and envelope-controlled amplifier. |
Oscillator | A synthesis module used to create a cyclical waveform. An electronic sound source. In an analog synthesizer, oscillators typically produce regularly repeating fluctuations in voltage; that is, they oscillate. In a digital synth, an oscillator more typically plays back a complex waveform by reading the numbers in a wavetable. These simple waveforms may then be passed through other modules (Low Frequency Oscillator, Envelopes, etc.) to add some character. |
Ostinato | A musical term meaning a repeating pattern, often (but not always) in the bass instruments, around which the rest of the music builds itself. Although ostinato is a classical term, ostinato patterns are common in modern music, where they are better known as riffs or grooves. |
Output | An output is a signal that comes out of an electronic system. For example, the output for a reverb plugin would be the processed audio containing the reverb. |
Overdub | To record additional parts alongside (or merged with) previous tracks. Overdubbing enables "one-man band" productions, as multiple synchronized performances are recorded sequentially. |
Overtone | A whole-number multiple of the fundamental frequency of a tone. The overtones define the harmonic spectrum of a sound. See Fourier analysis, partial. |
Panning | "Panning" refers to the horizontal position a sound is perceived to emanate from. The panning field is generally thought of as a 180-degree plane from left to right, but with the advent of sophisticated psychoacoustic tools, panning sometimes refers to the position of a sound anywhere in space, above or below the listener, though this is more properly called sound positioning. As a compositional element, stereo audio makes it possible to pan different signals to the right and left, creating a more "transparent" and lively mix. Higher-pitched instruments are often panned to opposite sides so that neither is spectrally masked by the other. Lower-frequency tones are usually centered in the mix: panning your kick drum to one side tends to unbalance things, and low tones still spectrally mask each other even when panned (an elephant might enjoy low-frequency panning, but we have not the ears). Panning is an art that must be learned rather than taught; there is no one way to pan a track, but there are a lot of ways not to. |
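One common way to place a mono signal between two speakers is constant-power panning; the Python sketch below is a generic formulation (the names and the -1..+1 pan range are illustrative assumptions, not drawn from any specific mixer):

```python
import numpy as np

def constant_power_pan(mono, pan):
    """Pan a mono signal; pan runs from -1.0 (hard left) to +1.0 (hard right)."""
    angle = (pan + 1) * np.pi / 4          # map -1..+1 onto 0..pi/2
    left = mono * np.cos(angle)
    right = mono * np.sin(angle)
    return np.column_stack([left, right])  # stereo pair; equal power at the centre

signal = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
stereo = constant_power_pan(signal, pan=-0.5)   # somewhat left of centre
```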
Parallel interface | A connection between two pieces of hardware in which several data lines carry information at the same time. Compare with serial interface. |
Parameter | A user-adjustable quantity that governs some aspect of a device's performance. Normally, the settings for all of the parameters that make up a synthesizer patch can be changed by the user and stored in memory, but the parameters themselves are defined by the operating system and cannot be altered. |
Parametric Equalizer | A type of Equalizer (EQ) that provides control over each filter's frequency and the amount of cut or boost applied by each filter. Typically, parametric equalizers provide three to four filters that work in parallel, each one covering a different region of the spectrum (e.g. low, mid, high). While parametric equalizers generally have fewer filters than a Graphic Equalizer, they are more flexible and provide finer control, due to the adjustability of the filtered frequencies. |
Partial | One of the sine-wave components (the fundamental, an overtone, or a tone at some other frequency) of a complex tone. See overtone. |
Patch | Refers to an instrument sound, program or voice on a synthesizer or sampler. The term comes from the roots of hardware synthesis, where physical cables were used to connect and route signals in a matrix to create a unique sound. |
Patch map | A map with which any incoming MIDI program change message can be assigned to call up any of an instrument's patches (sounds). See map, MIDI Mapper. |
PCM | Pulse code modulation -- a standard method of encoding analog audio signals in digital form. |
Percentage quantization | A method of quantization in which notes recorded into a sequencer with uneven rhythms are not shifted all the way to their theoretically perfect timings but instead are shifted part of the way, with the amount of shift being dependent on the user-selected percentage (quantization strength). See quantization. |
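A minimal sketch of the idea in Python (the tick values, grid size and function name are hypothetical, not taken from any particular sequencer):

```python
def quantize(time_ticks, grid=96, strength=0.5):
    """Move a note part of the way toward the nearest grid line.

    strength=1.0 snaps fully to the grid; 0.0 leaves the timing untouched.
    """
    nearest = round(time_ticks / grid) * grid
    return time_ticks + (nearest - time_ticks) * strength

# A note played at tick 110 against a 96-tick grid:
quantize(110, grid=96, strength=1.0)   # -> 96    (hard quantize)
quantize(110, grid=96, strength=0.5)   # -> 103.0 (shifted half way back to the grid)
```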
Phaser | A phaser passes the input signal through a series of all-pass filters, shifting its phase, and mixes the result back with the original, creating a moving series of notches in the frequency spectrum. The amount of phase shift is modulated by an LFO to create the characteristic undulating, whooshing effect. Stereo phasers often phase-shift each side of the stereo signal differently. Phasers are 'classic' effects, used on guitars, keyboards, vocals and just about anything else. |
Physical modeling synthesis | A type of sound synthesis performed by computer models of instruments. This technique emulates the impulse patterns of real-world instruments using a software model. These models are sets of complex equations that describe the physical properties of an instrument (such as the shape of the bell and the density of the material) and the way a musician interacts with it (blow, pluck, or hit, for example). |
Pitch-bend | A shift in a note's pitch, usually in small increments, caused by the movement of a pitch-bend wheel or lever; also, the MIDI data used to create such a shift. See bend. |
Pitch-shift | To change the pitch of a sound without changing its duration, as opposed to pitch-transpose, which changes both. Some people use the two terms interchangeably. |
Plug-In | A "client program" that is used to expand the functionality of a "host program", such as a sequencer or digital audio editor. The host provides the plug-in with some type of input data, such as digital audio samples, which the plug-in processes to generate new output, such as effected digital audio. A plug-in often runs seamlessly from within a host program, appearing to be part of the standard interface. One plug-in can be used by multiple host programs that share the same plug-in format. Two popular digital audio plug-in formats used in computer music are DirectX and VST. |
Pole | A portion of a filter circuit. The more poles a filter has, the more abrupt its cutoff slope will be. Each pole causes a slope of 6dB per octave; typical filter configurations are two-pole (12dB/oct) and four-pole (24dB/oct). See rolloff slope. |
Poly mode | A MIDI reception mode in which a module responds to note messages on only one channel, and plays as many of these notes at a time (polyphonically) as it can. |
Poly pressure | Polyphonic pressure. (Also called key pressure.) A type of MIDI channel message in which each key senses and transmits pressure data independently. Compare with channel pressure. |
Polyphonic | More than one note of an instrument sound may be played at the same time. Hardware and software synthesizers usually offer from 1 to 128 notes of polyphony. The number specifies exactly how many notes may be played at once before previously played notes are cut off. An instrument that can play only one note at a time is said to be Monophonic. General MIDI-compliant synthesizers are required to provide 24 voices of polyphony. Compare with multitimbral. |
Polyphony | The number of voices (notes) a device can produce simultaneously. |
Polytimbral | More than one instrument sound (Timbre) may be played at the same time. Multitimbral is the more commonly used term for this functionality. |
Portamento | See glide. |
Potentiometer (Pot) | A variable resistor (rotary or linear) used to control volume, tone, or other function of an electronic device. Commonly attached to a knob or slider used to adjust some aspect of the signal being passed through it, or to send out a control signal corresponding to its position. |
Ppq | Pulses per quarter-note; the usual measure of a sequencer's clock resolution. |
Preset | (1) A factory-programmed patch that cannot be altered by the user. (2) Any patch. Note: Some manufacturers make distinctions between presets, programs, and/or patches, each of which may contain a different set of parameters. |
Pressure sensitivity | See aftertouch, channel pressure, poly pressure. |
Program change | A MIDI message that causes a synthesizer or other device to switch to a new program (also called preset, patch) contained in its memory. |
Programmable | Equipped with software that enables the user to create new sounds or other assignments by altering parameter settings and storing the new settings in memory. An individual control parameter is said to be programmable if its setting can be stored separately with each individual patch. |
Psychoacoustic | Psychoacoustics is the study of subjective human perception of sounds. Alternatively it can be described as the study of the psychological correlates of the physical parameters of acoustics. |
Pulse Wave | A wave with a rectangular shape, meaning that the amplitude changes from negative to positive in an ON/OFF manner. Its timbre and harmonic content are determined by the width of the pulse, i.e. the ratio of how long the wave is ON compared to how long it is OFF. A Square Wave is a pulse wave with a ratio of 50:50, and the two names are often used interchangeably, leading to a good deal of confusion. |
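A naive (non-band-limited, and therefore aliasing-prone) pulse oscillator can be sketched in Python as follows; the names and default values are illustrative assumptions:

```python
import numpy as np

def pulse_wave(freq, duty=0.5, sr=44100, seconds=1.0):
    """Naive pulse wave; duty is the ON fraction of each cycle."""
    t = np.arange(int(sr * seconds)) / sr
    phase = (t * freq) % 1.0               # position within the current cycle, 0..1
    return np.where(phase < duty, 1.0, -1.0)

square = pulse_wave(220, duty=0.5)         # 50:50 pulse = square wave
thin   = pulse_wave(220, duty=0.1)         # narrow pulse, thinner and brighter timbre
```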
Quantization | A function found on sequencers and drum machines that causes notes played at odd times to be "rounded off" to regular rhythmic values. See percentage quantization. |
Quantization noise | One of the types of error introduced into an analog audio signal by encoding it in digital form. The digital equivalent of tape hiss, quantization noise is caused by the small differences between the actual amplitude at each sample point and the nearest value the analog-to-digital converter can represent at its bit resolution. |
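A minimal sketch of where this noise comes from, assuming an ideal converter (the helper name and bit depth are illustrative): quantizing a sine to fewer bits and subtracting the result leaves the error signal, whose level drops by roughly 6 dB for every extra bit.

```python
import numpy as np

def quantize_to_bits(signal, bits):
    """Round a -1..+1 float signal to the nearest step of an idealized converter."""
    levels = 2 ** (bits - 1)
    return np.round(signal * levels) / levels

t = np.linspace(0, 1, 48000, endpoint=False)
x = np.sin(2 * np.pi * 440 * t)
x8 = quantize_to_bits(x, 8)
error = x - x8                             # this residue is the quantization noise
snr_db = 10 * np.log10(np.mean(x**2) / np.mean(error**2))   # roughly 6 dB per bit
```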
Quantized | Set up to produce an output in discrete steps. |
RAM | Random access memory. RAM is used for storing user-programmed patch parameter settings in synthesizers, and sample waveforms in samplers. A constant source of power (usually a long-lasting battery) is required for RAM to maintain its contents when power is switched off. Compare with ROM. |
Real time | Occurring at the same time as other, usually human, activities. In real-time sequence recording, timing information is encoded along with the note data by analyzing the timing of the input. In real-time editing, changes in parameter settings can be heard immediately, without the need to play a new note or wait for computational processes to be completed. |
Receptor | A hardware device made by Muse Research that runs VST virtual instruments and effects. |
Reconstruction filter | A lowpass filter on the output of a digital-to-analog converter that smoothes the staircase-like changes in voltage produced by the converter in order to eliminate clock noise from the output. |
Release | The final period of an Envelope during which a sound's attribute (such as volume) decreases from the Sustain level to 0 (silence). The release period is usually started upon releasing a keyboard's note. This period of the envelope defines how a sound finishes off. A long release time causes a sound's attribute to fade away slowly, while a short release time causes it to drop out quickly. |
Release velocity | The speed with which a key is raised, and the type of MIDI data used to encode that speed. Release velocity sensing is rare but found on some instruments. It is usually used to control the rate of the release segments of the envelope(s). |
Remix | Originally coming from dub versions, a remix is a recording that has been mixed down again for one reason or another. This usually involves material being taken away from, or added to, the recording, with different parts emphasised, and is usually done for club use. Remixes are usually not made by the original artist. By nature a remix can be very similar to the original (perhaps a long version or 'club mix') or very different, using only small elements of the original (like remixes by Aphex Twin). Remix culture seems to have stemmed from 12" vinyl, DJs and B-sides in the 1980s, although no one person pioneered the idea, and bands have been creating alternate versions of their songs to play live for far longer than that. |
Resample | To recalculate the samples in a sound file at a different Sample Rate than the one at which the file was originally recorded. If a sample is resampled at a lower rate, sample values are removed from the sound file, decreasing its size but also decreasing its available frequency range and possibly introducing Aliasing. Resampling to a higher sample rate interpolates extra sample values into the sound file; this increases the size of the file but may not increase its quality (depending on the algorithm used). |
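A deliberately naive Python sketch using linear interpolation (a real converter would low-pass filter first to avoid aliasing; the function name and example values are hypothetical):

```python
import numpy as np

def resample_linear(audio, old_rate, new_rate):
    """Naive resampling by linear interpolation, with no anti-alias filtering."""
    duration = len(audio) / old_rate
    old_times = np.arange(len(audio)) / old_rate
    new_times = np.arange(int(duration * new_rate)) / new_rate
    return np.interp(new_times, old_times, audio)

one_second = np.random.randn(44100)
downsampled = resample_linear(one_second, 44100, 22050)   # half as many samples
```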
Resolution | The fineness of the divisions into which a sensing or encoding system is divided. The higher the resolution, the more accurate the digital representation of the original signal will be. |
Resonance | A function on a filter in which a narrow band of frequencies (the resonant peak) becomes relatively more prominent. If the resonant peak is high enough, the filter will begin to oscillate, producing an audio output even in the absence of input. Filter resonance is also known as emphasis and Q. It is also referred to in some older instruments as regeneration or feedback, because feedback was used in the circuit to produce a resonant peak. |
Reverb | A type of digital signal processing that simulates the natural reverberations (sound reflections) that occur in different rooms and environments, such as a concert hall, to create an ambience or sense of spaciousness. Reverberation contains the same frequency components as the sound being processed. |
Rewire | ReWire is a system for transferring audio data between two computer applications in real time. Basically, you could view ReWire as an "invisible cable" that streams audio from one computer program into another. ReWire was developed by Propellerhead Software AB in 1998 and first appeared in Propellerheads' ReBirth RB-338 and Steinberg's Cubase VST, allowing the two programs to communicate in a way that hadn't been possible before. Since then, version 2 of ReWire has been released, with several significant improvements and additions. Today, a number of software applications from different manufacturers support ReWire. |
Rex File | Rex files are beat sliced audio files in a proprietary format used primarily by Propellerhead's ReCycle software. Once created, these files can also be used natively within Cubase. |
RIFF | The Resource Interchange File Format is the storage structure commonly used for multimedia data on the Windows platform. It organizes data in chunks which each have a small header that describe the chunk type and size. This structure allows programs that do not recognize specific chunk types to skip over the unknown data and continue correctly processing known chunks in the file. Data chunks may contain smaller "sub-chunks" of data. In fact, all RIFF files are supposed to store all data chunks inside a master "RIFF" chunk that defines the type of resource data the file contains. WAVE and AVI files are examples of data stored in the RIFF format. |
Ring Modulation | Ring modulation is a simple multiplication of one waveform's amplitude values by another. The modulator's output is applied directly to the input of the carrier, without the DC offset that amplitude modulation uses to keep the modulator from going negative. Ring modulation creates a somewhat unpredictable, often clangorous sound, because neither the frequency of the carrier nor that of the modulator appears in the final output; instead the sidebands, the sum and difference frequencies, appear. |
Ring modulator | A special type of mixer that accepts two signals as audio inputs and produces their sum and difference tones at its output, but does not pass on the frequencies found in the original signals themselves. |
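A two-line Python illustration of the behaviour described in the two entries above (the frequencies are chosen arbitrarily): multiplying a 440 Hz carrier by a 110 Hz modulator leaves only the 330 Hz and 550 Hz sidebands.

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
carrier   = np.sin(2 * np.pi * 440 * t)    # 440 Hz
modulator = np.sin(2 * np.pi * 110 * t)    # 110 Hz

ring = carrier * modulator                 # output contains 330 Hz and 550 Hz,
                                           # but neither 440 Hz nor 110 Hz
```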
Rip | To extract or copy data from one format to another more useful format. The most common example is found in the phrase "CD Ripping" which means to copy audio tracks from an ordinary audio CD and save them to hard disk as a WAV, MP3 or other audio file, which can then be played, edited or written back to another CD. |
Rolloff slope | The steepness of a filter's attenuation beyond its cutoff frequency. Rolloff is generally measured in decibels (dB) per octave. A shallow slope, such as 6dB per octave, allows some frequency components beyond the cutoff frequency to be heard, but at a reduced volume. When the rolloff slope is steep (on the order of 24dB per octave), frequency components only a little beyond the cutoff frequency are reduced in volume so much that they fall below the threshold of audibility. See filter, pole. |
ROM | Read-only memory. A type of data storage whose contents cannot be altered by the user. An instrument's operating system, and in some cases its waveforms and factory presets, are stored in ROM. Compare with RAM. |
Rompler | A Sampler that has all its sample material built in, in what can be referred to as Read-Only Memory (hence the ROM in Rompler). Technically, to qualify as a full-fledged rompler the sample set must be somewhat comprehensive, carefully selected to provide a vast range of sounds that make the rompler versatile and programmable; otherwise it is merely a dedicated sample player capable of producing a limited sound palette. Romplers can also typically do more than mere sample playback of their waveforms, offering some degree of modulation and effects capability to modify the samples. Typical modulation capabilities include filters, LFOs, and envelopes. Effects capabilities may include reverb, distortion, delay, phasers, flangers, panning, stereo spread, compression, etc. |
Sample | A sound or short piece of audio stored digitally in a computer, synthesizer or Sampler. The word sample may refer to either a single moment in a digital audio stream (the smallest piece of data used to represent an audio signal at a given time) or a complete sound or digital audio stream made up of a collection of individual samples. |
Sample Rate | The resolution of digital audio that determines its sound quality. When audio is digitally recorded (digitized), it must be converted into a series of Samples which can be stored in memory or on disk. The sample rate defines how many samples are recorded per second of audio input and is measured in Hz (Hertz, cycles per second) or kHz (kilohertz, thousands of cycles per second). |
Sample-and-hold | A circuit on an analog synthesizer that, when triggered (usually by a clock pulse), looks at (samples) the voltage at its input and then passes this voltage on to its output unchanged, regardless of what the input voltage does in the meantime (the hold period), until the next trigger is received. In one familiar application, the input was a noise source and the output was connected to oscillator pitch, which caused the pitch to change in a random staircase pattern. The sample-and-hold effect is often emulated by digital synthesizers through an LFO waveshape called "random." |
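A minimal software emulation of the behaviour described above, with the clock expressed as a period in samples (the names and rates are illustrative assumptions):

```python
import numpy as np

def sample_and_hold(source, clock_period):
    """Hold each sampled value of `source` until the next clock pulse."""
    out = np.empty_like(source)
    held = source[0]
    for i, value in enumerate(source):
        if i % clock_period == 0:          # clock pulse: sample a new value
            held = value
        out[i] = held                      # otherwise keep holding the last one
    return out

noise = np.random.uniform(-1, 1, 44100)                    # noise in,
staircase = sample_and_hold(noise, clock_period=4410)      # random staircase out (10 steps/sec)
```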
Sampler | A hardware device or software application that uses Samples as its main method of generating its audio output. Samplers often use a number of samples together to create realistic-sounding reproductions of real sounds and musical instruments. |
Sampling | The process of encoding an analog signal in digital form by reading (sampling) its level at precisely spaced intervals of time. See sample, sampling rate. |
Scrub | To move backward and forward through an audio waveform under manual control, in order to find a precise point in the wave for editing purposes. |
SCSI | Small Computer Systems Interface, a high-speed communications protocol that allows computers, samplers, and disk drives to communicate with one another. Pronounced "scuzzy." |
SDII | Sound Designer II, an audio file format. The native format of Digidesign's Sound Designer II (Macintosh) graphic audio waveform editing program. |
Sequence | A set of music performance commands (notes and controller data) stored in a sequencer. |
Sequencer | A hardware device, software application or module used to arrange (ie. sequence) timed events into some order. In digital audio and music, sequencers are used to record and arrange MIDI and/or audio events into patterns and musical compositions. |
SFI | A file extension specifying Turtle Beach's SoundStage audio format. Typically encountered as FILENAME.SFI. |
Sidebands | Frequency components outside the natural harmonic series, generally introduced to the tone by using an audio-range wave for modulation. |
Sidechain | Sidechain (and sidechaining) is a process by which one audio input is used to determine the amount of an effect applied to another audio input. The determining audio input is called the sidechain, and the process itself is called sidechaining. In some cases this is also called "ducking", particularly in the context of delay effects. In ducking, the delay works inversely with the sidechain input so that the delays sound after or between certain levels in the original input. The most common use of sidechaining is when using one or more compressors to maintain volume levels across several different audio inputs or streams. |
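A crude illustration of the idea, not modelled on any particular compressor (the threshold, depth and smoothing values are arbitrary assumptions): the level of the sidechain input is tracked with a simple envelope follower and used to pull down the gain of the main signal.

```python
import numpy as np

def duck(main, sidechain, threshold=0.2, depth=0.8, smooth=0.999):
    """Reduce `main` whenever the sidechain signal is loud (a crude ducker)."""
    out = np.empty_like(main)
    env = 0.0
    for i in range(len(main)):
        env = max(abs(sidechain[i]), env * smooth)   # envelope follower on the sidechain
        gain = 1.0 - depth if env > threshold else 1.0
        out[i] = main[i] * gain                      # attenuate main while sidechain is hot
    return out
```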
Signal-to-Noise Ratio | "Signal" refers to the useful or "pure" information found in an audio stream or other medium, and "noise" to anything else. The ratio of the two is usually expressed logarithmically, in decibels. Signal-to-Noise Ratio is sometimes abbreviated as SNR, s/n ratio or s:n ratio. A high SNR translates to a "cleaner" signal. |
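Given separate signal and noise measurements, the ratio in decibels follows from their average power; the short Python sketch below uses 10·log10 of the power ratio (the example levels are arbitrary):

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels, from the two signals' average power."""
    return 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

clean = np.sin(2 * np.pi * 1000 * np.arange(48000) / 48000)
hiss  = 0.01 * np.random.randn(48000)
snr_db(clean, hiss)   # roughly 37 dB for these levels
```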
Sine wave | A signal put out by an oscillator in which the voltage or equivalent rises and falls smoothly and symmetrically, following the trigonometric formula for the sine function. Sub-audio sine waves are used to modulate other waveforms to produce vibrato and tremolo. Audio-range sine waves contain only the fundamental frequency, with no overtones, and thus can form the building blocks for more complex sounds. |
Single-step mode | A method of loading events (such as notes) into memory one event at a time. Also called step mode and step-time. Compare with real time. |
SMP | Turtle Beach's SampleVision audio file format. Typically encountered as FILENAME.SMP. |
SMPTE time code | A timing reference signal developed by the Society of Motion Picture & Television Engineers and used for synchronizing film and videotape to audio tape and software-based playback systems. Pronounced "simp-tee." See frame. |
Snapshot automation | A form of mixing automation (frequently MIDI-controlled) in which the controlling device records the instantaneous settings (the snapshot) for all levels and pan pots, and recalls these settings on cue. |
SND | Sound resource. A Macintosh audio file format. |
Sostenuto pedal | A pedal found on the grand piano and mimicked on some synthesizers, with which notes are sustained only if they are already being held on the keyboard at the moment when the pedal is pressed. Compare with sustain pedal. |
Sound Card | A hardware interface that is either built into a computer's motherboard or inserted into one of the computer's internal expansion slots. Sound cards allow the computer to play digital audio and/or musical instrument sounds. Many sound cards also provide a MIDI Interface. |
SPDIF | S/PDIF (Sony/Philips Digital InterFace) is a serial interface for transferring digital audio between devices such as CD and DVD players and amplifiers. The consumer version of the AES/EBU interface, S/PDIF uses unbalanced 75-ohm coaxial cable up to 10 meters, terminated with RCA connectors. It can also be carried over optical fiber using a Toslink (Toshiba Link) connector. |
SPP (song position pointer) | A type of MIDI data that tells a device how many sixteenth-notes have passed since the beginning of a song. An SPP message is generally sent in conjunction with a continue message in order to start playback from the middle of a song. |
Spring Reverb | A type of reverb in which the signal is passed through a spring and the vibrations it creates are picked up to generate the reverberation. The result sounds quite artificial and can have a very long decay; however, it is often favoured in Jamaican and British urban music. |
Step input | In sequencing, a technique that allows you to enter notes one step at a time. (Also called step recording.) Common step values are sixteenth- and eighth-notes. After each entry, the sequencer's clock (position in the sequence) will advance one step, then stop, awaiting new input. Recording while the clock is running is called real-time input. |
Subtractive synthesis | The technique of arriving at a desired tone color by filtering waveforms rich in harmonics. Subtractive synthesis is the type generally used on analog synthesizers. Compare with FM synthesis, sampling. |
Sustain | The period of an Envelope during which a sound's attribute (such as volume) holds at a constant level. The sustain period starts at the end of the Decay period and holds until the Release period is started (usually by a keyboard note release). Unlike the other periods of an envelope, the sustain period does not have a slope because it must be capable of holding indefinitely (as long as a keyboard note is pressed). |
Sync | Synchronization. Two devices are said to be in sync when they are locked together with respect to time, so that the events generated by each of them will always fall into predictable time relationships. |
Sync track | A timing reference signal recorded onto tape. See SMPTE time code, FSK. |
Syncopate | To shift the regular accent of a tone or beat by beginning on an unaccented beat and continuing through the next accented beat. |
Synthesizer | A musical instrument that generates sound electronically and is designed according to certain principles developed by Robert Moog and others in the 1960s. A synthesizer is distinguished from an electronic piano or electronic organ by the fact that its sounds can be programmed by the user, and from a sampler by the fact that the sampler allows the user to make digital recordings of external sound sources. |
System real-time | A type of MIDI data that is used for timing reference. Because of its timing-critical nature, a system real-time byte can be inserted into the middle of any multi-byte MIDI message. System real-time messages include MIDI clock, start, stop, continue, active sensing, and system reset. |
System-common | A type of MIDI data used to control certain aspects of the operation of the entire MIDI system. System-common messages include song position pointer, song select, tune request, and end-of-system-exclusive. |
System-exclusive (sys-ex) | A type of MIDI data that allows messages to be sent over a MIDI cable that will be responded to only by devices of a specific type. Sys-ex data is used most commonly for sending patch parameter data to and from an editor/librarian program. |
THD | Total harmonic distortion. An audio measurement specification used to determine the accuracy with which a device can reproduce an input signal at its output. THD describes the cumulative level of the harmonic overtones that the device being tested adds to an input sine wave. THD+n is a specification that includes both harmonic distortion of the sine wave and nonharmonic noise. |
Timbre | The characteristics that differentiate one instrument, voice or sound from another. It can be thought of as the texture or characteristics that define a sound. Notes of the same pitch and volume may have a different timbre. In electronic music, timbre sometimes refers to a synthesizer voice or patch (see Multitimbral). |
Time code | A type of signal that contains information about location in time. Used for a synchronization reference when synchronizing two or more machines such as sequencers, drum machines, and tape decks. |
Time Variant Amplifier | Alters the volume of an audio signal over a period of time, often based on an Envelope. |
Time Variant Filter | Alters the brightness, thickness and other aspects of an audio signal over a period of time using filters, often based on an Envelope. |
Time Variant Pitch | Alters the pitch of an audio signal over a period of time, often based on an Envelope. |
Touch-sensitive | Equipped with a sensing mechanism that responds to variations in key velocity or pressure by sending out a corresponding control signal. See velocity, aftertouch. |
Track | One of a number of independent memory areas in a sequencer. By analogy with tape tracks, sequencer tracks are normally longitudinal with respect to time and play back in sync with other tracks. |
Transient | Any of the non-sustaining, non-periodic frequency components of a sound, usually of brief duration and higher amplitude than the sustaining components, and occurring near the onset of the sound (attack transients). |
Tremolo | A periodic change in amplitude, usually controlled by an LFO, with a periodicity of less than 20Hz. Compare with vibrato. |
VCA (Voltage-Controlled Amplifier) | An audio signal amplifier whose output is controlled by voltage (instead of a Potentiometer (Pot)). VCAs can be used to alter the amplitude of a signal output from a Voltage-Controlled Oscillator. |
VCD | A Video CD is a compact disc which stores video and audio compressed using MPEG-1 file format technology. Movies are generally compressed to around 352x240 pixels (NTSC) resulting in about 1 GB of data, which spans over two CDs. While they aren't common in the USA, VCDs are more common in other countries where many popular electronics companies sell dedicated VCD players. DVD technology surpasses the quality of VCD technology, mostly due to the increased storage capacity of DVD media. |
VCF (Voltage-Controlled Filter) | A filter whose cutoff frequency and resonant frequency are adjusted using a control voltage. VCFs are used to filter the audio signals generated by VCOs in an analog synthesizer in order to create more interesting and textured sounds. |
VCO (Voltage-Controlled Oscillator) | An analog circuit that generates an electrical waveform, such as a Sine, Saw or Square wave, whose pitch is determined by a control voltage. VCOs are used by older analog synthesizers to generate the base sounds, which are then altered by Voltage-Controlled Amplifiers and Voltage-Controlled Filters. |
Velocity | A type of MIDI data (range 1 to 127) usually used to indicate how quickly a key was pushed down (attack velocity) or allowed to rise (release velocity). Note: A note-on message with a velocity value of 0 is equivalent to a note-off message. |
Velocity curve | A map that translates incoming velocity values into other velocities in order to alter the feel or response of a keyboard or tone module. |
Velocity sensitivity | A type of touch sensitivity in which the keyboard measures how fast each key is descending. Compare with pressure sensitivity. |
Vibrato | A periodic change in frequency, often controlled by an LFO, with a periodicity of less than 20Hz. Compare with tremolo. |
Vinyl | An analog method of storing sound. Vinyl, or a record, is a disc of plastic (vinyl) pressed with a tightly wound spiral groove. The modulation of the groove encodes the amplitude of the recorded signal's waveform. The needle (often a diamond) on the record player's stylus follows the groove as the record spins and converts its movement into an electrical signal, which is amplified before going to the speakers. |
Vocoder | An audio effect that produces "robotic"-sounding results when processing vocal input. A vocoder analyses the spectrum of one signal (typically a voice) using a bank of band-pass filters and envelope followers, and uses the result to shape a second carrier signal (typically a synthesizer tone). Examples can be found in disco and modern electronic music, with results reminiscent of the voices of the "Transformers". |
Voice | (1) An element of synthesizer circuitry capable of producing a note. The polyphonic capability of a synthesizer is defined by how many voices it has. See polyphony. (2) In Yamaha synthesizers, a patch (sound). |
Voice channel | A signal path containing (at a minimum) an oscillator and VCA or their digital equivalent, and capable of producing a note. On a typical synthesizer, two or more voice channels, each with its own waveform and parameter settings, can be combined to form a single note. |
Voice stealing | A process in which a synthesizer that is being required to play more notes than it has available voices switches off some currently sounding voices (typically those that have been sounding longest or are at the lowest amplitude) in order to assign them to play new notes. |
VST | Developed by Steinberg and first launched in 1996, VST (Virtual Studio Technology) creates a full, professional studio environment on your Windows PC or Mac. VST allows the integration of virtual effect processors (VSTfx) and instruments (VSTi) into your digital audio environment. These can be software recreations of hardware effect units and instruments or new creative effect components in your VST system. All are integrated seamlessly into the host application. Because these connections are virtual, there is no need for messy audio or MIDI cabling. These VST modules have the sound quality of the best hardware units, yet are far more flexible. All functions of a VST effect processor or instrument are directly controllable and automatable, either with a mouse or with an external hardware controller. |
VST Plugin | A program that uses Steinberg's VST technology to obtain digital audio samples which are then manipulated by applying reverb, compression or some other type of audio signal effect. The output signal may be rendered off-line or generated in real-time while the plug-in's host program performs playback. See Plug-In for more details. |
Waveform | A signal, either sampled (digitally recorded) or periodic, being generated by an oscillator. Also, the graphic representation of this signal, as on a computer screen. Each waveform has its own unique harmonic content. See oscillator. |
Wavetable lookup | The process of reading the numbers in a wavetable (not necessarily in linear order from beginning to end) and sending them to a voice channel. |
Wavetable synthesis | A common method for generating sound electronically on a synthesizer or PC. Output is produced using a table of sound samples--actual recorded sounds--that are digitized and played back as needed. By continuously rereading samples and looping them together at different pitches, highly complex tones can be generated from a minimum of stored data without overtaxing the processor. |
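A minimal wavetable oscillator in Python, looping one stored cycle at an arbitrary pitch (nearest-sample lookup without interpolation, for brevity; all names and values are illustrative assumptions):

```python
import numpy as np

def wavetable_osc(table, freq, sr=44100, seconds=1.0):
    """Loop through a single-cycle wavetable at the requested pitch."""
    n = int(sr * seconds)
    phase = (np.arange(n) * freq / sr) % 1.0        # fractional position within the cycle
    index = (phase * len(table)).astype(int)        # nearest-sample lookup, no interpolation
    return table[index]

table = np.sin(2 * np.pi * np.arange(2048) / 2048)  # one stored cycle (here, a sine)
tone = wavetable_osc(table, 261.63)                 # middle C generated from the same small table
```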
Wet | Consisting entirely of processed sound. The output of an effects device is 100% wet when only the output of the processor itself is being heard, with none of the dry (unprocessed) signal. Compare with dry. |
Wheel | A controller, normally mounted at the left end of the keyboard and played with the left hand, that is used for pitch-bending or modulation. |
WMDM | Windows Media Device Manager is a Microsoft software component that enables Windows applications to share and transfer files to and from non-PC devices, such as portable MP3 players, in a standardized way. The use of a common software component enables greater software and hardware compatibility and support. |
Word | A single number (sample word) that represents the instantaneous amplitude of a sampled sound at a particular moment in time. In 8-bit recording, a sample word contains one byte; in 16-bit recording, each word is a two-byte number. |
Workstation | A synthesizer or sampler in which several of the tasks usually associated with electronic music production, such as sequencing, effects processing, rhythm programming, and data storage on disk, can all be performed by components found within a single physical device. |
XLR | A standardised interconnect format, often used as a more robust alternative to jack sockets. They are based on pins, and usually include a mechanism for 'locking' the connector in place. XLR connectors typically used in audio are 3-pin, and are primarily used in situations where a balanced interconnection is required. Many microphones relying on phantom power also use XLR connectors. |
Z-transform | The z-transform is a mathematical transformation commonly used to understand the frequency response of a filter. Sounds exist in the 'time domain', i.e. as a series of amplitudes that vary over time; filters, however, operate over a range of frequencies, and you generally need to know how they behave across all of those frequencies. The z-transform re-expresses a signal, or a filter's difference equation, as a function of a complex variable z, in effect removing the time domain (which you are not interested in when looking at filters) so you can focus on how the filter acts on all frequencies, not just those present in a given input. What is the 'z'? It is a complex number that combines frequency and amplitude (growth or decay) in a single value, usually plotted as a polar coordinate; frequency rises as the 'clock hand' representing z sweeps anticlockwise around the unit circle from 0 Hz towards the Nyquist frequency. The zeros and poles of the filter's transfer function, plotted in the same plane, then show where (i.e. at which frequencies and by how much) signals are amplified or attenuated. |
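For reference, the definition being described can be written compactly; evaluating the transfer function on the unit circle, z = e^{jω}, yields the filter's frequency response (this is the standard DSP formulation, not anything specific to this glossary):

```latex
X(z) = \sum_{n=-\infty}^{\infty} x[n]\, z^{-n},
\qquad
H\!\left(e^{j\omega}\right) = H(z)\big|_{z = e^{j\omega}}
```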
Zero crossing | A point at which a digitally encoded waveform crosses the center of its amplitude range. |
Zone | A contiguous set of keys on the keyboard. Typically, a single sound or MIDI channel is assigned to a given zone. |