

A synthesizer or synthesiser (often abbreviated to synth) is an electronic musical instrument that generates audio signals that may be converted to sound. Synthesizers may imitate traditional musical instruments such as piano, flute, and vocals, or natural sounds such as ocean waves, or they may generate novel electronic timbres. They are often played with a musical keyboard, but they can be controlled via a variety of other devices, including music sequencers, instrument controllers, fingerboards, guitar synthesizers, wind controllers, and electronic drums. Synthesizers without built-in controllers are often called sound modules and are controlled via USB, MIDI or CV/gate using a controller device, often a MIDI keyboard.

Synthesizers use various methods to generate electronic signals (sounds).

Among the most popular waveform synthesis techniques are subtractive synthesis, additive synthesis, wavetable synthesis, frequency modulation synthesis, phase distortion synthesis, physical modeling synthesis and sample-based synthesis.

Synthesizers were first used in pop music in the 1960s.

In the late 1970s, synths were used in progressive rock, pop and disco. In the 1980s, the invention of the relatively inexpensive Yamaha DX7 synth made digital synthesizers widely available. 1980s pop and dance music often made heavy use of synthesizers. Synthesizers are used in genres such as pop, hip hop, metal, rock, dance, and contemporary classical music.


The beginnings of the synthesizer are difficult to trace, as it is difficult to draw a distinction between synthesizers and some early electric or electronic musical instruments.[4][5]

Early electric instruments

One of the earliest electric musical instruments, the Musical Telegraph, was invented in 1876 by American electrical engineer Elisha Gray. He accidentally discovered the sound generation from a self-vibrating electromechanical circuit, and invented a basic single-note oscillator. This instrument used steel reeds with oscillations created by electromagnets transmitted over a telegraph line. Gray also built a simple loudspeaker device into later models, consisting of a vibrating diaphragm in a magnetic field, to make the oscillator audible.[6][7] This instrument was a remote electromechanical musical instrument that used telegraphy and electric buzzers that generated fixed timbre sound. Though it lacked an arbitrary sound-synthesis function, some have erroneously called it the first synthesizer.[4][5]

In 1897 Thaddeus Cahill was granted his first patent for an electronic musical instrument, which by 1901 he had developed into the Telharmonium capable of additive synthesis.[8] Cahill's business was unsuccessful for various reasons, but similar and more compact instruments were subsequently developed, such as electronic and tonewheel organs including the Hammond organ, which was invented in 1935.[8]

In 1906, American engineer Lee de Forest invented the first amplifying vacuum tube, the Audion,[10] whose amplification of weak audio signals contributed to advances in sound recording, radio and film,[8] and to the invention of early electronic musical instruments including the theremin, the ondes Martenot,[12] and the trautonium.[13] Most of these early instruments used heterodyne circuits to produce audio frequencies, and were limited in their synthesis capabilities. The ondes Martenot and trautonium were continuously developed for several decades, eventually acquiring qualities similar to those of later synthesizers.

Graphical sound

In the 1920s, Arseny Avraamov developed various systems of graphic sonic art,[14] and similar graphical sound and tonewheel systems were developed around the world.[15] In 1938, USSR engineer Yevgeny Murzin designed a compositional tool called ANS, one of the earliest real-time additive synthesizers using optoelectronics. Although his idea of reconstructing a sound from its visible image was apparently simple, the instrument was not realized until 20 years later, in 1958, as Murzin was, "an engineer who worked in areas unrelated to music".[16]

Subtractive synthesis and polyphonic synthesizer

In the 1930s and 1940s, the basic elements required for the modern analog subtractive synthesizers — electronic oscillators, audio filters, envelope controllers, and various effects units — had already appeared and were utilized in several electronic instruments.

The earliest polyphonic synthesizers were developed in Germany and the United States. The Warbo Formant Orgel, developed by Harald Bode in Germany in 1937, was a four-voice key-assignment keyboard with two formant filters and a dynamic envelope controller.[17][18]

The Novachord, released in 1939, was an electronic keyboard that used twelve sets of top-octave oscillators with octave dividers to generate sound, with vibrato, a resonator filter bank and a dynamic envelope controller. During the three years that Hammond manufactured this model, 1,069 units were shipped, but production was discontinued at the start of World War II.[19][20] Both instruments were forerunners of later electronic organs and polyphonic synthesizers.

Monophonic electronic keyboards

In the 1940s and 1950s, before the popularization of electronic organs and the introductions of combo organs, manufacturers developed various portable monophonic electronic instruments with small keyboards. These small instruments consisted of an electronic oscillator, vibrato effect, and passive filters. Most were designed for conventional ensembles, rather than as experimental instruments for electronic music studios, but contributed to the evolution of modern synthesizers. These instruments include the Solovox, Multimonica, Ondioline, and Clavioline.

Other innovations

In the late 1940s, Canadian inventor and composer Hugh Le Caine invented the Electronic Sackbut, a voltage-controlled electronic musical instrument that provided the earliest real-time control of three aspects of sound (amplitude, pitch, and timbre), corresponding to today's touch-sensitive keyboards and pitch and modulation controllers. The controllers were initially implemented as a multidimensional pressure keyboard in 1945, then changed to a group of dedicated controllers operated by the left hand in 1948.[21]

In Japan, as early as 1935, Yamaha released the Magna organ,[22] a multi-timbral keyboard instrument based on electrically blown free reeds with pickups.[23] It may have been similar to another electrostatic reed organ, the Orgatron, developed by Frederick Albert Hoschke in 1934 and then manufactured by Everett and Wurlitzer until 1961.

In 1949, Japanese composer Minao Shibata discussed the concept of "a musical instrument with very high performance" that can "synthesize any kind of sound waves" and is "...operated very easily," predicting that with such an instrument, "...the music scene will be changed drastically."[24][25]

Electronic music studios as sound synthesizers

After World War II, electronic music including electroacoustic music and musique concrète was created by contemporary composers, and numerous electronic music studios were established around the world, for example Studio for Electronic Music (WDR), and Studio di fonologia musicale di Radio Milano. These studios were typically filled with electronic equipment including oscillators, filters, tape recorders, audio consoles etc., and the whole studio functioned as a sound synthesizer.

Origin of the term "sound synthesizer"

Thaddeus Cahill's 1897 patent for his electromechanical instrument, the Telharmonium, uses the verb synthesize 25 times, for example in the phrase "synthesizing composite electrical vibrations out of the ground-tone vibrations and the overtone vibrations" (a description of additive synthesis).[26] Thom Holmes regards Cahill as the coiner of the term in this field.[27]

In 1951–1952, RCA produced a machine called the Electronic Music Synthesizer; however, it was more accurately a composition machine, because it did not produce sounds in real time.[28] RCA then developed the first programmable sound synthesizer, the RCA Mark II Sound Synthesizer, installing it at the Columbia-Princeton Electronic Music Center in 1957.[29] Prominent composers including Vladimir Ussachevsky, Otto Luening, Milton Babbitt, Halim El-Dabh, Bülent Arel, Charles Wuorinen, and Mario Davidovsky used the RCA Synthesizer extensively in various compositions.[30]

In 1959–1960, Harald Bode developed a modular synthesizer and sound processor,[31][32] and in 1961, he wrote a paper exploring the concept of a self-contained portable modular synthesizer using newly emerging transistor technology.[33] He also served as AES session chairman on music and electronics for the fall conventions in 1962 and 1964.[34] His ideas were adopted by Donald Buchla and Robert Moog in the United States, and by Paolo Ketoff et al. in Italy,[35][36][37] at about the same time.[38] Moog is known as the first synthesizer designer to popularize the voltage control technique in analog electronic musical instruments.[38]

In Italy, a working group at the Roman Electronic Music Center (composer Gino Marinuzzi, Jr., designer Giuliano Strini, MSEE, and sound engineer and technician Paolo Ketoff) built the vacuum-tube modular "FonoSynth" (1957–58), which slightly predated Moog's and Buchla's work.

Later the group created a solid-state version, the "Synket".

Both devices remained prototypes (except a model made for John Eaton who wrote a "Concert Piece for Synket and Orchestra"), owned and used only by Marinuzzi, notably in the original soundtrack of Mario Bava's sci-fi film "Terrore nello spazio" (a.k.a. Planet of the Vampires, 1965), and a RAI-TV mini-series, "Jeckyll".[35][36][37]

Robert Moog built his first prototype between 1963 and 1964, and was then commissioned by the Alwin Nikolais Dance Theater of NY;[39][40] while Donald Buchla was commissioned by Morton Subotnick.[41][42] In the late 1960s to 1970s, the development of miniaturized solid-state components allowed synthesizers to become self-contained, portable instruments, as proposed by Harald Bode in 1961. By the early 1980s, companies were selling compact, modestly priced synthesizers to the public. This, along with the development of Musical Instrument Digital Interface (MIDI), made it easier to integrate and synchronize synthesizers and other electronic instruments for use in musical composition. In the 1990s, synthesizer emulations began to appear in computer software, known as software synthesizers. From 1996 onward, Steinberg's Virtual Studio Technology (VST) plug-ins – and a host of other kinds of competing plug-in software, all designed to run on personal computers – began emulating classic hardware synthesizers, becoming increasingly successful at doing so during the following decades.

The synthesizer had a considerable effect on 20th-century music.[43] Micky Dolenz of The Monkees bought one of the first Moog synthesizers. The band was the first to release an album featuring a Moog, Pisces, Aquarius, Capricorn & Jones Ltd. (1967),[44] which became a Billboard number-one album. A few months later, the title track of the Doors' 1967 album Strange Days featured a Moog played by Paul Beaver. Wendy Carlos's Switched-On Bach (1968), recorded using Moog synthesizers, also influenced numerous musicians of that era and is one of the most popular recordings of classical music ever made,[45] alongside the records of Isao Tomita (particularly Snowflakes Are Dancing), who in the early 1970s utilized synthesizers to create new artificial sounds (rather than simply mimicking real instruments[46]) and made significant advances in analog synthesizer programming.[47]

The sound of the Moog reached the mass market with Simon and Garfunkel's Bookends in 1968 and The Beatles' Abbey Road the following year; hundreds of other popular recordings subsequently used synthesizers, most famously the portable Minimoog. Electronic music albums by Beaver and Krause, Tonto's Expanding Head Band, The United States of America, and White Noise reached a sizable cult audience and progressive rock musicians such as Richard Wright of Pink Floyd and Rick Wakeman of Yes were soon using the new portable synthesizers extensively. Stevie Wonder and Herbie Hancock also played a major role in popularising synthesizers in soul & jazz music.[48][49] Other early users included Emerson, Lake & Palmer's Keith Emerson, Tony Banks of Genesis, Todd Rundgren, Pete Townshend, and The Crazy World of Arthur Brown's Vincent Crane. In Europe, the first no. 1 single to feature a Moog prominently was Chicory Tip's 1972 hit "Son of My Father".[50]

In 1974, Roland Corporation released the EP-30, the first touch-sensitive electronic keyboard.[51]

Polyphonic keyboards and the digital revolution

In 1973, Yamaha developed the Yamaha GX-1, an early polyphonic synthesizer.[52] Other polyphonic synthesizers followed, mainly manufactured in Japan and the United States from the mid-1970s to the early-1980s, and included Roland Corporation's RS-101 and RS-202 (1975 and 1976) string synthesizers,[53][54] the Yamaha CS-80 (1976), Oberheim's Polyphonic and OB-X (1975 and 1979), Sequential Circuits' Prophet-5 (1978), and Roland's Jupiter-4 and Jupiter-8 (1978 and 1981). The success of the Prophet-5, a polyphonic and microprocessor-controlled keyboard synthesizer, aided the shift of synthesizers away from large modular units and towards smaller keyboard instruments.[55] This helped accelerate the integration of synthesizers into popular music, a shift that had been lent powerful momentum by the Minimoog and later the ARP Odyssey.[56] Earlier polyphonic electronic instruments of the 1970s, rooted in string synthesizers before advancing to multi-synthesizers incorporating monosynths and more, gradually fell out of favour in the wake of these newer, note-assigned polyphonic keyboard synthesizers.[57]

In 1973,[58] Yamaha licensed the first digital synthesis algorithm, frequency modulation synthesis (FM synthesis), from John Chowning, who had experimented with it since 1971.[59] Yamaha's engineers began adapting Chowning's algorithm for use in a commercial digital synthesizer, adding improvements such as the "key scaling" method to avoid the introduction of distortion that normally occurred in analog systems during frequency modulation.[59] In the 1970s, Yamaha was granted a number of patents evolving Chowning's early work on FM synthesis technology.[61] Yamaha built the first prototype digital synthesizer in 1974.[58] Yamaha eventually commercialized FM synthesis with the Yamaha GS-1, the first FM digital synthesizer, released in 1980.[62] The first commercial digital synthesizer, the Casio VL-1,[63] had been released a year earlier, in 1979.[64]

By the end of the 1970s, digital synthesizers and samplers had arrived on markets around the world.[1] Compared with analog synthesizer sounds, the digital sounds produced by these new instruments tended to have a number of distinctive characteristics: clear attack and sound outlines, carrying sounds, rich overtones with inharmonic content, and complex motion of sound textures, among others. Although these new instruments were expensive, these characteristics meant musicians were quick to adopt them, especially in the United Kingdom[65] and the United States. This encouraged a trend towards producing music using digital sounds,[2] and laid the foundations for the development of the inexpensive digital instruments popular in the next decade. Relatively successful instruments, each selling several hundred or more units per series, included the NED Synclavier (1977), Fairlight CMI (1979), E-mu Emulator (1981), and PPG Wave (1981).[1][65][66][67][68]

In 1983, Yamaha's DX7 digital synthesizer[58] swept through popular music, leading to the adoption and development of digital synthesizers in many varying forms during the 1980s, and to the rapid decline of analog synthesizer technology. In 1987, Roland released the D-50 synthesizer, which combined sample-based synthesis[3] with onboard digital effects,[69] while Korg's even more popular M1 (1988) heralded the era of the workstation synthesizer, based on ROM sample sounds for composing and sequencing whole songs, rather than solely traditional sound synthesis.[68]

Throughout the 1990s, the popularity of electronic dance music employing analog sounds, the appearance of digital analog-modeling synthesizers to recreate these sounds, and the development of the Eurorack modular synthesizer system, initially introduced with the Doepfer A-100 and since adopted by other manufacturers, all contributed to a resurgence of interest in analog technology. The turn of the century also saw improvements in technology that led to the popularity of digital software synthesizers.[71] In the 2010s, new analog synthesizers, in both keyboard and modular form, were released alongside current digital hardware instruments.[72] In 2016, Korg announced the Korg Minilogue, the first mass-produced polyphonic analogue synth in decades.

According to Fact, "The synthesizer is as important, and as ubiquitous, in modern music today as the human voice."[73] It is one of the most important instruments in the music industry.[74]

In the 1970s, electronic music composers such as Jean Michel Jarre,[75] Vangelis[76] and Isao Tomita,[47][46][77] released successful synthesizer-led instrumental albums. Over time, this helped influence the emergence of synthpop, a subgenre of new wave, from the late 1970s to the early 1980s. The work of German krautrock bands such as Kraftwerk[78] and Tangerine Dream, British acts such as John Foxx, Gary Numan and David Bowie, African-American acts such as George Clinton and Zapp, and Japanese electronic acts such as Yellow Magic Orchestra and Kitaro, were influential in the development of the genre.[74] Gary Numan's 1979 hits "Are 'Friends' Electric?" and "Cars" made heavy use of synthesizers.[79][80] OMD's "Enola Gay" (1980) used distinctive electronic percussion and a synthesized melody. Soft Cell used a synthesized melody on their 1981 hit "Tainted Love".[74] Nick Rhodes, keyboardist of Duran Duran, used various synthesizers including the Roland Jupiter-4 and Jupiter-8.[81] Chart hits include Depeche Mode's "Just Can't Get Enough" (1981),[74] The Human League's "Don't You Want Me"[82] and works by Ultravox.[74]

Sound synthesis

Additive synthesis builds sounds by adding together waveforms into a composite sound. Instrument sounds are simulated by matching their natural harmonic overtone structure. Early examples of additive synthesizers are the Telharmonium, Hammond organ, and Synclavier.
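
As a rough illustration of the additive principle (a sketch, not any particular instrument's implementation), the following Python fragment builds a tone by summing sine-wave partials; the harmonic amplitudes in the example are hypothetical values chosen only for demonstration.

    import numpy as np

    SAMPLE_RATE = 44100

    def additive_tone(fundamental_hz, partial_amplitudes, duration_s=1.0):
        """Sum sine-wave partials (1st, 2nd, 3rd harmonic, ...) into one tone."""
        t = np.linspace(0.0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
        tone = np.zeros_like(t)
        for n, amp in enumerate(partial_amplitudes, start=1):
            tone += amp * np.sin(2.0 * np.pi * n * fundamental_hz * t)
        return tone / max(np.max(np.abs(tone)), 1e-12)   # normalize to avoid clipping

    # Example: a 220 Hz tone with hypothetical, organ-like harmonic amplitudes.
    tone = additive_tone(220.0, [1.0, 0.5, 0.33, 0.25, 0.2])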

Subtractive synthesis is based on filtering harmonically rich waveforms. It was implemented in early monophonic keyboard synthesizers such as the Moog Minimoog. Signal routing, or patching, was usually very limited and followed a normalized path. Subtractive synthesizers approximate instrumental sounds with an oscillator (producing sawtooth waves, square waves, etc.) followed by a filter, followed by an amplifier controlled by an ADSR envelope. The combination of simple modulation routings (such as pulse-width modulation and oscillator sync) with the lowpass filter is responsible for the "classic synthesizer" sound commonly associated with "analog synthesis".
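
The oscillator-filter-amplifier chain described above can be sketched as follows; this is a minimal illustration assuming a naive (non-band-limited) sawtooth, a one-pole low-pass filter, and a linear ADSR amplitude envelope, rather than any specific synthesizer's circuitry.

    import numpy as np

    SR = 44100

    def sawtooth(freq, dur):
        t = np.arange(int(SR * dur)) / SR
        return 2.0 * (t * freq % 1.0) - 1.0          # naive sawtooth oscillator

    def one_pole_lowpass(x, cutoff_hz):
        a = np.exp(-2.0 * np.pi * cutoff_hz / SR)    # simple one-pole coefficient
        y = np.zeros_like(x)
        for i in range(1, len(x)):
            y[i] = (1.0 - a) * x[i] + a * y[i - 1]
        return y

    def adsr(n, attack=0.01, decay=0.1, sustain=0.6, release=0.2):
        a, d, r = int(attack * SR), int(decay * SR), int(release * SR)
        s = max(n - a - d - r, 0)
        return np.concatenate([
            np.linspace(0, 1, a, endpoint=False),        # attack
            np.linspace(1, sustain, d, endpoint=False),  # decay
            np.full(s, sustain),                         # sustain
            np.linspace(sustain, 0, r),                  # release
        ])[:n]

    raw = sawtooth(110.0, 1.0)                  # harmonically rich source
    filtered = one_pole_lowpass(raw, 800.0)     # "subtract" high frequencies
    note = filtered * adsr(len(filtered))       # shape loudness over time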

FM synthesis (frequency modulation synthesis) is a process that usually involves the use of at least two signal generators (sine-wave oscillators, commonly referred to as "operators" in FM-only synthesizers) to create and modify a voice. Often, this is done through the analog or digital generation of a signal that modulates the tonal and amplitude characteristics of a base carrier signal. FM synthesis was pioneered by John Chowning,[83] who patented the idea and sold it to Yamaha. Unlike the exponential voltage-in-to-frequency-out relationship and multiple waveforms of classic 1-volt-per-octave synthesizer oscillators, Chowning-style FM synthesis uses a linear voltage-in-to-frequency-out relationship and sine-wave oscillators. The resulting complex waveform may have many component frequencies, and there is no requirement that they all bear a harmonic relationship. Sophisticated FM synths such as the Yamaha DX7 series can have six operators per voice; some synths with FM can also use filters and variable amplifier types to alter the signal's characteristics into a sonic voice that either roughly imitates acoustic instruments or creates sounds that are unique. FM synthesis is especially valuable for metallic or clangorous noises such as bells, cymbals, or other percussion.
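
A minimal two-operator FM sketch in Python (one sine modulator phase-modulating one sine carrier) is shown below; the frequency ratio and modulation index are arbitrary example values, not settings from any particular instrument.

    import numpy as np

    SR = 44100

    def fm_tone(carrier_hz, ratio=2.0, index=3.0, dur=1.0):
        """Two-operator FM: a sine modulator phase-modulates a sine carrier.

        ratio – modulator frequency divided by carrier frequency
        index – modulation depth; higher values add more sidebands
        """
        t = np.arange(int(SR * dur)) / SR
        modulator = np.sin(2.0 * np.pi * carrier_hz * ratio * t)
        return np.sin(2.0 * np.pi * carrier_hz * t + index * modulator)

    # Non-integer frequency ratios give the inharmonic, bell-like timbres
    # mentioned above.
    bell_like = fm_tone(440.0, ratio=3.5, index=5.0)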

Phase distortion synthesis is a method implemented on Casio CZ synthesizers. It replaces the traditional analog waveform with a choice of several digital waveforms which are more complex than the standard square, sine, and sawtooth waves. This waveform is routed to a digital filter and digital amplifier, each modulated by an eight-stage envelope. The sound can then be further modified with ring modulation or noise modulation.[84]
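
The basic idea can be illustrated by reading a cosine with a deliberately warped phase; the piecewise-linear warp in this sketch is an arbitrary example, not the exact transfer functions used in the Casio CZ series.

    import numpy as np

    SR = 44100

    def phase_distorted_tone(freq, bend=0.1, dur=1.0):
        """Read a cosine with a warped phase: the warp reshapes the waveform
        (here pushing it toward a sawtooth-like spectrum) without using a filter."""
        t = np.arange(int(SR * dur)) / SR
        phase = (t * freq) % 1.0                       # linear phase ramp, 0..1
        # Piecewise-linear warp: rise quickly to 0.5 within `bend`, then slowly to 1.0.
        warped = np.where(phase < bend,
                          0.5 * phase / bend,
                          0.5 + 0.5 * (phase - bend) / (1.0 - bend))
        return np.cos(2.0 * np.pi * warped)

    tone = phase_distorted_tone(220.0, bend=0.05)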

Physical modelling synthesis is the synthesis of sound by using a set of equations and algorithms to simulate each sonic characteristic of an instrument, starting with the harmonics that make up the tone itself, then adding the sound of the resonator, the instrument body, etc., until the sound realistically approximates the desired instrument. When an initial set of parameters is run through the physical simulation, the simulated sound is generated. Although physical modeling was not a new concept in acoustics and synthesis, it was not until the development of the Karplus-Strong algorithm and the increase in DSP power in the late 1980s that commercial implementations became feasible. The quality and speed of physical modeling on computers improves with higher processing power.
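
The Karplus-Strong plucked-string algorithm mentioned above can be sketched in a few lines: a burst of noise circulates through a delay line whose length sets the pitch, and an averaging (low-pass) feedback loop makes the recirculating signal decay like a vibrating string.

    import numpy as np

    SR = 44100

    def karplus_strong(freq, dur=1.0, damping=0.996):
        """Plucked-string physical model: noise burst + averaging feedback delay."""
        n_samples = int(SR * dur)
        delay = int(SR / freq)                      # delay length sets the pitch
        buf = np.random.uniform(-1.0, 1.0, delay)   # initial "pluck" (noise burst)
        out = np.zeros(n_samples)
        for i in range(n_samples):
            out[i] = buf[i % delay]
            # Average adjacent samples and feed back: a decaying low-pass loop.
            buf[i % delay] = damping * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
        return out

    pluck = karplus_strong(196.0, dur=2.0)          # roughly a plucked G string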

Linear Arithmetic synthesis is a form of synthesis that uses PCM samples for the attack of a waveform and subtractive synthesis for the rest of the envelope. This type of synthesis bridged the gap between the older subtractive synthesis and the newer sample-based synthesis at a time when PCM samples took up a substantial amount of the available memory. The first synthesizer to debut with this form of synthesis was the Roland D-50 in 1987.

Sample-based synthesis involves digitally recording a short snippet of sound from a real instrument or other source and then playing it back at different speeds to produce different pitches. A sample can be played as a one-shot, often used for percussion or short-duration sounds, or it can be looped, which allows the tone to sustain or repeat as long as the note is held. Samplers usually include a filter, envelope generators, and other controls for further manipulation of the sound. Virtual samplers that store the samples on a hard drive make it possible for the sounds of an entire orchestra, including multiple articulations of each instrument, to be accessed from a sample library. See also wavetable synthesis and vector synthesis.
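
Playing a recorded snippet back at a different speed amounts to resampling it; the naive sketch below (linear interpolation, no anti-aliasing, looping or multisampling) assumes the sample is already loaded as a one-dimensional NumPy array.

    import numpy as np

    def play_at_pitch(sample, semitones):
        """Resample a recorded snippet so it plays back `semitones` higher or lower.

        Playing faster raises the pitch and shortens the sound; this naive version
        uses linear interpolation and changes duration along with pitch.
        """
        rate = 2.0 ** (semitones / 12.0)                    # equal-tempered speed factor
        positions = np.arange(0, len(sample) - 1, rate)     # fractional read positions
        idx = positions.astype(int)
        frac = positions - idx
        return (1.0 - frac) * sample[idx] + frac * sample[idx + 1]

    # Example: transpose a (hypothetical) recorded note up a perfect fifth (7 semitones).
    # note_c4 = ...load a mono NumPy array...
    # note_g4 = play_at_pitch(note_c4, 7)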

Analysis/resynthesis is a form of synthesis that uses a series of bandpass filters or Fourier transforms to analyze the harmonic content of a sound. The results are then used to resynthesize the sound using a band of oscillators. The vocoder, linear predictive coding, and some forms of speech synthesis are based on analysis/resynthesis.
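
A toy version of the analysis/resynthesis idea is sketched below: a Fourier transform measures the strongest frequency components of a (roughly stationary) sound, and a bank of sine oscillators rebuilds it from those frequencies, amplitudes and phases. Real vocoders and LPC systems are considerably more elaborate; this only illustrates the two-stage analyze-then-resynthesize structure.

    import numpy as np

    SR = 44100

    def analyze_resynthesize(x, n_partials=20):
        """Analyze a quasi-stationary sound and rebuild it from sine oscillators."""
        spectrum = np.fft.rfft(x)
        spectrum[0] = 0.0                                         # ignore the DC component
        freqs = np.fft.rfftfreq(len(x), d=1.0 / SR)
        strongest = np.argsort(np.abs(spectrum))[-n_partials:]   # pick the loudest bins
        t = np.arange(len(x)) / SR
        y = np.zeros(len(x))
        for k in strongest:
            amp = 2.0 * np.abs(spectrum[k]) / len(x)
            phase = np.angle(spectrum[k])
            y += amp * np.cos(2.0 * np.pi * freqs[k] * t + phase)
        return y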

Imitative synthesis

Sound synthesis can be used to mimic acoustic sound sources.

Generally, a sound that does not change over time includes a fundamental partial or harmonic, and any number of partials. Synthesis may attempt to mimic the amplitude and pitch of the partials in an acoustic sound source.

When natural sounds are analyzed in the frequency domain (as on a spectrum analyzer), their spectra exhibit amplitude spikes at each of the fundamental tone's harmonics, corresponding to resonant properties of the instrument (spectral peaks also referred to as formants). Some harmonics may have higher amplitudes than others. The specific set of harmonic-vs-amplitude pairs is known as a sound's harmonic content. A synthesized sound requires accurate reproduction of the original sound in both the frequency domain and the time domain. A sound does not necessarily have the same harmonic content throughout its duration. Typically, high-frequency harmonics die out more quickly than lower harmonics.

In most conventional synthesizers, for purposes of re-synthesis, recordings of real instruments are composed of several components representing the acoustic responses of different parts of the instrument, the sounds produced by the instrument during different parts of a performance, or the behavior of the instrument under different playing conditions (pitch, intensity of playing, fingering, etc.)


Synthesizers generate sound through various analogue and digital techniques. Early synthesizers were analog hardware-based, but many modern synthesizers use a combination of DSP software and hardware, or else are purely software-based (see softsynth). Digital synthesizers often emulate classic analog designs. Sound is controllable by the operator by means of circuits or virtual stages that may include:

  • Electronic oscillators – create raw sounds with a timbre that depends upon the waveform generated. Voltage-controlled oscillators (VCOs) and digital oscillators may be used. Harmonic additive synthesis models sounds directly from pure sine waves, somewhat in the manner of an organ, while frequency modulation and phase distortion synthesis use one oscillator to modulate another. Subtractive synthesis depends upon filtering a harmonically rich oscillator waveform. Sample-based and granular synthesis use one or more digitally recorded sounds in place of an oscillator.

  • Low frequency oscillator (LFO) – an oscillator of adjustable frequency that can be used to modulate the sound rhythmically, for example to create tremolo or vibrato or to control a filter's operating frequency. LFOs are used in most forms of synthesis.

  • Voltage-controlled filter (VCF) – "shape" the sound generated by the oscillators in the frequency domain, often under the control of an envelope or LFO. These are essential to subtractive synthesis.

  • ADSR envelopes – provide envelope modulation to "shape" the volume or harmonic content of the produced note in the time domain, with the principal parameters being attack, decay, sustain and release. These are used in most forms of synthesis. ADSR control is provided by envelope generators.

  • Voltage-controlled amplifier (VCA) – After the signal generated by one (or a mix of more) VCOs has been modified by filters and LFOs, and its waveform has been shaped (contoured) by an ADSR envelope generator, it then passes on to one or more voltage-controlled amplifiers (VCAs). A VCA is a preamp that boosts (amplifies) the electronic signal before passing it on to an external or built-in power amplifier, as well as a means to control its amplitude (volume) using an attenuator. The gain of the VCA is affected by a control voltage (CV), coming from an envelope generator, an LFO, the keyboard or some other source.[85]

  • Other sound processing effects units such as ring modulators and fuzz bass pedals may be encountered.


Electronic filters are particularly important in subtractive synthesis, being designed to pass some frequency regions through unattenuated while significantly attenuating ("subtracting") others. The low-pass filter is most frequently used, but band-pass filters, band-reject filters and high-pass filters are also sometimes available.

The filter may be controlled with a second ADSR envelope.

An "envelope modulation" ("env mod") parameter on many synthesizers with filter envelopes determines how much the envelope affects the filter.

If turned all the way down, the filter produces a flat sound with no envelope.

When turned up the envelope becomes more noticeable, expanding the minimum and maximum range of the filter.

The envelope applied to the filter helps the sound designer generate long or short notes by adjusting parameters such as decay, sustain and release.

For instance, a short decay with no sustain produces a sound commonly known as a stab. Sound designers may prefer shaping the sound with the filter instead of the volume.


Many synthesizers use an envelope generator to control how sounds change over time.

An envelope may control elements such as amplitude (volume), a filter (frequencies), or pitch. The most common envelope is the ADSR (Attack, Decay, Sustain, Release) envelope:[86]

  • Attack time is the time taken for initial run-up of level from nil to peak, beginning when the key is first pressed.

  • Decay time is the time taken for the subsequent run down from the attack level to the designated sustain level.

  • Sustain level is the level during the main sequence of the sound's duration, until the key is released.

  • Release time is the time taken for the level to decay from the sustain level to zero after the key is released.

The "attack" and "decay" of a sound have a great effect on the instrument's sonic character.[87]


A low-frequency oscillator (LFO) generates an electronic signal, usually below 20 Hz. LFO signals create a periodic control signal or sweep, often used in vibrato, tremolo and other effects. In certain genres of electronic music, the LFO signal can control the cutoff frequency of a VCF to make a rhythmic wah-wah sound, or the signature dubstep wobble bass.
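
As a sketch of an LFO in use, the fragment below applies vibrato by letting a slow sine wave modulate the pitch of an audio-rate oscillator; the 5 Hz rate and 0.5-semitone depth are arbitrary example values.

    import numpy as np

    SR = 44100

    def vibrato_tone(freq, lfo_rate=5.0, depth_semitones=0.5, dur=2.0):
        """Audio oscillator whose pitch is modulated by a low-frequency sine (LFO)."""
        t = np.arange(int(SR * dur)) / SR
        lfo = np.sin(2.0 * np.pi * lfo_rate * t)                  # slow control signal
        inst_freq = freq * 2.0 ** (depth_semitones * lfo / 12.0)  # wobble the pitch
        phase = 2.0 * np.pi * np.cumsum(inst_freq) / SR           # integrate frequency to phase
        return np.sin(phase)

    tone = vibrato_tone(440.0)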


An arpeggiator (arp) is a feature available on several synthesizers that automatically steps through a sequence of notes based on an input chord, thus creating an arpeggio. The notes can often be transmitted to a MIDI sequencer for recording and further editing. An arpeggiator may have controls for speed, range, and the order in which the notes play: upwards, downwards, or in a random order. More advanced arpeggiators allow the user to step through a pre-programmed complex sequence of notes, or to play several arpeggios at once. Some allow a pattern to be sustained after the keys are released: in this way, a sequence of arpeggio patterns may be built up over time by pressing several keys one after the other. Arpeggiators are also commonly found in software sequencers. Some arpeggiators/sequencers expand these features into a full phrase sequencer, which allows the user to trigger complex, multi-track blocks of sequenced data from a keyboard or input device, typically synchronized with the tempo of the master clock.
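
The core of an arpeggiator is a simple note-ordering and stepping rule applied to whatever chord is held. The sketch below works on MIDI note numbers and ignores clocking and gate length; the function name and parameters are illustrative, not any product's API.

    import itertools
    import random

    def arpeggiate(held_notes, mode="up", octaves=1, steps=16):
        """Yield `steps` MIDI note numbers stepping through a held chord.

        mode: "up", "down", or "random"; `octaves` extends the range upward.
        """
        pool = sorted(n + 12 * o for n in held_notes for o in range(octaves))
        if mode == "down":
            pool = list(reversed(pool))
        if mode == "random":
            return [random.choice(pool) for _ in range(steps)]
        cycle = itertools.cycle(pool)
        return [next(cycle) for _ in range(steps)]

    # Example: step upward through a held C minor chord (C4, Eb4, G4) over two octaves.
    pattern = arpeggiate([60, 63, 67], mode="up", octaves=2)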

Arpeggiators seem to have grown from the accompaniment systems used in electronic organs from the mid-1960s to the mid-1970s.[88] They were also commonly fitted to keyboard instruments through the late 1970s and early 1980s. Notable examples are the RMI Harmonic Synthesizer (1974),[89] Roland Jupiter-8, Oberheim OB-8, Roland SH-101, Sequential Circuits Six-Trak and Korg Polysix. A famous example can be heard on Duran Duran's song "Rio", in which the arpeggiator on a Roland Jupiter-4 plays a C minor chord in random mode. Arpeggiators fell out of favor by the latter part of the 1980s and early 1990s and were absent from the most popular synthesizers of the period, but renewed interest in analog synthesizers during the 1990s, along with the use of rapid-fire arpeggios in several popular dance hits, brought them back into fashion.


A synthesizer patch (some manufacturers chose the term program) is a sound setting. Modular synthesizers used cables ("patch cords") to connect the different sound modules together. Since these machines had no memory to save settings, musicians wrote down the locations of the patch cables and knob positions on a "patch sheet" (which usually showed a diagram of the synthesizer). Ever since, an overall sound setting for any type of synthesizer has been referred to as a patch.

In the mid-to-late 1970s, patch memory (allowing storage and loading of 'patches' or 'programs') began to appear in synths such as the Oberheim Four-Voice (1975/1976)[90] and Sequential Circuits Prophet-5 (1977/1978). After MIDI was introduced in 1983, more and more synthesizers could import or export patches via MIDI SysEx commands. When a synthesizer patch is uploaded to a personal computer that has patch-editing software installed, the user can alter the parameters of the patch and download it back to the synthesizer. Because there is no standard patch language, it is rare that a patch generated on one synthesizer can be used on a different model. However, manufacturers sometimes design a family of synthesizers to be compatible.


A synth module is a standalone unit which synthesizes sounds using electronic or digital circuits. A synth module does not typically have a built-in MIDI controller such as a musical keyboard. As such, to play the sounds from a sound module using MIDI, a MIDI controller such as a MIDI-compatible keyboard or other device has to be used. Some synth modules are the sound synthesis components from an integrated synthesizer keyboard, packaged into a rackmountable unit.

Control interfaces

Modern synthesizers often look like small pianos, though with many additional knob and button controls.

These are integrated controllers, where the sound synthesis electronics are integrated into the same package as the controller.

However, many early synthesizers were modular and keyboardless, while most modern synthesizers may be controlled via MIDI, allowing other means of playing such as:

  • Fingerboards (ribbon controllers) and touchpads

  • Wind controllers

  • Guitar-style interfaces

  • Drum pads

  • Music sequencers

  • Non-contact interfaces akin to theremins

  • Tangible interfaces like a Reactable, AudioCubes

  • Various auxiliary input devices, including wheels for pitch bend and modulation, footpedals for expression and sustain, breath controllers, beam controllers, etc.

Fingerboard controller

A ribbon controller or other violin-like user interface may be used to control synthesizer parameters. The idea dates back to Léon Theremin's first concept in 1922[91] and his 1932 Fingerboard Theremin and Keyboard Theremin,[92][93] Maurice Martenot's 1928 Ondes Martenot (sliding a metal ring),[94] and Friedrich Trautwein's 1929 Trautonium (finger pressure), and was later utilized by Robert Moog.[95][96][97] The ribbon controller has no moving parts. Instead, a finger pressed down and moved along it creates an electrical contact at some point along a pair of thin, flexible longitudinal strips whose electric potential varies from one end to the other. Older fingerboards used a long wire pressed to a resistive plate. A ribbon controller is similar to a touchpad, but a ribbon controller only registers linear motion. Although it may be used to operate any parameter that is affected by control voltages, a ribbon controller is most commonly associated with pitch bending.

Fingerboard-controlled instruments include the Trautonium (1929), Hellertion (1929) and Heliophon (1936),[98][99][100] Electro-Theremin (Tannerin, late 1950s), Persephone (2004), and the Swarmatron (2004). A ribbon controller is used as an additional controller in the Yamaha CS-80 and CS-60, the Korg Prophecy and Korg Trinity series, the Kurzweil synthesizers, Moog synthesizers, and others.

Rock musician Keith Emerson used it with the Moog modular synthesizer from 1970 onward. In the late 1980s, keyboards in the synth lab at Berklee College of Music were equipped with membrane thin ribbon style controllers that output MIDI. Such ribbon controllers can serve as a main MIDI controller instead of a keyboard, as with the Continuum instrument.

Wind controllers

Wind controllers (and wind synthesizers) are convenient for woodwind and brass players, being designed to imitate those instruments. These are usually either analog or MIDI controllers, and sometimes include their own built-in sound modules (synthesizers). In addition to following the key arrangements and fingering of the instruments they imitate, the controllers have breath-operated pressure transducers, and may have gate extractors, velocity sensors, and bite sensors. Saxophone-style controllers have included the Lyricon, and products by Yamaha, Akai, and Casio. The mouthpieces range from alto clarinet to alto saxophone sizes. The Eigenharp, a controller similar in style to a bassoon, was released by Eigenlabs in 2009. Melodica and recorder-style controllers have included the Martinetta (1975)[101] and Variophon (1980),[102] and Josef Zawinul's custom Korg Pepe.[103] A harmonica-style interface was the Millionizer 2000 (c. 1983).[104]

Trumpet-style controllers have included products by Steiner/Crumar/Akai, Yamaha, and Morrison. Breath controllers can also be used to control conventional synthesizers, e.g. the Crumar Steiner Masters Touch,[105] Yamaha Breath Controller and compatible products.[106] Several controllers also provide breath-like articulation capabilities.

Accordion controllers use pressure transducers on bellows for articulation.


Other controllers include theremin, lightbeam controllers, touch buttons (touche d’intensité) on the ondes Martenot, and various types of foot pedals. Envelope following systems, the most sophisticated being the vocoder, are controlled by the power or amplitude of input audio signal. A musician uses the talk box to manipulate sound using the vocal tract, though it is rarely categorized as a synthesizer.

MIDI control

Synthesizers became easier to integrate and synchronize with other electronic instruments and controllers with the introduction of Musical Instrument Digital Interface (MIDI) in 1983.[107] First proposed in 1981 by engineer Dave Smith of Sequential Circuits, the MIDI standard was developed by a consortium now known as the MIDI Manufacturers Association.[108] MIDI is an opto-isolated serial interface and communication protocol.[108] It provides for the transmission from one device or instrument to another of real-time performance data. This data includes note events, commands for the selection of instrument presets (i.e. sounds, or programs or patches, previously stored in the instrument's memory), the control of performance-related parameters such as volume, effects levels and the like, as well as synchronization, transport control and other types of data. MIDI interfaces are now almost ubiquitous on music equipment and are commonly available on personal computers (PCs).[108]
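
The performance data itself is compact: a note event is only three bytes (a status byte encoding the message type and channel, then the note number and velocity). The helper below builds raw note-on and note-off messages; actually transmitting them would require a MIDI interface or library, which is outside this sketch.

    def note_on(channel, note, velocity):
        """Build a raw 3-byte MIDI note-on message (channel 0-15, note/velocity 0-127)."""
        return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

    def note_off(channel, note):
        """Note-off: status byte 0x80, velocity conventionally 0."""
        return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

    # Middle C, moderately loud, on channel 1 (zero-based channel 0): bytes 0x90 0x3C 0x64.
    msg = note_on(0, 60, 100)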

The General MIDI (GM) software standard was devised in 1991 to serve as a consistent way of describing a set of over 200 sounds (including percussion) available to a PC for playback of musical scores.[109] For the first time, a given MIDI preset consistently produced a specific instrumental sound on any GM-compatible device. The Standard MIDI File (SMF) format (extension .mid) combined MIDI events with delta times – a form of time-stamping – and became a popular standard for exchanging music scores between computers. In the case of SMF playback using integrated synthesizers (as in computers and cell phones), the hardware component of the MIDI interface design is often unneeded.

Open Sound Control (OSC) is another music data specification designed for online networking. In contrast with MIDI, OSC allows thousands of synthesizers or computers to share music performance data over the Internet in realtime.

Recent trends in synthesizer design, particularly the resurgence of modular systems in the Eurorack format, have led many models to combine MIDI control with control voltage (CV) I/O. One example is the Moog Model D reissue, which was enhanced from its original design to offer both MIDI and CV I/O.

In these MIDI/CV hybrids, it is often possible to send and receive control voltages for equipment parameters at the same time that MIDI messages are being sent and received.

Further examples of MIDI/CV hybrids include the Arturia Minibrute, which can receive MIDI messages from an external controller and automatically convert them into pitch and gate signals, which it can then send out as control voltages.
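
The note-to-voltage conversion such hybrids perform commonly follows the 1-volt-per-octave convention: each semitone corresponds to 1/12 V above or below a reference note. In the sketch below, the choice of MIDI note 60 (middle C) as the 0 V reference is an assumption; instruments differ in where they place the reference.

    def midi_note_to_cv(note, reference_note=60):
        """Convert a MIDI note number to a 1 V/octave pitch control voltage.

        Each semitone is 1/12 V; `reference_note` (assumed here to be middle C)
        maps to 0 V. The gate signal is simply held high while the note is on.
        """
        return (note - reference_note) / 12.0

    print(midi_note_to_cv(72))   # one octave above the reference -> 1.0 V
    print(midi_note_to_cv(69))   # A above middle C -> 0.75 V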

Typical roles

Synth lead

In popular music, a synth lead is generally used for playing the main melody of a song, but it is also often used for creating rhythmic or bass effects. Although most commonly heard in electronic dance music, synth leads have been used extensively in hip-hop music since the 1980s and some types of rock songs since the 1970s. Many post-1980s pop music songs use a synth lead to provide a musical hook to sustain the listener's interest throughout a song.

Synth pad

A synth pad is a sustained chord or tone generated by a synthesizer, often employed for background harmony and atmosphere in much the same fashion that a string section is used in orchestral music and film scores. Typically, a synth pad is performed using whole notes, which are often tied over bar lines. A synth pad sometimes holds the same note while a lead voice sings or plays an entire musical phrase or section. Often, the sounds used for synth pads have a vaguely organ-like, string-like, or vocal timbre. During the late 1970s and 1980s, dedicated string synthesizers were built to create string sounds using the limited technology of the time. Much popular music in the 1980s, the era of polyphonic synthesizers, employed synth pads, as did the then-new styles of smooth jazz and new-age music. One of many well-known songs from the era to incorporate a synth pad is "West End Girls" by the Pet Shop Boys, who were noted users of the technique.

The main feature of a synth pad is very long attack and decay time with extended sustains.

In some instances pulse-width modulation (PWM) using a square wave oscillator can be added to create a "vibrating" sound.

Synth bass

The bass synthesizer (or "bass synth") is used to create sounds in the bass range, from simulations of the electric bass or double bass to distorted, buzz-saw-like artificial bass sounds, by generating and combining signals of different frequencies. Bass synth patches may incorporate a range of sounds and tones, including wavetable-style, analog, and FM-style bass sounds, delay effects, distortion effects, and envelope filters. A modern digital synthesizer uses a frequency synthesizer microprocessor component to generate signals of different frequencies. While most bass synths are controlled by electronic keyboards or pedalboards, some performers use an electric bass with MIDI pickups to trigger a bass synthesizer.

In the 1970s miniaturized solid-state components allowed self-contained, portable instruments such as the Moog Taurus, a 13-note pedal keyboard played by the feet.

The Moog Taurus was used in live performances by a range of pop, rock, and blues-rock bands.

An early use of a bass synthesizer came in 1972, on Whistle Rymes, a solo album by John Entwistle, the bassist for The Who. Genesis bass player Mike Rutherford used a Dewtron "Mister Bassman" for the recording of their album Nursery Cryme in August 1971. Stevie Wonder introduced synth bass to a pop audience in the early 1970s, notably on "Superstition" (1972) and "Boogie On Reggae Woman" (1974). In 1977, Parliament's funk single "Flash Light" used the bass synthesizer. Lou Reed, widely considered a pioneer of electric guitar textures, played bass synthesizer on "Families", from his 1979 album The Bells.

Following the availability of programmable music sequencers such as the Synclavier and Roland MC-8 Microcomposer in the late 1970s, bass synths began incorporating sequencers in the early 1980s. The first bass synthesizer with a sequencer was the Firstman SQ-01.[111][112] It was originally released in 1980 by Hillwood/Firstman, a Japanese synthesizer company founded in 1972 by Kazuo Morioka (who later worked for Akai in the early 1980s), and was then released by Multivox for North America in 1981.[113][114][54]

A particularly influential bass synthesizer was the Roland TB-303.[115] Released in late 1981, it featured a built-in sequencer and later became strongly associated with acid house music.[116] Bass synthesizers began to be used to create highly syncopated rhythms and complex, rapid basslines. In popular music, these techniques gained wide popularity with the emergence of acid house, after Phuture's use of the TB-303 on the single "Acid Tracks" in 1987,[115] though such techniques were predated by Charanjit Singh's use of the TB-303 in 1982.[116]

In the 2000s, several equipment manufacturers such as Boss and Akai produced bass synthesizer effect pedals for electric bass guitar players, which simulate the sound of an analog or digital bass synth. With these devices, a bass guitar is used to generate synth bass sounds. The BOSS SYB-3 was one of the early bass synthesizer pedals; it uses digital signal processing to reproduce analog-style saw, square, and pulse synth waves, with a user-adjustable filter cutoff. The Akai bass synth pedal contains a four-oscillator synthesizer with user-selectable parameters (attack, decay, envelope depth, dynamics, cutoff, resonance). Bass synthesizer software allows performers to use MIDI to integrate the bass sounds with other synthesizers or drum machines. Bass synthesizers often provide samples from vintage 1970s and 1980s bass synths. Some bass synths are built into an organ-style pedalboard or button board.


Since their invention, there has been concern over synthesizers putting session musicians out of a job, since they can recreate the sounds of many instruments. Some musicians (especially keyboardists) viewed the synth as they would any musical instrument. Other musicians viewed the synth as a threat to traditional session musicians, and the British Musicians' Union attempted to ban it in 1982. The ban never became official policy.[117] Broadway plays are also now using synthesizers to reduce the number of live musicians required.[118]

See also

  • List of classic synthesizers

  • List of synthesizer manufacturers

Various synthesizers
  • Guitar synthesizer

  • Keytar

  • Modular synthesizer

  • String synthesizer

  • Wind controller

Related instruments & technologies
  • Clavioline (Musitron)

  • Electronic keyboard

  • Musical instrument

  • Music workstation

  • Sampler

  • Speech synthesis

  • Vocaloid

Components & technologies
  • Analytic signal

  • Envelope detector

  • Low-frequency oscillation

  • MIDI

Music genres
Notable works
  • List of compositions for electronic keyboard


Citations

List of commercially successful early digital synthesizers and digital samplers introduced during the late 1970s and early 1980s, each selling over several hundred units per series: the NED Synclavier (1977–1992) by New England Digital, based on research on the Dartmouth Digital Synthesizer since 1973 (note: several sources point out that the FM synthesis in the Synclavier was licensed from Yamaha, which held an exclusive license from the original inventor, John Chowning); the Fairlight CMI (1979–1988, over 300 units) in Sydney, based on the early development of the Qasar M8 by Tony Furse in Canberra since 1972; the Yamaha GS-1, GS-2 (1980, around 100 units) and CE20, CE25 (1982) in Hamamatsu, based on research into frequency modulation synthesis by John Chowning between 1967–1973 and the early development of the TRX-100 and Programmable Algorithm Music Synthesizer (PAMS) by Yamaha between 1973–1979 ("[Chapter 2] FM Tone Generators and the Dawn of Home Music Production". Yamaha Synth 40th Anniversary – History. Yamaha Corporation. 2014.); the E-mu Emulator (1981–2000s) in California, roughly based on the notion of a table-lookup oscillator seen in the MUSIC languages of the 1960s; the PPG Wave (1981–1987, around 1,000 units) in Hamburg, based on wavetable synthesis previously implemented on the PPG Wavecomputer 360, 340 and 380 circa 1978; etc. Most products listed above were still sold in the 21st century, e.g. the Yamaha DX200 in 2001, E-mu Emulator X in 2009, Fairlight CMI 30A in 2011, and Waldorf's wavetable synthesis products as reincarnations of the PPG Wave. In addition, the long history of additive synthesis is notable for providing fundamental research underlying various forms of digital synthesis, but it is not listed above due to the lack of commercially successful products. Additive synthesis has influenced most products in the list above, and even the Yamaha Vocaloid released in 2003 (Excitation plus Resonance (EpR), which is based on spectral modeling synthesis (SMS)). (www.mixonline.com)

For details of the new trend of music influenced by early digital instruments, see Fairlight § Artists who used the Fairlight CMI, Synclavier § Notable users, and E-mu Emulator § Notable users. (openlibrary.org)

Sample-based synthesis was previously introduced by the E-mu Emulator II in 1984, Ensoniq Mirage in 1985, Ensoniq ESQ-1 and Korg DSS-1 in 1986, etc. (openlibrary.org)

"The Palatin Project – The life and work of Elisha Gray". Palatin Project. (www.palatin-project.com)

Brown, Jeremy K. (2010). Stevie Wonder: Musician. Infobase Publishing. p. 50. ISBN 978-1-4381-3422-2. (books.google.com)

"Elisha Gray and 'The Musical Telegraph' (1876)", 120 Years of Electronic Music, 2005. (120years.net)

Chadabe, Joel (February 1, 2001), "The Electronic Century Part I: Beginnings", Electronic Musician, pp. 74–90. (www.emusician.com)

Chadabe, Joel (1997). Electric Sound: The Past and Promise of Electronic Music. New Jersey, United States: Prentice Hall. pp. 3–4. ISBN 0-13-303231-0. (openlibrary.org)

Okamura, Sōgo (1994). History of Electron Tubes. IOS Press. pp. 17–22. ISBN 9051991452. (books.google.com)

McNamee, David (12 October 2009). "Hey, what's that sound: Ondes martenot". The Guardian. Retrieved 7 September 2018. (www.theguardian.com)

Martin, Douglas (19 August 2001). "Jeanne Loriod, Who Transformed Electronic Wails Into Heartfelt Music, Dies at 73". The New York Times. Retrieved 25 July 2018. (www.nytimes.com)

Edmunds, Neil (2004), Soviet Music and Society Under Lenin and Stalin, London: Routledge Curzon. (openlibrary.org)

Holzer, Derek (February 2010), Tonewheels – a brief history of optical synthesis, Umatic.nl. (www.umatic.nl)

Kreichi, Stanislav (10 November 1997), "The ANS Synthesizer: Composing on a Photoelectronic Instrument", Theremin Center: "Despite the apparent simplicity of his idea of reconstructing a sound from its visible image, the technical realization of the ANS as a musical instrument did not occur until 20 years later... Murzin was an engineer who worked in areas unrelated to music, and the development of the ANS synthesizer was a hobby and he had many problems realizing it on a practical level." (www.theremin.ru)

Rhea, Thomas L., "Harald Bode's Four-Voice Assignment Keyboard (1937)", eContact! (reprint ed.), Canadian Electroacoustic Community, 13 (4) (July 2011); originally published as Rhea, Tom (December 1979), "Electronic Perspectives", Contemporary Keyboard, 5 (12): 89. (cec.sonus.ca)

"The 'Warbo Formant Orgel' (1937), The 'Melodium' (1938), The 'Melochord' (1947–9), and 'Bode Sound Co' (1963–)", 120 Years of Electronic Music, retrieved 2018-09-20. (120years.net)

Cirocco, Phil (2006). "The Novachord Restoration Project". Cirocco Modular Synthesizers. (www.discretesynthesizers.com)

Steve Howell; Dan Wilson. "Novachord". Hollow Sun. (See also the 'History' page.) (www.novachord.co.uk)

Gayle Young (1999). "Electronic Sackbut (1945–1973)". HughLeCaine.com. (www.hughlecaine.com)

一時代を画する新楽器完成 浜松の青年技師山下氏 [An epoch-making new musical instrument completed by a young engineer, Mr. Yamashita, in Hamamatsu]. Hochi Shimbun (in Japanese). 1935-06-08. (www.lib.kobe-u.ac.jp)