Getting to know virtual-analogue synthesis

Continuing our look at the concurrent development of the synth and the capabilities of instrument modelling, we focus now on recreations of the electrical characteristics of analogue circuitry…

Image: Nord Electro 5 promo video

In the previous story, we introduced you to physical modelling synthesis. Now, it’s time to learn more about virtual-analogue synthesis. Let’s dive right in…

Circuit visualisation

In the same year as the amazing-but-ignored Yamaha VL1 hit the shelves, Swedish manufacturer Clavia, then best known for its ddrum electronic-drum system, released the first incarnation of its Nord Lead synthesizer. The instrument’s tone generator was in some ways similar to the physical modelling of the VL1, in that it created its sound via a digital model rather than by filtering and shaping a simple repeating waveform. However, with the Nord Lead, Clavia had modelled the electronic characteristics of analogue-synth circuitry, rather than the physical characteristics of an acoustic instrument.

The modelling techniques involved in this form of synthesis were originally developed for use in the world of circuit design to allow fast and efficient testing and perfecting of circuits; without circuit modelling, prototype circuits would need to be fabricated manually, or as one-off production runs on high-cost fabrication machines. In order to be accurate, though, circuit modelling needs to do more than just plot a signal path, as it has to take into account the characteristics of each individual component, too.
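
To make that idea concrete, here is a minimal Python sketch in the spirit of component-level circuit simulation. The names and parameter values are illustrative only, not drawn from any real simulator or part: a resistor responds linearly to voltage, while a diode, described by the Shockley equation, does not.

```python
import numpy as np

# Minimal, illustrative component models in the spirit of SPICE-style
# circuit simulation. Parameter values are typical textbook figures,
# not the specification of any real part.

def resistor_current(v, r=1e3):
    # A resistor is linear: current is directly proportional to voltage.
    return v / r

def diode_current(v, i_s=1e-12, n=1.5, v_t=0.02585):
    # Shockley diode equation: current rises exponentially with voltage,
    # so doubling the input voltage far more than doubles the current.
    return i_s * (np.exp(v / (n * v_t)) - 1.0)

for v in np.linspace(0.0, 0.7, 8):
    print(f"{v:0.2f} V  resistor: {resistor_current(v):.3e} A  "
          f"diode: {diode_current(v):.3e} A")
```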

Nord’s Lead series has become hugely popular, and its earliest iterations pioneered the idea of modelling analogue-synth circuitry

Analogue non-linearity

There are numerous different types of analogue-circuit component – resistors, capacitors, diodes, transistors and so on. Such components have non-linear responses; that is, changes in the signal at the input terminal do not necessarily result in a 1:1 change in the signal at the output terminal.

For example, the behaviour of a transistor, which is effectively an electronic switch, can differ slightly depending on the voltage of the signal being switched. Or, with a potentiometer, the amount of voltage attenuation it produces may not be distributed evenly across the full rotational range of the shaft or knob. Most components exhibit some form of non-linear response, and that non-linearity can vary with the make and model of the component, from one individual component to the next, and even with environmental factors such as temperature and humidity.
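
As a rough illustration of that potentiometer example, the sketch below compares a linear taper with an approximate audio (logarithmic) taper. The exponential curve is a common textbook approximation, not the measured behaviour of any particular part.

```python
import numpy as np

# Compare a linear-taper pot with an approximate audio (log) taper.
# Rotation runs from 0.0 (fully anticlockwise) to 1.0 (fully clockwise).

def linear_taper(rotation):
    # Attenuation changes evenly across the whole travel of the knob.
    return rotation

def audio_taper(rotation, midpoint_gain=0.1):
    # Many audio-taper pots pass only around 10% of the signal at half
    # rotation; an exponential curve is a common approximation of that
    # uneven distribution.
    return midpoint_gain ** (2.0 * (1.0 - rotation))

for rotation in np.linspace(0.0, 1.0, 5):
    print(f"rotation {rotation:0.2f}:  linear {linear_taper(rotation):0.3f}  "
          f"audio {audio_taper(rotation):0.3f}")
```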

Gary Numan with a Roland D-50, the first affordable synth to combine sample playback with digital synthesis. Image: Michael Putland / Getty Images

In analogue synthesizers, packed full of electronic components as they are, the combined effect of this non-linearity plays a huge role in determining the tone and character of the instrument. It’s also something that’s almost impossible to mimic using conventional DSP-based filters, oscillators and what-not; sure, a digital filter can be programmed to mimic non-linearity, but if that mimicry is itself fixed and static, it will ultimately be unconvincing.

By harnessing the principles of circuit modelling, instrument designers were able to create efficient digital models of analogue-synth components – oscillators, filters, envelopes and so on. What’s more, because these models accurately simulated the non-linearity of the circuitry, and because their parameters could be modified in real time by the user, the resulting synthesizers both sounded and behaved almost identically to genuine analogue synthesizers.
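
To give a flavour of what such a model can look like, here is a toy Python sketch of one virtual-analogue building block: a one-pole low-pass filter with a tanh saturator standing in for the soft clipping of real circuitry, whose cutoff and drive can be changed on every sample, just as a player would turn a knob. This is an illustrative design only, not the algorithm used in the Nord Lead or any other instrument.

```python
import numpy as np

# Toy virtual-analogue building block: a one-pole low-pass filter whose
# input stage is soft-clipped with tanh(). Illustrative only; not any
# manufacturer's algorithm.

class SaturatingOnePole:
    def __init__(self, sample_rate=48_000):
        self.sample_rate = sample_rate
        self.state = 0.0

    def process(self, sample, cutoff_hz, drive=1.0):
        # Cutoff and drive are read afresh for every sample, so they can be
        # swept in real time like the knobs on an analogue synth.
        g = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / self.sample_rate)
        shaped = np.tanh(drive * sample)          # non-linear input stage
        self.state += g * (shaped - self.state)   # linear one-pole core
        return self.state

# Run a naive 110 Hz sawtooth through the filter while sweeping the cutoff.
sr = 48_000
filt = SaturatingOnePole(sr)
t = np.arange(sr // 10) / sr
saw = 2.0 * ((t * 110.0) % 1.0) - 1.0
sweep = np.linspace(200.0, 4000.0, len(saw))
out = np.array([filt.process(x, fc, drive=3.0) for x, fc in zip(saw, sweep)])
print(f"rendered {len(out)} samples, peak level {np.max(np.abs(out)):0.3f}")
```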

A new golden age

This new technology, which became known as analogue modelling or ‘virtual analogue’, unshackled designers from the limitations of cost and practicality that had constrained analogue synth design: additional synth components added nothing to the cost of manufacture, took up no space in the instrument’s chassis, used no power and created no additional heat. Now, the only limits were the inventiveness of the designer and the amount of DSP power available to the synth engine.

The advent of virtual analogue marked a radical shift in approach for synthesis, too: where once it had been focused on emulating acoustic instruments, now the name of the game was to emulate other synthesizers and, specifically, their ability to create new and original sounds. The irony, of course, is that it was the desire to emulate acoustic instruments, via ever-more elaborate and complex methods, that had toppled analogue subtractive synthesis from its throne in the first place!

Initially, through the 90s, this led to a large number of new instruments being released. The analogue models incorporated in this crop of instruments tended to be unique to each manufacturer, weaving distinct character traits into their offerings, much as different circuit topologies and component choices had distinguished different manufacturers’ instruments during the analogue era.

Korg’s OASYS represented decades of research into tone-generation algorithms and combined numerous synthesis engines

Many of the instruments released during this new synth golden age remain widely used and highly regarded to this day, such as the aforementioned Clavia Nord Lead and its descendants, as well as Yamaha’s AN1x, Roland’s JP-8000 and others. As the decade progressed and DSP hardware continued to both increase in power and decrease in price, designers developed ever-more elaborate synth engines that combined analogue modelling with other synthesis techniques. A good example is Korg’s Multi-Oscillator Synthesis System (MOSS) synth engine, as employed in its Prophecy and Z1 models, which offered a combination of physical modelling, analogue modelling and Variable Phase Modulation (essentially FM) synthesis, all within a single instrument.

Total abstraction

Korg’s MOSS engine was itself a subset of the company’s Open-Architecture Synthesis System, or OASYS, which was at the time an in-house development platform used for researching and perfecting the tone-generation algorithms for all of Korg’s synths (OASYS also lay behind the tone generators within both the Trinity and Triton workstations).

Eventually, in 2005, when advances in DSP power allowed it, OASYS itself was packaged into a commercial product, a large, high-end keyboard workstation of the same name. This workstation represented the culmination of decades of synthesis research and was stuffed with many different synthesis engines, along with audio sampling/recording, audio and MIDI sequencing and effects processing – a latter-day Fairlight CMI, you could say. But at its heart, the OASYS platform was nothing more than a software environment, fully abstracted from the hardware on which it ran, and the OASYS instrument just a Linux-based computer dedicated to running that software, built into a well-spec’d keyboard enclosure.

This separation between the tone engine software and the hardware on which it ran wasn’t new, nor was it unique to Korg – it was how digital synth design had always been done. But where previously the software inside digital synths had tended to rely on the support of very specific hardware elements, with modelling the entire synth engine was abstracted into software, and instruments became little more than expensive DSP platforms dedicated to a single task.

Now, the limiting factor in synth design was processing power – with more DSP capability, a designer could increase the resolution and accuracy of the virtual-analogue components, include more synth components for more sound-shaping power, or both. But dedicated DSP was – and is – more expensive than general-purpose computer CPUs, and processing power was advancing faster than synth manufacturers could keep pace with: by the time a new instrument made it to market, its processing innards would already be bordering on obsolescence.

Yet by this point, most users had comparatively powerful computers sitting in their studios, so why invest in yet more processing hardware when the synth itself was just software? The stage was set for yet another disruptive change in the world of sound synthesis…

