Understanding physical modelling synthesis

As we near the end of our synthesis masterclass, we turn our attention to how the synthesizer has evolved in tandem with computing technology, leading to the world of virtual synths that we all live in today…

Over the last few months, our series of articles on the history and science of sound synthesis has covered a lot of ground. Starting with the earliest valve-based electronic instruments that harnessed radio technology, we’ve seen how sound synthesis has always straddled the worlds of scientific research, state-of-the-art technology and abstract musical creativity.

We’ve touched on how synthesis technology has continually driven our culture through the development of new forms of music, and as such synthesis could even be seen as a perfect amalgamation of high-tech and high-art – the ultimate human expression of the march of technology. And where traditional acoustic instruments, once designed, tend to remain fundamentally unchanged from there onwards, synthesizers have constantly evolved and developed in line with the technological art of the possible – and users have continually explored and unleashed the artistic and creative potential that the technology enables.

Throughout most of this history, the driving force behind synthesis has been the desire to create increasingly accurate emulations of acoustic instruments, with the creation of new and original sounds a happy side-effect. But by the late 80s, the battle for truly accurate emulation had seemingly been won by digital sample-based synthesis, with more than a few forgotten companies, defunct technologies, and discarded instruments strewn in its wake.

But were we satisfied? Nope, not a bit of it. Because when surveying those remains, we recognised what we had lost: the individual character and hands-on control of analogue subtractive synths; the hard-edged bite and wide-ranging expressiveness of FM synths; the timbral flexibility of wavetable synthesis; the ever-evolving soundscapes of vector synthesis and the satisfying pit-of-the-stomach thump of an analogue drum machine. We surveyed a vista of identikit sample-based instruments that always sounded accurate, but always sounded the same.

Recognising this, by the late 80s and early 90s, synth designers were focusing their attention on ways to create acoustic emulations that retained the timbral accuracy of samples, that could be modified as deeply as subtractive synths – and that would respond convincingly to real-time performance techniques and nuances. The approach they hit upon made FM synthesis look like child’s play, but held vast promise, too. That method became known, generically, as ‘physical modelling’.

Let’s get physical

The concept of physical modelling had been around since the beginning of the 70s, but a practical application had to wait until the end of the decade, when one was discovered almost by accident. Kevin Karplus and Alexander Strong of Stanford University (also the birthplace and incubator of FM synthesis) were researching a technique for emulating the sound of a plucked or struck string. They referred to the research as Digitar synthesis (a portmanteau of ‘digital’ and ‘guitar’), but it became more widely known as Karplus-Strong string synthesis.

The technique they developed took a wideband waveform – typically white noise – and passed this through a recursive filtered feedback loop in order to create an evolving subtractive comb filter effect (a comb filter being one that cuts or boosts a series of regularly spaced frequencies, giving a frequency response that resembles the teeth of a comb).

The Karplus-Strong Algorithm

While this was essentially subtractive synthesis, Karplus and Strong realised that their algorithm was mimicking the physical behaviour of a vibrating string; that is, rather than emulating the sound of a vibrating string using oscillators, filters and envelopes, the Karplus-Strong algorithm was actually modelling the behaviour of the string itself. This meant that by modifying the parameters of the algorithm, changes could be made to the perceived physical properties of the modelled string – changing its length and tension, for example. The algorithm turned out to be capable of modelling the behaviour and sound of some types of percussion instrument, too.
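To make that concrete, here is a minimal Python sketch of the idea. It is not Karplus and Strong’s original code, and the function and parameter names (karplus_strong, damping and so on) are our own illustrative choices – but it shows the essential loop: a burst of noise circulating through a pitch-length delay line, gently low-pass filtered on every pass.

```python
import random

def karplus_strong(frequency=110.0, duration=1.0, sample_rate=44100, damping=0.996):
    # The delay line holds roughly one period of the waveform, so its length
    # sets the pitch - the digital equivalent of the string's length.
    delay_len = int(sample_rate / frequency)
    # 'Pluck' the string by filling the delay line with a burst of white noise.
    buffer = [random.uniform(-1.0, 1.0) for _ in range(delay_len)]
    out = []
    for n in range(int(duration * sample_rate)):
        current = buffer[n % delay_len]
        following = buffer[(n + 1) % delay_len]
        out.append(current)
        # Averaging adjacent samples acts as a gentle low-pass filter in the
        # feedback loop, so high frequencies die away faster than low ones,
        # just as they do on a real string. 'damping' scales the feedback and
        # behaves a little like muting or loosening the string.
        buffer[n % delay_len] = damping * 0.5 * (current + following)
    return out

pluck = karplus_strong(frequency=220.0, duration=2.0)  # a plucked-string-like A3
```

Lengthening the delay line lowers the pitch, and pushing damping closer to 1 lets the note ring for longer – the controls map onto perceived physical properties rather than abstract filter settings, which is exactly what Karplus and Strong spotted.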

Stanford continued work on the Karplus-Strong algorithm throughout the 80s, an effort that culminated in the development of digital waveguide synthesis. A waveguide is a structure that confines and channels the propagation of a wave.

In the acoustic world, the tubes and pipes of woodwind and brass instruments, and the strings and body of a stringed instrument, can be thought of as waveguides, in that they control, tune and focus the waveforms created by the musician when the instrument is played. With digital waveguide synthesis, then, Stanford’s researchers had developed a computationally efficient technique for modelling the impact of a waveguide on an arbitrary input waveform.

It worked by using delay lines to model the shape and geometry of a waveguide, and filters to tune the signal, attenuating some frequencies while accentuating others, just as happens when blowing across the embouchure hole of a flute, holding down particular valves on a trumpet and so on.
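As a rough illustration of the principle – and emphatically not Stanford’s or Yamaha’s implementation – the sketch below models an ideal plucked string as a pair of delay lines carrying waves travelling in opposite directions, with a reflection at each end and a crude low-pass filter standing in for the frequency-dependent losses at the bridge. The names (waveguide_string, loss and so on) are our own.

```python
import random

def waveguide_string(frequency=220.0, duration=1.0, sample_rate=44100, loss=0.995):
    # Each rail is half a period long: a wave takes a full round trip
    # (down the string and back) to repeat, which sets the pitch.
    rail_len = int(sample_rate / (2 * frequency))
    right = [random.uniform(-0.5, 0.5) for _ in range(rail_len)]  # excitation
    left = [0.0] * rail_len
    out = []
    prev_at_bridge = 0.0
    for _ in range(int(duration * sample_rate)):
        at_bridge = right[-1]   # wave arriving at the 'bridge' end
        at_nut = left[0]        # wave arriving at the 'nut' end
        # Bridge reflection: invert, average with the previous arrival (a
        # crude low-pass standing in for losses in the termination), scale.
        bridge_reflection = -loss * 0.5 * (at_bridge + prev_at_bridge)
        prev_at_bridge = at_bridge
        # Nut reflection: an ideal rigid termination simply inverts the wave.
        nut_reflection = -at_nut
        # Advance each delay line one sample in its direction of travel.
        right = [nut_reflection] + right[:-1]
        left = left[1:] + [bridge_reflection]
        # The audible output is the sum of both travelling waves at one point.
        out.append(right[-1] + left[-1])
    return out

note = waveguide_string(frequency=330.0, duration=2.0)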

History repeating?

Fresh from their FM-synthesis collaboration, Stanford and Yamaha once again joined forces to take digital waveguide synthesis from the laboratory and into commercial production. The result was the Yamaha VL1, released in 1994. In many ways just a proof-of-concept, the VL1 was a monophonic synth that hid away the deeper complexities of physical modelling, presenting instead a collection of predefined models based on real – and a few not so real – woodwind, brass and string instruments.

To fully expose the expressiveness and realism of the VL1, the synth offered extensive support for novel performance controllers, such as Yamaha’s WX5 wind MIDI controller. These could capture many of the nuances and subtleties of an experienced player’s technique and convert them into data that the VL1 could incorporate into its synthesis engine.

Yamaha’s VL1 presented a bold new way of replicating the sound of woodwind, brass and string instruments

When played well, the VL1 sounded stunningly realistic, but this was also the VL1’s problem: in order for it to sound so realistic, it had to be played well, just as an oboe, trumpet or violin has to be played well in order to create a pleasing sound. And just as it takes time and effort to learn to play those acoustic instruments properly, so it took time and effort to learn how to play the VL1 – it was generally easier to just hire a musician and use the real thing!

This was not the return to the hands-on nirvana of analogue subtractive synthesis that so many synth players were yearning for, and so it didn’t lead to a repeat of the DX7’s runaway success, as Yamaha and Stanford were no doubt hoping for. Rather, the VL1 was largely overlooked and is now all but forgotten (but not by us!).

Physical modelling synths do still exist in the form of a small number of software synths, such as IK Multimedia’s Modo Bass and Modo Drum, and instruments like the VL1 can still be found on the used market. But the technology never really got the traction that its potential deserved, and there was another form of modelling about to hit the market that would draw all attention away from physical modelling…

To learn more about the history and science of sound synthesis, check here.
