Ask Abbey Road: Lewis Jones on vocal production, mixing in Dolby Atmos and more
The experienced engineer explains the joys and challenges of mixing in Dolby Atmos, the pitfalls of using too many plug-ins and why, for him, there’s one compressor to rule them all.
One of the pieces of gear that Abbey Road engineer Lewis Jones would be lost without is the Neve 88RS.
In our ongoing series in partnership with the legendary Abbey Road Studios, you put your recording and mixing questions to the famous facility’s in-house engineering talent. Up this month is engineer Lewis Jones, whose credits range from Mura Masa’s Love$ick to Jurassic World: Fallen Kingdom.
Jones began his career as a studio engineer some 20 years ago at Lansdowne Recording Studios, where he cut his teeth on a huge variety of sessions spanning jazz, film scoring and rock bands.
Then, in 2006, he joined the engineering team at Abbey Road Studios where he’s worked as assistant and engineer to some of the biggest film composers on the planet. Among them are Michael Giacchino (Jupiter Ascending, Doctor Strange) and Alexandre Desplat (The Grand Budapest Hotel).
Here, Jones discusses his favourite pieces of gear, experimenting with recording and the steps it takes to turn a raw vocal into a finished one.
Terence asks: What are some of your favourite pieces of gear in the Abbey Road collection? And why?
Lewis Jones: If I had to pick one mic, it would be the Neumann U 47, because it has this nice, round, warm, fat sound that generally works nicely, particularly on vocals. We’ve got quite a few of them and it doesn’t matter which one you use. They always sound good.
I’ve just been working with a young Australian artist called Cloves. She’s about to release her second album. She had been tracking it elsewhere using an SM7. I don’t want to badmouth the SM7, which is a great mic, but she wanted to come to Abbey Road to try out some of our mics. She tried an AKG C12, a Neumann U 47 and a U 87. And she decided, with a combination of a Neve 1073 and an LA-2A compressor, to go with the U 47 because she thought it added so much more character, depth and warmth to her voice than any of the mics she’d used previously.
We just recorded vocals for six tracks for her new album last week. And the outcome has been great. The whole production team are really happy. She’s one to look out for.
There’s also the UREI 1176 compressor. It’s a classic and for good reason. It’s brilliant on pretty much anything, particularly vocals. It’s excellent on guitars; it’s great on drums. It’s also user-friendly; it’s really controllable – you can change the input and output. The attack and release controls are fantastic and no matter what you put through it, you get a great, natural sound. I definitely use that more than any of the other outboard at Abbey Road.
It’s great for controlling the vocal, keeping a consistent recording level. You can really hear what it’s doing when you get a very dynamic vocal and you play with the attack and release.
MT: How do you use the 1176 on a vocal?
LJ: It can be transparent, but sometimes, you can drive it a little bit. And you do get it to work in different ways and it will slightly modify things, but it always sounds great no matter what you’re doing.
I sometimes use it in conjunction with another compressor, too. I might do that because I want one compressor to control peaks and level things out and then I might want another compressor to colour the sound differently. So, you’re controlling it with one and then you’re using another one to see if you can get a different tone. You can see whether it adds something to it, and quite often it does.
LA-2As, UREIs, maybe the Neve 33609 sometimes. That’s the beauty of having them all there at Abbey Road. You can quickly re-patch things and try things out.
Eloise: As engineers, we’re always learning. Is there any new concept or system you’ve gained a better understanding of just recently?
LJ: This year, we’ve been doing quite a bit of experimental recording with Ambisonic microphones and Hamasaki Cubes. And we’ve recently been recording the music for EA’s Star Wars Jedi: Fallen Order.
I’ve been working alongside another engineer to do that. It’s just an enormous amount of music. So we’ve had a conventional orchestral setup, mic’ing wise, along with Ambisonic mics. It’s been an interesting process.
I haven’t heard the outcome of a lot of it, because much of the time you need a matrix to decode it. But I’m told that the result is excellent. And then the other thing is Dolby Atmos mixing.
There’s been a big mix project in the Penthouse. Our Penthouse studio is now Dolby Atmos certified, and it’s been fascinating remixing stereo tracks in Dolby Atmos. It’s a process I’ve been enjoying.
It’s been interesting because everything’s so standardised in stereo. We’re so used to listening to things in stereo that it has been quite liberating mixing these projects in the 7.1.2 format. I don’t really want to go back.
The whole immersive music experience is something which is in its infancy but can sound fantastic. You don’t feel so restricted. I’ve been experimenting with spreading the sound around, trying to place audio in different areas. Sometimes it works, sometimes it doesn’t. But the whole process – the Dolby Atmos mix – is something which we’re going to be hearing a lot more.
MT: What are the challenges of working in that format?
LJ: When we mix in stereo, we use stereo buss compression and that sort of gels everything together.
But in Atmos, I can spread all of those individually in surround. There are the seven main speakers, ceiling speakers, extra low end and there isn’t any stereo buss compression there. So you have to think differently. Can you add more low end? Does it sound strange coming out of the ceiling speakers? Can I have a guitar coming out of one side and have delays coming out of the other side? It’s quite a liberating experience.
And, so far, rather than having to squeeze everything into two speakers, it frees you up to place instruments where you might not otherwise think to have them.
MT: Are you finding there are conventions from stereo that you can’t work against? For example, do you find that it works best to have bass instruments in the middle and spread things apart as the frequencies increase? Or can you wildly position musical elements?
LJ: It’s very subjective as to whether or not it works. You’re free to do all of these things and experiment a lot more.
Sometimes it works, sometimes it doesn’t. And it takes a little getting used to. Even with a 5.1 mix, you’d typically place any rhythmical elements such as bass and drums at the front, and vocals and reverb perhaps in the rears. But with this new format, you can spread the drums around the sides, you can spread guitars. And pop music, with risers and stabs, and hip-hop – that kind of stuff – translate brilliantly to this format, because you can have a riser coming from the back, over the top of you, to the front. It’s very exciting.
MT: What other benefits does Atmos bring?
LJ: It’s very much in its infancy, and it’s a process that is sound-effects-led for cinema. But I think that it’s now opening up great opportunities for music. And perhaps also people who write music will begin to think slightly differently. Everyone traditionally writes for a stereo image, whereas now there’s the option to spread that much wider.
I haven’t started to see it yet because I’ve only been working on a remix project. But I think perhaps, when people hear some of these new mixes, it will open their minds and hopefully encourage them to experiment with the way they approach writing.
MT: How is the process of mixing different for Dolby Atmos? How do you place things in the listening space?
LJ: It’s an experimental process because, with Dolby Atmos, you have a 10-channel bed. That’s a static mix that you’re spreading around the space. In the 7.1.2 system, you can spread it around the front, side and rear speakers. In theory, you can have as many speakers as you like in your space, and then the Dolby Atmos decoder works out what you’ve got and spreads the mix through the speakers that are physically in the room.
The Dolby renderer gives you up to 128 objects, so you can have your bed, where you perhaps have your bass and drums, and then you’ve got plenty of objects that you can place anywhere within that room. You come up with an idea and you try and fly things, perhaps from the rear to the front and vice versa. Some things work, some things don’t – it’s very experimental.
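Dolby’s actual renderer is proprietary, but the idea of flying an object from the rear of the room to the front can be sketched as a constant-power crossfade. This is purely a toy two-zone illustration, not how the renderer itself works – a real Atmos renderer positions objects in 3D against whatever speaker layout is physically present:

```python
import math

def fly_gains(position):
    """Constant-power crossfade for an object moving rear (0.0) -> front (1.0).

    Toy illustration only: a real Atmos renderer maps objects in 3D space
    onto however many speakers are actually in the room.
    """
    angle = position * math.pi / 2
    return math.sin(angle), math.cos(angle)  # (front gain, rear gain)

# Sweep an object from the back of the room to the front in five steps.
for step in range(5):
    front, rear = fly_gains(step / 4)
    # Constant-power law: total energy stays the same at every point.
    assert abs(front ** 2 + rear ** 2 - 1.0) < 1e-9
```

The constant-power law is what keeps the object sounding equally loud as it travels, rather than dipping in the middle of the move.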
But for a lot of the pop music that I’ve been doing, it sounds great. You can have things building from the side, from the back, moving to the front, big percussive crashes that spread all the way over your head to the back of the room.
There’s so much opportunity. I think once people start to realise what you can do, they may think differently about how they approach writing.
Zack: What steps do you take to turn a raw vocal into a finished one? How much processing goes into a basic lead vocal track to make it a clean, polished vocal that sits well in a mix?
LJ: Firstly, it depends on your singer. It depends on how good the recording is and how well delivered the singing is. I like to listen to the vocal with the track playing pretty much all the time. So when you start mixing, it’s always important to have that vocal in. The vocal is, obviously, the most important thing.
With processing, you start with tuning. Quite often, there’s quite a bit of tuning that goes on. And obviously, you have to keep listening against the track to make sure that everything’s right. Cutting out low frequencies, clearing out low-end mud, is very important.
To start with, it’s just clearing the low end, making sure that you cut everything that you don’t need. I do that with a high-pass filter. I think it’s always best to go a little too far and then to nudge it back. That’s always something you can adjust at a later stage. If you go too far, it’s obvious straight away and you can then pull it back.
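The high-pass step can be sketched with a simple one-pole filter. This is a minimal illustration of the principle, not what a DAW filter does – a session filter would be steeper and fully adjustable:

```python
import math

def high_pass(samples, cutoff_hz, sample_rate=44100):
    """One-pole high-pass filter: attenuates content below cutoff_hz.

    Minimal sketch only; a DAW high-pass offers steeper slopes.
    """
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out, prev_in, prev_out = [], 0.0, 0.0
    for x in samples:
        y = alpha * (prev_out + x - prev_in)  # passes change, blocks DC
        out.append(y)
        prev_in, prev_out = x, y
    return out

# A constant offset (0 Hz "rumble") is almost entirely removed once settled.
rumble = high_pass([1.0] * 2000, cutoff_hz=100)
```

Nudging the cutoff, as Jones describes, is just a matter of raising `cutoff_hz` until the vocal thins audibly, then backing it off.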
De-essing is next, and that’s important, too. That’s getting rid of all that sibilance. There are some great plug-ins out there nowadays that do this very, very well.
And then the next process would be compression. Sometimes, here, it’s very much a case of the two-compressor approach we were talking about earlier. It’s quite nice to use one compressor to control peaks and level things out so that your vocal is nicely even. But you can also do quite a bit of that these days in the DAW with Clip Gain. That’s often a handy little tool, using Clip Gain in conjunction with the compressor.
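The peak-control idea can be sketched as a static compressor curve, with clip gain as a simple pre-trim. The threshold and ratio here are arbitrary example values, not Jones’s settings:

```python
def compress_db(level_db, threshold_db=-18.0, ratio=4.0):
    """Static compressor curve: above the threshold, output rises at 1/ratio.

    Example values only; real compressors also have attack/release behaviour.
    """
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# Clip gain first: trim a hot phrase before the compressor sees it,
# so the compressor works less hard and the vocal comes out more even.
hot_phrase_db = -6.0
trimmed = hot_phrase_db + (-6.0)       # clip gain of -6 dB on the hot phrase
alone = compress_db(hot_phrase_db)     # compressor alone: -15.0 dB out
with_trim = compress_db(trimmed)       # trimmed, then compressed: -16.5 dB out
```

Because the trimmed phrase sits closer to the threshold, the compressor applies less gain reduction to it, which is the “handy little tool” effect Jones describes.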
And then maybe EQ next. Maybe cut out some of those mid-high frequencies which you feel are a bit harsh. Sometimes I like to turn the vocal up briefly, so you can work out what sticks out, what sounds harsh on the ear, and cut some of those frequencies.
And then perhaps use another compressor just to bring the level up and to make it sit nicely at the front of your track.
Then, obviously, delays and reverb are very important for helping it to bed into the track. And that, again, is always an experimental process using slight delays or maybe longer delays which you then feed into the reverb as well. It’s very track dependent.
Read all the instalments of Ask Abbey Road here.