Chanel Summers Interview – Game On!


From being part of the team that developed the Xbox to lecturing around the world on video game sound design, Chanel Summers is at the vanguard of pushing interactive audio and game sound as an art-form. Andy Price asks her more about the complexities and potential of this ever-expanding medium.

Can you tell us how you got started in the world of audio production?
Well first of all, prior to starting my audio production company Syndicate 17, I spent half of my professional career in the video game industry. I started off as a designer and producer, working on everything from 3D vehicle simulations, to platform games, to hardware peripherals while working at companies such as Mindscape, Velocity and Mattel Media.

Then I was recruited by Microsoft when they were just starting to push into video games. After I helped ship their first massively multiplayer online game, I transitioned into the Windows side of the house as the company’s first ever developer evangelist for audio technologies, where I launched innovative interactive audio technologies such as DirectMusic and DirectSound 3D and dramatically increased the use of Windows as a platform for audio creation, at a time when the Mac really ruled the audio world. That was the beginning of my journey into audio production.

I was a very early member of the original Xbox development team. I helped develop the specifications for the console’s groundbreaking audio subsystem, and then created the industry’s first support team for content creators – aimed at sound designers, composers, game designers and graphic artists – to assist them in taking full advantage of the Xbox’s capabilities.

So I was doing a lot of work with others in a kind of advisory and educational capacity around the world to make their audio and content in their products better – which I absolutely love to do. But I wanted to take my knowledge and experience of dynamic, adaptive audio, sound design, music composition and songwriting, and create and produce audio for projects myself. So after I left Microsoft I worked on more direct applications of audio, first as a recording and touring drummer and then at my own company, Syndicate 17.

The latest incarnation of Microsoft’s hugely successful Xbox – originally conceived by Chanel and her team.

Can you tell us more about the work you do at Syndicate 17?
Syndicate 17 is my audio production company, based in Seattle and Los Angeles. We specialize in writing and producing original scores, cues and sound effects for everything from films and television shows to video games and websites and even video art installations and pop songs for recording artists. I think really the fact that I’ve worked in so many diverse areas of both video game and audio production has given me an understanding of all sides of most issues that I encounter on a daily basis.

All these experiences, working in these different capacities with a diverse group of people, can’t help but skew one’s perspective and certainly forced me to develop some unorthodox skills and work systems. I always say that having a narrow focus isn’t good for anybody in this industry!

Interactive audio is a fascinating area, how does approaching designing interactive audio differ from making non-interactive audio, what considerations do you have to take on board?
Well there is a little bit of a paradigm shift as you’re working with non-linear forms in the gaming world. Because games are non-linear, the audio too should be non-linear. But in a lot of ways, the game audio production process does resemble the film audio process. There are similar recording techniques for live sounds and Foley and there are similar techniques for spotting – however in games this is really more event, dynamics, and emotion mapping, and many of the same tools are used in terms of recording and software.

However there are also significant differences in the processes. In a linear medium like film there are a few things that define the parameters of audio design – a movie contains a fixed set of actions, each scene is perfectly planned and the sound designer knows exactly what to create and where to take the audience emotionally, because it’s all scripted (the timing and progression of events are known beforehand). Therefore sound can be really tailored with precision. It’s a passive form of entertainment. One of the biggest challenges in game audio design is that unlike other media, the player controls the pace and timing of how the narrative unfolds. The player controls the action and determines what’s going to happen.

Emotional moments arise as a result of user actions, so game audio really needs to be able to adjust as players navigate through a game environment regardless of what order they do it in – and without bringing any undue attention to itself.

Unlike in a linear medium, the game cannot know with exact precision the acoustical properties of the environment or how the player’s perspective changes the sound until the sound is actually played within the game. Similarly, non-linear musical scores need to smoothly change intensities and musical styles without the advance precise knowledge of when those transitions will take place, but still those transitions must happen in a musically satisfying way that does not bring any undue attention to itself.

At the same time, since the overall length of gameplay can be indeterminate, it has to survive high levels of repetitiveness in order to combat listener fatigue. It really needs to adapt and evolve based upon player actions, to continue to stay fresh and new. So, how you prepare and create your content for adaptability and variability is a very different process.

So would you be working on the audio concurrently with the development of the visuals and overall game design?
Ideally, yes. The really critically successful games like Red Dead Redemption, Journey, and The Last of Us for example, involved the audio team right in the beginning of the game production process. But this isn’t always the case with games. For many developers, audio will take a backseat to art, game design and other disciplines. And there is an attitude among many game designers and producers that sound’s job is to work miracles quickly and cheaply at the end of production. Sound is sadly an afterthought.

The Last of Us had an audio team working on the game right from the beginning of production

Really the time for a game designer to begin thinking about audio is the moment the initial design process begins. As you begin envisioning the style of gameplay and the environments in which that game will take place, you need to think about the aural aspects of the game, too. Frankly, audio can even drive a game design just as often as the other way around! Excellent content and game play means graphics and physics and immersion and sound and music and art and story. It’s all of it. So, gameplay must be designed for sound from the ground up! A great game experience is one that will totally immerse the player and one that starts with the involvement of sound planning at the very beginning of the project. With a healthy collaboration between game designer and sound designer, audio can further the artistic aims of the game, helping to establish and reinforce the core meaning and experience.

So how would you technically construct dynamic audio, and what software do you primarily use?
Well, dynamic audio typically involves some combination of manipulated sound – varying pitch, time and volume randomly and in real time through filters or other effects – and layered audio content that is authored and kept in small “building blocks” until runtime, when the blocks are assembled like auditory LEGOs, mixed, and modified according to game events. Or, in the case of procedural audio, it involves working with algorithms, behaviors and models. In addition to recombining sounds and layers to generate new variations, you can also set markers within the audio files so that you have multiple possible start points. This way, you don’t have to split your file into small chunks.
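
As a rough illustration of that building-block idea – not any actual engine’s API, just a hypothetical Python sketch with invented names like `SoundBlock` – a single sound asset might carry marker-based start points and randomized pitch and gain per playback:

```python
import random

# Hypothetical sketch: one "building block" sound whose playback
# parameters are randomized per trigger, the way a game audio engine
# might vary a repeated sound to combat listener fatigue.
class SoundBlock:
    def __init__(self, name, markers=(0.0,)):
        self.name = name
        self.markers = markers  # possible start offsets (seconds) in one file

    def play_instance(self, rng):
        """Return randomized playback settings for one instance."""
        return {
            "sound": self.name,
            "start": rng.choice(self.markers),  # marker-based start point
            "pitch": rng.uniform(0.95, 1.05),   # +/-5% pitch variation
            "gain": rng.uniform(0.8, 1.0),      # volume variation
        }

rng = random.Random(17)
footstep = SoundBlock("footstep_gravel", markers=(0.0, 0.4, 0.9))
print(footstep.play_instance(rng))
```

Every trigger then yields a slightly different instance of the same file, without splitting the file into separate assets.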

So, we first need to determine all the actions and events that require sound or a change in sound state within the game. These actions are often called “triggers”, “cues” or “events”. We will then create a parameterized sound for each action – a sound with parameters or instructions that affect its playback.

For example, not just a punch, but how hard the punch is. The game constantly provides the audio engine with triggers – basically, segments of programming that tell the audio engine when and how to play a sound. The game engine then triggers these parameterized sounds based on events in the game at non-specified times. In essence, game sound design often consists of a collection of sound files and the associated rules for when and how they should be played or, in the case of procedural audio and physical models, an algorithm that does the same.
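
The punch example could be sketched like this – a purely illustrative Python fragment, with the sample names and the `resolve_punch` helper invented for the illustration, not taken from any real engine:

```python
# Hypothetical sketch: mapping a game event ("trigger") to a
# parameterized sound. The event carries a strength parameter that
# selects the sample and scales the playback gain.
def resolve_punch(strength):
    """strength in [0.0, 1.0] -> (sample name, playback gain)."""
    if strength < 0.33:
        sample = "punch_soft"
    elif strength < 0.66:
        sample = "punch_medium"
    else:
        sample = "punch_hard"
    gain = 0.5 + 0.5 * strength  # harder punches play louder
    return sample, gain

# The game engine fires triggers at non-specified times:
for s in (0.1, 0.5, 0.95):
    print(resolve_punch(s))
```

The rules live with the sound, so the same trigger produces appropriately different results each time it fires.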

Tools for the creation and implementation of interactive dynamic audio range from proprietary, in-house applications to those developed by 3rd party middleware companies. Audiokinetic’s Wwise, Firelight Technology’s FMOD, and RAD Game Tools’ Miles Sound Engine allow audio creators to define sound behaviors within the game, build interactive music structures, and edit and mix in real-time.

The interface of Firelight Technology’s FMOD

So there’s a sense in which ‘the gamer’ is really a part of the composition process, they are directing the course of the soundtrack’s dynamics via their actions?
Yes, we’ve made these little intelligent chunks, and the game engine sends messages down to the audio engine, which reassembles the chunks and builds up mixes based on what the player is doing, where they are or what they’re about to do. So we technically have to determine what events require sound triggering and block-building. These triggers, cues and events each have what’s called a parameterized sound – which is really just a sound with parameters. So, in terms of sound effects, it’s not just a ‘sword hit’: the engine works out how hard the sword hit was and plays the suitable sound effect. The game is constantly providing the audio engine with triggers. The important thing with game audio now is that rather than just playing one .wav file, you’re triggering a system that contains multiple .wav files.

When thinking about music in particular, you need to ascertain the various moods, intensities and themes that need to be represented within the game. After figuring out the various moods, you’ll create pre-composed musical “chunks” that can adjust relatively quickly to changes in the game’s mood or actions. So in a sense, you are arranging the music intelligently on the fly. Where it gets really tricky is the transitions. You need to ensure compatibility between the many different musical pieces which could play at any time, so any change in the soundtrack must blend with any other music cue at any time. That’s where it gets hard. There are many different ways of doing it – vertical re-orchestration is one method and horizontal re-sequencing is another.

The game will then piece these chunks together based on what’s going on. With horizontal re-sequencing, the composer creates multiple, small chunks and phrases which can intelligently and musically connect and interchange with one another depending on the game state. You can also generate new variations by recombining these chunks and changing the order in which these chunks are chained together.
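
A minimal, hypothetical sketch of horizontal re-sequencing might look like this in Python – the chunk names and the transition table are invented for illustration; real middleware expresses the same idea through its own transition rules:

```python
import random

# Hypothetical sketch of horizontal re-sequencing: small pre-composed
# chunks, each tagged with the mood it fits and the chunks it can
# musically lead into. The sequencer chains them based on game state.
CHUNKS = {
    "calm_a":   {"mood": "calm",   "next": ["calm_b", "bridge"]},
    "calm_b":   {"mood": "calm",   "next": ["calm_a", "bridge"]},
    "bridge":   {"mood": "any",    "next": ["combat_a", "calm_a"]},
    "combat_a": {"mood": "combat", "next": ["combat_b", "bridge"]},
    "combat_b": {"mood": "combat", "next": ["combat_a", "bridge"]},
}

def next_chunk(current, target_mood, rng):
    """Choose a musically legal successor, preferring the target mood."""
    candidates = CHUNKS[current]["next"]
    preferred = [c for c in candidates
                 if CHUNKS[c]["mood"] in (target_mood, "any")]
    return rng.choice(preferred or candidates)

rng = random.Random(0)
sequence = ["calm_a"]
for mood in ["calm", "combat", "combat", "calm"]:
    sequence.append(next_chunk(sequence[-1], mood, rng))
print(sequence)
```

Because every chunk only ever follows one of its listed successors, the chain always stays “musically legal” even though the order is decided at runtime by the game state.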

The challenge with this method is that the musical chunks need to be composed and scripted in such a way as to allow the music to transition intelligently from one theme to the next while still sounding musical. The technique of vertical re-orchestration, by contrast, manipulates the mix of a single ongoing piece of music (either a long cue or a loop) to match a player’s activities within the game. So, the same piece of music is used as layers of instruments are added and disabled to suit the various game states (effectively, changing the arrangement).

Say you are in a game, walking over some virtual terrain, and everything is fine – there’s nothing really going on, so the music is at a very low intensity level. Very ambient and exploratory. Then maybe some creatures appear, so a layer of sound with more intensity and tension would start to play – perhaps something more percussive. More of these layers will be intelligently added based on what the in-game events are saying to the audio engine.

But as you defeat these creatures, the layers start to be pulled back and stripped away until it’s back down to the ambient level, similar to the way a recording engineer manipulates a mixing console on a linear production. But in our case this is performed within the game engine in real-time based on where the player is at the time or what the player is about to encounter.
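
Vertical re-orchestration can be sketched in the same spirit – an illustrative Python fragment with invented layer names and thresholds, where a single intensity value decides which layers of the ongoing cue are audible:

```python
# Hypothetical sketch of vertical re-orchestration: one ongoing cue
# whose instrument layers fade in and out with a game "intensity"
# value, much like pushing faders on a mixing console in real time.
LAYERS = [
    ("ambient_pad",  0.0),   # always playing
    ("percussion",   0.4),   # enters as tension rises
    ("low_brass",    0.7),   # combat proper
    ("full_strings", 0.9),   # climax
]

def active_mix(intensity):
    """Return the layers (with gains) audible at this intensity."""
    mix = {}
    for name, threshold in LAYERS:
        if intensity >= threshold:
            if threshold > 0:
                # fade each layer in over the 0.1 of intensity above its threshold
                gain = min(1.0, (intensity - threshold) / 0.1)
            else:
                gain = 1.0
            mix[name] = gain
    return mix

print(active_mix(0.2))   # exploring: pad only
print(active_mix(0.75))  # creatures appear: pad + percussion + brass
```

As the intensity value falls back after combat, layers drop out of the returned mix in reverse, which is the “stripping away” described above.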

Creatively, you can also use this to deceive the player: by playing high-intensity audio you can manipulate players’ perceptions and trick them into expecting danger or conflict when in fact there’s nothing there at all, leaving them on edge and off-balance. It’s a good way to tinker with player psychology. That’s what good games do.

Are there any particular games out there that stand out to you as having exemplary audio and good examples of this dynamic aural versatility?
One of the best games out there for audio right now is a game called Limbo by Playdead Games. It’s a 2D side-scrolling, monochrome game. It’s about a boy searching for his lost sister in a dark forest. The game is just unbelievable – the focus is on minimalism with an emphasis on silence and subtlety. The audio designer is a man called Martin Stig Andersen and he basically creates this experience where the “music” emerges from the sounds of the environment.

Rather than relying on a traditional music soundtrack, which could be seen as overtly manipulative, the game instead uses sound effects for their “musical” qualities. The mix is driven by the subjective experience of the character, emphasizing the sounds of upcoming obstacles and environments even before they’re visually revealed. Conversely, as the player passes certain objects they may be silenced entirely, even though they may still be on-screen, if those objects are no longer “important” to the gameplay. It’s non-linear. It’s dynamic.

The sounds are also created using electro-acoustic principles. They use the concept of “ambiguity” which is a great aesthetic principle that games do not normally utilise. Or perhaps I should say that games don’t utilize ambiguity in the way that, say, painters or sculptors do. Great artists use ambiguity extensively, their works are open for interpretation. And Limbo purposely wanted to use that device. They were careful not to manipulate the player. They wanted to use sounds that didn’t have a strong identity or association to something.

Playdead Games’ Limbo is, in Chanel’s opinion “one of the best games out there for audio right now”

Also I have to mention a really fantastic movement that’s taking place where developers have started making games where sound is the game. They are audio-centric games – games with audio-driven implementations, sonic adventures and augmented audio experiences that tell their story through sound. You “see through your ears.” There are a few great examples out there. Papa Sangre is a sonic adventure/horror game which utilizes binaural audio. You have your headphones in and you listen to the audio cues to help you navigate through the adventure. Enemies will growl and snarl and chase you. And obviously you’ll want to avoid them.

Movement is controlled by the touchscreen, where you take steps forward and then turn to face the sounds. Another one is called Blindside. It won the Most Innovative award at the Games for Change Festival.

Similarly, another game called Dimensions from RjDj is an alternate reality soundscape game that uses augmented sound to turn the world around you into an adventure game. Dimensions is played with headphones and sounds from your actual world are picked up by the iPhone’s microphone and then enhanced and manipulated and integrated into the story of the game. It’s super trippy. I played it around my Mom’s dogs and thought my head was going to explode.

And finally, there’s a game called Pugs Luv Beats, which is from an Edinburgh-based developer named Lucky Frame. This game marries elements of resource-gathering games with the grid-based music sequencing interface familiar to most musicians. This game turns arcade-style rhythm games on their heads! This game basically combines traditional gameplay with traditional music composition techniques to create something completely unique.

All of the game audio is generated by the player’s actions—you create music as you guide Pugs around the game collecting Beats, making increasingly complex melodies as you progress. Essentially, you are the composer and the pugs are the instruments. As the pugs run across the screen, they trigger sounds, which differ based on the type of terrain the pug lands on. And the sound is also affected by whether the pug has been guided to a tile by the player or if they are passing over that tile en route to a different destination.

As the player progresses through the game, they travel to different levels (or “planets”), each with its own different composition template and sonic attributes. And giving the pugs costumes will help them to tackle various terrains and give them different synth voices. Pretty bizarre, but highly creative game!

Lucky Frame’s Pugs Luv Beats has a familiar grid-based sequencer interface

Do you think Video Games – as an ever-growing and ever-expanding medium will eventually be perceived in the public consciousness in a similar state of reverence that great art, literature and filmmaking is?
This subject is particularly near and dear to my heart, as I do believe video games represent a new way of interacting with an audience, with the potential to do so in a much more meaningful and intimate way than any other medium has managed so far. I think games, and game audio in particular, should aspire to far greater artistic goals than any other medium. Having said that, right now I do feel that game audio trails other art forms when it comes to artistic expression.

Game audio can and should aspire to delivering more impactful experiences than any other medium that’s come before. Now, game audio has come a long way over the last 10 – 15 years when measured by quality of execution and advancements in technology – it’s no longer little bleeps and bloops and synthetic musical melodies.

We’ve got all of these wonderful tools and technologies out there, such as procedural audio, real-time effects processing, and dynamic, real-time mixing. You can place sounds anywhere in 3D space. There are games supporting surround sound technologies. Composers are recording epic scores with live, large orchestras at sample rates comparable to film sound. So, despite all our technical advancements and all this cool sounding stuff, game audio today is as technically well-executed as a feature film, but I sincerely believe that game audio hasn’t even begun to scratch the surface of what’s possible artistically.

But there are some people who are really pushing it and going above and beyond to create artistic experiences with the audio in their games – the aforementioned Limbo is a prime example, and I’m certain that over the next few years successful game creators will push into new musical and sonic territories and drive deeper emotional resonance into their creations by beginning to focus on audio aesthetics, just as they have done in recent years by adopting some of the principles of visual aesthetics. This is one of the things I lecture about quite frequently.

So you’re very much at the vanguard of pushing ‘game’ and more specifically game audio as a high-art concept – pushing the possibilities and the potential of the medium around the world.
That is my mission – that is absolutely my mission. Talking about how we can create aesthetic and artistic audio. It’s about experimentation and exploring unorthodox paths in order to create truly remarkable works of art. As in any art form, real innovation in game audio doesn’t come from technology, but rather from the creativity and experimentation of the artists who wield it. Employing proper aesthetic principles to drive the latest game audio specific tools, technologies, and techniques will enable game creators to push audio, and games themselves, forward in an emotionally impactful way.

At its most basic level, audio aesthetics is about teaching someone the key principles and technologies that will enable them to process, mix and control sound for aesthetic effect in order to craft the story elements of a game, control the pacing of gameplay, enforce the gameplay narrative, elicit and influence emotion, create mood, shape perception and reinforce the way that players experience game characters. But really, I am trying to teach them to become artists.

Chanel is also a recording and touring drummer

You’re part of the team that developed the Xbox. How do you reflect on your time working on this iconic console?
I reflect back on those times really fondly – they were amazing and wild times for sure. We were really making something historic. It’s not every day that someone creates a console, let alone a really successful one. The audio subsystem was way ahead of its time. We saw the Xbox as a machine created by artists for artists. We wanted to have the best quality games.

One of the things I did that had never been done before was to create a support team for content creators – we wanted to be able to help the content creators be strong, really take advantage of the technology, be super-creative and successful, and ultimately usher in the next generation of gaming. We’d help sound designers, composers, artists and anyone else who needed our expertise. It felt a little bit like the wild west at that time. I really had a blast!

So at the nucleus of the Xbox’s creation there was a manifesto of progress and pushing the envelope?
Oh yes! We saw it as a platform for artists to express themselves primarily. We strived for the highest level of creativity. When I worked on Xbox, I was helping various game development teams to make sure that they took advantage of what could be done on this amazing new console. I created the game industry’s first support team to assist sound designers, musicians and graphic artists in taking full advantage of the Xbox’s capabilities.

Thanks for talking to us Chanel!

Check out the latest issue of MusicTech Magazine for more info on Chanel and video game audio



© 2024 MusicTech is part of NME Networks.