When mixing music on a computer you have a ton of advanced tools and processing techniques at your disposal. With orchestral music, though, you need to tread carefully. If you have some experience mixing other types of music like rock, you probably know it’s par for the course to rely on EQ and compression to sculpt the sound you want. Instruments in a rock mix are constantly battling with each other, and it’s common practice to use strategic EQ cuts to make them co-exist without clashing, as well as generous servings of compression to bring out nuances and add punch. Just like with reverb, though, you need to let go of the notion that this is an approach that can and should be applied to all kinds of music.
If you’ve done your homework you should be able to get a reasonably convincing orchestral mix from the following ingredients and nothing more: arrangement, samples, panning, levels and reverb.
Meaning, an arrangement that makes sense, samples that do their job, panning that places things from left to right, levels that are balanced, and reverb that adds depth and makes things gel. If you find that you’re not getting the results you hoped for from these five basic tools, you need to go back and find the weak link(s) before going any further. Additional audio processing isn’t going to help in a major way (or at all), so don’t even go there until you have ruled out the more likely culprits.
Unless you hear something glaringly wrong with your mix, don’t do anything at all. Trust me. Just don’t. I know it’s fun to muck around with plugins, and experimentation is great for learning. But the simple fact is that our ears tend to perceive “different” as better, and it’s far too easy to overdo it. Nine times out of ten you will revert to the old unprocessed version once the novelty wears off, so resist the temptation to add various processing “just because”, or to try to fix problems that you’re not totally clear on.
Equalization in an orchestral scenario should always be about corrective surgical edits. Like, say, removing rumble from sections that have no musical content in the low register and thus no business rumbling at all. Or adding a slight high end boost to a section that needs a little more bite and presence. If you’re like me and use a multitude of different orchestral libraries in combination, EQ can be very useful — necessary even — for making stuff fit more seamlessly together.
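To make the rumble-removal idea concrete, here’s a minimal corrective high-pass sketch in Python. It’s a simple one-pole design for illustration only — real EQ plugins use steeper, more sophisticated filters — and the `highpass` function name and its parameters are made up for this example:

```python
import math

def highpass(samples, sample_rate=48000.0, cutoff_hz=40.0):
    """One-pole high-pass filter (illustrative only): attenuates
    content below cutoff_hz using y[n] = a * (y[n-1] + x[n] - x[n-1])."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

# A constant (DC, i.e. 0 Hz) "rumble" decays toward zero after filtering:
dc = [1.0] * 1000
filtered = highpass(dc)
print(abs(filtered[-1]))  # close to zero
```

The point of the sketch is only that a high-pass filter leaves the musical range alone while draining away energy below the cutoff; where you set that cutoff depends entirely on the instrument.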
Let’s say you have two different violin libraries that you’d like to use in the same arrangement. A is warm-sounding, B is more trebly and thin. Combining articulations from A violins and B violins will be difficult without boosting the highs of one or damping the highs of the other. Or whatever frequency really; a good spectrum analyzer can be handy for determining exactly what is making the two libraries sound so different, and for pinpointing problematic frequencies.
A little goes a long way though. Unless you’re for some reason going for a weird, synthetic sound, you should avoid any major sculpting of an instrument’s frequency response. Your samples are what they are. Getting creative with an EQ will not magically transform them into something new and different. They will just sound unnatural and over-processed.
As for when, why and how to use corrective EQ, it all depends on the source material. It’s impossible to give any specific suggestions on which frequencies to look for and by what amounts they should be adjusted without hearing the samples in question and knowing what you’re after. If you’re looking for more general tips on how to use equalization, there are plenty of guides available on the internet.
Compression should be used with even more caution than EQ. In fact I’m going to go out on a limb here and say that unless you have some experience with compressors and a good grasp of how they work — as well as a very clear idea of why you would want to use one — spare yourself a lot of headaches and don’t use them at all. Don’t get me wrong here, a compressor is a wonderful tool. But like all other tools you need to learn how to use it and when to use it. An orchestral arrangement is not the place to learn the intricacies of compression, so practice on a lot of other material before even thinking about inserting a compressor into your orchestral project. And even then, tread carefully.
One of the defining characteristics of orchestral music is its huge dynamic range. When you start tampering with this dynamic range by artificial means, you’re on thin ice. Artificial being the operative word here. A compressor limits dynamic range by pushing the input signal down whenever it exceeds a certain threshold value. How hard it pushes depends on the compression ratio. For example, if the threshold is set to -20dB and the ratio to 2:1, every 2dB the signal rises above -20dB comes out as just 1dB above the threshold, so a peak at -16dB would be reduced to -18dB. There is more to it than this obviously (attack, release, knee and so on), but that’s basically how compression works. If you’ve spent a lot of time trying to make your virtual orchestration sound natural and somewhat realistic, you might understand that this is not an optimal way of dealing with dynamics.
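The threshold/ratio arithmetic above can be sketched in a few lines of Python. This is a bare-bones, hard-knee gain computation on dB values — no attack, release or knee — and the `compress_db` helper is a hypothetical name invented for this example:

```python
def compress_db(level_db, threshold_db=-20.0, ratio=2.0):
    """Hard-knee downward compression of a level given in dB.

    Below the threshold the signal passes unchanged; above it,
    `ratio` dB of input overshoot become 1 dB of output overshoot.
    """
    if level_db <= threshold_db:
        return level_db
    overshoot = level_db - threshold_db
    return threshold_db + overshoot / ratio

# A peak 2 dB over a -20 dB threshold comes out 1 dB over it:
print(compress_db(-18.0))  # -19.0
print(compress_db(-25.0))  # -25.0 (below threshold, untouched)
```

Notice how mechanical this is: the compressor knows nothing about phrasing or musical intent, which is exactly why it flattens carefully sculpted orchestral dynamics.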
First of all, we’re working with MIDI here, right? You have some really fine control over every note of every instrument playing, so that should be your first resort. If a part is too soft or loud, use velocity and/or Expression to bring it up or down to the desired level. If you don’t want to alter the part in any way (after all, changing note velocities WILL affect the expressiveness of the performance), then use velocity and/or Expression to adjust the other parts playing. If none of these options seem to do the trick, a compressor isn’t going to help either. You need to have a look at the arrangement itself and mixer channel levels to determine why the part in question sounds wrong.
Another scenario where one might think a compressor could potentially be the solution is samples with unbalanced velocity layers. You know, the ones where lower velocities are very soft, and playing just a little harder makes the sound jump up into a higher velocity layer that is REALLY loud. Yes, a compressor might just help a bit, but this is usually an issue with the velocity curve of your sampler and keyboard, and with the levels of the velocity layers themselves. A compressor would just be a band-aid type of solution, and you should really go to the root of the problem.*
In a nutshell, stereo imaging (sometimes spatial imaging) alters a stereo signal in different ways, allowing you to make it wider or narrower, louder in the middle than at the sides, or vice versa. This is done through various techniques such as the Haas effect, M/S processing and flipping the phase relation between left and right channels.
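Of those techniques, M/S (mid/side) processing is the easiest to illustrate. Here’s a minimal Python sketch of a width control based on the usual mid/side decomposition; the `adjust_width` function and its parameters are invented for this example:

```python
def adjust_width(left, right, width=1.0):
    """Mid/side width control: width=0 collapses to mono,
    1.0 leaves the signal untouched, >1.0 widens it."""
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = (l + r) / 2.0    # what both channels share
        side = (l - r) / 2.0   # what differs between them
        side *= width          # scale only the stereo content
        out_l.append(mid + side)
        out_r.append(mid - side)
    return out_l, out_r

# width=0 throws away the side signal, leaving identical (mono) channels:
l, r = adjust_width([1.0, 0.5], [0.0, 0.5], width=0.0)
print(l, r)  # [0.5, 0.5] [0.5, 0.5]
```

The decomposition also shows where the danger lies: cranking `width` up amplifies precisely the part of the signal that disappears when the channels are summed to mono.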
Just like compression, this is a type of processing that is deceptively easy to overuse. It sounds cool at first, applying it in heaps on the master mix and getting that super-wide, otherworldly sound. But it also comes with a lot of unwanted side effects that may not be immediately apparent. If you find that your mix has sort of turned inside out and the first violins are playing somewhere behind your left shoulder, you’ve probably overdone it.
Stereo imaging can be either a subtractive or an additive effect, and as a rule of thumb the former is usually quite safe while the latter should be used with caution. As mentioned elsewhere, making sections or solo instruments narrower (mono, even) is sometimes crucial for making things sit well together in a mix. Additive stereo imaging, on the other hand, is never a crucial form of processing. It’s more of an aesthetic thing, a final touch, best served in very small portions.
Adding a bit of width to a section that sounds too narrow: that’s usually fine. Widening a send reverb (or group of send reverbs) by a small amount: sure, as long as you make sure the imaging plugin doesn’t do weird things with the panning of the wet signal. Adding it as a global effect to the master output: no. If you’ve done everything else right, this would be totally superfluous. It takes very little to give your mix an unnaturally hollow and smeary sound, so if you want more overall width, adjust the panning instead.
There’s an even bigger gotcha when it comes to stereo imaging, and that is mono compatibility. In the past I’ve sort of shrugged off checking my tracks for mono compatibility, as I mostly make game music and my tracks will in almost all cases be played back on stereo-capable systems. There aren’t a whole lot of mono computer speakers, laptop speakers or headphones around, you know?
With the smart phone and tablet boom, however, I’ve been forced to rethink this. My tablet has only a single tiny speaker on the back. My smart phone outputs audio through both front and back speakers, but I’m far from sure it’s in stereo (and even if it is, you can never get a proper stereo sound field from it as the speakers are pointing away from each other, and sound radically different).
Widening a signal can cause the left and right channels to drift out of phase with each other, which is usually fine in a stereo mix. But what happens when the playback system sums both channels to mono? The out-of-phase content cancels itself out. This might result in a thin, flangey sound, or even in the part in question disappearing completely (in audio lingo this is known as nulling).
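The cancellation is easy to demonstrate: take a signal whose right channel is a polarity-flipped copy of the left (the extreme case of being out of phase) and sum the two to mono. A small Python sketch, purely for illustration:

```python
import math

# A "widened" signal where the right channel is the polarity-flipped left:
n = 8
left = [math.sin(2 * math.pi * k / n) for k in range(n)]
right = [-s for s in left]  # fully out of phase with the left channel

# Mono playback sums (and here averages) the two channels:
mono = [(l + r) / 2.0 for l, r in zip(left, right)]
print(max(abs(s) for s in mono))  # 0.0 -- the part nulls completely
```

Real imaging plugins rarely flip polarity this completely, so in practice you get partial cancellation (the thin, flangey sound) rather than total silence — but the mechanism is the same.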
So, if you’re going to use any kind of stereo imaging in your mixes, make sure to check them for mono compatibility.
Other types of processing
As mentioned in the intro, there are countless plugins and techniques for processing audio out there, and the types listed above are just the three most potentially useful ones in an orchestral mix. We could talk about exciters, expanders, limiters, multiband compression, dynamic EQs, tube saturation, tape saturation, bit reduction, chorus, flanger, phaser, tremolo, pitch shifters and so on, but that would have me typing until the cows come home about stuff that is, when it comes down to it, very rarely useful for mixing this type of music.
Follow the checklist above, adhere to the KISS principle, and you will find that virtual orchestra mixing is more about learning to make the most of basic tools than adding layers upon layers of additional audio processing (and, in turn, unnecessary complexity).
* First off, try changing the velocity curve of the sample patch. Most if not all samplers let you set this on a per-channel basis. If this doesn’t help, or if it’s a global problem affecting all velocity-layered instruments you try, then try adjusting the velocity curve of your MIDI keyboard. Finally, if you’re still not getting anywhere, edit the sampler patch in question so that it has a more balanced response. This might involve changing at which velocity level layers are switched, increasing the volume of the soft layer(s), decreasing the volume of the loud layer(s), or all of the above. A compressor, however, is not necessary.
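To illustrate what a velocity-curve adjustment actually does, here’s a minimal Python sketch of the kind of power-curve remap a sampler or keyboard might apply internally. The `remap_velocity` helper and its `exponent` parameter are made up for this example:

```python
def remap_velocity(v, exponent=1.5):
    """Remap a MIDI velocity (1-127) through a power curve.

    exponent > 1 softens the response (you must play harder to reach
    the loud layers); exponent < 1 does the opposite.
    """
    x = v / 127.0
    return max(1, min(127, round(127 * x ** exponent)))

print(remap_velocity(64))   # 45 -- mid velocities are pulled down
print(remap_velocity(127))  # 127 -- the full range is preserved
```

Unlike a compressor, this reshapes which layer gets triggered and how loud it plays *before* any audio exists, so the samples themselves stay untouched.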
Thanks a lot for this post! It really helped me
You mentioned in another article something about a mix sounding fuzzy when many instruments are playing the same part, in regards to how orchestral music is arranged.
Maybe I’m stupid, but how much should overlap, and what shouldn’t? For instance, should the bass line and high melody overlap or play the same line of music, or be completely separate? Would it work to have low brass overlap with the same part played by violins (string instruments)? What instruments tend to complement each other instead of drowning out the other part? Should fast parts overlap or be mixed with slow parts such as straight chords? How would you go about overlapping melody parts with harmony parts? Does having both high and low pitch frequency parts compressed down to one part cause the sound to get fuzzier? Etc. Thanks
Funny how true that article is. I compose soundtracks for documentaries, and every time I am tempted to “improve” the tracks I end up with the unprocessed sound.
Al Toe, the answer to your question is: Steal. Listen to scores you like, analyze them! Ask yourself what you like about soundtracks and listen how THEY did it. In fact, any soundtrack tells you how things can work together. Adopt those techniques with your own melody.
Consider purchasing any classical orchestration treatise, such as those by Rimsky-Korsakov or Casella. You might also enjoy the book by Henry Mancini. Paul Gilreath also has a great book; however, it is specific to virtual instruments, and what you obviously need is general orchestration principles. I would go for Rimsky-Korsakov in your case.
You can of course analyze the works yourself. You will re-discover the wheel, however, and it will take you years instead of days.
Read those basic books first, listen to all the examples they quote (now it is soooooo easy), and analyze later on your own. This is my point of view. It doesn’t have to be yours.
My composition teacher at Florida State (many years ago) was Dr. John Boda, a brilliant musician and human being. He made all of his composition students study the works of what he called “the five greatest orchestrators in history” – Ravel, Berlioz, Wagner, Richard Strauss, and….. Rimsky-Korsakov!
Your post reminded me fondly of his teachings.
Thanks for very helpful information. I would make just this brief comment:
If used, the order in which various “effects” or “treatments” are applied is important, because they typically “add” to or “alter” what is already present. Most advice I have received regarding processing suggests taking a subtractive approach, particularly with EQ. If there’s too much treble, for example, don’t add bass. Instead, subtract treble, etc…. In other words, diagnose what’s wrong and try to adjust accordingly; …kind of like dieting….