Thursday, April 10, 2014

How To EQ – Mixing and Mastering - CDSoundMaster.com – 12 EQ Issues Part 18

Audio Recording Issues – Multiple Microphones Setup to Track the Same Instrument and Need the Right Balancing Together Part Two

Balancing the drums is about volume levels relative to the rest of the mix, but it is also about balancing the top and bottom head microphones of each drum if you are using both. You may also find that certain drums are captured more strongly in the overheads and room mics than others.
Once you are mixing and blending things together, you may find a more intricate process developing. Let's say that the direct mic blend for the snare is perfect when it reaches a certain volume, but it tends to have too much of a certain Frequency once you feed the overheads into the mix. Or, the complex stereo capture of the overheads is great for the whole set in the context of the mix, but it leans too heavily on certain Frequencies. The Equalization decisions may be very tricky. Each drum and each mic choice may have areas that take away from other instruments. Before you go carving away chunks of great tones, consider creating an order of preferences and levels of importance.
You may find that some Frequencies are best reduced from a track other than the drum or overheads, and in other situations you may find that some Frequencies appear to stand out because of positioning or even because of timing. Often, the small delays that build up from the distance between one mic and the next create Frequencies that are a direct result of that distance. It isn't always necessarily phase, as we usually call it. Sometimes it is more about what I call the reach of the microphone than about the timing element.
Let's pretend that I set up two identical mics to record a single drum. Both mics are set to omni. The first mic is placed one foot away from the drum and the second is ten feet behind mic number one, eleven feet from the drum. Once recorded, there is not only a difference in the sound of the instrument itself, but also in its effect on the walls around it, the ceiling above, and the floor beneath. With both mics in omni, they are each picking up a lot of information from their surroundings. The sound is going to travel that distance before it reaches the second mic, and in that time it has also resounded across the room. So there is the initial attack of the drum, no doubt captured with more intensity and clarity by the first mic, and there is the wash, or the reflections of the room, whether small, short, and tight or big, long, and open. The second mic will hear a residual attack very shortly after, with a different resonance and a larger blend of the room, along with the room sound that the hit creates and the blend of the room heard from a distance. We could align these two responses in time so that the later recording is brought forward to play at the same moment as the closer mic, but this is not always the best decision.
Sometimes, the mic choices lead to a phase correlation issue that is best resolved by aligning the mics in time. Other times, the distance itself is part of the sound we want to keep. Still other times, the range of Frequencies is complicated by the reach of the mic: the sound waves that develop over that distance are captured by the distant mic, not just a time-shifted copy of the close signal, so any change in distance and position can produce Frequencies that would not otherwise exist on their own. What we hear is a combination of the sound created and the room's reaction, but it is also the sum of what mic one captures and what mic two captures; that sum can cancel some Frequencies and boost others while still remaining largely accurate to the source and the distance. In this case, we can adjust timing, which can have a negative effect on the timing between other instruments or drums, or we can reduce the residual bumps in the recording from one mic or the other, or a little bit of both.
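If you want to put rough numbers on this, here is a minimal Python sketch (the 48 kHz sample rate is an assumption, and the distances come from the example above) that estimates the extra arrival time at the far mic and the first few comb-filter notches the summed pair can produce:

```python
# Minimal sketch: arrival-time difference and comb-filter notches for the
# two-mic example above (one mic at 1 ft, the other at 11 ft from the drum).
SPEED_OF_SOUND_FT_S = 1125.0   # approximate speed of sound in feet per second
SAMPLE_RATE = 48000            # assumed session sample rate

dist_close_ft = 1.0
dist_far_ft = 11.0

# Extra time the sound needs to reach the far mic
delay_s = (dist_far_ft - dist_close_ft) / SPEED_OF_SOUND_FT_S
delay_samples = delay_s * SAMPLE_RATE

# When the two signals are summed, cancellations (notches) appear at odd
# multiples of 1 / (2 * delay); reinforcements appear at multiples of 1 / delay.
first_notch_hz = 1.0 / (2.0 * delay_s)
notches = [first_notch_hz * (2 * k + 1) for k in range(4)]

print(f"Delay: {delay_s * 1000:.2f} ms ({delay_samples:.0f} samples at {SAMPLE_RATE} Hz)")
print("First comb-filter notches (Hz):", [round(n, 1) for n in notches])
```

These numbers only describe the idealized case of two identical copies of the same signal. Real mics also differ in level, tone, and the amount of room they hear, which is exactly why sliding the far mic forward in time does not always fix the sound.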
http://CDSoundMaster.com

Thursday, April 3, 2014

How To EQ – Mixing and Mastering - CDSoundMaster.com – 12 EQ Issues Part 17

Audio Recording Issues – Multiple Microphones Setup to Track the Same Instrument and Need the Right Balancing Together Part One


Learning how to Equalize Frequencies in a range of different contexts requires numerous skills and the ability to define one's style cohesively. There is a single process that serves as a testing ground for all of these abilities at once, and that is recording a single instrument with multiple microphones.
The drumset is the perfect instrument to serve as an example for this common situation. Live drums have been successfully captured as part of an ensemble of instruments in a pleasant room with a single mono microphone, with a perfectly positioned pair of matched stereo mics, and with a 3-mic array. All of these techniques have been used with remarkable success, and many engineers still use these purist approaches with incredible results. More often than not, though, we find a more comprehensive approach taking place. A project studio may use very inexpensive dynamic mics on the top heads only, with a pair of high quality but inexpensive cardioid condensers as overheads, and skip the room mics.
This same project studio may have an 8-input audio interface with decent built-in preamps. The high end studio aiming for the standard contemporary approach to recording a drumset may use similar dynamics on the toms and snare, on top and bottom heads, with a mic on either side of the kick drum and an expensive large diaphragm condenser a small distance from the kick, an expensive pencil condenser on the hi hat, a pair of room mics, and a pair of overheads, all running through boutique outboard preamps. Whether you are at the lowest budget production or at the world's finest facility, you are likely to face some of the same issues when it comes time to mix these drums together.
Let's fast forward and assume that we've tracked our drums and everything else in the song is ready to mix. Now, we've got to decide how this all comes together. Realize, it is one thing to set all of those mics up and compare levels, angles, positions, and everything else to get the best capture of the set. Now we have the challenge of deciding how loud the snare should be in comparison to the guitar and bass, and how much of that should come from the direct mic and how much from the overheads. Is the song calling for an open and ambient set or a tight, punchy, up-close set? Is the stereo spread and distance of the overheads consistent with the feel of the song, or is it too complex sounding? Is it best to use the direct mics and supplement them with the natural reverb of a room and/or the overheads, or vice versa? The approach to tracking and mixing largely defines how the performance should be brought together. In part two, we'll take a look at how EQ helps us with this process in multiple ways.
http://CDSoundMaster.com

Monday, March 31, 2014

How to Mix and Master with EQ – CDSoundMaster.com – 12 EQ Issues Part 16

Audio Recording Issues – Nice sound when tracking, but way too much "-----" when blending parts together, Part Two


So, what happens when we get too much overlap in a given range of Frequencies? Obviously, it sounds wrong. The overall volume of a mix is compromised because it has to make room for a lot of a certain Frequency range, so the other Frequencies end up too low in comparison. Some people attempt to rectify a good mix that is too heavy in a given range by squeezing it flat with compression. This results in one of several bad endings. The mix may distort in reaction to an abundance of spectral tones, or we may get pumping and breathing from Frequencies that were fine otherwise. The point is that if there is too much build-up in a certain range of Frequencies, it has to be dealt with before a good mix can happen. It is better to identify any conflicting tracks before mixing down, or the result will have to be dealt with at mastering, at which point other parts of the mix may be compromised unnecessarily.

Why does this build-up tend to happen? As I mentioned in the previous post, sometimes it is simply that multiple parts of the instrumentation or performance involve instruments or vocalists that sit in the same range as each other. It can also happen from using the same microphones and preamps over and over again. It is likely that some of your favorite “go-to” tools are not only the super-flat, super-precise ones; often they are chosen for their personality. “I love the ----- for bass and the ----- for vocals,” etc. These choices can create intentional boosts and cuts by preference, but come mix time, the same hills and valleys are already there. If we don't know when this is a good thing, we might end up doing some truly awful things to the mix.

Sometimes people have no reason to run into this issue until mix time, because their routine practice postpones it. Some people like to use high or low pass filters on every track. They say that it cleans things up and always leads to a better mix. I understand the logic, and people who are successful with it have their reasons. It is a well planned process of selecting certain Frequencies to cut out of each part of a mix so that there is plenty of space for each instrument when everything is blended together. However, this can also be the stage at which a person first notices the over-use of other Frequencies; there may have been better mixing options before removing the extreme ends of the spectrum, and now the tighter focus brings rough central Frequencies to the surface. I personally avoid using high/low pass filters on all tracks as a routine for one main reason: there is a lot of intricate timing information captured across all Frequencies. I tend to change or eliminate things only if I know I want the color, the musical result, or the surgical result, and only if it does not sacrifice details that give the brain lots of feedback about timing, placement, distance, and so on.

But, regardless of what got you there, we are talking about the situation where nothing was a mistake, but there is simply too much x, y, or z happening. So, the toughest question: “what do you get rid of?” You like the balance and you like the individual sounds. You don't really want to get rid of any of it, but the mix is simply too heavy on certain Frequencies. I recommend going to the center of the issue, which is to identify the bump.

“Huh? Identify the bump?” It is very likely that the overlap is happening with a complex texture that is not built up in one perfect slope across all of the exact same Frequencies. There are bound to be smaller patterns within the overall offending range. For instance, in the earlier example there was too much mid range coming from a lead vocalist, a guitar, and a tom tom section that all sounded awesome but shared lots of mids. I recommend listening to only these elements and following the rhythmic choices of the specific song. How often is each element playing in unison? Which of them is more prominent? Is it possible to turn an instrument track down to lessen the load, or does it need to be up in the mix? Can you carve only one small part out of each of them to make room for the others? Although I will go into this in more detail in a future post, what I recommend is to think of the range of Frequencies that you are dealing with as a mountain. We are looking for the molehills, or to put it more accurately, we want to dig little chunks out of the least important sections within that mountain range.
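If you like to confirm this sort of thing with your eyes as well as your ears, here is a small Python sketch of one way to see where the pile-up lives. It is only an illustration, not my own process: it assumes mono stems at the same sample rate, loaded elsewhere as NumPy arrays (the names vocal, guitar, and toms are hypothetical), with NumPy and SciPy available, and the 500 Hz to 5 kHz band is just a default suspect range.

```python
import numpy as np
from scipy import signal

def average_spectrum(audio, sample_rate, nperseg=8192):
    """Long-term average spectrum (Welch estimate) of one track."""
    freqs, psd = signal.welch(audio, fs=sample_rate, nperseg=nperseg)
    return freqs, psd

def find_overlap_peak(stems, sample_rate, lo_hz=500.0, hi_hz=5000.0):
    """Sum the spectra of the suspect stems and report where the combined
    energy peaks inside the chosen range."""
    freqs, total = None, None
    for audio in stems:
        freqs, psd = average_spectrum(audio, sample_rate)
        total = psd if total is None else total + psd
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    return freqs[band][np.argmax(total[band])]

# Hypothetical usage, assuming vocal, guitar, and toms are already loaded:
# peak_hz = find_overlap_peak([vocal, guitar, toms], 48000)
# print(f"Combined energy peaks near {peak_hz:.0f} Hz")
```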

Let's pretend the mountain starts at 500Hz, peaks at 2kHz, and recedes back down at 5kHz. This is not a consistent bell shape, but has the most build-up towards the middle Frequencies. The highest peak represents the Frequencies where the most overlap occurs between the multiple tracks. The greatest issue comes from the notes that sustain the longest, but also from any notes that stick out as harsh or cause peaks that force the rest of the mix down to a lower average than needed. The goal is to work out how much to reduce, and only from the Frequencies that overlap the most. You may want to reduce only a little of the guitar in one part of the range, a little from the vocal, and a little from the tom toms. This leads to an overall reduction of the problem Frequencies without having to reduce them all by the same amount, or reduce any other parts of the mix along with them. You can also apply a small amount of multi-band compression to slightly reduce the peaks in these areas, especially if any of the instruments are playing short notes rather than sustained ones. This allows short peaks to come into balance with a less noticeable effect on the mix, and the entire mix can then come up by the amount you have gained back. I have spent many years developing a more complex complementary-Frequency process of my own, but for the sake of staying on topic, let's stick with this effective approach: identify the bumps that the combined sources create, trace them back to their sources, and reduce them at the source to make the mix smoother.
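To make the "little chunks" idea concrete, here is a rough sketch of spreading small peaking cuts across the three tracks at slightly different centers inside that 500 Hz to 5 kHz mountain. The filter is a generic "cookbook" parametric band, not a model of any particular plugin, and the center frequencies, gains, and Q values are purely hypothetical starting points; it assumes NumPy and SciPy are available and the tracks are loaded elsewhere as arrays.

```python
import numpy as np
from scipy import signal

def peaking_eq(audio, sample_rate, freq_hz, gain_db, q):
    """Generic 'cookbook' peaking EQ band: a small boost or cut around freq_hz."""
    a_lin = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * freq_hz / sample_rate
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return signal.lfilter(b / a[0], a / a[0], audio)

# Hypothetical cuts spread across the "mountain" - each track gives up a
# little in a different spot instead of one track taking a deep hole:
# toms   = peaking_eq(toms,   48000, 1200.0, -1.5, q=1.2)
# guitar = peaking_eq(guitar, 48000, 1800.0, -1.5, q=1.4)
# vocal  = peaking_eq(vocal,  48000, 2300.0, -1.0, q=1.6)
```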

 

Wednesday, March 26, 2014

How to Mix and Master with EQ – CDSoundMaster.com – 12 EQ Issues Part 15

Audio Recording Issues – Nice sound when tracking, but way too much "-----" when blending parts together, Part One

In the past several posts, I have been writing about scenarios where you have a great sounding mix in progress and a great sounding track you just recorded, but the track doesn't work in the context of the mix. Now I want to discuss a similar scenario where this is happening across several tracks, or even across every track in the mix. I'm not necessarily talking about a situation where every track is completely wrong for the mix, as this would suggest something wrong in the technique, or maybe that the song wasn't ready for recording at all. I am rather talking about the actual process of mixing, where we are blending all of the creative elements together.

Whether it is at the stage of working with raw tracks, in the process of editing, or while adding effects and making changes, it can seem as though the tracks are simply not working well together. It is easy to run into issues during the tracking and mixing process that are simply not relevant until real mixing begins. Sometimes we are working under the pressure of time constraints, and other times we have to play multiple roles, thinking as an engineer doing everything to capture the best sound, and later flipping to the role of critical listener in the context of a mix.
 
Regardless of the reason, we often find that when it comes to balancing levels, panning, EQ'ing, and adding compression, reverb, and other effects, the overall spectral balance is completely off. How does this happen? You may have a very well-tuned room. You might choose great microphones and preamps and have the best singers and musicians with awesome equipment. You might even be using everything correctly. But, now that you are in the mode of serious, critical-listening Mixing Engineer, you start to pile on the tracks and the Frequencies are simply not working well with each other. We often run into this as a “too much of a good thing” scenario, where the exact reason that everything seemed to go so well in tracking is now the downfall, as every layer adds the same room elements and the same subtle boosts and cuts of the microphone and preamp combinations; or there may be nothing technical to blame at all.

The chances are good that you have the same excellent sounding mid range Frequencies pounding the tom toms as you do screaming from the Marshall Cabs, and if the lead vocals are roaring through at the same notes as the rhythm guitar guy, then you are going to have a lot of mid range Frequencies in your mix. This might sound like it is ideal. Sure, just make sure the kick and bass guitar have some killer low Frequencies and the hi hat is spitting out a beautiful high end, and it should all come together, right? Well, the problem isn't necessarily about this kind of balance.

That sounds like the perfect imaginary land we have all pictured before reality sets in during session work. The truth of the matter is that every studio session that involves multi-tracking and mixdown has some element of surprise that will be dealt with in some unique manner. The way we react, the way we hear things, and the skills we acquire will be the parameters that affect the end result. This is where we put our signature to our sound in the mix.

CDS

Monday, March 24, 2014

How to Mix and Master with EQ – CDSoundMaster.com – 12 EQ Issues Part 14

Audio Recording Issues – It Sounds GREAT On Its Own, but... Part Five

Maybe it is a difference of room dimensions making it difficult to adjust a track properly within the context of a mix. Maybe it is a timing issue related to similar Frequencies. But, maybe it is about dynamics and distance. Along with the wonderful things that our brain does with sound interpretation, it pays close attention to where sound is coming from. The human ear, as limited as it is compared to what some animals can hear, is designed to send an incredible amount of information to our brain for approval. Complex timing elements are combined with location to tell us not only what we are hearing, but whether it is close to us or far away, to our left or right, and whether it is an obvious sound that we recognize or if it is very subtle and hard to identify.

In a mix, the context of these things works together for our approval or distaste. We may have a music track that is a little washy and distant, with a lead vocal that is extremely loud and dry. Some people like this; others don't. Usually, a good mix brings it all together in one form or another, but sometimes we intentionally bounce one character off of another to get a new response. What happens if a track is exactly what we want, but in the context of the mix, it sounds weak? We turn it up, but now it is too loud. We turn it down to where we think it belongs volume-wise, but it sounds weak. We solo the track, and it sounds perfect! Is it possible that we are dealing with a symptom of conflicting dynamics? What I mean is that we may have consistent performance levels from everything else in the song, or we may have already compressed other elements individually, but the natural dynamics of our new track sink down too low in some parts of the arrangement and sit just loud enough at other times.

If this is the case, we may be able to resolve the issue with a simple limiter. By raising the average volume a few decibels or reducing the peaks, we may get a consistent performance that sounds fuller all of the time. What if we try that and now it is way too loud or sounds different than we want? Or it just doesn't fix things? Sometimes, the reason comes back around to Frequency adjustment. Let's say that we have another mix on the same album. Using the same approach, everything is great. So, why is it not working here?
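Before digging into why, here is what that "simple limiter" idea might look like as a minimal sketch. This is not a recommendation of any particular plugin, just an illustration of the concept: the gain, ceiling, and release values are hypothetical, and it assumes a mono track loaded elsewhere as a floating-point NumPy array.

```python
import numpy as np

def simple_limiter(audio, sample_rate=48000, gain_db=3.0,
                   ceiling_db=-1.0, release_ms=80.0):
    """Raise the average level by gain_db, then hold peaks under ceiling_db.
    Instant attack, exponential release - deliberately basic."""
    boosted = audio * 10.0 ** (gain_db / 20.0)
    ceiling = 10.0 ** (ceiling_db / 20.0)
    release = np.exp(-1.0 / (release_ms * 0.001 * sample_rate))

    gain = 1.0
    out = np.empty_like(boosted)
    for i, x in enumerate(boosted):
        peak = abs(x)
        target = 1.0 if peak <= ceiling else ceiling / peak
        if target < gain:
            gain = target                                     # clamp peaks immediately
        else:
            gain = release * gain + (1.0 - release) * target  # recover slowly
        out[i] = x * gain
    return out

# Hypothetical usage: vocal_leveled = simple_limiter(vocal, 48000, gain_db=2.0)
```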

You listen to the tracks and suddenly you realize that one song is more up-beat than the other. Why should that matter so much if the process worked so well? Shouldn't it always be the case? Is it possible that the performance is different from the working mix to the troublesome mix? This might mean that the drummer is tapping at the bell of the hi hat in the working mix, but is sizzling at the edge of the hi hat in this one. The change in performance changes the length of time it resonates at those powerful high Frequencies. You may have a beautiful EQ boosting the pristine recording of that hi hat by the same amount on both mixes, but this time around the fact that it sustains for a long time instead of in gentle taps means that the Frequency range just isn't available to your other track anymore. Having both tracks contribute to the same sound range makes for a busy neighborhood! You can try to select different Frequency options to get this under control, but the chances are that one change will lead to another, and so on, until you find yourself changing elements that you used to be happy with. You might try lowering the hi hat volume, but that may make the rest of the drums sound unbalanced. What can you do?

Maybe you can try very small changes in the stereo field, making a little room for the hi hat just a tiny bit to the left or right. Or, you can try narrowing or widening the stereo field of just that track, or of the offending part of the mix, to give the Frequencies a different location to sit in. Or, you can see if trading one high Frequency for another just on the track in question helps. All of these things are good ideas, but what if none of them works? You may consider a very small amount of several options. Try the things I have mentioned that deal with timing, balance, dynamics, ambient rooms, Frequencies, and location. Try them in combination with each other and in different, small increments. Are any of these helping a little bit?
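For the narrowing-or-widening option, a mid/side width adjustment is one common way to do it. A minimal sketch, assuming a stereo track stored as two NumPy arrays; the width value is a hypothetical starting point and nothing here is specific to any particular tool:

```python
import numpy as np

def adjust_stereo_width(left, right, width=0.85):
    """Mid/side width control: width < 1 narrows the image, width > 1 widens it."""
    mid = 0.5 * (left + right)    # what both channels share
    side = 0.5 * (left - right)   # what makes the image wide
    side = side * width
    return mid + side, mid - side

# Hypothetical usage: pull the overheads in slightly to clear a spot for the hi hat
# oh_left, oh_right = adjust_stereo_width(oh_left, oh_right, width=0.85)
```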

Most likely, some very small changes to multiple pieces of the equation will resolve the issue and help you to be pleased with that track once again. If that is not the case, you might actually be dealing with a case of “if you can't beat 'em, join 'em.” By this, I mean that you may need to incorporate that new track more into the room environment that the rest of the mix resides in, or vice versa. Is it possible that your new track is simply too dry and needs a tiny bit of reverb to get it to sit closer to the context of the song? Is it possible that the song sounds great dry but might work well with a tiny bit of the same kind of reverb that worked well for the new track?

Add these different elements together and see if some combination leads to a better result. I have a feeling that it will. There are plenty of other scenarios that affect the outcome of mixes, but this should give you an idea of some of the ways that tracks interact with each other, and hopefully it can inspire you to spend that extra time listening and tweaking mixes that leave you less than inspired. I don't encourage you to overdo anything that is already mixed the way you like, but if you are left unimpressed with a mix, there are things you can do to potentially bring it to life that are not drastic and leave very little changed from track to track.

CDSoundMaster.com

Wednesday, March 19, 2014

How to Mix and Master with EQ – CDSoundMaster.com – 12 EQ Issues Part 13


Audio Recording Issues – It Sounds GREAT On Its Own, but... Part Four

Maybe the room was part of the issue in our track that won't sit quite right in context. Maybe there are other timing issues as well. What if we look at the way we interpret timing as it relates to dynamics? What I mean here is that there is the natural flow of the song, and there is the rate of expression that comes from each track in the song. You may have punch from drums and melody from bass and vocals, or you may have sustain from drums and cymbals and more rhythmic elements from percussion or busy bass and guitar. There are numerous things in the musical arrangement that affect our interpretation of sound. When we put it all together, there may be the wrong punch or sustain in Frequencies that otherwise sound wonderful. For example, a great vocal is intentionally recorded up close with a cardioid-pattern large diaphragm condenser microphone.

The presence of the recording fills out the low Frequencies of an amazing vocal performance; we also have an incredible sustaining bass guitar with energetic sub-bass Frequencies that sit beautifully on top of a clean, clear, punchy kick drum. But now, the smooth low end of the vocal makes you reinterpret the perfect blend of sustain and punch that was there before. Should we reduce some of the bass on the bass guitar? Should we take a little out of the vocal and the bass? Or maybe a little compression on the vocal would serve well? Maybe the vocal compression should be grouped with the bass and kick? Perhaps this group compression could lock the timing together and re-orient our listening to hear these elements as a unified whole?

This works sometimes, but usually we have more to deal with. I've found that often the simplest solution is also the best solution. I have developed a process that I will write about in more detail at a later time (would anyone read a full book if I wrote one?), but I will mention it briefly here. Using a little low shelf EQ in this instance may be the perfect solution. You can use the same wide slope for both, like something found on the "Cooltec EQP-1A3S" or the "ARQ," or you can try a combination of two different slopes, like the "115HD" for one instrument and the "AMK9098" for another.

The idea is to reduce a very small amount of a very wide Frequency range down to its lowest point, so that we still feel the energy that is there, but the focus is reduced and we can concentrate on the other instruments in that range. What I add to this is to listen to other mid and upper-mid Frequencies on the same tracks and see if there is something with a similar quality that impresses you the same way the lows do. For instance, the lows may emphasize an incredible "pluck" of a pick on the bass guitar or the moody sustain of a vocal. Is there another place in a different Frequency register that complements this trait? If so, there is a good chance that you can reduce the lows slightly for one instrument and boost a tiny amount somewhere else that gives the same energy from a different range. A "pluck" may sound great as a fast attack at low Frequencies, but it may also give precise information in the upper mids, so we can re-orient the conflict from an offending Frequency to one that doesn't clash, and keep everything that sounded so nice, now without the conflict.
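As a rough sketch of that idea, the code below applies a broad, shallow low shelf cut. This is a generic "cookbook" shelf curve, not a model of any of the units mentioned above; the corner frequencies and gain amounts are hypothetical, and it assumes NumPy and SciPy are available with the tracks loaded elsewhere as arrays.

```python
import numpy as np
from scipy import signal

def low_shelf(audio, sample_rate, freq_hz, gain_db, q=0.707):
    """Generic 'cookbook' low shelf: a broad, gentle lift or cut below freq_hz."""
    a_lin = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * freq_hz / sample_rate
    alpha = np.sin(w0) / (2.0 * q)
    cos_w0, sqrt_a = np.cos(w0), np.sqrt(a_lin)
    b = np.array([
        a_lin * ((a_lin + 1) - (a_lin - 1) * cos_w0 + 2 * sqrt_a * alpha),
        2 * a_lin * ((a_lin - 1) - (a_lin + 1) * cos_w0),
        a_lin * ((a_lin + 1) - (a_lin - 1) * cos_w0 - 2 * sqrt_a * alpha),
    ])
    a = np.array([
        (a_lin + 1) + (a_lin - 1) * cos_w0 + 2 * sqrt_a * alpha,
        -2 * ((a_lin - 1) + (a_lin + 1) * cos_w0),
        (a_lin + 1) + (a_lin - 1) * cos_w0 - 2 * sqrt_a * alpha,
    ])
    return signal.lfilter(b / a[0], a / a[0], audio)

# Hypothetical usage: a shallow, wide trim of the vocal's low end, with a
# slightly different corner on the bass so each gives up a different spot.
# The complementary upper-mid boost could use a small peaking band instead.
# vocal = low_shelf(vocal, 48000, 180.0, -1.5)
# bass  = low_shelf(bass,  48000, 120.0, -1.0)
```

The wide, shallow shape is the whole point: the energy stays, but the focus softens.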

Monday, March 17, 2014

How to Mix and Master with EQ – CDSoundMaster.com – 12 EQ Issues Part 12


Audio Recording Issues – It Sounds GREAT On Its Own, but... Part Three

Our poor track. It sounds awesome, but either the rest of the mix is a bully, or it doesn't want to play fair. Before getting into a political topic of individualism versus collectivism (don't even get me started!), let's stick with a finite list of situations that we can identify as a root cause for mix injustice. It can be a timing issue. The song is not being mixed wrong, and the track is excellent, but we may be dealing with the way that the brain interprets sound signals. If we record something incredibly precise in a dry environment with very little character from the room, then we can get a recording that is amazingly present, in your face, intimate, and measuring somewhere between realistic and super-realistic. If we record something else that has some distance to it, then the complexity of the sound that bounces around in that environment will get measured in the context of the whole mix, and this may not be a good thing. One problem can be that when added, our brain says “nope, that isn't realistic.”

I don't mean to say that it is fake or bad sounding, but the sense that most of the sounds came from one place while another comes from somewhere that does not fit can mean that a perfectly blended mix still isn't working from a conceptual, functional standpoint. If this is the intention, then obviously we don't need a solution. But, if you think the problem with balance is coming from two environments that do not belong together, then we may be on to something. But wait, what if there is some room presence, reverb, or liveliness that is making the new track conflict? Now what?

It may be possible to reduce only the part of the room's character that is feeding the majority of the information to our brains. This can have multiple benefits, some of which I will cover in a later topic. We may be tricking our ears into re-interpreting tracks that we were happy with before, because the Frequencies that are most obvious in carrying the room's qualities may also fluctuate at a different rate or in a different pattern than the rest of the mix, so now our brain says “not only is it coming from a different location, but it carries information that doesn't fit into the groove of the song.”

Isolate your Frequencies with a narrow boost, control your output with a limiter for the safety of your ears and monitors, and see whether the issue is most noticeable in the lows, mids, highs, or all of the above. Find the problem Frequencies, figure out the width of their “Q” if necessary, and reduce only the amount that removes the complexity in the context of the mix. This means soloing the track and also checking it against the mix, making adjustments all along. Did this help? Then maybe the only problem was timing from the complexity of a room signal. Excellent! Did it help, but not enough? Likely so. Maybe we should see if there is something else going on here. I will cover these other possibilities in Part Four.
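That narrow-boost hunt can also be roughed out offline if you want a second opinion from the math. The sketch below is only an illustration of the idea, not my actual process: it reuses a generic "cookbook" peaking filter, the candidate range, boost amount, and Q are hypothetical, and it assumes a mono track loaded elsewhere as a NumPy array with SciPy available. (When you do the same thing by ear, keep that limiter on your monitor path.)

```python
import numpy as np
from scipy import signal

def peaking_eq(audio, sample_rate, freq_hz, gain_db, q):
    """Generic 'cookbook' peaking band, like one band of a parametric EQ."""
    a_lin = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * freq_hz / sample_rate
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return signal.lfilter(b / a[0], a / a[0], audio)

def sweep_for_buildup(audio, sample_rate, centers_hz, boost_db=9.0, q=6.0):
    """Boost a narrow band at each candidate center and measure how much the
    track's RMS level rises; the biggest rise marks the likely trouble spot."""
    base_rms = np.sqrt(np.mean(audio ** 2))
    rises = [np.sqrt(np.mean(peaking_eq(audio, sample_rate, f, boost_db, q) ** 2)) - base_rms
             for f in centers_hz]
    return centers_hz[int(np.argmax(rises))]

# Hypothetical usage on a roomy track:
# candidates = np.geomspace(100.0, 8000.0, 40)
# worst_hz = sweep_for_buildup(track, 48000, candidates)
# print(f"Most energy gained when boosting near {worst_hz:.0f} Hz - try a cut there.")
```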

CDSoundMaster.com