Phasing??? what the #^^@%#^&%!!!

fistula

Producer/Mixing Engineer
Jul 18, 2006
Everyone is talking about phasing.

I understand what it is, but I'm really scared of it!
What is it exactly? And what if I've recorded with different phases? What can I do then?

How can I prevent phase shifting?
 
:lol:

No dude, sound waves (the visual ones within your DAW) go up and down. Basically, when something is out of phase, two sources of audio are not going up and down together.

Another way to look at it: speakers. The cones push air back and forth together at the same time; however, if you were to put them out of phase, one speaker would push out while the other was coming back in.

To prevent it just make sure your sound waves are going up and down together.
 
I'm going to go through this as simply as possible, as I tend to ramble, include strange comparisons to Smurfs and/or public transportation, and generally not make the most sense possible.

Sound waves are combinations of sinusoidal functions. The simplest sine and cosine pictures are here:

[attached image: Sine_Cosine_Graph.png]


I'll be referring to the sine function (solid red line) exclusively from here on out; cosine (dotted blue line) is the same thing but shifted over a wee bit. The reason you hear stuff is because there is a variation in pressure in the air around your ears - the pressure increases, then decreases, then increases, and so on and so forth, at specific frequencies, and if you were to graph out the simplest possible pattern of increase and decrease it would look like that sine graph. 'Real' noises, though, are never exact sine waves, and instead are combinations of them. The combinations don't look like a single sine wave when you look at them in your DAW because there's a bloody lot of them, and they combine in silly ways.

The first thing to note is that the sine waves being combined are never very similar - lower sounds have lower frequencies (the 'humps' get fatter, as there are fewer of them filling a given period of time) and louder sounds have higher amplitudes (the 'humps' get taller), and there's just a ton of shit happening any time you have a 'real-world' sound. When it comes to actually combining the waves, it's really very simple - at any given point in time, you're basically adding the heights of all of the different waves constituting your sound.

Phase cancellation comes when you have two different waves being combined in such a way that one is high when the other is low, and vice versa. This happens everywhere naturally (when a guitar is out of tune, a 'beating' will be heard where the sound gets loud and then really soft at a given interval; when you listen to two speakers put in different places you'll hear some things adding and some things subtracting; the Fredman mic method enhances some low frequencies while killing some high frequencies; et cetera) and can be controlled very well for interesting effects. Phase 'problems' come when a desired something-or-other is added or canceled out in a way that isn't desired (speakers wired backwards, microphones creating a 'comb-filtered' effect, waves bouncing back from a wall into a microphone to make a guitar track sound 'boxy' or a room mic sound 'cheap'), and that's about the end of that.
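The 'adding heights' idea can be sketched in a few lines - a toy example with pure sine waves (real signals are sums of many of these):

```python
import math

# Toy example: two 440 Hz sines added together, with the second one
# optionally shifted in phase. Pure sines assumed; real sounds are
# sums of many of these.
def summed(t, phase_shift):
    w = 2 * math.pi * 440.0
    return math.sin(w * t) + math.sin(w * t + phase_shift)

t = 1.0 / (4 * 440.0)        # the time of a peak in the 440 Hz cycle
print(summed(t, 0.0))        # in phase: the heights add, ~2.0
print(summed(t, math.pi))    # 180 degrees out: they cancel, ~0.0
```

Same wave, same level - the only thing that changed between the two sums is the phase relationship.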

From a practical perspective, the only solution to phase issues is to move your microphones (when recording) or monitors (when playing back) if things seem to 'stack' in a funny way; you're not going to calculate the wavelength of the stuff that's going byebye, pull out the calipers, and figure out exactly how far things need to go - you're just going to move shit around until it's fine. With monitors, you can check polarity (as GuitarGodGT said earlier, if one speaker 'pushes' while the other 'pulls', you have a polarity problem), and with microphones you either move a mic or try to shift the track forward or backward a little so that it all adds up in a nicer way. Preventing phase problems is as simple as proper placement (don't put amps against a wall or in a corner, or at a right angle to either, and be careful where you put drums and room mics) and carefully checking your sound when you use more than one mic - try monitoring drums in mono so you can tell early on if something is cancelling, don't use too many mics, et cetera. Hope that helped.

Jeff
 
Really well put JBroll.

I'm sure I'm not alone when I say you can normally feel phase problems - in some cases it'll give me a momentary pain, like a rapid change in air pressure, which is exactly what it is. For example, when you flip the phase of one overhead against the other.

To put it more simply, think about phase in every respect when mixing. Phase issues can occur whenever you add one signal to another, regardless of their source. Mixing is phase: adding multiple signals and outputting them through a smaller number of outputs.

For me there is nothing worse than fucking up the phase on drums - it just sucks the brilliance and power out of them. That is why I try to create one anchor and use it as my reference point for phase. It doesn't always work, as you may not always have that time.

Although a good starting point, 'rules' like the 3:1 rule do not take into account the source type or the sound YOU are after - maybe you want to use phase cancellation to your advantage. An example would be using more than one mic on a guitar cab and changing the levels to get your sound.
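For reference, the 3:1 rule is usually justified with the inverse-square law: if a second mic is at least three times as far from a source as the close mic, its bleed from that source comes in roughly 9.5 dB down. A quick sketch, assuming free-field inverse-square falloff (real rooms add reflections, so treat it as a rough guide):

```python
import math

# dB difference between picking up a source at far_ft vs near_ft,
# assuming free-field inverse-square falloff.
def level_drop_db(near_ft, far_ft):
    return 20 * math.log10(far_ft / near_ft)

# Mic A is 1 ft from its source; mic B is 3 ft from that same source.
print(round(level_drop_db(1.0, 3.0), 1))  # 9.5 dB quieter in mic B
```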

I suggest reading up on the boring elements of sound transmission and basic theory; they are the only rules you'll ever need to learn - everything else is experience. I may be 10 years away from being shit hot, but doing things this way means I can be adaptable and not chained to techniques, enabling me to be creative with the tools in front of me.

So, someone mentioned Smurfs! Now Papa Smurf was definitely doing Daisy Smurf, no doubt!
 
Excellent lesson Jeff!

Here are a couple of practical tips:
If you're too busy to check for phase problems during soundcheck/tracking (like me), make sure you do it when you start mixing. During soundcheck, use the phase (polarity) switch on your preamps (if they have one). I use the snare as the starting point. Then I start adding individual tracks and listen to each track against the snare, both with the polarity inverted and not inverted. For example: add the overheads first -> if the snare clearly loses low end, invert the polarity on the overhead tracks... and continue this way with the other tracks. You can sometimes also figure these out just by looking at the waveform. You can also move the tracks manually so they are aligned perfectly with whatever they should be aligned with.
If your DAW/sequencer doesn't have ADC (automatic delay compensation): when you start adding plugins on tracks, check whether they are causing any latency. If they are, shift the tracks back by as much as the plugins' latency. Without doing this you quickly run into phase problems that were not present before mixing.
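To put a number on that: latency reported in samples converts to milliseconds by dividing by the sample rate. A minimal sketch (the 64-sample figure is just a made-up example, not from any particular plugin):

```python
# How far to nudge a track to undo plugin latency, given the latency
# in samples (64 samples here is just an illustrative figure).
def latency_ms(samples, sample_rate=44100):
    return 1000.0 * samples / sample_rate

print(round(latency_ms(64), 2))  # -> 1.45 (ms to shift the track earlier)
```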

I hope I made any sense here...
 
Everyone is talking about phasing.

I understand what it is, but I'm really scared of it!
What is it exactly? And what if I've recorded with different phases? What can I do then?

How can I prevent phase shifting?

All the answers are really well said and intelligent and all, but if you have NO IDEA what the hell they're talking about... here's the short version: when it sounds good, it's done right.
 
When it sounds good, it's done right.

This is true, but about as far from helpful as a coherent sentence is likely to get. Everyone is here because they want their work to sound good; since that is the sole criterion for success it should be more than obvious that it's what we're going for. Details and explanations go miles for telling people what to look for and how it works, but you're essentially saying that 'it's good when... it's good' and not adding anything to this. By no stretch is what I said on phase a complete explanation, and if what I said were the extent of what the forum had to offer then we'd all be fucked, so can we try to add to the discussion and not 'me too' this poor thread to death?

Jeff
 
http://www.sfu.ca/sca/Manuals/ZAAPf/p/phase.html
This should give a good visual start on how to perceive phase.

Then start thinking about how phase cancellation can effectively change the EQ of a sound. Two mics set up so that certain frequencies are 180 degrees out of phase can do a great job of cancelling those frequencies, while being less than 180 degrees out will attenuate those frequencies less.
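To put rough numbers on the 180-degrees-out idea: for a given path-length difference between two mics, the delayed arrival lands half a cycle late at a predictable set of frequencies. A minimal sketch, assuming a speed of sound of about 1130 ft/s and ignoring reflections (the function name is just for illustration):

```python
# Frequencies at which two arrivals end up 180 degrees apart ('notches'),
# for a given path-length difference. Assumes ~1130 ft/s, no reflections.
def notch_freqs(path_diff_ft, count=4, c=1130.0):
    delay = path_diff_ft / c                    # extra travel time in seconds
    return [(2 * k + 1) / (2 * delay) for k in range(count)]

# Two mics whose distances to a cab differ by 6 inches:
print([round(f) for f in notch_freqs(0.5)])    # [1130, 3390, 5650, 7910]
```

Those notches repeat at odd multiples of the first one, which is exactly the evenly spaced 'teeth' pattern that gives comb filtering its name.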

To see this effect - which, as JBroll mentioned, is called comb filtering because of how it looks on the frequency spectrum - you can see the pictures from here.

Once you've got your head around these ideas come on back with more specific questions and I'm sure you can get more detailed answers that you can then understand.

edit: there is no correct phase for sounds; as said before, the correct phase is the phase that sounds good to you. Not having two signals in phase is one of the ways good engineers get the sounds they are looking for. Phase is also one of the ways people use older analogue EQs to get the coveted depth in sounds that people say you can't get with digital. *cough* Bullsh*t *cough* - it just takes figuring out how to do it with digital tools.
 
WOW JBroll, guys!!!

Really, thanks for these answers! They helped me a lot!
There are some really good tips here!

Just one more question - maybe you can post some phasing examples - for example snare/overhead and guitar (Fredman method maybe)...

If you've never heard it, how do you recognize it? hehe)

One more time - THANKS
 
WOW JBroll, guys!!!

Really, thanks for these answers! They helped me a lot!
There are some really good tips here!

Just one more question - maybe you can post some phasing examples - for example snare/overhead and guitar (Fredman method maybe)...

If you've never heard it, how do you recognize it? hehe)

One more time - THANKS

I think in ozzie's thread about acoustic drum recording there's a part about phasing...
 
This is true, but about as far from helpful as a coherent sentence is likely to get. Everyone is here because they want their work to sound good; since that is the sole criterion for success it should be more than obvious that it's what we're going for. Details and explanations go miles for telling people what to look for and how it works, but you're essentially saying that 'it's good when... it's good' and not adding anything to this. By no stretch is what I said on phase a complete explanation, and if what I said were the extent of what the forum had to offer then we'd all be fucked, so can we try to add to the discussion and not 'me too' this poor thread to death?

Jeff

I didn't mean to be a smartass; I just wanted to add a non-scientific explanation that I used to embrace, simply because not all phase issues are evil. Your explanation was, as usual, very informative and clear, but...

No hard feelings, ok?
 
A quick and dirty...

Two signals originating from the same source (required), where one is moved in time, are not considered to be in phase.

Example: micing a source with two mics where one mic is a bit further from the source. The mic further away will pick up the signal distance(ft)/1100(ft/s) seconds later (or distance(m)/335(m/s) in metric).

A mic that is 1 inch back will pick up the signal
(1/12 ft)/1100(ft/s) = 7.6e-5 seconds, or about 76 microseconds, later.

A mic that is 1 ft back will pick up the signal
(1 ft)/1100(ft/s) = 0.9e-3 seconds, or close to 1 ms, later.

A loose rule of thumb is 1 ft = 1 ms of time.

You can use this to your advantage, or it can really hurt you.

I hope my math is correct...
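The arithmetic above can be double-checked in a couple of lines, using the same ~1100 ft/s figure (temperature will nudge it slightly):

```python
# Time-of-arrival difference for a given extra distance, at ~1100 ft/s.
def delay_s(distance_ft, c=1100.0):
    return distance_ft / c

print(round(delay_s(1 / 12) * 1e6))    # 1 inch -> 76 microseconds
print(round(delay_s(1.0) * 1e3, 2))    # 1 foot -> 0.91 ms
```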
 
I didn't mean to be a smartass; I just wanted to add a non-scientific explanation that I used to embrace, simply because not all phase issues are evil. Your explanation was, as usual, very informative and clear, but...

No hard feelings, ok?


No, it's fine - it's just that I doubt people are going to trade what obviously sounds better for whatever some fuzzy mess claims is the scientifically proven method. If you can make more sense than me while giving an objective view of what to look out for and how to fix it, go right ahead.

Jeff
 
For rhythm guitar cabs, I was getting around 15 samples per 6" of distance difference between two mics, which is 30 samples/foot - not too far from 40 samples/foot, 40 samples being the rule for 44.1k sampling. 44100/1100 = about 40, which is about 0.9 ms. Sonic speed is both temperature and frequency dependent, so I think that accounts for the difference.

Since phase is frequency dependent, can any two summed signals - one of which is time delayed, then matched by time shifting - ever be truly in phase if they're not sine waves?
 
I remember my first lesson in phase, didn't require a classroom for me to get it:

I bought the highest-end cab in a store on account of it being the only one with Vintage 30 speakers, and it was rugged. Well, I did plug it in while in the store, but for some stupid reason (I was probably 17 at the time) I could not hear what the fuck was wrong with my cab. Anyhow, I got it home and my band was like, "why does that cab sound like it has no balls when the bass on your head is maxed, as well as your mids?" Well, I didn't know anything about sound or whatever, but back to the store it went. Funny thing - nobody there knew either! So they were like, "hold on, we've got our tech here, maybe he'll know." So the tech comes over and is like, "hmm, OK, can someone please go get me a 9V battery?" One of the sales dudes goes and grabs a 9V battery; the tech pulls the speaker cord out of the back of the head and places the 9V battery on it.

100% obvious at this point: the speakers were out of phase. One shot out and one shot in (and they stayed there).

So, simple fix: pop the back panel off (which was cool - I'd never done that, and it was nice to see there actually are V30s back there) and flip a couple of wires around. It was amazing.

JBroll put it in better terms, though - in other words, you can tell someone something like that, but it doesn't really explain the phenomenon.
 
For rhythm guitar cabs, I was getting around 15 samples per 6" of distance difference between two mics, which is 30 samples/foot - not too far from 40 samples/foot, 40 samples being the rule for 44.1k sampling. 44100/1100 = about 40, which is about 0.9 ms. Sonic speed is both temperature and frequency dependent, so I think that accounts for the difference.

Since phase is frequency dependent, can any two summed signals - one of which is time delayed, then matched by time shifting - ever be truly in phase if they're not sine waves?

I haven't the faintest idea what you're talking about in the first part - your sampling rate isn't what you're trying to keep from canceling; it's the stuff below that threshold that's going to be affected by improper placement...

As for your second question... there will always be a combination of constructive and destructive phase interference unless your waves are identical (it can be sine, square, whatever - two things that are the same will add up as the same, anything else will have both cancellation and addition) and that's why we multitrack and record from several places instead of just copying and pasting things over and over again.

Jeff
 
For the first paragraph: what I was saying was that one mic was up on the grille, and I pulled the other back and measured the time delay, which was about 15 samples for a mic distance difference of 6". Note that I was not actually attempting to cancel anything out. Since sound travels at about 1130 ft/sec, it is actually possible to calculate the time delay between two mics at different distances picking up the same source. Well, that wasn't enough for me - I decided to get empirical about it, and I came up with 15 samples of delay between the two signals. What this means is that I have to shift one signal by 15 samples to match the waveform of the other, if I am trying to phase them up. Since the speed of sound is dependent on the temperature, I knew I would not get an exact figure, but close enough for my science.
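For comparison, the free-field math predicts a somewhat larger figure than the measured 15 samples. A quick sketch, assuming ~1130 ft/s and 44.1 kHz:

```python
# Expected sample delay for a given mic spacing at 44.1 kHz, assuming
# ~1130 ft/s (temperature shifts this a little either way).
def sample_delay(distance_ft, sample_rate=44100, c=1130.0):
    return distance_ft / c * sample_rate

print(round(sample_delay(0.5)))  # ~20 samples of delay for 6 inches
```

That the measured 15 comes in below the calculated ~20 fits the "close enough for my science" caveat - exact mic positions and temperature both move the number.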

My second question was rhetorical.
 
First, for temperature: you're going up by about a foot per second any time you raise the temperature by one degree Fahrenheit, and given the speed it's already at, you're not going too far off unless you're in the Arctic. Second, 'samples' of delay is what I thought didn't make sense - you're not going to be 'phasing in' things based on the sampling rate; you're going to be looking at the summation of things at their own frequencies. Trying to adjust phase based on what you're recording at will align things that are power-of-two multiples of your sampling rate and not help anything else. Third, if you're trying to get technical about where, ideally, you should have microphones, then you'll have to be precise within hundredths of inches or you might as well not even bother.
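On the temperature point, the commonly used linear approximation for the speed of sound in air is c = 331.3 + 0.606·T (T in Celsius, c in m/s) - that formula is the standard textbook approximation, not something from this thread. Converting to ft/s:

```python
# Speed of sound vs temperature, via the standard linear approximation
# c = 331.3 + 0.606 * T (T in Celsius, c in m/s), converted to ft/s.
def speed_of_sound_fts(temp_f):
    temp_c = (temp_f - 32) * 5 / 9
    return (331.3 + 0.606 * temp_c) * 3.28084   # m/s -> ft/s

print(round(speed_of_sound_fts(68)))                               # ~1127 ft/s at 68 F
print(round(speed_of_sound_fts(69) - speed_of_sound_fts(68), 1))   # ~1.1 ft/s per degree F
```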

Jeff