Sorta multi-part question about levels (normalisation/compression/crossfading)

NeglectedField

Wyrde Wiðstondan
Jul 10, 2006
Farnborough, England
Right, this is probably gonna sound like a pretty convoluted way of asking something that's pretty elementary to you guys, but as someone who's gonna home-record a full-length and wants to do as good a job as possible with the gear/software I have (Cubase 5, Line 6 Toneport/Gearbox etc., don't laugh), I wouldn't mind seeing how the 'pros' go about things.

So say I'm recording some metal guitar in the ol' bit-by-bit fashion, and I've got one part of the song which has some heavy palm muting, followed by a part with open picking/strumming.

I roughly recorded a sound to make a wave image just for demonstration purposes, so something like this:
[Image: crossfaded wave files, without normalisation]


Now, my force of habit is to normalize all audio recordings. I dunno why; it's just something I was told in Music Tech lessons and never really thought about - something I assumed brings everything up to a 'workable' level or whatever. But if I do it with this example, this happens:
[Image: crossfaded wave files, with normalisation]


As you can see, the second part is significantly louder, and that's no good.

So is the best thing to:
1. Normalize and add compression (tried this, sounds rather wrong but haven't toyed with settings)?
2. Not bother normalising and add compression?
3. Not bother normalising, no compression?
4. Am I completely barking up the wrong tree?

If you do normalise, what settings do you use? Ditto for compression, if applicable. Also, slightly less urgent, but when crossfading two parts together, how wide do you make the crossfade itself, and do you use equal gain/equal power, or toy with any extra options as such?
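
For context, here's my rough understanding of what the two fade laws actually do, sketched in Python/NumPy purely for illustration (function names and numbers are my own, so correct me if I've got it backwards):

```python
import numpy as np

def crossfade(a, b, fade_len, law="equal_power"):
    """Crossfade the tail of clip `a` into the head of clip `b`."""
    t = np.linspace(0.0, 1.0, fade_len)
    if law == "equal_gain":
        # Linear ramps: the two gains sum to 1 at every point. Suits
        # near-identical (phase-coherent) material around the join.
        fade_out, fade_in = 1.0 - t, t
    else:
        # Equal power: the squared gains sum to 1 (cos^2 + sin^2 = 1),
        # so perceived loudness stays steadier for unrelated material.
        fade_out, fade_in = np.cos(t * np.pi / 2), np.sin(t * np.pi / 2)
    mixed = a[-fade_len:] * fade_out + b[:fade_len] * fade_in
    return np.concatenate([a[:-fade_len], mixed, b[fade_len:]])
```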

One more thing: what is everyone's preferred sample rate and bit depth for recording?

I hope that's somewhat understandable,
Cheers!
 
Normalization's use is right in its name: it's to normalize tracks to a common reference level (defined by the max peak). You use normalization for nothing else, as it just raises the noise floor along with everything else. If you wanted the tracks louder, you should have tracked hotter. Guitars you typically want peaking at around -12 dBFS and sitting at about -18 dBFS. If the guitars seem too quiet when you're mixing, it's everything else that's too loud.
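
To put numbers on that, here's roughly all that normalization computes - a throwaway Python sketch (float samples in -1..1; the function names are mine, not any DAW's):

```python
import numpy as np

def peak_dbfs(x):
    # Peak level relative to full scale (1.0), in dB.
    return 20.0 * np.log10(np.max(np.abs(x)))

def normalize_to(x, target_dbfs=0.0):
    # One constant gain so the highest peak lands on target_dbfs.
    # The same gain is applied to the noise floor too - a take peaking
    # at -12 dBFS pushed to 0 dBFS gets +12 dB of noise along with it.
    gain_db = target_dbfs - peak_dbfs(x)
    return x * 10.0 ** (gain_db / 20.0)
```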

And why the hell would you compress heavy guitars? That is generally referred to as a big no-no. Multiband compression is sometimes used to control the low end, but if you track correctly, that won't be an issue.
 
You shouldn't need to normalize anything. It's literally just boosting the gain on the audio until the highest peak reaches a certain point - this won't make the tracks the same perceived volume, as you could have a really quiet track with one loud peak, and a loud track that's pretty consistent. I wouldn't worry about it unless you're having REAL problems.
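
If you want to see the "quiet track with a loud peak" thing in numbers, here's a quick throwaway sketch (Python/NumPy, completely made-up signals):

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr

quiet = 0.05 * np.sin(2 * np.pi * 220 * t)  # quiet track...
quiet[1000] = 1.0                           # ...with one loud peak
loud = np.sin(2 * np.pi * 220 * t)          # consistently loud track

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# Both peak at (roughly) 1.0, so peak-normalizing changes neither,
# yet the second is about 26 dB louder by RMS:
print(rms_db(quiet), rms_db(loud))  # ~ -29 dB vs ~ -3 dB
```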

Compressing distorted guitars is fine if that's what you're after. LA-3A-type compression can work nicely sometimes, but only do it if that's the sound you want. I track at 44.1 kHz and 24-bit, though I'm tempted to try higher sample rates. Tracking at 24-bit is more important than what sample rate you use - I don't think your work will sound significantly worse if you stay at 44.1 kHz, though I'd have thought a higher sample rate is better if you're editing a lot.
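
And the 24-bit point is easy to sanity-check - each bit is worth about 6 dB of dynamic range, back-of-envelope:

```python
import math

def pcm_dynamic_range_db(bits):
    # Theoretical dynamic range of linear PCM: 20 * log10(2 ** bits),
    # i.e. roughly 6.02 dB per bit.
    return 20.0 * math.log10(2 ** bits)

print(pcm_dynamic_range_db(16))  # ~96 dB
print(pcm_dynamic_range_db(24))  # ~144 dB
# The extra ~48 dB is why you can track conservatively at 24-bit
# (peaks around -12 dBFS) and still sit far above the noise floor.
```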
 
And why the hell would you compress heavy guitars? That is generally referred to as a big no-no.

It's not something I've done before, but I had no clue whether it's the 'done thing' - even if it's just, say, a touch of compression for the odd little unwanted peak, should one be a problem.

Also, I don't quite understand the problem with one part having palm mutes and the other being strummed? That's why the parts sound different; you shouldn't expect the waveforms to look the same.

Nah, the problem isn't that different parts use different playing techniques, and I wouldn't expect the waveforms to look the same - that's normal. It's just that when you normalise something like that, it normalises each 'take' individually rather than going by the single highest peak across the two takes (even if you 'glue' the two parts together), which makes the strummed part sound a lot louder than the muted part. But you said there's no need to normalize anyway, in which case it's not an issue.
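
In case it helps explain what I mean, what I'd have expected is both takes normalised by one shared gain taken from the loudest peak of the pair - a rough sketch (Python/NumPy, purely hypothetical, names are mine):

```python
import numpy as np

def normalize_group(clips, target_peak=1.0):
    # One shared gain from the single loudest peak across ALL clips,
    # so the relative balance between takes is preserved; only the
    # loudest sample of the whole set ends up at target_peak.
    global_peak = max(np.max(np.abs(c)) for c in clips)
    gain = target_peak / global_peak
    return [c * gain for c in clips]
```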

Do people tend to use any kind of makeup gain? I wouldn't have thought it necessary with sufficient recording levels (not a problem for me when I take the time to get shit right) but just wondering...

Just so you know, what I'd prefer to do with this next release (whether or not it's the 'done thing') is do all the editing and effects/plugins myself, but export all the tracks individually for someone else to mix, so they can add any subsequent EQ'ing or whatever to each track as they see fit. That's how much 'rein' I'd like over the post-production process; mastering duties would definitely be handed over to someone else no matter what. Having someone else mix it as well depends on money, and naturally I have to get the recording process right anyway, so I just wanna know as much as possible. Even if it means people are all like "what the fuck are you on about, why would you even do that?" :p
 
I wouldn't worry so much about normalising; worry about getting a good signal in the first place.

As long as you track at 24-bit at a level that isn't too loud or too quiet, you'll be fine. It may be worth checking out that thread on gain staging. It's really about getting a feel for what a good signal is - then you can always drop the fader down.
 
Aye, the recording level I've got was perfectly reasonable before normalising, and because I'm DI'ing guitars/bass, things are relatively simple - and I'm OCD about consistency and all that. I've got a bit of time to experiment if need be, as it's gonna be a while before I knuckle down to recording proper. No doubt before then I'll be back with some inane questions.

Cheers for the help!
 
It's not something I've done before, but I had no clue whether it's the 'done thing' - even if it's just, say, a touch of compression for the odd little unwanted peak, should one be a problem.

For guitars, depending on the amp, you could use mild compression to even out the peaks a bit, but you wouldn't really want the peaks to disappear completely either.

Do people tend to use any kind of makeup gain? I wouldn't have thought it necessary with sufficient recording levels (not a problem for me when I take the time to get shit right) but just wondering...

Just so you know, what I'd prefer to do with this next release (whether or not it's the 'done thing') is do all the editing and effects/plugins myself, but export all the tracks individually for someone else to mix, so they can add any subsequent EQ'ing or whatever to each track as they see fit. That's how much 'rein' I'd like over the post-production process; mastering duties would definitely be handed over to someone else no matter what. Having someone else mix it as well depends on money, and naturally I have to get the recording process right anyway, so I just wanna know as much as possible. Even if it means people are all like "what the fuck are you on about, why would you even do that?" :p

Normalization just normalizes each clip, not the track as a whole. If correct tracking levels were used, you wouldn't need makeup gain; besides, makeup gain is for a process where you lose gain, such as a compressor.
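
If it helps, the "lose gain" bit in a nutshell - a toy static compression curve (Python; the threshold/ratio/makeup values are arbitrary examples, not recommendations) where makeup gain just lifts the now-quieter signal back up:

```python
import numpy as np

def compress_static(x, threshold_db=-18.0, ratio=4.0, makeup_db=0.0):
    # Static gain curve, no attack/release smoothing.
    level_db = 20.0 * np.log10(np.abs(x) + 1e-12)
    over = np.maximum(level_db - threshold_db, 0.0)
    # Above the threshold the output level only rises at 1/ratio of the
    # input rate, so the signal gets quieter overall; makeup_db then
    # lifts the whole compressed signal back up by a fixed amount.
    gain_db = -over * (1.0 - 1.0 / ratio) + makeup_db
    return x * 10.0 ** (gain_db / 20.0)
```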

It's very normal to track, add mild effects and processing, and then send it off for the serious mixing - if you're trying to shape the sound a certain way, or add a particular effect, it gives the mixing engineer an idea of what you're going for rather than handing them something completely blank. I've seen a few videos of Hollywood producers and mixers mentioning how they received sessions with mild effects on them. It's also why the mixing engineers couldn't fix Death Magnetic: the raw tracks they received were already brickwalled with a limiter. Use effects and editing as a shaping tool, but let the mixing engineer do his job.
 
By normalising, all you're doing is eating up headroom in your mixer and forcing yourself to turn the faders down. You're raising the noise floor too. It's not good practice, and whoever told you to do it doesn't know how to mix.
 
It's very normal to track, add mild effects and processing, and then send it off for the serious mixing - if you're trying to shape the sound a certain way, or add a particular effect, it gives the mixing engineer an idea of what you're going for rather than handing them something completely blank. I've seen a few videos of Hollywood producers and mixers mentioning how they received sessions with mild effects on them. It's also why the mixing engineers couldn't fix Death Magnetic: the raw tracks they received were already brickwalled with a limiter. Use effects and editing as a shaping tool, but let the mixing engineer do his job.

That's great. I take it that when sending stuff to a mixing engineer, the normal thing is to export each individual track in a suitable lossless format with the fader at 0.00, keeping your left locator right at the beginning, etc.?