Are these RMS averages OK?

Wow, calm down, no need to heat the thread.

I'm in a playful mood today :)

I think there's no difference between a boost from normalizing (or a fader) and the L2, as long as it doesn't clip. But since I set the output ceiling to -0.2 dB on the L2, I think that if I normalize to 0 dB before the L2, the signal will already be 0.2 dB over the ceiling and will get clipped by that much.

Maybe my experiment's methodology wasn't good, but you can hear the difference in the file I posted a few posts above.

Anyway, this is not a big problem, as clipping will always occur when trying to hit a -9 dB RMS average. My main doubt is whether that's how a "good" mix should look before mastering: -6 dB peaks and a -19 dB RMS average (or 0 dB peaks and a -13 dB RMS average, if you normalize it). I mean, would a mastering engineer get this mix and say, "Oh, that's exactly what I would expect from a pro mix. I LOVE working with levels like these"?
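For what it's worth, the arithmetic here checks out: normalization applies one fixed gain, so peak and RMS move up together by the same number of dB. A minimal pure-Python sketch (the function names are just for illustration; samples are assumed to be floats in [-1.0, 1.0]):

```python
import math

def peak_dbfs(samples):
    # Peak level relative to digital full scale (1.0)
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_dbfs(samples):
    # Average (RMS) level relative to digital full scale
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

# Normalizing shifts peak and RMS by the same amount:
# -6 dB peaks / -19 dB RMS, raised by +6 dB to touch 0 dBFS,
# lands at 0 dB peaks / -13 dB RMS.
gain_db = 0.0 - (-6.0)
new_rms = -19.0 + gain_db  # -13.0
```

So the "-6 peaks / -19 RMS" and "0 peaks / -13 RMS" figures really do describe the same mix, just at different gains.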

Looks good to me, but you can send a short clip first to make sure he likes it that way.

As people always say to send your mixes to others for mastering, it's pretty important for me to know what a mastering engineer expects in terms of RMS.

Don't worry about it too much - IMHO a good mastering engineer (not me hehehe) will have no problem with your mix.
 
So nobody's answered CO's question. If I'm not mistaken, -18 dB RMS isn't unreasonable at all, correct?
 
Here's what Bob Katz says:

I'll give you two reasons [why I don't normalize]. The first one has to do with just good old-fashioned signal degradation. Every DSP operation costs something in terms of sound quality. It gets grainier, colder, narrower, and harsher. Adding a generation of normalization is just taking the signal down one generation.
The second reason is that normalization doesn't accomplish anything. The ear responds to average level and not peak levels, and there is no machine that can read peak levels and judge when something is equally loud.
 
Here's what Bob Katz says:

I'll give you two reasons [why I don't normalize]. The first one has to do with just good old-fashioned signal degradation. Every DSP operation costs something in terms of sound quality. It gets grainier, colder, narrower, and harsher. Adding a generation of normalization is just taking the signal down one generation.
The second reason is that normalization doesn't accomplish anything. The ear responds to average level and not peak levels, and there is no machine that can read peak levels and judge when something is equally loud.

Amen... With something that's as dynamic as a full metal mix, normalizing achieves nothing.
 
Thank you very much, Aaron!

Bob Katz said:
I'll give you two reasons [why I don't normalize]. The first one has to do with just good old-fashioned signal degradation. Every DSP operation costs something in terms of sound quality. It gets grainier, colder, narrower, and harsher. Adding a generation of normalization is just taking the signal down one generation.
WHAT???
He must be talking about 16-bit audio.
In a 64-bit environment this is pure bullshit.

Bob Katz said:
The second reason is that normalization doesn't accomplish anything. The ear responds to average level and not peak levels

Yes, it is true that the ear responds to average level and not peak levels.
Hmmm - maybe he is talking about a different kind of normalization?
In all the hosts I know, normalization works by first scanning the entire wave, finding the highest peak, and then raising the entire wave equally by the difference between that peak and the desired new peak value (usually 0 dB).
So it raises the entire wave equally - peaks, quiet moments, and average level alike.
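That description matches plain peak normalization, which can be sketched in a few lines of Python (an illustrative sketch, not any host's actual code; samples are assumed to be floats in [-1.0, 1.0]):

```python
def normalize_peak(samples, target_dbfs=0.0):
    # 1. Scan the entire wave for the highest peak
    peak = max(abs(s) for s in samples)
    # 2. Compute one linear gain that puts that peak at the target
    target = 10 ** (target_dbfs / 20)  # 0 dBFS -> 1.0
    gain = target / peak
    # 3. Apply the same gain to every sample: peaks, quiet
    #    passages, and average level all rise by the same dB
    return [s * gain for s in samples]
```

Because it is one constant gain, it changes nothing about the relative dynamics - which is exactly why it can't make two differently dynamic files equally loud.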

Bob Katz said:
and there is no machine that can read peak levels and judge when something is equally loud.

Adobe Audition statistics feature does exactly that.


So my question still stands:
Why is it bad practice to normalize a mix?
(I'm talking about my way of mixing: 64-bit host, entire album in one big project, mastering plugins on the master bus ready to engage with a single click to hear how it will sound when mastered - and if it sounds bad, I can always fix something at the mixing stage.)
 
Adobe Audition statistics feature does exactly that.

So using this, you would be able to take two audio files of completely different dynamic structure - as an exaggeration, something like a very dynamic classical piece and a clip of white noise, which isn't dynamic at all - and make them the same perceived volume all the way through, so that at any point in the tracks your ears hear them at the same level? Because the way I see it, this is what peak normalization would achieve if it actually worked properly.
 
So using this, you would be able to take two audio files of completely different dynamic structure - as an exaggeration, something like a very dynamic classical piece and a clip of white noise, which isn't dynamic at all - and make them the same perceived volume all the way through, so that at any point in the tracks your ears hear them at the same level?

Yes!
 

Well, this is definitely news to me. So on every album you mix, the tracks sound the same volume all the way through, from a lone clean guitar, compressed to hell and back, to a full-on 200+ bpm blast beat with guitars tuned down to B or lower? Well done, then - I honestly didn't think this was achievable.
 
Edit: Numerous sites online say that normalizing is bad for the mix, for lots of reasons, but I think the general consensus is that processing audio digitally degrades the signal in some way, and because normalizing doesn't really achieve anything except a louder version of the same file, it's not worth it. So why normalize, when you can compress/limit during the mastering stage and get a "better" master, because the limiter or compressor tames the peaks across all the strongest frequencies rather than just the single loudest one? Believe me, I used to normalize synth tracks to try to get them to the same level, and it was more accurate to trust my ears and just ride the faders, listening to the whole mix and adjusting as I went. Basically, the only reason I'd ever normalize something would be in the mp3 realm, where you're trying to make all your songs approximately the same level to stick on your iPod, etc.
 
The way I see it:
Normalizing isn't evil, it's just useless. But hey, that's just me.
 
I used to put an L2 or a Timeworks on the master bus to hear how it would sound... but now I prefer to separate the two things. First I mix the project (if you want to hear how it will sound, simply raise the volume, because if you use only an L2 you can't call it "mastering"), and when I finish the mixing process I start the mastering process.
Anyway, you can use an L2 on your master bus, just not all the time while you're mixing. If your concept of mastering is only slapping on an L2, no problem: put on the L2 and mix. But if you want to do a serious master, mix without the L2 and after that start your mastering session: apply all the compressors, limiters, and EQs you need, and in the final stage the limiter.
 
Well this is definitely news to me, so every album you mix, the tracks sound the same volume all the way through, from a clean guitar alone, compressed to high hell and back to a full on 200bpm+ blast beat with guitars tuned down to B or lower? Well done then, I honestly didn't think this was achievable.

I did not say that I do it - only that I would be able to do it.

Let's say that we have two measures:
1. A solo clean guitar arpeggio.
2. A full-on blast beat: chugga distorted guitars, bass, and synths.

How do you make them equally loud?

1. Select the first measure.
2. Use the statistics feature to determine its average loudness.
3. Select the second measure.
4. Use the statistics feature to determine its average loudness.
5. Use volume automation to smoothly (but quickly) lower the volume between the two measures by the difference in their average loudness.

Lots of work, but clearly possible.
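Steps 1-5 reduce to a few lines of code. A rough sketch in pure Python (the names `avg_rms` and `match_loudness` are mine, not Audition's; `avg_rms` stands in for the statistics readout, and the returned list for the automation move):

```python
import math

def avg_rms(samples):
    # The "statistics" step: average (RMS) loudness of a selection
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def match_loudness(reference, other):
    # The automation step: one gain change that brings `other`
    # to the same average loudness as `reference`
    diff_db = 20 * math.log10(avg_rms(reference) / avg_rms(other))
    gain = 10 ** (diff_db / 20)
    return [s * gain for s in other]
```

Of course this matches RMS, not true perceived loudness - the ear also weights frequency content - but it is the same move as the manual procedure above.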

Is that what you meant?
 
I did not say that I do it - only that I would be able to do it.

Let's say that we have two measures:
1. A solo clean guitar arpeggio.
2. A full-on blast beat: chugga distorted guitars, bass, and synths.

How do you make them equally loud?

1. Select the first measure.
2. Use the statistics feature to determine its average loudness.
3. Select the second measure.
4. Use the statistics feature to determine its average loudness.
5. Use volume automation to smoothly (but quickly) lower the volume between the two measures by the difference in their average loudness.

Lots of work, but clearly possible.

Is that what you meant?

Yes, that's what I meant. I'd love to hear audio examples of this to back it up, though.