Are these RMS averages OK?

I used to put an L2 or a Timeworks on the master bus to ear how it'll sound...but now I prefer to separate the two things: first I mix the project (if you wanna ear how it'll sound, simply raise the volume, because if you use only an L2 you can't call it "mastering"), and when I finish the mixing process I start the mastering process.
Anyway, you can use the L2 on your master bus, but not all the time while you're mixing. If your concept of mastering is just putting on an L2, no problem: put on the L2 and mix. But if you wanna do a serious master, mix without the L2 and after that start your mastering session: apply all the compressors, limiters, and EQs you need, and in the final stage the limiter.

The only problem with this is that I find most "mastering" plugins will fuck with the mix in a way that isn't easy to fix with EQ. I normally find they boost the bass in the mix and also the overheads, which is why I mix with my amateur "mastering" chain on the mix at all times, so I don't have to go back and adjust the levels of the instruments afterwards.

Edit :- And just to go OT even more, I realize you're Italian, but how come the whole of your post was good English, with great punctuation, but you forgot the H off the start of "hear" both times?? :p
 
Edit :- Numerous sites online say that normalizing is bad for the mix, for lots of reasons, but I think the general consensus is that processing audio digitally degrades the signal in some way.

Not if you are using a 64-bit host.

and because normalizing doesn't really achieve anything except a louder version of the same file, it's not worth it.

OK, I did not give you my reason for normalizing between mixing and mastering:
I first like to hear how the wave sounds at its full loudness, without any mastering compressors, limiters, etc., simply to judge how much compression it will need.
Maybe that's because I am an amateur newbie, and sometimes a track can sound completely different to me at -15 dB average and at -25 dB average. We all know that louder often sounds better, and in my dictionary better = different.
First I hear a mix during mixing, and I (like any other newbie) can make a mistake that is impossible for me to hear at -25 dB average (for example, crashes that are a little too loud).

So why normalize, when you can compress/limit during the mastering stage and achieve a "better" master, because the limiter or compressor is taming the peaks at all the strongest frequencies, rather than just the single loudest one? Believe me, I used to normalize synth tracks to try and get them to the same level, and it was more accurate to trust my ears and just ride the faders, listening to the whole mix and adjusting as I went along. I think basically the only reason I'd ever normalize something would be in the MP3 realm, where you're trying to get all your songs to "approximately" the same level to stick on your iPod, etc.
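
If anyone wants to see that point in numbers, here's a rough Python/NumPy sketch (the hard clip below is a crude stand-in for a real limiter, and the "mix" is synthetic, purely for illustration):

[code]
# Why limiting beats peak normalization for loudness. The hard clip is
# a crude stand-in for a real limiter; the "mix" is synthetic.
import numpy as np

sr = 44100
t = np.arange(sr) / sr
mix = 0.2 * np.sin(2 * np.pi * 220 * t)  # body of the mix
mix[1000:1050] = 0.95                    # one short transient pinning the peak

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# Peak normalization: the gain is dictated by that single transient
normalized = mix / np.max(np.abs(mix))

# Crude "limiter": clamp the transient, then bring everything up
limited = np.clip(mix, -0.25, 0.25)
limited = limited / np.max(np.abs(limited))

print(f"original   RMS: {rms_db(mix):6.1f} dBFS")
print(f"normalized RMS: {rms_db(normalized):6.1f} dBFS")  # barely louder
print(f"limited    RMS: {rms_db(limited):6.1f} dBFS")     # much hotter
[/code]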

Whoa! :loco:
Do you all think that I like to normalize instead of using a limiter? :lol: :lol: :lol:
Haaahahahahaha!
What a misunderstanding! :cool: :lol:
 
Whoa! :loco:
Do you all think that I like to normalize instead of using a limiter? :lol: :lol: :lol:
Haaahahahahaha!
What a misunderstanding! :cool: :lol:

I think that was where the misunderstanding was coming from: if you normalize simply to see how a mix will sound at its loudest before you master, and you UNDO the normalization BEFORE you master, then that's fine. But I believe we were thinking you normalize first and then master AS WELL.
 
I think that was where the misunderstanding was coming from: if you normalize simply to see how a mix will sound at its loudest before you master, and you UNDO the normalization BEFORE you master, then that's fine. But I believe we were thinking you normalize first and then master AS WELL.

Hehehe
Even better!
What I do is exactly like normalization, but much better:

My host is Reaper.
When I think I'm finished mixing, I like to use Elemental Audio Systems Inspector to determine the highest peak, and then use a volume fader on the mix bus to "normalize" the mix before it goes to the mastering plugins on the master track!

I can always move the fader without losing a single bit of data (remember that Reaper's audio engine is 64-bit), even right before the final render. Yes, I know that would fuck up all the compression settings, but it is still a good thing to have that possibility.
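
For the curious, here's roughly what that fader trick computes, as an offline Python sketch (NumPy plus the soundfile package; "mix.wav" is a made-up filename, and this is of course not what Inspector or Reaper actually do internally):

[code]
# Offline sketch of the fader "normalize": find the highest peak of the
# mix, then compute the dB offset that would bring it up to 0 dBFS.
import numpy as np
import soundfile as sf

audio, sr = sf.read("mix.wav", dtype="float64")  # hypothetical mix file

peak = np.max(np.abs(audio))
peak_db = 20 * np.log10(peak)

print(f"highest peak : {peak_db:+.2f} dBFS")
print(f"fader offset : {-peak_db:+.2f} dB (dial this into the mix bus)")
[/code]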
 
Well, I actually track and mix at 24-bit / 48 kHz and master to 16-bit / 44.1 kHz. I guess that, whether it destroys the audio or not, normalizing is just pointless at the mix stage (and at mastering too, as you will probably use a limiter and never reach 0 dB). I think it's good to have nice headroom on your mix so the mastering has freedom to boost frequencies, compress/expand, throw on some exciter and master reverb, etc.

But that's OK, it seems that my mix LOOKS just fine. Thanks again for the replies, and don't be afraid to derail the thread :kickass:
 
Edit :- And just to go OT even more, I realize you're Italian, but how come the whole of your post was good English, with great punctuation, but you forgot the H off the start of "hear" both times?? :p

Ahahah...you're right...it was my mistake...but I'm very happy you find my English correct :)

Master: first I try to obtain the best mix I can. When I'm happy with the result I start to master, and here you have to use the right plugins, and only what you really need. You have to use the plugins that address what the mix lacks (the punchiness, the dynamics), and finally a good limiter/maximizer.
I don't like the L2: it's simple, but it's not very "transparent". I prefer Timeworks or iZotope (a very good plugin).
 
(or 0 dB peaks and -13 dB RMS average, if you normalize it)

I think -13 dB RMS is pretty loud...usually my mixes have something like -16 to -18 dB RMS.

If you're in 24-bit, it doesn't really matter that much where you're peaking...as long as it's not too extreme.



Normalizing is a no-no because it's a calculation, and every calculation adds some mistakes/errors...

So if you don't need to normalize (and in this case you don't), you shouldn't do it.
 
I don't follow this RMS discussion.
RMS is an average.

How can an RMS value say whether a mix is OK or not?

Different playing styles give different RMS values.
Palm muting gives a lower RMS value than open chords with the same peak values.

If you are playing a riff that has pauses, the RMS value goes down.

I had this discussion with a guy several years ago: I had a Heavy Metal song, already mastered, to include in a compilation of several different styles, and he told me that a song should have an RMS value of X no matter if it is classical music or Heavy Metal!!! :zombie:
He wanted me to normalize :) it like 3-4 dB lower because of some acoustic songs.

I then took his Vegas Pro Audio and ran an experiment to show him he was wrong: I just faked it as if the songs had lower-volume parts in the middle, and his GodtoldmetherightRMS value dropped. :lol:
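
That experiment takes about ten lines of Python/NumPy to reproduce, if anyone doubts it (synthetic signal, purely illustrative):

[code]
# Splice a pause into the middle of a "riff": the whole-file RMS drops
# while the peak stays exactly the same. Synthetic signal, illustrative.
import numpy as np

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

sr = 44100
riff = 0.5 * np.sin(2 * np.pi * 110 * np.arange(4 * sr) / sr)  # 4 s of "riff"
pause = np.zeros(2 * sr)                                       # 2 s of silence

solid = np.concatenate([riff, riff])              # wall-to-wall riffing
with_pause = np.concatenate([riff, pause, riff])  # same riff, pause inserted

print(f"peaks: {np.max(np.abs(solid)):.2f} vs {np.max(np.abs(with_pause)):.2f}")
print(f"RMS, no pause:   {rms_db(solid):6.2f} dBFS")
print(f"RMS, with pause: {rms_db(with_pause):6.2f} dBFS")  # lower, same song
[/code]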


You should listen more and look at the numbers less :saint:

Maybe I don't understand what you guys are talking about :rolleyes:
 
I don't understand it when you look at the waveforms of modern music and they are basically at the ABSOLUTE peak, just clipped off. It's like it's essentially clipping. I'm having a rough time trying NOT to clip, but rather to compress with dynamics. It's such a damn trial-and-error process.
 
The main reason I can see for normalization is doing it prior to actually bouncing your mix (so you'd have to analyze the maximum peak level in real time, then use your master fader or whatnot to adjust).

This way your mix retains as much fidelity as possible (remember, the lower your amplitude, the fewer bits you are using in the digital medium) before it gets to the mastering engineer, whose job it is to get all the tracks to the same perceived volume.

If I already had my mix printed at a certain volume level, there would be no need to normalize, as that would just put it through another process that would degrade the overall quality.

That's my understanding at least.
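
It's easy to put numbers on that "fewer bits" point with a quick Python/NumPy sketch (made-up test sine, not any real material): quantizing the same signal to 16-bit at two different levels shows roughly one bit of resolution lost per ~6 dB of level you give away.

[code]
# Quantize a sine to the 16-bit grid at full scale and at -24 dBFS,
# then measure signal-to-quantization-noise. Every ~6 dB of level you
# give away costs about one bit of resolution.
import numpy as np

sine = np.sin(2 * np.pi * 997 * np.arange(44100) / 44100)

def snr_after_16bit(level_db):
    x = sine * 10 ** (level_db / 20)
    q = np.round(x * 32767) / 32767   # snap to the 16-bit grid
    noise = q - x
    return 10 * np.log10(np.mean(x ** 2) / np.mean(noise ** 2))

print(f"SNR at   0 dBFS: {snr_after_16bit(0):.1f} dB")    # ~98 dB
print(f"SNR at -24 dBFS: {snr_after_16bit(-24):.1f} dB")  # ~24 dB worse
[/code]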
 
normalize, as that would just put it through another process that would degrade the overall quality.

Arrrrrrghhhh!

Everyone here (except the poor old stupid Mutant) says that normalization degrades sound quality!

I thought to myself: "Maybe I am wrong and they are right."

So I made two polls, one at KVR and one at Harmony Central.

Then an idea came to me:
Why not use the old trusty null test to determine who is right?

Results (copy-pasted from KVR):

Mutant said:
I made a 64-bit float wave that peaked at -50.0001 dB.

Imported it, then normalized it to 0 dB and rendered.

Then imported the wave a second time, entered "+50.0001" in the numerical gain field (the fader itself only goes up 24 dB) and rendered it.

Imported and subtracted both waves.

SILENCE

Boosted the difference +100 dB:

SILENCE

Boosted another +150 dB:

I can hear the errors (the difference) at ~ -20 dB!

So it looks like the error is really, really small, and no one (not even Bob Katz) will convince me that he can hear the normalization error at 64-bit (-270 dB!).
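
Anyone can repeat a miniature version of that null test in Python/NumPy (64-bit floats; the setup is illustrative, not Reaper's exact pipeline):

[code]
# Miniature null test: apply the same +50.0001 dB gain two different
# ways in 64-bit float, subtract, and see where the residual sits.
import numpy as np

quiet = np.random.default_rng(0).uniform(-1, 1, 44100) * 10 ** (-50.0001 / 20)
gain_db = 50.0001

a = quiet * 10 ** (gain_db / 20)                           # one multiply
b = (quiet * 10 ** (gain_db / 40)) * 10 ** (gain_db / 40)  # same gain, two steps

residual = np.max(np.abs(a - b))
print(f"null-test residual peak: {20 * np.log10(residual + 1e-300):.0f} dBFS")
# prints around -300 dBFS here: far, far below anything audible
[/code]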

Every time you use your host's volume fader or normalize your audio, a tiny, tiny, tiny, impossible-to-hear error is introduced.
 
Every time you use your host's volume fader or normalize your audio, a tiny, tiny, tiny, impossible-to-hear error is introduced.

Sweet, then who cares? Normalizing has made mixing easier for me. When I track drums, normalizing is great for the kick and snare. I can just leave their faders at 0 and boom, their levels are good (not perfect, but a great starting point). Same principle with guitars.
 
normalizing a whole mix is a no-no


Personally, this is a very good point! Purely for the fact that you would be boosting and adding decibels that weren't there in the first place. I would rather record in as loud as possible without clipping and cut sound out, rather than try to add sound/gain that doesn't really exist.

It's the same when EQ'ing tracks: you can't add frequencies that weren't recorded. That's why I feel it's right to use many mics in different positions on a guitar amp and take out what you don't need afterwards.

Sorry if this all sounds odd... I am really not good at explaining things with words. I'm crap at it, haha!
 
I would be REALLY interested to know why so many people think normalizing a mix before mastering is supposed to be bad. The quote from Bob Katz that Aaron posted is technically correct, but still nonsense.
EVERY loudness-altering operation in a DAW (no matter if it is normalization or touching a fader) introduces another floating-point (or integer) multiplication/division of the wave data, and therefore rounding errors. If Bob Katz really thinks normalization is bad, he should never touch a single fader when he works in a DAW. Who cares about/hears a difference if the wave data's loudness is changed 100 or 101 times during the mixing of a song? Either I am really missing something here, or Bob Katz does not even know the basics of this stuff. :loco:
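
To put a number on those rounding errors, here's a quick Python/NumPy sketch that does one fader move (a single multiply per sample) in 32-bit float and compares it against 64-bit math; the exact figure is illustrative, not any particular DAW's behaviour:

[code]
# One fader move = one multiply per sample. Measure the rounding error
# of doing that multiply in 32-bit float against a 64-bit reference.
import numpy as np

rng = np.random.default_rng(1)
x32 = rng.uniform(-1, 1, 44100).astype(np.float32)   # "wave data"
gain = np.float32(10 ** (-3.0 / 20))                 # a 3 dB fader dip

y32 = x32 * gain                                     # the move, 32-bit math
y_ref = x32.astype(np.float64) * np.float64(gain)    # same move, 64-bit math

err = np.max(np.abs(y32.astype(np.float64) - y_ref))
print(f"peak rounding error: {20 * np.log10(err):.0f} dBFS")
# roughly -150 dBFS -- about 50 dB below the noise floor of a 16-bit CD
[/code]

And that is at 32-bit; in a 64-bit engine the same multiply lands hundreds of dB further down still.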