Hey new guys, come in this thread!

I know that, man, but even if I used monitors, my raw recorded data would stay the same... I mean... bah, my English ends here, I don't think I can explain exactly what I mean :cry:

For example: I'm presuming you've listened to both recordings. Even if I used a pair of monitors, "theoretically" I still wouldn't get a "centered" sound like the "with redline" version, right?

This comes down to EQing everything properly. Thinner guitars, like on Clayman for example, need a huge bass tone to make up for them, and vice versa. Same goes for the drums: you always have to mix with the big picture in mind.

I started out with this EQ guide: http://noise101.wikidot.com/eq-guide
and over time got to know the frequencies and committed them to memory.

Just go with your ears, and try not to mix too much on headphones if that's what you're doing; they're usually more coloured than nearfield monitors, and they can make the mix sound wider than is good for your speakers.

Even though you'd like it to sound more centered.
 
Any advice on mixing synths (violins, trumpets, choirs) with heavy guitars? I always get the feeling that I have lots of frequencies clashing with each other.

Do you have any tips or ideas on how to approach something like this?

Well, the arrangement plays a big role here... You know the middle-C range (on the staff, it sits between the treble and bass clefs)? Don't play in it, or play in it as little as possible. That ~400 Hz range is also the one that sounds super boxy. It's where the vocals and the majority of the guitar parts reside. With guitar it's pretty easy to figure out what is rhythm-guitar territory and what is lead-guitar territory: the strings are different (wound vs. plain), and the vocals usually sit around there too. So with strings, just avoid that octave range as much as possible.

I always try to arrange things so that the synths are either below or above the guitars, or else playing in unison with them. For example, if the guitar plays notes that would sit on the A and D strings, the synths play notes that would sit on the G through high-E strings, or down on the low E. With strings and violins it's usually above. If they play the same thing, one of them has to duck a bit.
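To put rough numbers on those ranges, here's a quick Python sketch. The 250-500 Hz bounds are my own approximation of the "boxy" octave described above (not the poster's figures), and standard A4 = 440 Hz equal temperament is assumed:

```python
# Sketch: map notes to frequencies to see which fundamentals land in the
# "boxy" octave around middle C. Equal temperament, A4 = 440 Hz.

A4_MIDI = 69
A4_HZ = 440.0

def note_freq(midi_note: int) -> float:
    """Frequency of a MIDI note number in 12-tone equal temperament."""
    return A4_HZ * 2 ** ((midi_note - A4_MIDI) / 12)

def in_boxy_range(midi_note: int, lo=250.0, hi=500.0) -> bool:
    """True if the note's fundamental falls in the ~250-500 Hz octave."""
    return lo <= note_freq(midi_note) <= hi

# Middle C is MIDI 60: right at the bottom of that octave.
print(round(note_freq(60), 1))                 # prints 261.6
# The guitar's open G string sounds G3 (MIDI 55), just below it.
print(in_boxy_range(55), in_boxy_range(60))    # prints False True
```

Only the fundamentals are checked here; real instruments also put harmonics into that octave, which is why arranging parts an octave apart helps more than the raw numbers suggest.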

Children of Bodom's "Angels Don't Kill" is a pretty good example of this, as it has almost all the possible variants. When the song kicks in at 0:20 the synth plays both lower and higher than the guitars. Then at 0:40 the synths play in unison with the guitars, and the guitars are ducked just a bit when the synths come into the same octave range. During the solo from 2:10 onward the synths play an octave higher, because there the focal point is the guitar solo. And in the second solo from 3:15 onward both the guitar and the synth play the same thing(!), panned hard left and hard right.

Another example from the same album, "Needled 24/7": it starts with synths two octaves above and one octave below the guitars on the intro and verses. On the theme lick from 1:00 onward (at least I've always thought it was a keyboard, it sounds so clean) the guitars only play power chords on the lowest strings while the keys play the melody an octave higher. But if you're going for the Bodom-type sound you really need to carve the guitars as thin as you dare; there's almost nothing below 150 Hz, as you can hear at 2:30 where the guitar plays by itself. Except that "SixPounder" has maybe the biggest-sounding rhythm guitars there are, right after Metallica's "Sad But True" and Rammstein's "Sonne".

And speaking of "Sonne": there the synths play some droney stuff above the guitars on the verses, but it mostly sounds like radio static. On the choruses they're layered across the spectrum like this: bass - guitar - synths - vocals - synths.
 
Wow, a decent topic for noobs like me :p I've got a couple of questions for you, buddy. Here they come...

1- This is my latest project. Since I don't have any reference monitors and use AKG K55 headphones, I'm also using a plug-in called "112dB Redline Monitor", which adds depth and a stereo image to my recordings. BUT when I apply it to a recording, it sounds kind of weird on most sound systems while sounding great on ear/headphones. When I don't apply it, it sounds bad on headphones but good on any other sound system. Here are the with and without versions of the same song. Can you tell me which one is better and why? Everything is the same in both tracks; I just bypassed the VST ;)

http://dl.dropbox.com/u/7305927/with redline.mp3
http://dl.dropbox.com/u/7305927/without redline.mp3

2- I'm making my guitar tracks like this: open an FX channel in Nuendo -> press F4 and go to the FX-channel options -> right-click and "separate into mono tracks" -> create two audio tracks and name them "right" and "left" -> route their outputs into the FX channel's separate right/left channels -> insert POD Farm and the other dynamics processing as FX -> voilà, two tracks bound into one common track, a system-friendly solution.

YET, I keep asking myself whether I'm doing this right. As you may hear, the "without redline" track sounds "scattered", while, in contrast, the "with redline" one sounds more "tidy". Without this plugin, how can I make it sound "tidy"? You most probably have far better headphones and speakers than mine and can easily tell me what is wrong with these tracks. They also sound kind of "weak"; which frequencies should I boost to make them stronger?

Waiting for your answers, man. See ya!

Get rid of the Redline program. What you need is a monitoring system that will be honest with you, like a system with two satellite speakers and a sub. Listen to all of your favorite mixes on them; you will get used to how they sound. Once you are familiar with the system, your mixes will improve. Headphones are deadly for mixing. A good pair of computer speakers will even do fine until you're ready to make a serious upgrade.

Any advice on mixing synths (violins, trumpets, choirs) with heavy guitars?

Panning is important, as is where you choose to high-pass and how much reverb you add to these elements. For trumpet I'd use a light plate reverb and high-pass a bit higher than your guitars. Violins can take a bit of reverb, and choirs can take a lot. I'd put a healthy pre-delay on the violins to give them space, and less pre-delay on the choirs to push them back. Choirs I'd high-pass a little bit LOWER than your lead vocal. Violins are really mostly mid-based instruments. Synths depend on your choice of voice. Finding a guitar tone that cooperates with all these elements isn't easy; you have to decide which element you want to stand out.
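For anyone working in the box, the two moves above (high-pass point and reverb pre-delay) are easy to prototype. A minimal sketch, assuming numpy/scipy; the cutoff frequencies and pre-delay times are illustrative guesses, not the poster's settings:

```python
# Sketch: a Butterworth high-pass and a reverb pre-delay, the two tools
# described above. All parameter values here are illustrative only.
import numpy as np
from scipy.signal import butter, sosfilt

SR = 44100  # sample rate

def high_pass(signal: np.ndarray, cutoff_hz: float, order: int = 2) -> np.ndarray:
    """Roll off everything below cutoff_hz with a Butterworth high-pass."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=SR, output="sos")
    return sosfilt(sos, signal)

def pre_delay(wet: np.ndarray, ms: float) -> np.ndarray:
    """Delay the wet (reverb) signal by ms milliseconds. A longer pre-delay
    separates the dry attack from the reverb tail, making the source feel
    closer; a short one pushes it back."""
    pad = int(SR * ms / 1000)
    return np.concatenate([np.zeros(pad), wet])

# e.g. a "healthy" 60 ms pre-delay for violins, a short 10 ms for choirs
violin_wet = pre_delay(np.ones(100), 60)
choir_wet = pre_delay(np.ones(100), 10)
```

In practice you'd do this with your DAW's EQ and reverb, of course; the point is just that "choirs lower than lead vocal, trumpet higher than guitars" is a handful of cutoff numbers you can audition systematically.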
 

Hello again. I've doubled my guitar track with the ENGL in POD Farm and panned 60% to both sides. And I didn't use Redline :p

http://dl.dropbox.com/u/7305927/bleed me inside (d.guitar).mp3

It looks and sounds better than the other two versions :p but I think it needs a little more slip-editing, as it sounds as if there's a weird reverb-like thing on the guitars, you know...
 
Well, the arrangement plays a big role here...
This was not the advice I was looking for, but it really opened my eyes (and ears). It's kind of funny: I was a huge Bodom fan and listened to them a lot in the past, but whenever I listen to their records now I experience them in a completely different way and hear lots of new things. Cool! Thank you!

Panning is important, as is where you choose to high-pass, and the amount of reverb you add to these elements...
I always have the feeling that the synths (violins/strings/horns/choirs) don't fit with the rest. They don't "glue" with the rest of the track.

I tried it out; the reverb works OK. I low-passed everything, and I find the synths sit more in their own section now.
I'm having trouble getting the delay right. The standard Cubase delays aren't working for me, so I'll have to find another freeware delay.
 
Which frequency ranges do you normally have the guitars/bass/drums sit in? (Drums may be all over the place as far as I know, minding the spectrum of course...) Do you make room for the guitars and bass (or anything else) in the main drum buss? Or by tuning every piece of the kit? Or both?
 
How does a spectrum analyzer look when a "standard" good tone is achieved? I don't mean "gimme the preset"; I know the bass has a huge influence. I guess I'm just trying to figure out the "don'ts"!

Here is the PAZ Frequency Analyzer on a single solo'd track from the reamp I did of Jeff's "Rose of Sharyn" DIs. You can judge for yourself from the MP3 below whether you think this is a good tone.

Typical setup:
TS > 5150 > V30s > SM57 > Saffire Pro 40

http://www.jasoncohenitservices.com/Rose_v30.mp3

paz2.jpg
 
How does a spectrum analyzer look when a "standard" good tone is achieved?
Actually, with high-gain tones spectrum analyzers are pretty useless, because the tones all end up looking really similar on the analyzer; even if you cut 10 dB it doesn't change much, thanks to the massive amount of compression. But it should look something like /¯¯¯¯\.

Just sayin'. And the shape was /¯¯¯¯\, the quote tag just destroys it.
 
Bumping this thread because it should always be up top, and stickied!

I have a question concerning tracking guitars:

Let's say, hypothetically, that I'm going to record a band with two guitarists and will quad-track the rhythm guitars (guitarist 1 does 100% L and 100% R, guitarist 2 does 80% L and 80% R). Everything is fine when there's one rhythm-guitar line in the song, but in some parts I have two separate rhythm parts; not a lead part on top, but a part in the same octave as the other rhythm track, playing a different riff/melody. How do you go about tracking that? 2 and 2 instead of 4? Or 4 and 4? 4 and 4 seems like overkill, but wouldn't 2 and 2 make the guitars sound a bit smaller in that part? An example of the parts I'm talking about: in one song, one guitar plays a simple 5th chord repeatedly while the other plays an arpeggiation of a 9th chord in the same octave.
 
lolzreg: I'd love to know whether you use any kind of attenuator to lower the loudness of the amp while recording. If so, is the quality you get with the attenuator comparable to recording without one? Thanks :)
 
Let's say hypothetically that I'm going to record a band with two guitarists and will use quadtracking for rhythm guitars... How do you go about tracking that?

http://www.ultimatemetal.com/forum/9201683-post42.html
 
Let's say hypothetically that I'm going to record a band with two guitarists and will use quadtracking for rhythm guitars... How do you go about tracking that?

What AHJ said. For me it's all about the "feel" I'm going for. I rarely quad-track guitars, but if I do, I usually have one guitarist at 100% L and 80% L and the other guy at 100% R and 80% R, and each plays his own individual parts.
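As a side note, what those pan percentages translate to in channel gains depends on your DAW's pan law. Here's a sketch assuming a constant-power (sin/cos) law; DAWs differ (-3 dB, -4.5 dB, or -6 dB center compensation), so treat it as an illustration:

```python
# Sketch: constant-power pan law, to see what "100% / 80%" pan positions
# mean as channel gains. pan runs -1.0 (hard left) to +1.0 (hard right).
import math

def pan_gains(pan: float):
    """Return (left_gain, right_gain) under a constant-power sin/cos pan law."""
    angle = (pan + 1) * math.pi / 4   # map [-1, 1] to [0, pi/2]
    return math.cos(angle), math.sin(angle)

# The quad-track layout from the question, one take per pan position:
for take, pan in [("gtr1 L", -1.0), ("gtr1 R", 1.0),
                  ("gtr2 L", -0.8), ("gtr2 R", 0.8)]:
    l, r = pan_gains(pan)
    print(f"{take}: L={l:.2f} R={r:.2f}")
```

Under this law an 80% pan still bleeds a little into the opposite channel (~0.16 gain), which is part of why the inner pair sounds "inside" the 100% pair rather than just quieter.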
 
Let's say hypothetically that I'm going to record a band with two guitarists and will use quadtracking for rhythm guitars... How do you go about tracking that?
I think it will add something to the song. I mean, four rhythm tracks blasting through the whole song is pretty cool, but nothing in music is more important than a rest. The "thin guitar parts" in your song make the "big guitar parts" sound even bigger.

Do you understand what i mean?
 
Hey guys, one question that has bothered me for a while now about compressing vocals:
I always find that the breaths and noises get far too audible when I compress heavily. How should I set the threshold to avoid this? Is there a magic rule for dealing with breathing, or should I edit the breaths to be quieter by hand afterwards?
Help would be really appreciated!

Also, I've seen some people compress the individual vocal tracks and others compress just the vocal bus as a whole. What are your thoughts on that?
 
I always find that the breaths and noises get far too audible when I compress heavily. How should I set the threshold to avoid this?
You should always try to deal with breaths and noises at the source. When you're recording, keep the noise down as much as possible: use a wind shield/pop filter, or have the singer sing slightly off-axis, "next to the microphone". There is also something called the "pencil trick".
Harsh S's can be removed with de-essing.
 
You should always try to avoid breathing and noises at the source...

Yes, I'm aware of that when tracking, and I don't think the breathing is too obvious... until I start compressing :)
 
I always find that the breaths and noises get far too audible when I compress heavily. How should I set the threshold to avoid this?

Try using an expander before the compressor.
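For the curious, the kind of downward expander suggested here can be sketched in a few lines: it turns the signal down when its envelope drops below a threshold, so quiet breaths get quieter BEFORE the compressor brings everything up. The -40 dB threshold, 2:1 ratio, and 50 ms release are illustrative values, not recommendations:

```python
# Sketch: a downward expander with an instant-attack, smoothed-release
# envelope follower. All parameter values are illustrative only.
import numpy as np

def expand(signal, sr=44100, threshold_db=-40.0, ratio=2.0, release_ms=50.0):
    """Attenuate the signal by (ratio - 1) dB per dB below threshold_db."""
    coeff = np.exp(-1.0 / (sr * release_ms / 1000))
    env = 0.0
    out = np.empty_like(signal)
    for i, x in enumerate(signal):
        # envelope follower: instant attack, exponential release
        env = max(abs(x), coeff * env)
        level_db = 20 * np.log10(env + 1e-12)
        if level_db < threshold_db:
            # below threshold: push the level down by the scaled undershoot
            gain_db = (level_db - threshold_db) * (ratio - 1.0)
        else:
            gain_db = 0.0
        out[i] = x * 10 ** (gain_db / 20)
    return out
```

The same idea is what a gate does with an infinite ratio; a gentle 2:1 expander just nudges breaths down instead of chopping them off, which usually sounds more natural on vocals.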
 
Yes, I'm aware of that when tracking, and I don't think the breathing is too obvious... until I start compressing :)

Edit them out, but be careful not to edit too closely, or you'll lose the start/end of the word.

Sometimes leaving the breath noises in helps keep the excitement in the track, so volume automation after the compressor helps keep them from becoming too obvious.
 
Edit them out, but be careful not to edit too closely or you lose the start/end of the word...

This.